# Kimi k1.5: Scaling Reinforcement Learning with LLMs
**Authors**: Kimi Team
Abstract
Language model pretraining with next token prediction has proved effective for scaling compute but is limited by the amount of available training data. Scaling reinforcement learning (RL) unlocks a new axis for the continued improvement of artificial intelligence, with the promise that large language models (LLMs) can scale their training data by learning to explore with rewards. However, prior published work has not produced competitive results. In light of this, we report on the training practice of Kimi k1.5, our latest multi-modal LLM trained with RL, including its RL training techniques, multi-modal data recipes, and infrastructure optimization. Long context scaling and improved policy optimization methods are key ingredients of our approach, which establishes a simple, effective RL framework without relying on more complex techniques such as Monte Carlo tree search, value functions, and process reward models. Notably, our system achieves state-of-the-art reasoning performance across multiple benchmarks and modalities (e.g., 77.5 on AIME, 96.2 on MATH 500, 94th percentile on Codeforces, 74.9 on MathVista), matching OpenAI's o1. Moreover, we present effective long2short methods that use long-CoT techniques to improve short-CoT models, yielding state-of-the-art short-CoT reasoning results (e.g., 60.8 on AIME, 94.6 on MATH 500, 47.3 on LiveCodeBench), outperforming existing short-CoT models such as GPT-4o and Claude Sonnet 3.5 by a large margin (up to +550%).
<details>
<summary>x1.png Details</summary>

### Visual Description
## Bar Chart: Model Performance Across Tasks
### Overview
This image presents a bar chart comparing the performance of five different models (Kimi k1.5 long-CoT, OpenAI o1, OpenAI o1-mini, QVQ-72B-Preview, and QwQ-32B-Preview) across six different tasks: AIME 2024, MATH 500, Codeforces, LiveCodeBench v5, MathVista, and MMMU. Performance is measured using different metrics for each task (Pass@1, EM, Percentile). Each task has a set of bars representing the performance of each model.
### Components/Axes
* **X-axis:** Represents the six different tasks: AIME 2024 (Pass@1), MATH 500 (EM), Codeforces (Percentile), LiveCodeBench v5 (24.12-25.2) (Pass@1), MathVista (Pass@1), MMMU (Pass@1).
* **Y-axis:** Represents the performance score, with a scale ranging from approximately 40 to 100. No explicit scale is provided, but values are displayed on top of each bar.
* **Legend (Top-Left):**
* Kimi k1.5 long-CoT (Blue)
* OpenAI o1 (Light Blue)
* OpenAI o1-mini (Gray)
* QVQ-72B-Preview (Dark Gray)
* QwQ-32B-Preview (Medium Gray)
### Detailed Analysis
The per-task performance of each model is summarized below (values marked with ~ are visually estimated from the chart):

| Task | Kimi k1.5 long-CoT | OpenAI o1 | OpenAI o1-mini | QVQ-72B-Preview | QwQ-32B-Preview |
| --- | --- | --- | --- | --- | --- |
| AIME 2024 (Pass@1) | 77.5 | 74.4 | 63.6 | 50 | ~50 |
| MATH 500 (EM) | 96.2 | 94.8 | 90.6 | ~90 | ~90 |
| Codeforces (Percentile) | 94 | 94 | 88 | 62 | ~62 |
| LiveCodeBench v5 (24.12-25.2) (Pass@1) | 67.2 | 62.5 | 53.1 | 40.6 | ~40 |
| MathVista (Pass@1) | 74.9 | 71.4 | 70 | ~70 | 73.3 |
| MMMU (Pass@1) | 70.3 | 70 | ~70 | ~70 | 77.3 |
### Key Observations
* Kimi k1.5 long-CoT consistently performs well across all tasks, often achieving the highest scores.
* OpenAI o1 and OpenAI o1-mini generally perform similarly, with OpenAI o1 slightly outperforming o1-mini in most cases.
* QVQ-72B-Preview and QwQ-32B-Preview consistently show the lowest performance across most tasks.
* MATH 500 shows the highest overall scores, while LiveCodeBench v5 shows the lowest.
* The performance gap between models is most significant on the AIME 2024 and LiveCodeBench v5 tasks.
### Interpretation
The data suggests that Kimi k1.5 long-CoT is the most capable model across the evaluated tasks, demonstrating strong performance in both mathematical reasoning (AIME 2024, MATH 500, MathVista) and code generation (Codeforces, LiveCodeBench v5). OpenAI's models exhibit solid performance but generally lag behind Kimi k1.5 long-CoT. QVQ-72B-Preview and QwQ-32B-Preview appear to be less effective on these benchmarks.
The varying performance across tasks indicates that different models excel in different areas. The high scores on MATH 500 suggest that the models are proficient at solving well-defined mathematical problems, while the lower scores on LiveCodeBench v5 suggest challenges in more complex code generation scenarios. The use of different metrics (Pass@1, EM, Percentile) makes direct comparison across tasks difficult, but the relative ranking of models within each task provides valuable insight.
</details>
Figure 1: Kimi k1.5 long-CoT results.
<details>
<summary>x2.png Details</summary>

### Visual Description
## Bar Chart: Large Language Model Performance on Various Benchmarks
### Overview
The image presents a bar chart comparing the performance of seven large language models (LLMs): Kimi k1.5 short-CoT, GPT-4o, Claude 3.5 Sonnet, Qwen2-VL, LLaMA-3.1 405B-Inst, DeepSeek V3, and Qwen2.5 72B-Inst, across a set of benchmarks categorized into Math, Code, Vision, and General reasoning tasks. Performance is measured as a percentage score, likely representing accuracy or pass rate.
### Components/Axes
* **X-axis:** Represents the benchmarks: AIME 2024 (Pass@1), MATH-500 (EM), LiveCodeBench v4 24.08-24.11 (Pass@1-CoT), MathVista (Pass@1), MMMU (Pass@1), MMLU (EM), IF-Eval (Prompt Strict), CLUEWSC (EM), C-Eval (EM).
* **Y-axis:** Represents the performance score, ranging from approximately 0 to 100 (percentage). No explicit Y-axis label is present, but it is implied.
* **Bars:** Each benchmark has seven bars, one for each LLM.
* **Legend:** Located at the top of the chart, the legend maps colors to each LLM:
* Kimi k1.5 short-CoT: Dark Blue
* GPT-4o: Blue
* Claude 3.5 Sonnet: Light Blue
* Qwen2-VL: Orange
* LLaMA-3.1 405B-Inst: Red
* DeepSeek V3: Grey
* Qwen2.5 72B-Inst: Purple
* **Benchmark Categories:** The chart is visually divided into four sections: Math, Code, Vision, and General.
### Detailed Analysis or Content Details
The per-benchmark scores (in %) are summarized below; an entry of 0 most likely indicates that no result is shown for that model on that benchmark.

| Category | Benchmark | Kimi k1.5 short-CoT | GPT-4o | Claude 3.5 Sonnet | Qwen2-VL | LLaMA-3.1 405B-Inst | DeepSeek V3 | Qwen2.5 72B-Inst |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Math | AIME 2024 (Pass@1) | 60.6 | 9.3 | 21.3 | 39.2 | 16 | 23.3 | 0 |
| Math | MATH-500 (EM) | 94.6 | 74.6 | 76.3 | 73.8 | 90.2 | 80 | 0 |
| Code | LiveCodeBench v4 24.08-24.11 (Pass@1-CoT) | 33.4 | 28.4 | 40.5 | 31.1 | 0 | 0 | 0 |
| Vision | MathVista (Pass@1) | 70.1 | 63.6 | 65.3 | 69.7 | 0 | 68.1 | 64.6 |
| Vision | MMMU (Pass@1) | 68 | 66.4 | 69.1 | 64.5 | 0 | 0 | 0 |
| General | MMLU (EM) | 87.4 | 83.2 | 86.8 | 85.3 | 88.5 | 86.5 | 84.1 |
| General | IF-Eval (Prompt Strict) | 87.2 | 84.3 | 86 | 84.1 | 85.6 | 86.6 | 84.1 |
| General | CLUEWSC (EM) | 91.7 | 85.4 | 90.4 | 84.7 | 0 | 0 | 0 |
| General | C-Eval (EM) | 86.8 | 79 | 76.7 | 81.5 | 86.1 | 61.5 | 88.1 |
### Key Observations
* **Kimi k1.5 short-CoT** consistently performs very well, often achieving the highest scores, particularly in Math and General reasoning tasks.
* **GPT-4o** shows moderate performance across all benchmarks, generally falling in the middle range.
* **Claude 3.5 Sonnet** demonstrates strong performance, often comparable to or slightly below Kimi k1.5 short-CoT.
* **Qwen2-VL** and **Qwen2.5 72B-Inst** show variable performance, with some strong results but also some entries of 0.
* **LLaMA-3.1 405B-Inst** and **DeepSeek V3** show 0 on several benchmarks, which most likely reflects missing results (for example, text-only models on vision benchmarks) rather than genuinely zero performance.
* There is a clear disparity in performance across different benchmarks. Some benchmarks (e.g., MATH-500, MMLU) show high scores for several models, while others (e.g., AIME 2024, LiveCodeBench) have significantly lower scores.
### Interpretation
The chart provides a comparative analysis of the capabilities of several LLMs across a diverse set of reasoning tasks. Kimi k1.5 short-CoT emerges as a leading performer, particularly in mathematical and general knowledge domains. The significant variation in performance across benchmarks suggests that LLM capabilities are highly task-specific. Entries of 0 for some models most likely indicate that no result is reported for that model on that benchmark rather than a true zero score. The data suggests that no single LLM excels in all areas, and the choice of model should be guided by the specific requirements of the application. The benchmarks used (AIME 2024, MATH-500, etc.) are standardized tests designed to evaluate different aspects of LLM capability, and the results provide valuable insight into the strengths and weaknesses of each model.
</details>
Figure 2: Kimi k1.5 short-CoT results.
1 Introduction
Language model pretraining with next token prediction has been studied in the context of the scaling law, where proportionally scaling model parameters and data sizes leads to continued improvement of intelligence [19, 14]. However, this approach is limited by the amount of available high-quality training data [50, 32]. In this report, we present the training recipe of Kimi k1.5, our latest multi-modal LLM trained with reinforcement learning (RL). The goal is to explore a possible new axis for continued scaling. Using RL with LLMs, the model learns to explore with rewards and is thus not limited to a pre-existing static dataset.
There are a few key ingredients in the design and training of k1.5:
- Long context scaling. We scale the context window of RL to 128k and observe continued improvement of performance with an increased context length. A key idea behind our approach is to use partial rollouts to improve training efficiency, i.e., sampling new trajectories by reusing a large chunk of previous trajectories and avoiding the cost of regenerating new trajectories from scratch. Our observation identifies the context length as a key dimension of the continued scaling of RL with LLMs.
- Improved policy optimization. We derive a formulation of RL with long-CoT and employ a variant of online mirror descent for robust policy optimization. This algorithm is further improved by our effective sampling strategy, length penalty, and optimization of the data recipe.
- Simple framework. Long context scaling, combined with the improved policy optimization methods, establishes a simple RL framework for learning with LLMs. Since we are able to scale the context length, the learned CoTs exhibit the properties of planning, reflection, and correction. An increased context length has the effect of increasing the number of search steps. As a result, we show that strong performance can be achieved without relying on more complex techniques such as Monte Carlo tree search, value functions, and process reward models.
- Multimodality. Our model is jointly trained on text and vision data, which gives it the capability to reason jointly over the two modalities.
Moreover, we present effective long2short methods that use long-CoT techniques to improve short-CoT models. Specifically, our approaches include applying a length penalty to long-CoT generations, as well as model merging.
Our long-CoT version achieves state-of-the-art reasoning performance across multiple benchmarks and modalities (e.g., 77.5 on AIME, 96.2 on MATH 500, 94th percentile on Codeforces, 74.9 on MathVista), matching OpenAI's o1. Our model also achieves state-of-the-art short-CoT reasoning results (e.g., 60.8 on AIME, 94.6 on MATH 500, 47.3 on LiveCodeBench), outperforming existing short-CoT models such as GPT-4o and Claude Sonnet 3.5 by a large margin (up to +550%). Results are shown in Figures 1 and 2.
2 Approach: Reinforcement Learning with LLMs
The development of Kimi k1.5 consists of several stages: pretraining, vanilla supervised fine-tuning (SFT), long-CoT supervised fine-tuning, and reinforcement learning (RL). This report focuses on RL, beginning with an overview of the RL prompt set curation (Section 2.1) and long-CoT supervised fine-tuning (Section 2.2), followed by an in-depth discussion of RL training strategies in Section 2.3. Additional details on pretraining and vanilla supervised fine-tuning can be found in Section 2.5.
2.1 RL Prompt Set Curation
Through our preliminary experiments, we found that the quality and diversity of the RL prompt set play a critical role in ensuring the effectiveness of reinforcement learning. A well-constructed prompt set not only guides the model toward robust reasoning but also mitigates the risk of reward hacking and overfitting to superficial patterns. Specifically, three key properties define a high-quality RL prompt set:
- Diverse Coverage: Prompts should span a wide array of disciplines, such as STEM, coding, and general reasoning, to enhance the model's adaptability and ensure broad applicability across different domains.
- Balanced Difficulty: The prompt set should include a well-distributed range of easy, moderate, and difficult questions to facilitate gradual learning and prevent overfitting to specific complexity levels.
- Accurate Evaluability: Prompts should allow objective and reliable assessment by verifiers, ensuring that model performance is measured based on correct reasoning rather than superficial patterns or random guessing.
To achieve diverse coverage in the prompt set, we employ automatic filters to select questions that require rich reasoning and are straightforward to evaluate. Our dataset includes problems from various domains, such as STEM fields, competitions, and general reasoning tasks, incorporating both text-only and image-text question-answering data. Furthermore, we developed a tagging system to categorize prompts by domain and discipline, ensuring balanced representation across different subject areas [24, 27].
We adopt a model-based approach that leverages the model's own capacity to adaptively assess the difficulty of each prompt. Specifically, for every prompt, an SFT model generates answers ten times using a relatively high sampling temperature. The pass rate is then calculated and used as a proxy for the prompt's difficulty: the lower the pass rate, the higher the difficulty. This approach allows difficulty evaluation to be aligned with the model's intrinsic capabilities, making it highly effective for RL training. By leveraging this method, we can prefilter most trivial cases and easily explore different sampling strategies during RL training.
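As a concrete illustration of this difficulty-estimation step, the sketch below computes the pass rate of an SFT model over ten high-temperature samples; `generate_answer` and `is_correct` are hypothetical stand-ins for the model's sampling routine and the answer verifier, which the report does not specify.

```python
def estimate_difficulty(prompt, reference_answer, generate_answer, is_correct,
                        num_samples=10, temperature=1.0):
    """Use 1 - pass rate as a proxy for prompt difficulty.

    `generate_answer` and `is_correct` are placeholders for the SFT model's
    sampler and the answer verifier.
    """
    passes = sum(
        is_correct(generate_answer(prompt, temperature=temperature), reference_answer)
        for _ in range(num_samples)
    )
    pass_rate = passes / num_samples
    return 1.0 - pass_rate  # lower pass rate -> higher difficulty
```

Prompts whose estimated difficulty is near zero (i.e., the model already solves them almost every time) can then be pre-filtered before RL training.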
To avoid potential reward hacking [9, 36], we need to ensure that both the reasoning process and the final answer of each prompt can be accurately verified. Empirical observations reveal that some complex reasoning problems may have relatively simple and easily guessable answers, leading to false positive verification, where the model reaches the correct answer through an incorrect reasoning process. To address this issue, we exclude questions that are prone to such errors, such as multiple-choice, true/false, and proof-based questions. Furthermore, for general question-answering tasks, we propose a simple yet effective method to identify and remove easy-to-hack prompts. Specifically, we prompt a model to guess potential answers without any CoT reasoning steps. If the model predicts the correct answer within $N$ attempts, the prompt is considered too easy to hack and removed. We found that setting $N=8$ removes the majority of easy-to-hack prompts. Developing more advanced verification models remains an open direction for future research.
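The easy-to-hack filter described above admits a similarly small sketch; `guess_answer` is a hypothetical routine that prompts the model to answer directly without any CoT steps, and `is_correct` is the same verifier as before.

```python
def is_easy_to_hack(prompt, reference_answer, guess_answer, is_correct, n_attempts=8):
    """Return True if the model guesses the correct answer without CoT within n attempts."""
    return any(
        is_correct(guess_answer(prompt), reference_answer)
        for _ in range(n_attempts)
    )


def filter_easy_to_hack(prompts, guess_answer, is_correct):
    """Keep only prompts that cannot be answered correctly by direct guessing."""
    return [
        p for p in prompts
        if not is_easy_to_hack(p["prompt"], p["answer"], guess_answer, is_correct)
    ]
```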
2.2 Long-CoT Supervised Fine-Tuning
With the refined RL prompt set, we employ prompt engineering to construct a small yet high-quality long-CoT warmup dataset, containing accurately verified reasoning paths for both text and image inputs. This approach resembles rejection sampling (RS) but focuses on generating long-CoT reasoning paths through prompt engineering. The resulting warmup dataset is designed to encapsulate key cognitive processes that are fundamental to human-like reasoning, such as planning, where the model systematically outlines steps before execution; evaluation, involving critical assessment of intermediate steps; reflection, enabling the model to reconsider and refine its approach; and exploration, encouraging consideration of alternative solutions. By performing a lightweight SFT on this warm-up dataset, we effectively prime the model to internalize these reasoning strategies. As a result, the fine-tuned long-CoT model demonstrates improved capability in generating more detailed and logically coherent responses, which enhances its performance across diverse reasoning tasks.
2.3 Reinforcement Learning
2.3.1 Problem Setting
Given a training dataset $\mathcal{D}=\{(x_{i},y^{*}_{i})\}_{i=1}^{n}$ of problems $x_{i}$ and corresponding ground truth answers $y^{*}_{i}$, our goal is to train a policy model $\pi_{\theta}$ to accurately solve test problems. In the context of complex reasoning, the mapping of problem $x$ to solution $y$ is non-trivial. To tackle this challenge, the chain of thought (CoT) method proposes to use a sequence of intermediate steps $z=(z_{1},z_{2},...,z_{m})$ to bridge $x$ and $y$, where each $z_{i}$ is a coherent sequence of tokens that acts as a significant intermediate step toward solving the problem [54]. When solving problem $x$, thoughts $z_{t}\sim\pi_{\theta}(\cdot|x,z_{1},...,z_{t-1})$ are auto-regressively sampled, followed by the final answer $y\sim\pi_{\theta}(\cdot|x,z_{1},...,z_{m})$. We use $y,z\sim\pi_{\theta}$ to denote this sampling procedure. Note that both the thoughts and the final answer are sampled as language sequences.
To further enhance the model's reasoning capabilities, planning algorithms are employed to explore various thought processes, generating improved CoT at inference time [58, 55, 44]. The core insight of these approaches is the explicit construction of a search tree of thoughts guided by value estimations. This allows the model to explore diverse continuations of a thought process or backtrack to investigate new directions when encountering dead ends. In more detail, let $\mathcal{T}$ be a search tree where each node represents a partial solution $s=(x,z_{1:|s|})$. Here $s$ consists of the problem $x$ and a sequence of thoughts $z_{1:|s|}=(z_{1},...,z_{|s|})$ leading up to that node, with $|s|$ denoting the number of thoughts in the sequence. The planning algorithm uses a critic model $v$ to provide feedback $v(x,z_{1:|s|})$, which helps evaluate the current progress towards solving the problem and identify any errors in the existing partial solution. We note that the feedback can be provided by either a discriminative score or a language sequence [61]. Guided by the feedback for all $s\in\mathcal{T}$, the planning algorithm selects the most promising node for expansion, thereby growing the search tree. The above process repeats iteratively until a full solution is derived.
We can also approach planning algorithms from an algorithmic perspective. Given the past search history available at the $t$-th iteration, $(s_{1},v(s_{1}),...,s_{t-1},v(s_{t-1}))$, a planning algorithm $\mathcal{A}$ iteratively determines the next search direction $\mathcal{A}(s_{t}|s_{1},v(s_{1}),...,s_{t-1},v(s_{t-1}))$ and provides feedback for the current search progress $\mathcal{A}(v(s_{t})|s_{1},v(s_{1}),...,s_{t})$. Since both thoughts and feedback can be viewed as intermediate reasoning steps, and both can be represented as sequences of language tokens, we use $z$ to replace $s$ and $v$ to simplify the notation. Accordingly, we view a planning algorithm as a mapping that directly acts on a sequence of reasoning steps $\mathcal{A}(\cdot|z_{1},z_{2},...)$. In this framework, all information stored in the search tree used by the planning algorithm is flattened into the full context provided to the algorithm. This provides an intriguing perspective on generating high-quality CoT: rather than explicitly constructing a search tree and implementing a planning algorithm, we could potentially train a model to approximate this process. Here, the number of thoughts (i.e., language tokens) serves as an analogy to the computational budget traditionally allocated to planning algorithms. Recent advancements in long context windows facilitate seamless scalability during both the training and testing phases. If feasible, this method enables the model to run an implicit search over the reasoning space directly via auto-regressive predictions. Consequently, the model not only learns to solve a set of training problems but also develops the ability to tackle individual problems effectively, leading to improved generalization to unseen test problems.
We thus consider training the model to generate CoT with reinforcement learning (RL) [34]. Let $r$ be a reward model that judges the correctness of the proposed answer $y$ for the given problem $x$ based on the ground truth $y^{*}$, by assigning a value $r(x,y,y^{*})\in\{0,1\}$. For verifiable problems, the reward is directly determined by predefined criteria or rules. For example, in coding problems, we assess whether the answer passes the test cases. For problems with free-form ground truth, we train a reward model $r(x,y,y^{*})$ that predicts if the answer matches the ground truth. Given a problem $x$, the model $\pi_{\theta}$ generates a CoT and the final answer through the sampling procedure $z\sim\pi_{\theta}(\cdot|x)$, $y\sim\pi_{\theta}(\cdot|x,z)$. The quality of the generated CoT is evaluated by whether it leads to a correct final answer. In summary, we consider the following objective to optimize the policy
$$
\displaystyle\max_{\theta}\mathbb{E}_{(x,y^{*})\sim\mathcal{D},(y,z)\sim\pi_{\theta}}\left[r(x,y,y^{*})\right]\,. \tag{1}
$$
By scaling up RL training, we aim to train a model that harnesses the strengths of both simple prompt-based CoT and planning-augmented CoT. The model still auto-regressively samples language sequences during inference, thereby circumventing the need for the complex parallelization required by advanced planning algorithms during deployment. However, a key distinction from simple prompt-based methods is that the model should not merely follow a series of reasoning steps. Instead, it should also learn critical planning skills, including error identification, backtracking, and solution refinement, by leveraging the entire set of explored thoughts as contextual information.
2.3.2 Policy Optimization
We apply a variant of online policy mirror descent as our training algorithm [1, 31, 48]. The algorithm proceeds iteratively. At the $i$-th iteration, we use the current model $\pi_{\theta_{i}}$ as a reference model and optimize the following relative entropy regularized policy optimization problem,
$$
\displaystyle\max_{\theta}\mathbb{E}_{(x,y^{*})\sim\mathcal{D}}\left[\mathbb{E}_{(y,z)\sim\pi_{\theta}}\left[r(x,y,y^{*})\right]-\tau\mathrm{KL}(\pi_{\theta}(x)||\pi_{\theta_{i}}(x))\right]\,, \tag{2}
$$
where $\tau>0$ is a parameter controlling the degree of regularization. This objective has a closed form solution
$$
\pi^{*}(y,z|x)=\pi_{\theta_{i}}(y,z|x)\exp(r(x,y,y^{*})/\tau)/Z\,.
$$
Here $Z=\sum_{y^{\prime},z^{\prime}}\pi_{\theta_{i}}(y^{\prime},z^{\prime}|x)\exp(r(x,y^{\prime},y^{*})/\tau)$ is the normalization factor. Taking the logarithm of both sides, we have that for any $(y,z)$ the following constraint is satisfied, which allows us to leverage off-policy data during optimization:
$$
r(x,y,y^{*})-\tau\log Z=\tau\log\frac{\pi^{*}(y,z|x)}{\pi_{\theta_{i}}(y,z|x)}\,.
$$
This motivates the following surrogate loss
$$
L(\theta)=\mathbb{E}_{(x,y^{*})\sim\mathcal{D}}\left[\mathbb{E}_{(y,z)\sim\pi_{\theta_{i}}}\left[\left(r(x,y,y^{*})-\tau\log Z-\tau\log\frac{\pi_{\theta}(y,z|x)}{\pi_{\theta_{i}}(y,z|x)}\right)^{2}\right]\right]\,.
$$
To approximate $\tau\log Z$, we use samples $(y_{1},z_{1}),...,(y_{k},z_{k})\sim\pi_{\theta_{i}}$: $\tau\log Z\approx\tau\log\frac{1}{k}\sum_{j=1}^{k}\exp(r(x,y_{j},y^{*})/\tau)$. We also find that using the empirical mean of sampled rewards $\overline{r}=\mathrm{mean}(r(x,y_{1},y^{*}),...,r(x,y_{k},y^{*}))$ yields effective practical results. This is reasonable since $\tau\log Z$ approaches the expected reward under $\pi_{\theta_{i}}$ as $\tau\to\infty$. Finally, we conclude our learning algorithm by taking the gradient of the surrogate loss. For each problem $x$, $k$ responses are sampled using the reference policy $\pi_{\theta_{i}}$, and the gradient is given by
$$
\displaystyle\frac{1}{k}\sum_{j=1}^{k}\left(\nabla_{\theta}\log\pi_{\theta}(y_{j},z_{j}|x)(r(x,y_{j},y^{*})-\overline{r})-\frac{\tau}{2}\nabla_{\theta}\left(\log\frac{\pi_{\theta}(y_{j},z_{j}|x)}{{\pi}_{\theta_{i}}(y_{j},z_{j}|x)}\right)^{2}\right)\,. \tag{3}
$$
To those familiar with policy gradient methods, this gradient resembles the policy gradient of (2) using the mean of sampled rewards as the baseline [20, 2]. The main differences are that the responses are sampled from $\pi_{\theta_{i}}$ rather than on-policy, and an $l_{2}$ -regularization is applied. Thus we could see this as the natural extension of a usual on-policy regularized policy gradient algorithm to the off-policy case [33]. We sample a batch of problems from $\mathcal{D}$ and update the parameters to $\theta_{i+1}$ , which subsequently serves as the reference policy for the next iteration. Since each iteration considers a different optimization problem due to the changing reference policy, we also reset the optimizer at the start of each iteration.
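The following PyTorch-style sketch shows one way the surrogate objective and the gradient in Eq. (3) could be realized for a single problem, assuming the sequence-level log-probabilities of the $k$ sampled responses under the current policy and the frozen reference policy are already available; tensor names and the overall loss scaling are illustrative assumptions, not the exact implementation.

```python
import torch

def mirror_descent_surrogate_loss(logp_theta, logp_ref, rewards, tau=0.5,
                                  use_mean_baseline=True):
    """Off-policy surrogate loss for k responses sampled from the reference policy.

    logp_theta: (k,) log pi_theta(y_j, z_j | x), requires grad
    logp_ref:   (k,) log pi_{theta_i}(y_j, z_j | x), treated as a constant
    rewards:    (k,) scalar rewards r(x, y_j, y*)
    """
    k = rewards.numel()
    if use_mean_baseline:
        # empirical mean of sampled rewards as a stand-in for tau * log Z
        baseline = rewards.mean()
    else:
        # tau * log( (1/k) * sum_j exp(r_j / tau) )
        baseline = tau * (torch.logsumexp(rewards / tau, dim=0)
                          - torch.log(torch.tensor(float(k))))
    log_ratio = logp_theta - logp_ref.detach()
    residual = rewards - baseline - tau * log_ratio
    # Minimizing this squared residual reproduces the gradient in Eq. (3)
    # up to a constant factor that can be absorbed into the learning rate.
    return 0.5 * residual.pow(2).mean()
```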
We exclude the value network in our training system, a design choice that has also been exploited in previous studies [2]. While this significantly improves training efficiency, we also hypothesize that the conventional use of value functions for credit assignment in classical RL may not be suitable for our context. Consider a scenario where the model has generated a partial CoT $(z_{1},z_{2},...,z_{t})$ and there are two potential next reasoning steps: $z_{t+1}$ and $z^{\prime}_{t+1}$. Assume that $z_{t+1}$ directly leads to the correct answer, while $z^{\prime}_{t+1}$ contains some errors. If an oracle value function were accessible, it would indicate that $z_{t+1}$ has a higher value compared to $z^{\prime}_{t+1}$. According to the standard credit assignment principle, selecting $z^{\prime}_{t+1}$ would be penalized as it has a negative advantage relative to the current policy. However, exploring $z^{\prime}_{t+1}$ is extremely valuable for training the model to generate long CoT. By using the correctness of the final answer derived from a long CoT as the reward signal, the model can learn the pattern of trial and error from taking $z^{\prime}_{t+1}$, as long as it successfully recovers and reaches the correct answer. The key takeaway from this example is that we should encourage the model to explore diverse reasoning paths to enhance its capability in solving complex problems. This exploratory approach generates a wealth of experience that supports the development of critical planning skills. Our primary goal is not confined to attaining high accuracy on training problems but focuses on equipping the model with effective problem-solving strategies, ultimately improving its performance on test problems.
2.3.3 Length Penalty
We observe an overthinking phenomenon in which the model's response length increases significantly during RL training. Although this leads to better performance, an excessively lengthy reasoning process is costly during training and inference, and overthinking is often not preferred by humans. To address this issue, we introduce a length reward to restrain the rapid growth of token length, thereby improving the model's token efficiency. Given $k$ sampled responses $(y_{1},z_{1}),...,(y_{k},z_{k})$ for problem $x$ with true answer $y^{*}$, let $\mathrm{len}(i)$ be the length of $(y_{i},z_{i})$, $\mathrm{min\_len}=\min_{i}\mathrm{len}(i)$ and $\mathrm{max\_len}=\max_{i}\mathrm{len}(i)$. If $\mathrm{max\_len}=\mathrm{min\_len}$, we set the length reward to zero for all responses, as they have the same length. Otherwise the length reward is given by
$$
\mathrm{len\_reward}(i)=\begin{cases}\lambda & \text{if } r(x,y_{i},y^{*})=1\\ \min(0,\lambda) & \text{if } r(x,y_{i},y^{*})=0\end{cases},\quad\text{where }\lambda=0.5-\frac{\mathrm{len}(i)-\mathrm{min\_len}}{\mathrm{max\_len}-\mathrm{min\_len}}\,.
$$
In essence, we promote shorter responses and penalize longer responses among correct ones, while explicitly penalizing long responses with incorrect answers. This length-based reward is then added to the original reward with a weighting parameter.
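A direct transcription of the rule above (per problem, over its $k$ sampled responses); the weighting against the original reward is applied outside this function.

```python
def length_rewards(lengths, corrects):
    """Length rewards for k sampled responses to one problem.

    lengths:  token lengths len(i) of each response (y_i, z_i)
    corrects: booleans indicating whether r(x, y_i, y*) == 1
    """
    min_len, max_len = min(lengths), max(lengths)
    if max_len == min_len:
        return [0.0] * len(lengths)  # identical lengths: no length reward
    rewards = []
    for length, correct in zip(lengths, corrects):
        lam = 0.5 - (length - min_len) / (max_len - min_len)
        rewards.append(lam if correct else min(0.0, lam))
    return rewards
```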
In our preliminary experiments, we found that the length penalty can slow down training during the initial phases. To alleviate this issue, we propose to gradually warm up the length penalty during training. Specifically, we employ standard policy optimization without a length penalty, followed by a constant length penalty for the rest of training.
2.3.4 Sampling Strategies
Although RL algorithms themselves have relatively good sampling properties (with more difficult problems providing larger gradients), their training efficiency is limited. Consequently, some well-defined prior sampling methods can yield potentially greater performance gains. We exploit multiple signals to further improve the sampling strategy. First, the RL training data we collect naturally come with different difficulty labels. For example, a math competition problem is more difficult than a primary school math problem. Second, because the RL training process samples the same problem multiple times, we can also track the success rate for each individual problem as a metric of difficulty. We propose two sampling methods to utilize these priors to improve training efficiency.
Curriculum Sampling
We start by training on easier tasks and gradually progress to more challenging ones. Since the initial RL model has limited performance, spending a restricted computation budget on very hard problems often yields few correct samples, resulting in lower training efficiency. Meanwhile, our collected data naturally includes grade and difficulty labels, making difficulty-based sampling an intuitive and effective way to improve training efficiency.
Prioritized Sampling
In addition to curriculum sampling, we use a prioritized sampling strategy to focus on problems where the model underperforms. We track the success rate $s_{i}$ for each problem $i$ and sample problems proportionally to $1-s_{i}$, so that problems with lower success rates receive higher sampling probabilities. This directs the model's efforts toward its weakest areas, leading to faster learning and better overall performance.
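A minimal sketch of this prioritized sampling rule, assuming the per-problem success rates $s_{i}$ are tracked in a list alongside the problems:

```python
import random

def sample_problem_index(success_rates):
    """Sample a problem index with probability proportional to 1 - s_i."""
    weights = [1.0 - s for s in success_rates]
    if sum(weights) == 0.0:
        weights = [1.0] * len(success_rates)  # fall back to uniform if everything is solved
    return random.choices(range(len(success_rates)), weights=weights, k=1)[0]
```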
2.3.5 More Details on Training Recipe
Test Case Generation for Coding
Since test cases are not available for many coding problems from the web, we design a method to automatically generate test cases that serve as a reward to train our model with RL. Our focus is primarily on problems that do not require a special judge. We also assume that ground truth solutions are available for these problems so that we can leverage the solutions to generate higher quality test cases.
We utilize the widely recognized test case generation library, CYaRon https://github.com/luogu-dev/cyaron, to enhance our approach. We employ our base Kimi k1.5 to generate test cases based on problem statements. The usage statement of CYaRon and the problem description are provided as the input to the generator. For each problem, we first use the generator to produce 50 test cases and also randomly sample 10 ground truth submissions for each test case. We run the test cases against the submissions. A test case is deemed valid if at least 7 out of 10 submissions yield matching results. After this round of filtering, we obtain a set of selected test cases. A problem and its associated selected test cases are added to our training set if at least 9 out of 10 submissions pass the entire set of selected test cases.
In terms of statistics, from a sample of 1,000 online contest problems, approximately 614 do not require a special judge. We developed 463 test case generators that produced at least 40 valid test cases, leading to the inclusion of 323 problems in our training set.
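A sketch of the test-case filtering logic described above; `generate_case` and `run_submission` are hypothetical stand-ins for the CYaRon-based generator and the code execution service, and for simplicity a single set of ten ground-truth submissions is sampled per problem.

```python
import random

def build_test_cases(problem, generate_case, run_submission, ground_truth_submissions,
                     num_cases=50, case_threshold=7, problem_threshold=9):
    """Return selected (input, expected output) pairs, or None if the problem is rejected."""
    submissions = random.sample(ground_truth_submissions, 10)
    candidate_cases = [generate_case(problem) for _ in range(num_cases)]

    # A case is valid if at least 7 of the 10 submissions agree on its output
    # (outputs are assumed to be comparable strings).
    selected = []
    for case in candidate_cases:
        outputs = [run_submission(code, case) for code in submissions]
        majority = max(set(outputs), key=outputs.count)
        if outputs.count(majority) >= case_threshold:
            selected.append((case, majority))
    if not selected:
        return None

    # Keep the problem only if at least 9 of the 10 submissions pass every selected case.
    def passes_all(code):
        return all(run_submission(code, case) == expected for case, expected in selected)

    if sum(passes_all(code) for code in submissions) >= problem_threshold:
        return selected
    return None
```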
Reward Modeling for Math
One challenge in evaluating math solutions is that different written forms can represent the same underlying answer. For instance, $a^{2}-4$ and $(a+2)(a-2)$ may both be valid solutions to the same problem. We adopted two methods to improve the reward model's scoring accuracy:
1. Classic RM: Drawing inspiration from the InstructGPT [35] methodology, we implemented a value-head based reward model and collected approximately 800k data points for fine-tuning. The model ultimately takes as input the "question," the "reference answer," and the "response," and outputs a single scalar that indicates whether the response is correct.
2. Chain-of-Thought RM: Recent research [3, 30] suggests that reward models augmented with chain-of-thought (CoT) reasoning can significantly outperform classic approaches, particularly on tasks where nuanced correctness criteria matter, such as mathematics. Therefore, we collected an equally large dataset of about 800k CoT-labeled examples to fine-tune the Kimi model. Building on the same inputs as the Classic RM, the chain-of-thought approach explicitly generates a step-by-step reasoning process before providing a final correctness judgment in JSON format, enabling more robust and interpretable reward signals.
During our manual spot checks, the Classic RM achieved an accuracy of approximately 84.4%, while the Chain-of-Thought RM reached 98.5% accuracy. In the RL training process, we adopted the Chain-of-Thought RM to ensure more correct feedback.
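To make the Chain-of-Thought RM usage concrete, here is a hypothetical sketch of querying such a model and parsing a JSON verdict; the prompt template, JSON schema, and `generate` function are illustrative assumptions, not the exact ones used for Kimi k1.5.

```python
import json

def cot_reward(question, reference_answer, response, generate):
    """Score a response with a chain-of-thought reward model that emits a JSON verdict."""
    prompt = (
        "Question:\n{q}\n\nReference answer:\n{a}\n\nResponse:\n{r}\n\n"
        "Reason step by step about whether the response is correct, then output "
        'a JSON object of the form {{"correct": true}} or {{"correct": false}}.'
    ).format(q=question, a=reference_answer, r=response)
    output = generate(prompt)
    # Extract the trailing JSON object produced after the reasoning steps.
    verdict = json.loads(output[output.rindex("{"):output.rindex("}") + 1])
    return 1.0 if verdict.get("correct") else 0.0
```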
Vision Data
To improve the model's real-world image reasoning capabilities and to achieve a more effective alignment between visual inputs and large language models (LLMs), our vision reinforcement learning (Vision RL) data is primarily sourced from three distinct categories: Real-world data, Synthetic visual reasoning data, and Text-rendered data.
1. The real-world data encompasses a range of science questions across various grade levels that require graphical comprehension and reasoning, location-guessing tasks that necessitate visual perception and inference, and data analysis that involves understanding complex charts, among other types of data. These datasets improve the model's ability to perform visual reasoning in real-world scenarios.
2. Synthetic visual reasoning data is artificially generated, including procedurally created images and scenes aimed at improving specific visual reasoning skills, such as understanding spatial relationships, geometric patterns, and object interactions. These synthetic datasets offer a controlled environment for testing the model's visual reasoning capabilities and provide an endless supply of training examples.
3. Text-rendered data is created by converting textual content into visual format, enabling the model to maintain consistency when handling text-based queries across different modalities. By transforming text documents, code snippets, and structured data into images, we ensure the model provides consistent responses regardless of whether the input is pure text or text rendered as images (like screenshots or photos). This also helps to enhance the model's capability when dealing with text-heavy images.
Each type of data is essential in building a comprehensive visual language model that can effectively manage a wide range of real-world applications while ensuring consistent performance across various input modalities.
2.4 Long2short: Context Compression for Short-CoT Models
Though long-CoT models achieve strong performance, they consume more test-time tokens than standard short-CoT LLMs. However, it is possible to transfer the thinking priors from long-CoT models to short-CoT models so that performance can be improved even with limited test-time token budgets. We present several approaches for this long2short problem, including model merging [57], shortest rejection sampling, DPO [40], and long2short RL. Detailed descriptions of these methods are provided below:
Model Merging
Model merging has been found to be useful in maintaining generalization ability. We also discovered its effectiveness in improving token efficiency when merging a long-CoT model and a short-CoT model. This approach combines a long-CoT model with a shorter model to obtain a new one without training. Specifically, we merge the two models by simply averaging their weights.
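A minimal sketch of merging by weight averaging, assuming the two checkpoints share an identical architecture and parameter names; the 0.5/0.5 mix corresponds to the simple average described above, and other ratios are a natural extension not claimed by the report.

```python
def merge_by_weight_average(long_cot_state, short_cot_state, alpha=0.5):
    """Average the parameters of a long-CoT and a short-CoT checkpoint."""
    return {
        name: alpha * long_cot_state[name] + (1.0 - alpha) * short_cot_state[name]
        for name in long_cot_state
    }
```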
Shortest Rejection Sampling
We observed that our model generates responses with a large length variation for the same problem. Based on this, we designed the Shortest Rejection Sampling method. This method samples the same question $n$ times (in our experiments, $n=8$ ) and selects the shortest correct response for supervised fine-tuning.
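The selection rule admits a short sketch; `generate` and `is_correct` are hypothetical stand-ins for the long-CoT model's sampler and the answer verifier.

```python
def shortest_correct_response(prompt, reference_answer, generate, is_correct, n=8):
    """Sample n responses and return the shortest correct one (or None) for SFT."""
    candidates = [generate(prompt) for _ in range(n)]
    correct = [c for c in candidates if is_correct(c, reference_answer)]
    return min(correct, key=len) if correct else None
```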
DPO
Similar to Shortest Rejection Sampling, we utilize the long-CoT model to generate multiple response samples. The shortest correct solution is selected as the positive sample, while longer responses are treated as negative samples, including both wrong longer responses and correct longer responses (1.5 times longer than the chosen positive sample). These positive-negative pairs form the pairwise preference data used for DPO training.
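A sketch of the pair construction under the rules above; the exact length criterion applied to incorrect responses is an assumption, since the text only states that longer responses are treated as negatives.

```python
def build_long2short_dpo_pairs(prompt, reference_answer, candidates, is_correct,
                               length_ratio=1.5):
    """Build (prompt, chosen, rejected) tuples for long2short DPO training."""
    correct = [c for c in candidates if is_correct(c, reference_answer)]
    if not correct:
        return []
    chosen = min(correct, key=len)
    # Negatives: wrong responses that are longer than the chosen one ...
    rejected = [c for c in candidates
                if not is_correct(c, reference_answer) and len(c) > len(chosen)]
    # ... plus correct responses at least 1.5x longer than the chosen one.
    rejected += [c for c in correct
                 if c is not chosen and len(c) >= length_ratio * len(chosen)]
    return [(prompt, chosen, r) for r in rejected]
```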
Long2short RL
After a standard RL training phase, we select a model that offers the best balance between performance and token efficiency to serve as the base model, and conduct a separate long2short RL training phase. In this second phase, we apply the length penalty introduced in Section 2.3.3 and significantly reduce the maximum rollout length to further penalize responses that exceed the desired length, even if they may be correct.
2.5 Other Training Details
2.5.1 Pretraining
The Kimi k1.5 base model is trained on a diverse, high-quality multimodal corpus. The language data covers five domains: English, Chinese, Code, Mathematics Reasoning, and Knowledge. Multimodal data, including Captioning, Image-text Interleaving, OCR, Knowledge, and QA datasets, enables our model to acquire vision-language capabilities. Rigorous quality control ensures relevance, diversity, and balance in the overall pretrain dataset. Our pretraining proceeds in three stages: (1) Vision-language pretraining, where a strong language foundation is established, followed by gradual multimodal integration; (2) Cooldown, which consolidates capabilities using curated and synthetic data, particularly for reasoning and knowledge-based tasks; and (3) Long-context activation, extending sequence processing to 131,072 tokens. More details regarding our pretraining efforts can be found in Appendix B.
2.5.2 Vanilla Supervised Finetuning
We create the vanilla SFT corpus covering multiple domains. For non-reasoning tasks, including question-answering, writing, and text processing, we initially construct a seed dataset through human annotation. This seed dataset is used to train a seed model. Subsequently, we collect a diverse set of prompts and employ the seed model to generate multiple responses to each prompt. Annotators then rank these responses and refine the top-ranked response to produce the final version. For reasoning tasks such as math and coding problems, where rule-based and reward-modeling-based verification is more accurate and efficient than human judgment, we utilize rejection sampling to expand the SFT dataset.
Our vanilla SFT dataset comprises approximately 1 million text examples. Specifically, 500k examples are for general question answering, 200k for coding, 200k for math and science, 5k for creative writing, and 20k for long-context tasks such as summarization, doc-qa, translation, and writing. In addition, we construct 1 million text-vision examples encompassing various categories including chart interpretation, OCR, image-grounded conversations, visual coding, visual reasoning, and math/science problems with visual aids.
We first train the model at a sequence length of 32k tokens for 1 epoch, followed by another epoch at a sequence length of 128k tokens. In the first stage (32k), the learning rate decays from $2\times 10^{-5}$ to $2\times 10^{-6}$, before it re-warms up to $1\times 10^{-5}$ in the second stage (128k) and finally decays to $1\times 10^{-6}$. To improve training efficiency, we pack multiple training examples into each single training sequence.
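A sketch of the two-stage SFT learning-rate schedule described above; the linear decay shape and the absence of an explicit warmup ramp in the second stage are assumptions, as the report only gives the endpoint values.

```python
def sft_learning_rate(step, stage1_steps, stage2_steps):
    """Two-stage schedule: 2e-5 -> 2e-6 at 32k context, then 1e-5 -> 1e-6 at 128k."""
    if step < stage1_steps:
        frac = step / max(1, stage1_steps)
        return 2e-5 + frac * (2e-6 - 2e-5)
    frac = min(1.0, (step - stage1_steps) / max(1, stage2_steps))
    return 1e-5 + frac * (1e-6 - 1e-5)
```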
2.6 RL Infrastructure
<details>
<summary>x3.png Details</summary>

### Visual Description
## Diagram: Reinforcement Learning System Architecture
### Overview
The image depicts a diagram of a reinforcement learning system architecture, illustrating the interaction between different components involved in training an agent. The diagram shows a cyclical flow of data and weights between Rollout Workers, Trainer Workers, a Master component, Reward Models, and a Replay Buffer. Arrows indicate the direction of data and weight flow.
### Components/Axes
The diagram consists of the following components:
* **Rollout Workers:** Represented by a rounded rectangle with multiple instances, suggesting parallel execution.
* **Trainer Workers:** A rounded rectangle containing two sub-components: "Policy Model" and "Reference Model".
* **Master:** A central rounded rectangle coordinating the process.
* **Reward Models:** A rounded rectangle containing four sub-components: "Code", "Math", "K-12", and "Vision".
* **Replay Buffer:** A rounded rectangle serving as a data storage.
* **Arrows:** Indicate the flow of data and weights. Two types of arrows are used: solid arrows with a filled arrowhead for weight flow, and arrows with an open arrowhead for data flow.
* **Labels:** "weight", "gradient update", "rollout trajectories", "eval request", "training data".
* **Legend:** Located in the bottom-right corner, defining the arrow types: "weight flow" (solid arrow) and "data flow" (open arrow).
### Detailed Analysis or Content Details
The diagram illustrates the following interactions:
1. **Rollout Workers to Reward Models:** Rollout trajectories are sent from the Rollout Workers to the Reward Models. This is a data flow.
2. **Rollout Workers to Trainer Workers:** Weights are sent from the Rollout Workers to the Trainer Workers. This is a weight flow.
3. **Trainer Workers to Master:** Training data is sent from the Trainer Workers to the Master. This is a data flow.
4. **Master to Replay Buffer:** Data is sent from the Master to the Replay Buffer. This is a data flow.
5. **Replay Buffer to Master:** An eval request is sent from the Replay Buffer to the Master. This is a data flow.
6. **Master to Trainer Workers:** A gradient update is sent from the Master to the Trainer Workers. This is a weight flow.
7. **Trainer Workers:** Contain a "Policy Model" and a "Reference Model".
8. **Reward Models:** Contain "Code", "Math", "K-12", and "Vision" models.
### Key Observations
The diagram highlights a distributed reinforcement learning setup. The use of multiple Rollout Workers suggests parallel environment interaction for faster data collection. The separation of Policy and Reference Models within the Trainer Workers indicates a potential architecture for learning from demonstrations or using a baseline model. The Reward Models represent different evaluation criteria or environments. The Replay Buffer is a standard component for off-policy reinforcement learning algorithms.
### Interpretation
This diagram represents a common architecture for reinforcement learning, particularly in scenarios where diverse reward signals are needed (as indicated by the multiple Reward Models). The system appears to be designed for continuous learning, with the Master component orchestrating the data flow and weight updates. The separation of concerns (rollout, training, and evaluation) allows for modularity and scalability. The inclusion of "Code", "Math", "K-12", and "Vision" within the Reward Models suggests the agent is being trained to perform tasks across a variety of domains or skill levels. The cyclical nature of the diagram emphasizes the iterative process of reinforcement learning: the agent interacts with the environment, receives rewards, updates its policy, and repeats. The diagram does not provide specific numerical data or performance metrics, but it clearly illustrates the system's structure and data flow.
</details>
(a) System overview
<details>
<summary>x4.png Details</summary>

### Visual Description
## Diagram: Rollout Worker Iteration N
### Overview
The image is a diagram illustrating the process flow within a "rollout worker" during iteration N. It depicts how data flows from a "prompt set" and "partial rollout" into the worker, and how the worker interacts with a "Replay Buffer". The diagram uses lines and symbols to represent data flow and termination conditions.
### Components/Axes
The diagram consists of the following components:
* **Rollout Worker:** A rectangular box labeled "rollout worker" at the top of the diagram.
* **Prompt Set:** A line labeled "from prompt set" entering the rollout worker.
* **Partial Rollout:** A line labeled "partial rollout" entering the rollout worker.
* **Replay Buffer:** A rectangular box labeled "Replay Buffer" at the bottom of the diagram.
* **Iteration N:** A label at the top-right corner indicating the current iteration.
* **Save for partial rollout:** A dashed line with text indicating data is saved for partial rollout.
* **Legend:** Located in the bottom-right corner, defining the symbols used for different termination conditions:
* **Normal Stop:** Represented by a solid black circle.
* **Cut by Length:** Represented by a diamond shape with a line through it.
* **Repeat, Early Stop:** Represented by an "X" shape.
### Detailed Analysis or Content Details
The diagram shows the following data flow and termination conditions:
1. **From Prompt Set:** A line enters the rollout worker and terminates with a solid black circle (normal stop).
2. **Partial Rollout:** A line enters the rollout worker, connects to another line, and terminates with a diamond shape (cut by length).
3. **Combined Flow:** A line from the "prompt set" connects to a line from the "partial rollout". This combined line terminates with a diamond shape (cut by length).
4. **Early Stop:** A line from the "prompt set" connects to a line that terminates with an "X" shape (repeat, early stop).
5. **Replay Buffer Interaction:** A dashed line connects the rollout worker to the "Replay Buffer", labeled "save for partial rollout". Another dashed line connects the rollout worker to the "Replay Buffer" with a downward pointing arrow.
### Key Observations
* The diagram highlights multiple termination conditions within the rollout worker.
* The "Replay Buffer" appears to be a key component for storing data for subsequent partial rollouts.
* The diagram illustrates a complex interaction between the prompt set, partial rollout, and the rollout worker itself.
* The dashed lines indicate a saving or feedback mechanism to the Replay Buffer.
### Interpretation
This diagram likely represents a step in a reinforcement learning or iterative generation process. The "rollout worker" is performing actions based on a "prompt set" and potentially leveraging previous "partial rollout" data. The different termination conditions (normal stop, cut by length, early stop) suggest that the rollout process can be dynamically adjusted based on various criteria. The "Replay Buffer" is used to store experiences or intermediate results, which can then be used to improve future rollouts. The diagram suggests a system designed for efficient exploration and exploitation of a solution space, with mechanisms for both completing a rollout normally and terminating it early based on specific conditions. The use of "iteration N" implies this is part of a larger iterative process.
</details>
(b) Partial Rollout
Figure 3: Large Scale Reinforcement Learning Training System for LLM
2.6.1 Large Scale Reinforcement Learning Training System for LLM
In the realm of artificial intelligence, reinforcement learning (RL) has emerged as a pivotal training methodology for large language models (LLMs) [35, 16], drawing inspiration from its success in mastering complex games like Go, StarCraft II, and Dota 2 through systems such as AlphaGo [43], AlphaStar [51], and OpenAI Five [4]. Following in this tradition, the Kimi k1.5 system adopts an iterative synchronous RL framework, meticulously designed to bolster the model's reasoning capabilities through persistent learning and adaptation. A key innovation in this system is the introduction of a Partial Rollout technique, designed to optimize the handling of complex reasoning trajectories.
The RL training system, as illustrated in Figure 3(a), operates through an iterative synchronous approach, with each iteration encompassing a rollout phase and a training phase. During the rollout phase, rollout workers, coordinated by a central master, generate rollout trajectories by interacting with the model, producing sequences of responses to various inputs. These trajectories are then stored in a replay buffer, which ensures a diverse and unbiased dataset for training by disrupting temporal correlations. In the subsequent training phase, trainer workers access these experiences to update the model's weights. This cyclical process allows the model to continuously learn from its actions, adjusting its strategies over time to enhance performance.
The central master serves as the conductor of the system, managing the flow of data and communication between the rollout workers, trainer workers, evaluation with reward models, and the replay buffer. It ensures that the system operates harmoniously, balancing the load and facilitating efficient data processing.
The trainer workers access these rollout trajectories, whether completed in a single iteration or divided across multiple iterations, to compute gradient updates that refine the model's parameters and enhance its performance. This process is overseen by a reward model, which evaluates the quality of the model's outputs and provides essential feedback to guide the training process. The reward model's evaluations are particularly pivotal in determining the effectiveness of the model's strategies and steering the model towards optimal performance.
Moreover, the system incorporates a code execution service, which is specifically designed to handle code-related problems and is integral to the reward model. This service evaluates the model's outputs in practical coding scenarios, ensuring that the model's learning is closely aligned with real-world programming challenges. By validating the model's solutions against actual code executions, this feedback loop becomes essential for refining the model's strategies and enhancing its performance in code-related tasks.
2.6.2 Partial Rollouts for Long CoT RL
One of the primary ideas of our work is to scale long-context RL training. Partial rollout is a key technique that effectively addresses the challenge of handling long-CoT features by managing the rollouts of both long and short trajectories. This technique establishes a fixed output token budget, capping the length of each rollout trajectory. If a trajectory exceeds the token limit during the rollout phase, the unfinished portion is saved to the replay buffer and continued in the next iteration. This ensures that no single lengthy trajectory monopolizes the system's resources. Moreover, since the rollout workers operate asynchronously, when some are engaged with long trajectories, others can independently process new, shorter rollout tasks. The asynchronous operation maximizes computational efficiency by ensuring that all rollout workers are actively contributing to the training process, thereby optimizing the overall performance of the system.
As illustrated in Figure 3(b), the partial rollout system works by breaking down long responses into segments across iterations (from iter n-m to iter n). The Replay Buffer acts as a central storage mechanism that maintains these response segments, where only the current iteration (iter n) requires on-policy computation. Previous segments (iter n-m to n-1) can be efficiently reused from the buffer, eliminating the need for repeated rollouts. This segmented approach significantly reduces the computational overhead: instead of rolling out the entire response at once, the system processes and stores segments incrementally, allowing for the generation of much longer responses while maintaining fast iteration times. During training, certain segments can be excluded from loss computation to further optimize the learning process, making the entire system both efficient and scalable.
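A minimal sketch of the per-iteration logic of a rollout worker under partial rollouts; `generate_tokens` and the replay-buffer methods are hypothetical interfaces standing in for the inference engine and the storage layer.

```python
def rollout_step(prompt_or_partial, generate_tokens, token_budget, replay_buffer):
    """Continue (or start) one trajectory under a fixed per-iteration token budget."""
    trajectory, finished = generate_tokens(prompt_or_partial, max_new_tokens=token_budget)
    if finished:
        replay_buffer.add_complete(trajectory)   # ready for reward evaluation and training
    else:
        replay_buffer.add_partial(trajectory)    # resumed from here in the next iteration
    return finished
```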
The implementation of partial rollouts also provides repeat detection. The system identifies repeated sequences in the generated content and terminates them early, reducing unnecessary computation while maintaining output quality. Detected repetitions can be assigned additional penalties, effectively discouraging redundant content generation.
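One simple way to realize such repeat detection is an n-gram window check like the sketch below; the exact rule used in the system is not specified, so the window size and repeat count here are illustrative.

```python
def has_repetition(token_ids, ngram=32, max_repeats=4):
    """Return True if any n-gram occurs at least `max_repeats` times in the sequence."""
    counts = {}
    for i in range(len(token_ids) - ngram + 1):
        window = tuple(token_ids[i:i + ngram])
        counts[window] = counts.get(window, 0) + 1
        if counts[window] >= max_repeats:
            return True
    return False
```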
2.6.3 Hybrid Deployment of Training and Inference
<details>
<summary>x5.png Details</summary>

### Visual Description
## Diagram: System Architecture for Model Training and Deployment
### Overview
The image depicts a system architecture diagram illustrating the interaction between two sidecars (Megatron and vLLM) within a pod, utilizing shared memory and a checkpoint engine. The diagram outlines the process of model training (Megatron) and subsequent rollout/deployment (vLLM). The system also interacts with `etcd` and other pods via RDMA.
### Components/Axes
The diagram consists of the following components:
* **Megatron Sidecar:** Represented by a light blue box.
* **vLLM Sidecar:** Represented by a light green box.
* **Checkpoint Engine:** Represented by a purple box, present in both sidecars.
* **Shared Memory:** A central block connecting the two sidecars.
* **etcd:** A key-value store, positioned at the bottom center.
* **Other Pods:** Represented by a gray box, positioned at the bottom right.
* **RDMA:** Indicates Remote Direct Memory Access, connecting "Other Pods" to the vLLM sidecar.
* **Pod:** A dashed gray rectangle encompassing both sidecars.
The diagram uses arrows to indicate the flow of data and control between these components. The following processes are depicted:
* **Megatron Sidecar Processes:** Convert HF, Train, Onload, Offload, Wait Rollout, Register Shard, Update Weight.
* **vLLM Sidecar Processes:** Rollout, Dummy Start, Update Weight, Start vLLM, Terminate vLLM, Terminate.
### Detailed Analysis / Content Details
The diagram illustrates a workflow as follows:
1. **Megatron Sidecar:**
* The process begins with "Convert HF".
* "Train" initiates the model training process.
* "Onload" and "Offload" represent data transfer operations.
* "Wait Rollout" waits for the vLLM sidecar to be ready.
* The "Checkpoint Engine" within the Megatron sidecar handles "Register Shard" and "Update Weight".
2. **Shared Memory:**
* Data is transferred between the Megatron and vLLM sidecars via "Shared Memory".
3. **vLLM Sidecar:**
* "Rollout" initiates the deployment process.
* "Dummy Start" is a placeholder or initialization step.
* "Update Weight" updates the model weights.
* "Start vLLM" starts the vLLM service.
* "Terminate vLLM" terminates the vLLM service.
* The "Checkpoint Engine" within the vLLM sidecar is involved in updating weights and managing the vLLM lifecycle.
4. **External Interactions:**
* The "Checkpoint Engine" in both sidecars interacts with `etcd`.
* "Other Pods" communicate with the vLLM sidecar via RDMA.
### Key Observations
* The diagram highlights a clear separation of concerns between model training (Megatron) and model deployment (vLLM).
* The use of "Shared Memory" suggests a high-performance data exchange mechanism.
* The "Checkpoint Engine" plays a crucial role in both training and deployment, likely managing model versions and updates.
* The interaction with `etcd` indicates a distributed coordination system.
* RDMA is used for efficient communication with other pods.
### Interpretation
This diagram represents a sophisticated system for training and deploying large language models. The architecture leverages sidecars to isolate the training and deployment processes, promoting modularity and scalability. The use of shared memory suggests an attempt to minimize data transfer overhead, crucial for large models. The checkpoint engine and `etcd` integration provide robust model management and coordination capabilities. The RDMA connection to other pods indicates a distributed deployment strategy.
The diagram suggests a workflow where the Megatron sidecar trains the model, periodically updating weights in shared memory. The vLLM sidecar then rolls out the updated model, utilizing the shared weights for inference. The `etcd` store likely maintains metadata about the model versions and their availability. The overall design emphasizes efficiency, scalability, and reliability in the context of large language model serving. The "Dummy Start" process in the vLLM sidecar is a curious element, potentially representing a pre-initialization step or a placeholder for future functionality.
</details>
Figure 4: Hybrid Deployment Framework
The RL training process comprises the following phases (a control-flow sketch follows the list):
- Training Phase: At the outset, Megatron [42] and vLLM [21] are executed within separate containers, encapsulated by a shim process known as checkpoint-engine (Section 2.6.3). Megatron commences the training procedure. After the training is completed, Megatron offloads the GPU memory and prepares to transfer current weights to vLLM.
- Inference Phase: Following Megatron's offloading, vLLM starts with dummy model weights and updates them with the latest ones transferred from Megatron via Mooncake [39]. Upon completion of the rollout, the checkpoint-engine halts all vLLM processes.
- Subsequent Training Phase: Once the memory allocated to vLLM is released, Megatron onloads the memory and initiates another round of training.
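The ordering of these phases can be summarized by the control loop below. The `megatron`, `vllm`, and `checkpoint_engine` handles and their method names are hypothetical stand-ins for the real services, so this is an illustration of the phase switching rather than the actual implementation.

```python
def rl_round(megatron, vllm, checkpoint_engine, prompts):
    """One training/inference round under the hybrid deployment (illustrative ordering only;
    all three handles are assumed to be duck-typed wrappers around the real services)."""
    # Training phase: Megatron trains, then releases its GPU memory.
    megatron.train_one_iteration()
    megatron.offload_gpu_memory()

    # Weight hand-off: register sharded Hugging-Face-format weights and broadcast to vLLM.
    shards = checkpoint_engine.register_shards(megatron.export_hf_weights())
    vllm.start_with_dummy_weights()
    vllm.update_weights(shards)

    # Inference phase: collect rollouts, then tear vLLM down to free GPU memory.
    rollouts = vllm.generate(prompts)
    checkpoint_engine.terminate(vllm)

    # Subsequent training phase: Megatron reclaims the memory and continues training.
    megatron.onload_gpu_memory()
    return rollouts
```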
We find it challenging for existing works to simultaneously support all of the following characteristics.
- Complex parallelism strategies: Megatron may use a different parallelism strategy from vLLM, so training weights distributed across several nodes in Megatron can be difficult to share with vLLM.
- Minimizing idle GPU resources: For on-policy RL, recent works such as SGLang [62] and vLLM might reserve some GPUs during the training process, which in turn could leave training GPUs idle. It is more efficient to share the same devices between training and inference.
- Capability of dynamic scaling: In some cases, a significant acceleration can be achieved by increasing the number of inference nodes while keeping the training process constant. Our system enables the efficient utilization of idle GPU nodes when needed.
As illustrated in Figure 4, we implement this hybrid deployment framework (Section 2.6.3) on top of Megatron and vLLM, switching from the training phase to the inference phase in less than one minute and back in about ten seconds.
Hybrid Deployment Strategy
We propose a hybrid deployment strategy for training and inference tasks, which leverages Kubernetes Sidecar containers sharing all available GPUs to collocate both workloads in one pod. The primary advantages of this strategy are:
- It facilitates efficient resource sharing and management, preventing training nodes from idling while they wait for inference nodes, as can happen when the two are deployed on separate nodes.
- Because training and inference use distinct deployment images, each can iterate independently for better performance.
- The architecture is not limited to vLLM; other frameworks can be conveniently integrated.
Checkpoint Engine
Checkpoint Engine is responsible for managing the lifecycle of the vLLM process, exposing HTTP APIs that enable triggering various operations on vLLM. For overall consistency and reliability, we utilize a global metadata system managed by the etcd service to broadcast operations and statuses.
Entirely releasing GPU memory by offloading vLLM is challenging, primarily due to CUDA graphs, NCCL buffers, and NVIDIA drivers. To minimize modifications to vLLM, we terminate and restart it when needed, which also improves GPU utilization and fault tolerance.
The worker in Megatron converts the checkpoints it owns into the Hugging Face format in shared memory. This conversion also takes Pipeline Parallelism and Expert Parallelism into account, so that only Tensor Parallelism remains in these checkpoints. Checkpoints in shared memory are subsequently divided into shards and registered in the global metadata system. We employ Mooncake to transfer checkpoints between peer nodes over RDMA. Some modifications to vLLM are needed to load the weight files and perform Tensor Parallelism conversion.
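A minimal sketch of the weight hand-off just described is given below: shards written to shared memory and their locations broadcast through a global metadata store. The sharding scheme, paths, and the `etcd_put` callable are assumptions made for illustration; in the real system the cross-node transfer goes through Mooncake over RDMA rather than a local file write.

```python
import json
import os
import torch

def publish_checkpoint(state_dict, step, shm_dir="/dev/shm/ckpt", num_shards=8, etcd_put=None):
    """Write Hugging-Face-format weights to shared memory as shards and register their
    locations in a global metadata store (illustrative sketch only).

    `etcd_put(key, value)` is a hypothetical callable standing in for an etcd client."""
    os.makedirs(shm_dir, exist_ok=True)
    names = sorted(state_dict)
    shard_meta = []
    for i in range(num_shards):
        shard = {k: state_dict[k] for k in names[i::num_shards]}   # simple round-robin sharding
        path = os.path.join(shm_dir, f"step{step}-shard{i}.pt")
        torch.save(shard, path)                                    # shard placed in shared memory
        shard_meta.append({"shard": i, "path": path, "keys": list(shard)})
    if etcd_put is not None:
        # Broadcast shard locations so inference peers know what to pull and from where.
        etcd_put(f"/checkpoints/{step}", json.dumps(shard_meta))
    return shard_meta
```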
2.6.4 Code Sandbox
We developed the sandbox as a secure environment for executing user-submitted code, optimized for code execution and code benchmark evaluation. By dynamically switching container images, the sandbox supports different use cases through MultiPL-E [6], the DMOJ Judge Server (https://github.com/DMOJ/judge-server), Lean, Jupyter Notebook, and other images.
For RL in coding tasks, the sandbox ensures the reliability of training data judgment by providing consistent and repeatable evaluation mechanisms. Its feedback system supports multi-stage assessments, such as code execution feedback and repo-level editing, while maintaining a uniform context to ensure fair and equitable benchmark comparisons across programming languages.
We deploy the service on Kubernetes for scalability and resilience, exposing it through HTTP endpoints for external integration. Kubernetes features like automatic restarts and rolling updates ensure availability and fault tolerance.
To optimize performance and support RL environments, we incorporate several techniques into the code execution service to enhance efficiency, speed, and reliability. These include:
- Using Crun: We utilize crun as the container runtime instead of Docker, significantly reducing container startup times.
- Cgroup Reusing: We pre-create cgroups for container use, which is crucial in scenarios with high concurrency where creating and destroying cgroups for each container can become a bottleneck.
- Disk Usage Optimization: An overlay filesystem with an upper layer mounted as tmpfs is used to control disk writes, providing a fixed-size, high-speed storage space. This approach is beneficial for ephemeral workloads.
| Runtime | Startup time |
| --- | --- |
| Docker | 0.12 |
| Sandbox | 0.04 |

(a) Container startup times

| Runtime | Containers per second |
| --- | --- |
| Docker | 27 |
| Sandbox | 120 |

(b) Maximum containers started per second on a 16-core machine
These optimizations improve RL efficiency in code execution, providing a consistent and reliable environment for evaluating RL-generated code, essential for iterative training and model improvement.
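To make the integration concrete, the snippet below sketches how an RL reward for a coding task could be computed against such a sandbox service over HTTP. The endpoint URL and the request/response schema are hypothetical, included only to illustrate the pattern of turning execution feedback into a scalar reward.

```python
import requests

# Hypothetical endpoint; the actual sandbox API is internal and not specified in this report.
SANDBOX_URL = "http://code-sandbox.internal/run"

def code_reward(solution_code: str, tests: list, language: str = "python", timeout_s: int = 10) -> float:
    """Submit a model-generated solution plus its test cases to the sandbox and
    return a 0/1 reward: 1.0 only if every test case passes (illustrative schema)."""
    payload = {"language": language, "code": solution_code, "tests": tests, "timeout": timeout_s}
    resp = requests.post(SANDBOX_URL, json=payload, timeout=timeout_s + 5)
    resp.raise_for_status()
    results = resp.json().get("results", [])
    all_passed = len(results) == len(tests) and all(r.get("status") == "passed" for r in results)
    return 1.0 if all_passed else 0.0
```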
3 Experiments
3.1 Evaluation
Since k1.5 is a multimodal model, we conducted comprehensive evaluation across various benchmarks for different modalities. The detailed evaluation setup can be found in Appendix C. Our benchmarks primarily consist of the following three categories:
- Text Benchmark: MMLU [13], IF-Eval [63], CLUEWSC [56], C-EVAL [15]
- Reasoning Benchmark: HumanEval-Mul, LiveCodeBench [17], Codeforces, AIME 2024, MATH-500 [26]
- Vision Benchmark: MMMU [59], MATH-Vision [52], MathVista [29]
3.2 Main Results
K1.5 long-CoT model
The performance of the Kimi k1.5 long-CoT model is presented in Table 2. Through long-CoT supervised fine-tuning (described in Section 2.2) and vision-text joint reinforcement learning (discussed in Section 2.3), the model's long-term reasoning capabilities are enhanced significantly. Test-time computation scaling further strengthens its performance, enabling the model to achieve state-of-the-art results across a range of modalities. Our evaluation reveals marked improvements in the model's capacity to reason, comprehend, and synthesize information over extended contexts, representing an advancement in multi-modal AI capabilities.
K1.5 short-CoT model
The performance of the Kimi k1.5 short-CoT model is presented in Table 3. This model integrates several techniques, including traditional supervised fine-tuning (discussed in Section 2.5.2), reinforcement learning (explored in Section 2.3), and long-to-short distillation (outlined in Section 2.4). The results demonstrate that the k1.5 short-CoT model delivers competitive or superior performance compared to leading open-source and proprietary models across multiple tasks. These include text, vision, and reasoning challenges, with notable strengths in natural language understanding, mathematics, coding, and logical reasoning.
| | Benchmark (Metric) | QwQ-32B Preview | OpenAI o1-mini | QVQ-72B Preview | OpenAI o1 | Kimi k1.5 |
| --- | --- | --- | --- | --- | --- | --- |
| Reasoning | MATH-500 (EM) | 90.6 | 90.0 | - | 94.8 | 96.2 |
| | AIME 2024 (Pass@1) | 50.0 | 63.6 | - | 74.4 | 77.5 |
| | Codeforces (Percentile) | 62 | 88 | - | 94 | 94 |
| | LiveCodeBench (Pass@1) | 40.6 | 53.1 | - | 67.2 | 62.5 |
| Vision | MathVista-Test (Pass@1) | - | - | 71.4 | 71.0 | 74.9 |
| | MMMU-Val (Pass@1) | - | - | 70.3 | 77.3 | 70.0 |
| | MathVision-Full (Pass@1) | - | - | 35.9 | - | 38.6 |
Table 2: Performance of Kimi k1.5 long-CoT and flagship open-source and proprietary models. QwQ-32B Preview and OpenAI o1-mini are language-only models; QVQ-72B Preview, OpenAI o1, and Kimi k1.5 are vision-language models.
| | Benchmark (Metric) | Qwen2.5 72B-Inst. | LLaMA-3.1 405B-Inst. | DeepSeek V3 | Qwen2-VL | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | Kimi k1.5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Text | MMLU (EM) | 85.3 | 88.6 | 88.5 | - | 88.3 | 87.2 | 87.4 |
| | IF-Eval (Prompt Strict) | 84.1 | 86.0 | 86.1 | - | 86.5 | 84.3 | 87.2 |
| | CLUEWSC (EM) | 91.4 | 84.7 | 90.9 | - | 85.4 | 87.9 | 91.7 |
| | C-Eval (EM) | 86.1 | 61.5 | 86.5 | - | 76.7 | 76.0 | 88.3 |
| Reasoning | MATH-500 (EM) | 80.0 | 73.8 | 90.2 | - | 78.3 | 74.6 | 94.6 |
| | AIME 2024 (Pass@1) | 23.3 | 23.3 | 39.2 | - | 16.0 | 9.3 | 60.8 |
| | HumanEval-Mul (Pass@1) | 77.3 | 77.2 | 82.6 | - | 81.7 | 80.5 | 81.5 |
| | LiveCodeBench (Pass@1) | 31.1 | 28.4 | 40.5 | - | 36.3 | 33.4 | 47.3 |
| Vision | MathVista-Test (Pass@1) | - | - | - | 69.7 | 65.3 | 63.8 | 70.1 |
| | MMMU-Val (Pass@1) | - | - | - | 64.5 | 66.4 | 69.1 | 68.0 |
| | MathVision-Full (Pass@1) | - | - | - | 26.6 | 35.6 | 30.4 | 31.0 |
Table 3: Performance of Kimi k1.5 short-CoT and flagship open-source and proprietary models. Qwen2.5 72B-Inst., LLaMA-3.1 405B-Inst., and DeepSeek V3 are language-only models; Qwen2-VL, Claude-3.5-Sonnet-1022, GPT-4o 0513, and Kimi k1.5 are vision-language models. VLM performance was obtained from the OpenCompass benchmark platform (https://opencompass.org.cn/).
3.3 Long Context Scaling
We employ a mid-sized model to study the scaling properties of RL with LLMs. Figure 5 illustrates the evolution of both training accuracy and response length across training iterations for this model trained on the mathematical prompt set. As training progresses, we observe a concurrent increase in both response length and performance accuracy. Notably, more challenging benchmarks exhibit a steeper increase in response length, suggesting that the model learns to generate more elaborate solutions for complex problems. Figure 6 indicates a strong correlation between the model's output context length and its problem-solving capabilities. Our final run of k1.5 scales to a 128k context length and continues to improve on hard reasoning benchmarks.
<details>
<summary>x6.png Details</summary>

### Visual Description
## Charts: Performance vs. Iterations for Various Benchmarks
### Overview
The image presents a 3x4 grid of line charts, each depicting the relationship between "Performance" (Accuracy) and "Token Length" against "Iterations" for different benchmarks. The benchmarks include Total@temp_1.0, OMNI-MATH500, MATH500, AIMO2024, AIME2024, ChatGLMMath, GAOKAO, GPQA, BZ2024, Biology, Chemistry, Physics, and KACYAN. Each chart has two y-axes: one for Accuracy (ranging approximately from 0.2 to 0.95) and one for Token Length (ranging approximately from 0 to 30000). The x-axis represents Iterations, ranging from 0 to 150.
### Components/Axes
* **X-axis (all charts):** Iterations (0 to 150)
* **Y-axis (left):** Accuracy (varying scales, approximately 0.2 to 0.95)
* **Y-axis (right):** Token Length (varying scales, approximately 0 to 30000)
* **Data Series (all charts):**
* Performance (Blue line with circle markers)
* Token Length (Orange line with square markers)
* **Chart Titles:** Each chart is labeled with the name of the benchmark.
* **Legend (top-left of each chart):** Indicates which line represents Performance and Token Length.
### Detailed Analysis or Content Details
Here's a breakdown of each chart, noting trends and approximate data points. Due to the image resolution, values are approximate.
1. **Total@temp_1.0:** Performance shows a generally increasing trend, starting around 0.75 and reaching approximately 0.88. Token Length increases steadily from 0 to around 20000.
2. **OMNI-MATH500:** Performance fluctuates between approximately 0.45 and 0.65. Token Length increases steadily from 0 to around 25000.
3. **MATH500:** Performance shows a slight increasing trend, starting around 0.82 and reaching approximately 0.90. Token Length increases steadily from 0 to around 15000.
4. **AIMO2024:** Performance fluctuates between approximately 0.2 and 0.5. Token Length increases steadily from 0 to around 30000.
5. **AIME2024:** Performance shows a generally increasing trend, starting around 0.5 and reaching approximately 0.7. Token Length increases steadily from 0 to around 20000.
6. **ChatGLMMath:** Performance fluctuates between approximately 0.65 and 0.85. Token Length increases steadily from 0 to around 17500.
7. **GAOKAO:** Performance shows a generally increasing trend, starting around 0.85 and reaching approximately 0.96. Token Length increases steadily from 0 to around 14000.
8. **GPQA:** Performance fluctuates between approximately 0.4 and 0.6. Token Length increases steadily from 0 to around 7500.
9. **BZ2024:** Performance fluctuates between approximately 0.7 and 0.85. Token Length increases steadily from 0 to around 20000.
10. **Biology:** Performance fluctuates between approximately 0.7 and 0.8. Token Length increases steadily from 0 to around 15000.
11. **Chemistry:** Performance fluctuates between approximately 0.7 and 0.85. Token Length increases steadily from 0 to around 20000.
12. **Physics:** Performance fluctuates between approximately 0.75 and 0.9. Token Length increases steadily from 0 to around 15000.
13. **KACYAN:** Performance shows a generally increasing trend, starting around 0.6 and reaching approximately 0.8. Token Length increases steadily from 0 to around 10000.
### Key Observations
* **Token Length consistently increases with Iterations:** Across all benchmarks, the Token Length shows a consistent upward trend, indicating that the model generates longer sequences as training progresses.
* **Performance varies significantly across benchmarks:** Some benchmarks (e.g., GAOKAO, MATH500) show a clear improvement in Performance with increasing Iterations, while others (e.g., AIMO2024, GPQA) exhibit more erratic behavior.
* **Correlation between Performance and Token Length is not always clear:** In some charts, Performance and Token Length appear to be positively correlated (both increase with Iterations), while in others, the relationship is less obvious.
* **Fluctuations in Performance:** Many charts show significant fluctuations in Performance, suggesting instability in the training process.
### Interpretation
The charts demonstrate the training dynamics of a model across a diverse set of benchmarks. The consistent increase in Token Length suggests the model is learning to generate more complex and detailed responses as it trains. However, the varying Performance trends indicate that the model's ability to improve differs significantly depending on the specific task. The fluctuations in Performance suggest that the training process may be sensitive to hyperparameters or data variations.
The benchmarks cover a range of domains (mathematics, reasoning, science, general knowledge), and the differences in performance suggest that the model may have varying strengths and weaknesses. For example, GAOKAO and MATH500 show relatively stable and positive performance trends, indicating that the model is effectively learning these tasks. Conversely, AIMO2024 and GPQA exhibit more erratic behavior, suggesting that these tasks may be more challenging or require different training strategies.
The relationship between Performance and Token Length is complex. While a longer Token Length might indicate a more detailed response, it doesn't necessarily guarantee higher Accuracy. It's possible that the model is generating verbose but irrelevant content, leading to a plateau or even a decrease in Performance. Further analysis would be needed to determine the optimal balance between Token Length and Accuracy for each benchmark.
</details>
Figure 5: Changes in training accuracy and response length as training iterations grow. Note that the scores above come from an internal long-CoT model that is much smaller than the k1.5 long-CoT model. The shaded area represents the 95th percentile of the response length.
<details>
<summary>x7.png Details</summary>

### Visual Description
## Charts: Model Performance vs. Token Length
### Overview
The image presents a 2x4 grid of line plots, each visualizing the relationship between "Accuracy" and "Mean Token Length" for different models. Each plot includes a blue line representing "Performance" (likely the model's accuracy) with error bars, and a green line representing the "Trend" (linear regression fit). The trend line's slope is also displayed in each plot.
### Components/Axes
Each chart shares the following components:
* **X-axis:** "Mean Token Length" ranging from approximately -2500 to 17500.
* **Y-axis:** "Accuracy" with varying scales depending on the model.
* **Blue Line with Error Bars:** Represents the "Performance" of the model. Error bars indicate the standard deviation or confidence interval around the performance.
* **Green Line:** Represents the "Trend" (linear regression) of the performance.
* **Text Label:** "Trend (slope: X.XXe-XX)" where X.XXe-XX is the slope of the trend line.
* **Legend:** Located in the bottom-left corner of each chart, labeling the blue line as "Performance" and the green line as "Trend".
The charts are labeled with the following model names (top row, left to right):
1. total@temp_1.0
2. OMNI-MATH500
3. MATH500
4. AIIM2024
(bottom row, left to right):
5. AIME2024
6. ChatGLMMath
7. GAOKAO_bmk
8. GPQA
### Detailed Analysis or Content Details
Here's a breakdown of each chart, extracting approximate values and trends:
1. **total@temp_1.0:**
* Accuracy range: ~0.60 to ~0.80
* Trend slope: 2.46e-05
* Performance: The blue line shows a generally increasing trend with increasing token length, but with significant fluctuations. Accuracy starts around 0.62 at -2500 tokens and reaches approximately 0.78 at 17500 tokens.
2. **OMNI-MATH500:**
* Accuracy range: ~0.38 to ~0.60
* Trend slope: 1.05e-05
* Performance: Similar to the first chart, the blue line shows an increasing trend, but with substantial variability. Accuracy starts around 0.40 at -2500 tokens and reaches approximately 0.58 at 17500 tokens.
3. **MATH500:**
* Accuracy range: ~0.775 to ~0.950
* Trend slope: 1.36e-05
* Performance: The blue line shows a clear increasing trend with increasing token length. Accuracy starts around 0.80 at -2500 tokens and reaches approximately 0.93 at 17500 tokens.
4. **AIIM2024:**
* Accuracy range: ~0.30 to ~0.50
* Trend slope: 2.35e-05
* Performance: The blue line shows a generally increasing trend, but with significant fluctuations. Accuracy starts around 0.35 at -2500 tokens and reaches approximately 0.45 at 17500 tokens.
5. **AIME2024:**
* Accuracy range: ~0.15 to ~0.55
* Trend slope: 1.40e-05
* Performance: The blue line shows a generally increasing trend, but with substantial variability. Accuracy starts around 0.20 at -2500 tokens and reaches approximately 0.50 at 17500 tokens.
6. **ChatGLMMath:**
* Accuracy range: ~0.70 to ~0.95
* Trend slope: 2.99e-05
* Performance: The blue line shows a clear increasing trend with increasing token length. Accuracy starts around 0.75 at -2500 tokens and reaches approximately 0.90 at 17500 tokens.
7. **GAOKAO_bmk:**
* Accuracy range: ~0.84 to ~0.96
* Trend slope: 1.26e-05
* Performance: The blue line shows a generally increasing trend with increasing token length. Accuracy starts around 0.85 at -2500 tokens and reaches approximately 0.94 at 17500 tokens.
8. **GPQA:**
* Accuracy range: ~0.30 to ~0.50
* Trend slope: 4.26e-05
* Performance: The blue line shows a generally increasing trend, but with significant fluctuations. Accuracy starts around 0.35 at -2500 tokens and reaches approximately 0.45 at 17500 tokens.
### Key Observations
* Most models exhibit a positive correlation between accuracy and mean token length, indicated by the upward-sloping trend lines.
* The magnitude of the slope varies significantly between models. GPQA and ChatGLMMath have the steepest slopes, suggesting a more pronounced increase in accuracy with longer token lengths.
* The error bars indicate substantial variance in performance, suggesting that the relationship between token length and accuracy is not always consistent.
* The accuracy scales differ significantly between models, making direct comparison challenging.
### Interpretation
The data suggests that, for most of these models, increasing the mean token length generally leads to improved accuracy. However, the extent of this improvement varies considerably. The positive slopes of the trend lines indicate that the models benefit from processing longer sequences of text. The large error bars suggest that other factors, beyond token length, also play a significant role in determining accuracy.
The differences in slopes could be attributed to the model architectures, training data, or the specific tasks they are designed for. Models with steeper slopes (e.g., GPQA, ChatGLMMath) might be more sensitive to context and benefit more from longer input sequences.
The varying accuracy scales suggest that the models are evaluated on different tasks or datasets with different difficulty levels. It would be valuable to normalize the accuracy scales to facilitate a more meaningful comparison of model performance.
The negative token lengths are unusual and likely represent some form of data preprocessing or encoding. Further investigation would be needed to understand their meaning.
</details>
Figure 6: Model Performance Increases with Response Length
<details>
<summary>x8.png Details</summary>

### Visual Description
## Scatter Plots: Model Performance on MATH500 and AIME2024
### Overview
The image presents two scatter plots, side-by-side. The left plot displays performance on the MATH500 dataset, while the right plot shows performance on the AIME2024 dataset. Both plots measure "Accuracy" against "Token Length" for various language models. Each point represents a model's performance.
### Components/Axes
Both plots share the following components:
* **X-axis:** "Token Length" - ranging from approximately 400 to 1400 for MATH500 and 1000 to 5000 for AIME2024.
* **Y-axis:** "Accuracy" - ranging from approximately 75 to 96 for MATH500 and 10 to 62 for AIME2024.
* **Data Points:** Representing different language models. Each point is colored orange or blue.
* **Titles:** "MATH500" (left plot) and "AIME2024" (right plot) positioned at the top-center of each respective plot.
The models represented are:
* k1.5-short w/ rl (orange)
* k1.5-long (orange)
* k1.5-short w/ dpo (orange)
* k1.5-shortest (orange)
* k1.5-short w/ merge (orange)
* k1.5-short w/ merge + rs (orange)
* deepseek-v3 (blue)
* qwen25-72B-inst (blue)
* Claude 3.5 (blue)
* gpt-4-0513 (blue)
### Detailed Analysis or Content Details
**MATH500 (Left Plot):**
* **k1.5-short w/ rl:** Approximately (1300, 93.5).
* **k1.5-long:** Approximately (1200, 94.5).
* **k1.5-short w/ dpo:** Approximately (1100, 92.5).
* **k1.5-shortest:** Approximately (900, 89).
* **k1.5-short w/ merge:** Approximately (1000, 88).
* **k1.5-short w/ merge + rs:** Approximately (1000, 88.5).
* **deepseek-v3:** Approximately (1400, 90).
* **qwen25-72B-inst:** Approximately (600, 80).
* **Claude 3.5:** Approximately (500, 77.5).
* **gpt-4-0513:** Approximately (400, 75).
The orange data points (k1.5 variants) generally show a positive correlation between Token Length and Accuracy, with accuracy increasing as token length increases. The blue data points (deepseek-v3, qwen25-72B-inst, Claude 3.5, gpt-4-0513) are clustered towards the lower end of the accuracy scale and shorter token lengths.
**AIME2024 (Right Plot):**
* **k1.5-long:** Approximately (4500, 61).
* **k1.5-short w/ rl:** Approximately (4000, 58).
* **k1.5-short w/ dpo:** Approximately (4000, 55).
* **k1.5-shortest:** Approximately (2000, 32).
* **k1.5-short w/ merge:** Approximately (3000, 44).
* **k1.5-short w/ merge + rs:** Approximately (3500, 47).
* **deepseek-v3:** Approximately (5000, 40).
* **qwen25-72B-inst:** Approximately (2000, 25).
* **Claude 3.5:** Approximately (1000, 15).
* **gpt-4-0513:** Approximately (1000, 12).
Similar to the MATH500 plot, the orange data points (k1.5 variants) generally show a positive correlation between Token Length and Accuracy. The blue data points are clustered towards the lower end of the accuracy scale and shorter token lengths.
### Key Observations
* The k1.5 models consistently outperform the other models (deepseek-v3, qwen25-72B-inst, Claude 3.5, gpt-4-0513) on both datasets.
* For the k1.5 models, longer token lengths generally correspond to higher accuracy.
* The AIME2024 dataset has a lower overall accuracy range compared to the MATH500 dataset.
* gpt-4-0513 and Claude 3.5 have the lowest accuracy scores across both datasets.
### Interpretation
The data suggests that the k1.5 family of models are particularly effective at solving problems in both MATH500 and AIME2024, and that their performance improves with increasing token length. This implies that these models benefit from having access to more contextual information. The difference in accuracy ranges between the two datasets suggests that AIME2024 is a more challenging benchmark, or that the models are less well-suited to the types of problems it contains. The consistently low performance of gpt-4-0513 and Claude 3.5 suggests they may not be as effective for these types of tasks, or that they require different training strategies. The positive correlation between token length and accuracy for the k1.5 models indicates that the ability to process longer sequences is a key factor in their success. The fact that the k1.5 models with "merge" and "rs" perform similarly suggests that these techniques do not significantly impact performance in this context.
</details>
Figure 7: Long2Short Performance. All the k1.5 series demonstrate better token efficiency compared to other models.
3.4 Long2short
We compared the proposed long2short RL algorithm with the DPO, shortest rejection sampling, and model merging methods introduced in Section 2.4, focusing on token efficiency for the long2short problem [8], specifically how the obtained long-CoT model can benefit a short model. In Figure 7, k1.5-long represents our long-CoT model selected for long2short training. k1.5-short w/ rl refers to the short model obtained using long2short RL training. k1.5-short w/ dpo denotes the short model with improved token efficiency through DPO training. k1.5-short w/ merge represents the model after model merging, while k1.5-short w/ merge + rs indicates the short model obtained by applying shortest rejection sampling to the merged model. k1.5-shortest represents the shortest model we obtained during long2short training. As shown in Figure 7, the proposed long2short RL algorithm demonstrates the highest token efficiency compared to other methods such as DPO and model merging. Notably, all models in the k1.5 series (marked in orange) demonstrate superior token efficiency compared to other models (marked in blue). For instance, k1.5-short w/ rl achieves a Pass@1 score of 60.8 on AIME2024 (averaged over 8 runs) while utilizing only 3,272 tokens on average. Similarly, k1.5-shortest attains a Pass@1 score of 88.2 on MATH500 while consuming approximately the same number of tokens as other short models.
3.5 Ablation Studies
Scaling of model size and context length
Our main contribution is the application of RL to enhance the model's capacity for generating extended CoT, thereby improving its reasoning ability. A natural question arises: how does this compare to simply increasing the model size? To demonstrate the effectiveness of our approach, we trained two models of different sizes using the same dataset and recorded the evaluation results and average inference lengths from all checkpoints during RL training. These results are shown in Figure 8. Notably, although the larger model initially outperforms the smaller one, the smaller model can achieve comparable performance by utilizing longer CoTs optimized through RL. However, the larger model generally shows better token efficiency than the smaller model. This also indicates that if one targets the best possible performance, scaling the context length of a larger model has a higher upper bound and is more token efficient. However, if the test-time compute budget is limited, training smaller models with a larger context length may be a viable solution.
Effects of using negative gradients
We investigate the effectiveness of using ReST [12] as the policy optimization algorithm in our setting. The primary distinction between ReST and other RL-based methods including ours is that ReST iteratively refines the model by fitting the best response sampled from the current model, without applying negative gradients to penalize incorrect responses. As illustrated in Figure 10, our method exhibits superior sample complexity compared to ReST, indicating that the incorporation of negative gradients markedly enhances the model's efficiency in generating long CoT. Our method not only elevates the quality of reasoning but also optimizes the training process, achieving robust performance with fewer training samples. This finding suggests that the choice of policy optimization algorithm is crucial in our setting, as the performance gap between ReST and other RL-based methods is not as pronounced in other domains [12]. Therefore, our results highlight the importance of selecting an appropriate optimization strategy to maximize effectiveness in generating long CoT.
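The contrast can be made concrete with the toy loss below, which compares a ReST-style update (maximize likelihood of positively rewarded responses only) against a REINFORCE-style update whose advantage can go negative. This is a simplified illustration, not our actual training objective or the exact ReST recipe.

```python
import torch

def policy_loss(logprobs: torch.Tensor, rewards: torch.Tensor,
                baseline: float, use_negative_gradients: bool) -> torch.Tensor:
    """Toy comparison of the two update rules (illustrative only).

    `logprobs[i]` is the summed log-probability of sampled response i under the current
    policy; `rewards[i]` is its 0/1 correctness reward.
    - ReST-style: fit only the positively rewarded responses, so incorrect ones receive
      no gradient at all.
    - With negative gradients: a REINFORCE-style objective whose advantage can be negative,
      actively pushing down the likelihood of incorrect responses."""
    if not use_negative_gradients:
        mask = (rewards > 0).float()
        return -(mask * logprobs).sum() / mask.sum().clamp(min=1.0)
    advantages = rewards - baseline
    return -(advantages * logprobs).mean()
```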
Sampling strategies
We further demonstrate the effectiveness of our curriculum sampling strategy, as introduced in Section 2.3.4. Our training dataset $\mathcal{D}$ comprises a diverse mix of problems with varying levels of difficulty. With our curriculum sampling method, we initially use $\mathcal{D}$ for a warm-up phase and then focus solely on hard questions to train the model. This approach is compared to a baseline method that employs a uniform sampling strategy without any curriculum adjustments. As illustrated in Figure 9, our results clearly show that the proposed curriculum sampling method significantly enhances performance. This improvement can be attributed to the method's ability to progressively challenge the model, allowing it to develop a more robust understanding and competency in handling complex problems. By focusing training efforts on more difficult questions after an initial general introduction, the model can better strengthen its reasoning and problem-solving capabilities.
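A minimal sketch of this two-stage sampling schedule follows; the `difficulty` field, the warm-up length, and uniform sampling within each stage are assumptions for illustration, with Section 2.3.4 describing the actual strategy.

```python
import random

def curriculum_sample(dataset, iteration, warmup_iters, batch_size):
    """Two-stage curriculum: draw uniformly from the full prompt set D during warm-up,
    then restrict sampling to the hard subset (illustrative sketch)."""
    if iteration < warmup_iters:
        pool = dataset                                                  # warm-up: mixed easy/hard
    else:
        pool = [ex for ex in dataset if ex["difficulty"] == "hard"]     # afterwards: hard problems only
    return random.sample(pool, min(batch_size, len(pool)))
```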
<details>
<summary>x9.png Details</summary>

### Visual Description
## Scatter Plots: Model Accuracy vs. Response Length
### Overview
The image presents four scatter plots, each representing the relationship between "Accuracy" and "Mean Response Length (tokens)" for different model sizes ("Small Size" and "Large Size") on four different datasets: OMNI-MATH500, AIME2024, MATH500, and MC2024. Each plot also includes a linear trend line for each model size, with the slope of the line indicated. All plots are truncated at 60.
### Components/Axes
* **X-axis:** "Mean Response Length (tokens)" - Ranges from approximately 0 to 5500 tokens, varying slightly between plots.
* **Y-axis:** "Accuracy" - Ranges from approximately 0.3 to 0.95, varying between plots.
* **Data Series:**
* Small Size (represented by blue dots)
* Large Size (represented by orange dots)
* **Trend Lines:**
* Blue line: Linear trend for Small Size
* Orange line: Linear trend for Large Size
* **Legend:** Located in the top-left corner of each plot, indicating the color-coding for Small and Large sizes, and the slope of the trend lines.
* **Titles:** Each plot has a title indicating the dataset being analyzed (OMNI-MATH500, AIME2024, MATH500, MC2024) and the truncation point (truncated at 60).
### Detailed Analysis or Content Details
**OMNI-MATH500 (truncated at 60)**
* Small Size Trend (slope: 3.33e-05)
* Large Size Trend (slope: 6.42e-05)
* Small Size: Data points are scattered, generally increasing with response length. Approximate values: (1000 tokens, 0.40), (5000 tokens, 0.60).
* Large Size: Data points are also scattered, with a slightly steeper positive trend. Approximate values: (1000 tokens, 0.45), (5000 tokens, 0.70).
**AIME2024 (truncated at 60)**
* Small Size Trend (slope: 9.00e-06)
* Large Size Trend (slope: 0.10e-06)
* Small Size: Data points are more spread out. Approximate values: (1000 tokens, 0.15), (5000 tokens, 0.30).
* Large Size: Data points are clustered, with a very shallow positive trend. Approximate values: (1000 tokens, 0.10), (5000 tokens, 0.15).
**MATH500 (truncated at 60)**
* Small Size Trend (slope: 4.25e-05)
* Large Size Trend (slope: 2.06e-05)
* Small Size: Data points show a clear positive trend. Approximate values: (1000 tokens, 0.78), (5000 tokens, 0.92).
* Large Size: Data points are more concentrated, with a positive trend. Approximate values: (1000 tokens, 0.80), (5000 tokens, 0.88).
**MC2024**
* Small Size Trend (slope: 3.25e-06)
* Large Size Trend (slope: 8.46e-06)
* Small Size: Data points are scattered. Approximate values: (1000 tokens, 0.15), (5000 tokens, 0.25).
* Large Size: Data points are also scattered, with a slightly steeper positive trend. Approximate values: (1000 tokens, 0.10), (5000 tokens, 0.20).
### Key Observations
* The slopes of the trend lines vary significantly between datasets.
* The Large Size model generally exhibits a steeper positive trend than the Small Size model in OMNI-MATH500, MATH500, and MC2024.
* AIME2024 shows a very shallow positive trend for the Large Size model.
* The spread of data points varies between datasets, indicating different levels of consistency in the relationship between accuracy and response length.
### Interpretation
The plots demonstrate the relationship between model size, response length, and accuracy across different mathematical datasets. The positive slopes of the trend lines suggest that, in general, increasing the response length leads to higher accuracy. However, the magnitude of this effect varies significantly depending on the dataset and model size.
The differences in slopes and data point spread suggest that the datasets have different characteristics. Some datasets (e.g., MATH500) show a strong, consistent relationship between response length and accuracy, while others (e.g., AIME2024) exhibit a weaker or more variable relationship.
The fact that the Large Size model often has a steeper slope indicates that it benefits more from increased response length than the Small Size model. This could be because the Large Size model has more capacity to utilize the additional information provided in longer responses.
The truncation at 60 likely refers to a maximum response length considered during evaluation or training. This truncation could influence the observed trends, particularly for datasets where longer responses are more beneficial. The slopes provided are very small, indicating a relatively gradual increase in accuracy with increasing response length.
</details>
Figure 8: Model Performance vs Response Length of Different Model Sizes
<details>
<summary>x10.png Details</summary>

### Visual Description
## Line Chart: Accuracy vs. Iteration for Baseline and Curriculum Learning
### Overview
This image presents a line chart comparing the accuracy of two learning methods, Baseline (Uniform Sampling) and Curriculum Learning, over a series of iterations. The chart visually demonstrates how accuracy changes as the number of iterations increases, with a notable transition point for the Curriculum Learning method.
### Components/Axes
* **X-axis:** Iteration (ranging from approximately 0 to 45).
* **Y-axis:** Accuracy (ranging from approximately 0.30 to 0.65).
* **Data Series 1:** Baseline (Uniform Sampling) â represented by a blue line.
* **Data Series 2:** Curriculum Learning â represented by an orange line.
* **Legend:** Located in the bottom-right corner, clearly labeling each data series with its corresponding color.
* **Title/Annotation:** Located at the top-left corner, describing the two methods: "Baseline: Uniform sampling of mixed easy/hard problems." and "Curriculum: Uniform problems first, then hard problems (transition at bar 24)".
* **Vertical dashed line:** Located at approximately iteration 24, labeled "Curriculum Transition".
### Detailed Analysis
**Baseline (Uniform Sampling) - Blue Line:**
The blue line starts at approximately 0.31 at iteration 0. It exhibits a relatively slow and steady upward slope until approximately iteration 30, where the slope increases slightly. At iteration 45, the accuracy reaches approximately 0.55.
**Curriculum Learning - Orange Line:**
The orange line begins at approximately 0.30 at iteration 0. It shows a consistent upward trend, steeper than the baseline, until approximately iteration 24. At iteration 24, coinciding with the "Curriculum Transition" marker, the slope increases significantly. At iteration 45, the accuracy reaches approximately 0.61.
**Data Points (Approximate):**
| Iteration | Baseline Accuracy | Curriculum Learning Accuracy |
|---|---|---|
| 0 | 0.31 | 0.30 |
| 5 | 0.33 | 0.37 |
| 10 | 0.36 | 0.42 |
| 15 | 0.40 | 0.46 |
| 20 | 0.43 | 0.49 |
| 24 | 0.45 | 0.52 |
| 30 | 0.49 | 0.56 |
| 35 | 0.52 | 0.59 |
| 40 | 0.54 | 0.60 |
| 45 | 0.55 | 0.61 |
### Key Observations
* Curriculum Learning consistently outperforms Baseline (Uniform Sampling) throughout the iterations.
* The "Curriculum Transition" at iteration 24 marks a significant acceleration in the accuracy improvement for Curriculum Learning.
* The Baseline accuracy increases more slowly and remains lower than Curriculum Learning.
* Both methods show a general trend of increasing accuracy with more iterations.
### Interpretation
The data suggests that Curriculum Learning is a more effective approach than Uniform Sampling for this particular task. The initial phase of Curriculum Learning, focusing on easier problems, likely allows the model to establish a strong foundation before tackling more complex challenges. The transition to harder problems at iteration 24 appears to be a critical point, triggering a substantial increase in learning rate. The Baseline method, by randomly sampling both easy and hard problems, may struggle to achieve the same level of performance due to the constant mixing of difficulty levels. The consistent upward trend for both methods indicates that continued iteration generally leads to improved accuracy, but the magnitude of improvement is significantly higher with Curriculum Learning. The vertical dashed line at iteration 24 is a key indicator of the effectiveness of the curriculum approach, showing a clear inflection point in the learning curve.
</details>
Figure 9: Analysis of curriculum learning approaches on model performance.
<details>
<summary>x11.png Details</summary>

### Visual Description
## Line Charts: Accuracy vs. Step for Various Datasets
### Overview
The image presents a grid of 12 line charts, each depicting the relationship between "Accuracy" and "Step" for different datasets. Two lines are plotted on each chart, representing two methods: "ReST" and "Ours". The charts are arranged in a 3x4 grid.
### Components/Axes
* **X-axis:** "Step", ranging from 0 to 50, with markers at intervals of 10.
* **Y-axis:** "Accuracy", with varying scales depending on the dataset.
* **Datasets (Chart Titles):**
* OMNI-MATH500
* MATH500
* AIM0204
* AIME2024
* ChatGPTMath
* GAOKAO_bmk
* GPQA
* k12-biology
* k12-chemistry
* k12-physics
* KAIYAN
* Total
* **Lines:**
* "ReST" (Blue line with circle markers)
* "Ours" (Orange line with circle markers)
* **Legend:** Located in the top-left corner of each chart, indicating the line colors and corresponding methods.
### Detailed Analysis or Content Details
Here's a breakdown of each chart, noting trends and approximate data points. Accuracy values are approximate due to the resolution of the image.
1. **OMNI-MATH500:** The "Ours" line starts at approximately 0.42 and generally fluctuates between 0.42 and 0.46, with a slight upward trend. The "ReST" line starts at approximately 0.38 and shows a more pronounced upward trend, reaching around 0.44 by step 50.
2. **MATH500:** "Ours" starts at around 0.86 and remains relatively stable, fluctuating between 0.84 and 0.88. "ReST" starts at around 0.80 and shows a slight upward trend, reaching approximately 0.83 by step 50.
3. **AIM0204:** "Ours" starts at approximately 0.23 and fluctuates significantly, ranging from 0.10 to 0.25. "ReST" starts at around 0.12 and also fluctuates, with a similar range.
4. **AIME2024:** "Ours" starts at approximately 0.32 and fluctuates between 0.25 and 0.35. "ReST" starts at around 0.20 and shows a similar fluctuating pattern.
5. **ChatGPTMath:** "Ours" starts at approximately 0.72 and fluctuates between 0.68 and 0.76. "ReST" starts at around 0.70 and shows a similar fluctuating pattern.
6. **GAOKAO_bmk:** "Ours" starts at approximately 0.84 and remains relatively stable, fluctuating between 0.82 and 0.86. "ReST" starts at around 0.80 and shows a slight upward trend, reaching approximately 0.83 by step 50.
7. **GPQA:** "Ours" starts at approximately 0.20 and fluctuates between 0.15 and 0.23. "ReST" starts at around 0.14 and shows a similar fluctuating pattern.
8. **k12-biology:** "Ours" starts at approximately 0.72 and fluctuates between 0.68 and 0.76. "ReST" starts at around 0.70 and shows a similar fluctuating pattern.
9. **k12-chemistry:** "Ours" starts at approximately 0.72 and fluctuates between 0.68 and 0.76. "ReST" starts at around 0.70 and shows a similar fluctuating pattern.
10. **k12-physics:** "Ours" starts at approximately 0.76 and fluctuates between 0.72 and 0.80. "ReST" starts at around 0.72 and shows a similar fluctuating pattern.
11. **KAIYAN:** "Ours" starts at approximately 0.18 and fluctuates between 0.15 and 0.22. "ReST" starts at around 0.15 and shows a similar fluctuating pattern.
12. **Total:** "Ours" starts at approximately 0.68 and fluctuates between 0.64 and 0.72. "ReST" starts at around 0.66 and shows a similar fluctuating pattern.
### Key Observations
* The "Ours" method generally exhibits more stable performance across most datasets, with less pronounced fluctuations compared to "ReST".
* "ReST" often shows a slight upward trend in accuracy as the "Step" increases, particularly in OMNI-MATH500 and MATH500.
* The AIM0204, AIME2024, GPQA, and KAIYAN datasets show significant fluctuations for both methods, indicating a more challenging learning process.
* The k12 datasets (biology, chemistry, physics) show relatively stable performance for both methods.
### Interpretation
The charts compare the performance of two methods, "ReST" and "Ours", across a diverse set of datasets. The "Accuracy" metric indicates how well each method performs as the "Step" (likely representing training iterations or steps) increases.
The consistent stability of "Ours" suggests it may be less sensitive to the specific dataset or training process, providing a more reliable baseline performance. The upward trend observed in "ReST" for some datasets indicates that it may benefit from continued training, potentially surpassing "Ours" with more steps.
The high variability in datasets like AIM0204 and GPQA suggests these datasets are more complex or noisy, making it harder for either method to achieve consistent accuracy. The relatively stable performance on the k12 datasets suggests these datasets are more well-defined and easier to learn from.
The overall comparison suggests that the choice between "ReST" and "Ours" depends on the specific application and the characteristics of the dataset. If stability is paramount, "Ours" may be preferred. If there is potential for improvement with continued training, "ReST" may be a better choice.
</details>
Figure 10: Comparison with using ReST for policy optimization.
4 Conclusions
We present the training recipe and system design of k1.5, our latest multi-modal LLM trained with RL. One of the key insights we extract from our practice is that the scaling of context length is crucial to the continued improvement of LLMs. We employ optimized learning algorithms and infrastructure optimization such as partial rollouts to achieve efficient long-context RL training. How to further improve the efficiency and scalability of long-context RL training remains an important question moving forward.
Another contribution we made is a combination of techniques that enable improved policy optimization. Specifically, we formulate long-CoT RL with LLMs and derive a variant of online mirror descent for robust optimization. We also experiment with sampling strategies, length penalty, and optimizing the data recipe to achieve strong RL performance.
We show that strong performance can be achieved by long context scaling and improved policy optimization, even without using more complex techniques such as Monte Carlo tree search, value functions, and process reward models. In the future, it will also be intriguing to study improving credit assignments and reducing overthinking without hurting the model's exploration abilities.
We have also observed the potential of long2short methods. These methods substantially improve the performance of short-CoT models. Moreover, it is possible to combine long2short methods with long-CoT RL in an iterative way to further increase token efficiency and extract the best performance out of a given context length budget.
References
- [1] Yasin Abbasi-Yadkori et al. "Politex: Regret bounds for policy iteration using expert prediction" In International Conference on Machine Learning, 2019, pp. 3692–3702 PMLR
- [2] Arash Ahmadian et al. "Back to basics: Revisiting reinforce style optimization for learning from human feedback in llms" In arXiv preprint arXiv:2402.14740, 2024
- [3] Zachary Ankner et al. "Critique-out-Loud Reward Models", 2024 arXiv: https://arxiv.org/abs/2408.11791
- [4] Christopher Berner et al. "Dota 2 with large scale deep reinforcement learning" In arXiv preprint arXiv:1912.06680, 2019
- [5] Federico Cassano et al. "MultiPL-E: A Scalable and Extensible Approach to Benchmarking Neural Code Generation" In ArXiv, 2022 URL: https://arxiv.org/abs/2208.08227
- [6] Federico Cassano et al. "MultiPL-E: A Scalable and Polyglot Approach to Benchmarking Neural Code Generation" In IEEE Transactions on Software Engineering 49.7, 2023, pp. 3675–3691 DOI: 10.1109/TSE.2023.3267446
- [7] Jianlv Chen et al. "Bge m3-embedding: Multi-lingual, multi-functionality, multi-granularity text embeddings through self-knowledge distillation" In arXiv preprint arXiv:2402.03216, 2024
- [8] Xingyu Chen et al. "Do NOT Think That Much for 2+3=? On the Overthinking of o1-Like LLMs" In arXiv preprint arXiv:2412.21187, 2024
- [9] Tom Everitt et al. "Reward Tampering Problems and Solutions in Reinforcement Learning: A Causal Influence Diagram Perspective", 2021 arXiv: https://arxiv.org/abs/1908.04734
- [10] Samir Yitzhak Gadre et al. "Datacomp: In search of the next generation of multimodal datasets" In Advances in Neural Information Processing Systems 36, 2024
- [11] Aaron Grattafiori et al. "The Llama 3 Herd of Models", 2024 arXiv: https://arxiv.org/abs/2407.21783
- [12] Caglar Gulcehre et al. "Reinforced self-training (rest) for language modeling" In arXiv preprint arXiv:2308.08998, 2023
- [13] Dan Hendrycks et al. "Measuring Massive Multitask Language Understanding" In ArXiv abs/2009.03300, 2020 URL: https://arxiv.org/abs/2009.03300
- [14] Jordan Hoffmann et al. "Training Compute-Optimal Large Language Models", 2022 arXiv: https://arxiv.org/abs/2203.15556
- [15] Yuzhen Huang et al. "C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models" In ArXiv abs/2305.08322, 2023 URL: https://arxiv.org/abs/2305.08322
- [16] Aaron Jaech et al. "OpenAI o1 system card" In arXiv preprint arXiv:2412.16720, 2024
- [17] Naman Jain et al. "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" In ArXiv abs/2403.07974, 2024 URL: https://arxiv.org/abs/2403.07974
- [18] Armand Joulin et al. "Bag of tricks for efficient text classification" In arXiv preprint arXiv:1607.01759, 2016
- [19] Jared Kaplan et al. "Scaling Laws for Neural Language Models", 2020 arXiv: https://arxiv.org/abs/2001.08361
- [20] Wouter Kool, Herke Hoof and Max Welling "Buy 4 reinforce samples, get a baseline for free!", 2019
- [21] Woosuk Kwon et al. "Efficient Memory Management for Large Language Model Serving with PagedAttention" In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023
- [22] Hugo Laurençon et al. "Obelics: An open web-scale filtered dataset of interleaved image-text documents" In Advances in Neural Information Processing Systems 36, 2024
- [23] Jeffrey Li et al. "Datacomp-lm: In search of the next generation of training sets for language models" In arXiv preprint arXiv:2406.11794, 2024
- [24] Ming Li et al. "From quantity to quality: Boosting llm performance with self-guided data selection for instruction tuning" In arXiv preprint arXiv:2308.12032, 2023
- [25] Raymond Li et al. "StarCoder: may the source be with you!", 2023 arXiv: https://arxiv.org/abs/2305.06161
- [26] Hunter Lightman et al. "Let's Verify Step by Step" In arXiv preprint arXiv:2305.20050, 2023
- [27] Wei Liu et al. "What makes good data for alignment? a comprehensive study of automatic data selection in instruction tuning" In arXiv preprint arXiv:2312.15685, 2023
- [28] Anton Lozhkov et al. "StarCoder 2 and The Stack v2: The Next Generation", 2024 arXiv: https://arxiv.org/abs/2402.19173
- [29] Pan Lu et al. "Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts" In arXiv preprint arXiv:2310.02255, 2023
- [30] Nat McAleese et al. "LLM Critics Help Catch LLM Bugs", 2024 arXiv: https://arxiv.org/abs/2407.00215
- [31] Jincheng Mei et al. "On principled entropy exploration in policy optimization" In Proceedings of the 28th International Joint Conference on Artificial Intelligence, 2019, pp. 3130–3136
- [32] Niklas Muennighoff et al. "Scaling Data-Constrained Language Models", 2023 arXiv: https://arxiv.org/abs/2305.16264
- [33] Ofir Nachum et al. "Bridging the gap between value and policy based reinforcement learning" In Advances in Neural Information Processing Systems 30, 2017
- [34] OpenAI "Learning to reason with LLMs", 2024 URL: https://openai.com/index/learning-to-reason-with-llms/
- [35] Long Ouyang et al. "Training language models to follow instructions with human feedback" In Advances in Neural Information Processing Systems 35, 2022, pp. 27730–27744
- [36] Alexander Pan, Kush Bhatia and Jacob Steinhardt "The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models" In International Conference on Learning Representations, 2022 URL: https://openreview.net/forum?id=JYtwGwIL7ye
- [37] Keiran Paster et al. "Openwebmath: An open dataset of high-quality mathematical web text" In arXiv preprint arXiv:2310.06786, 2023
- [38] Guilherme Penedo et al. "The fineweb datasets: Decanting the web for the finest text data at scale" In arXiv preprint arXiv:2406.17557, 2024
- [39] Ruoyu Qin et al. "Mooncake: A KVCache-centric Disaggregated Architecture for LLM Serving", 2024 arXiv: https://arxiv.org/abs/2407.00079
- [40] Rafael Rafailov et al. "Direct preference optimization: Your language model is secretly a reward model" In Advances in Neural Information Processing Systems 36, 2024
- [41] Christoph Schuhmann et al. "Laion-5b: An open large-scale dataset for training next generation image-text models" In Advances in Neural Information Processing Systems 35, 2022, pp. 25278–25294
- [42] Mohammad Shoeybi et al. "Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism", 2020 arXiv: https://arxiv.org/abs/1909.08053
- [43] David Silver et al. "Mastering the game of go without human knowledge" In Nature 550.7676 Nature Publishing Group, 2017, pp. 354–359
- [44] Charlie Snell et al. "Scaling llm test-time compute optimally can be more effective than scaling model parameters" In arXiv preprint arXiv:2408.03314, 2024
- [45] Dan Su et al. "Nemotron-CC: Transforming Common Crawl into a Refined Long-Horizon Pretraining Dataset" In arXiv preprint arXiv:2412.02595, 2024
- [46] Jianlin Su et al. "Roformer: Enhanced transformer with rotary position embedding" In Neurocomputing 568 Elsevier, 2024, pp. 127063
- [47] Gemini Team et al. "Gemini: A Family of Highly Capable Multimodal Models", 2024 arXiv: https://arxiv.org/abs/2312.11805
- [48] Manan Tomar et al. "Mirror descent policy optimization" In arXiv preprint arXiv:2005.09814, 2020
- [49] Ashish Vaswani et al. "Attention is All you Need" In Advances in Neural Information Processing Systems 30 Curran Associates, Inc., 2017 URL: https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
- [50] Pablo Villalobos et al. "Will we run out of data? Limits of LLM scaling based on human-generated data", 2024 arXiv: https://arxiv.org/abs/2211.04325
- [51] Oriol Vinyals et al. "Grandmaster level in StarCraft II using multi-agent reinforcement learning" In Nature 575.7782 Nature Publishing Group, 2019, pp. 350–354
- [52] Ke Wang et al. "Measuring multimodal mathematical reasoning with math-vision dataset" In arXiv preprint arXiv:2402.14804, 2024
- [53] Haoran Wei et al. "General OCR Theory: Towards OCR-2.0 via a Unified End-to-end Model" In arXiv preprint arXiv:2409.01704, 2024
- [54] Jason Wei et al. "Chain-of-thought prompting elicits reasoning in large language models" In Advances in Neural Information Processing Systems 35, 2022, pp. 24824–24837
- [55] Yangzhen Wu et al. "Inference scaling laws: An empirical analysis of compute-optimal inference for problem-solving with language models" In arXiv preprint arXiv:2408.00724, 2024
- [56] Liang Xu et al. "CLUE: A Chinese Language Understanding Evaluation Benchmark" In International Conference on Computational Linguistics, 2020 URL: https://arxiv.org/abs/2004.05986
- [57] Enneng Yang et al. "Model merging in llms, mllms, and beyond: Methods, theories, applications and opportunities" In arXiv preprint arXiv:2408.07666, 2024
- [58] Shunyu Yao et al. "Tree of thoughts: Deliberate problem solving with large language models" In Advances in Neural Information Processing Systems 36, 2024
- [59] Xiang Yue et al. "Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 9556–9567
- [60] Xiang Yue et al. "Mammoth: Building math generalist models through hybrid instruction tuning" In arXiv preprint arXiv:2309.05653, 2023
- [61] Lunjun Zhang et al. "Generative verifiers: Reward modeling as next-token prediction", 2024 URL: https://arxiv.org/abs/2408.15240
- [62] Lianmin Zheng et al. "SGLang: Efficient Execution of Structured Language Model Programs", 2024 arXiv: https://arxiv.org/abs/2312.07104
- [63] Jeffrey Zhou et al. "Instruction-Following Evaluation for Large Language Models" In ArXiv abs/2311.07911, 2023 URL: https://arxiv.org/abs/2311.07911
- [64] Wanrong Zhu et al. "Multimodal c4: An open, billion-scale corpus of images interleaved with text" In Advances in Neural Information Processing Systems 36, 2024
Appendix
Appendix A Contributions
Research & Development
Angang Du Bofei Gao Bowei Xing Changjiu Jiang Cheng Chen Cheng Li Chenjun Xiao Chenzhuang Du Chonghua Liao* Congcong Wang Dehao Zhang Enming Yuan Enzhe Lu Flood Sung Guokun Lai Haiqing Guo Han Zhu Hao Ding Hao Hu Hao Yang Hao Zhang Haotian Yao Haotian Zhao Haoyu Lu Hongcheng Gao Huan Yuan Huabin Zheng Jingyuan Liu Jianlin Su Jianzhou Wang Jin Zhang Junjie Yan Lidong Shi Longhui Yu Mengnan Dong Neo Zhang Ningchen Ma* Qiwei Pan Qucheng Gong Shaowei Liu Shupeng Wei Sihan Cao Tao Jiang Weimin Xiong Weiran He Weihao Gao* Weixiao Huang Weixin Xu Wenhao Wu Wenyang He Xianqing Jia Xingzhe Wu Xinran Xu Xinyu Zhou Xinxing Zu Xuehai Pan Yang Li Yangyang Hu Yangyang Liu Yanru Chen Yejie Wang Yidao Qin Yibo Liu Yiping Bao Yifeng Liu* Yulun Du Yuzhi Wang Yuxin Wu Y. Charles Zaida Zhou Zhaoji Wang Zhaowei Li Zheng Zhang Zhexu Wang Zhiqi Huang Zhilin Yang Zihao Huang Ziyao Xu Zonghan Yang Zongyu Lin
Data Annotation
Chuning Tang Fengxiang Tang Guangda Wei Haoze Li Haozhen Yu Jia Chen Jianhang Guo Jie Zhao Junyan Wu Ling Ye Shengling Ma Siying Huang Xianghui Wei Yangyang Liu Ying Yang Zhen Zhu
The listing of authors is in alphabetical order based on their first names. Names marked with an asterisk (*) indicate people who are no longer part of our team.
Appendix B Pretraining
Reinforcement learning (RL) efficiency is closely tied to the performance of the underlying base model. Frontier models such as Gemini [47] and Llama [11] highlight the importance of pretraining data quality in achieving high performance. However, many recent open-source models lack full transparency regarding their data processing pipelines and recipes, creating challenges for broader community understanding. While we are not open-sourcing our proprietary model at this time, we are committed to providing a comprehensive disclosure of our data pipeline and methodologies. In this section, we focus primarily on the multimodal pretraining data recipe, followed by a brief discussion of the model architecture and training stages.
B.1 Language Data
Our pretraining corpus is designed to provide comprehensive, high-quality data for training large language models (LLMs). It spans five domains: English, Chinese, Code, Mathematics & Reasoning, and Knowledge. We employ sophisticated filtering and quality-control mechanisms for each domain, and every data source was individually validated to assess its specific contribution to the overall training recipe. This systematic evaluation ensures the quality and effectiveness of our diverse data composition.
English and Chinese textual data
We developed a multi-dimensional quality filtering framework that combines multiple scoring methods to reduce the bias of any single method and ensure comprehensive quality assessment. The framework incorporates:
1. Rule-based filtering: We implement domain-specific heuristics to remove problematic content, including duplicate content, machine-translated text, and low-quality web scrapes. We also filter out documents with excessive special characters, unusual formatting, or spam patterns.
1. FastText-based classification: We trained specialized FastText [18, 23] models to identify content quality based on linguistic features and semantic coherence. This helps identify documents with natural language flow and proper grammatical structure.
1. Embedding-based similarity analysis: Using document embeddings [7], we compute document-level similarity scores to identify and remove near-duplicates while preserving semantically valuable variations. This approach helps maintain diversity in our training corpus.
1. LLM-based quality assessment: Following [38], we leverage LLMs to score documents based on coherence, informativeness, and potential educational value. This method is particularly effective at identifying nuanced quality indicators that simpler methods might miss.
The final quality score for each document is calculated as a combination of these individual scores. Based on extensive empirical analysis, we implement dynamic sampling rates, where high-quality documents are upsampled, while low-quality documents are downsampled during training.
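The report does not disclose how the individual scores are combined or where the sampling cutoffs sit; the sketch below is a rough illustration of the idea, with all weights and thresholds being assumptions rather than the actual production values.

```python
from dataclasses import dataclass

@dataclass
class DocScores:
    rule_based: float         # 0-1: passes heuristic filters
    fasttext: float           # 0-1: classifier quality probability
    embedding_novelty: float  # 0-1: one minus max similarity to near-duplicates
    llm_quality: float        # 0-1: LLM-judged coherence / educational value

# Illustrative weights; the actual combination used for k1.5 is not disclosed.
WEIGHTS = {"rule_based": 0.2, "fasttext": 0.2, "embedding_novelty": 0.2, "llm_quality": 0.4}

def combined_score(s: DocScores) -> float:
    """Weighted combination of the individual quality scores."""
    return sum(WEIGHTS[name] * getattr(s, name) for name in WEIGHTS)

def sampling_rate(score: float) -> float:
    """Map a quality score to a dynamic sampling rate: upsample high-quality
    documents, downsample or drop low-quality ones (cutoffs are illustrative)."""
    if score >= 0.8:
        return 2.0   # high quality: seen roughly twice as often
    if score >= 0.5:
        return 1.0   # average quality: default rate
    if score >= 0.3:
        return 0.3   # low quality: downsampled
    return 0.0       # dropped from the corpus
```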
Code data
The code data primarily consists of two categories. For the pure code data derived from code files, we adhered to the methodology of BigCode [25, 28] and conducted comprehensive preprocessing of the dataset. We first eliminated miscellaneous languages and applied a rule-based cleaning procedure to improve data quality, then addressed language imbalance through strategic sampling: markup languages such as JSON, YAML, and YACC were down-sampled, while 32 major programming languages, including Python, C, C++, Java, and Go, were up-sampled to ensure balanced representation. For the text-code interleaved data gathered from various sources, we use an embedding-based method to recall high-quality data, which preserves both the diversity and the quality of the data.
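As a rough illustration of the language re-balancing step, the sketch below assigns per-language sampling multipliers; the language lists and multipliers shown are assumptions for illustration, not the recipe's actual values.

```python
# Illustrative sampling multipliers for re-balancing code languages.
# The actual lists and weights used for k1.5 are not disclosed.
DOWNSAMPLE = {"json": 0.1, "yaml": 0.1, "yacc": 0.1}          # markup / generated languages
UPSAMPLE = {"python": 1.5, "c": 1.2, "c++": 1.2, "java": 1.2, "go": 1.2}  # major languages

def code_sampling_weight(language: str) -> float:
    """Return the relative sampling weight for one source file's language."""
    lang = language.lower()
    if lang in DOWNSAMPLE:
        return DOWNSAMPLE[lang]
    return UPSAMPLE.get(lang, 1.0)

print(code_sampling_weight("JSON"))    # 0.1 -> markup is down-sampled
print(code_sampling_weight("Python"))  # 1.5 -> major language is up-sampled
```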
Math & Reasoning data
The mathematics and reasoning component of our dataset is crucial for developing strong analytical and problem-solving capabilities. The mathematical pre-training data are mainly retrieved from web text and PDF documents collected from publicly available internet sources [37]. We initially found that our general-domain text extraction, data cleaning process, and OCR models exhibited high false negative rates in the mathematical domain. We therefore first developed specialized data cleaning procedures and OCR models for mathematical content, aiming to maximize the recall of mathematical data. Subsequently, we implemented a two-stage data cleaning process (a minimal sketch follows the list below):
1. Using FastText model for initial cleaning to remove most irrelevant data.
1. Utilizing a fine-tuned language model to further clean the remaining data, resulting in high-quality mathematical data.
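The following is a minimal sketch of this two-stage cascade, assuming a trained fastText quality classifier (`math_quality.bin`) and a hypothetical `llm_math_quality` scorer backed by the fine-tuned language model; both names are placeholders, not the actual artifacts.

```python
import fasttext  # stage-1 classifier; pip install fasttext

# Hypothetical fastText model trained to separate mathematical content from noise.
ft_model = fasttext.load_model("math_quality.bin")

def stage1_keep(text: str, threshold: float = 0.5) -> bool:
    """Cheap fastText pass that removes most irrelevant documents."""
    labels, probs = ft_model.predict(text.replace("\n", " "))
    return labels[0] == "__label__math" and probs[0] >= threshold

def llm_math_quality(text: str) -> float:
    """Placeholder for the fine-tuned language model that scores mathematical
    quality in [0, 1]; the actual model and prompt are not disclosed."""
    raise NotImplementedError

def stage2_keep(text: str, threshold: float = 0.7) -> bool:
    """Expensive LLM pass applied only to documents that survive stage 1."""
    return llm_math_quality(text) >= threshold

def clean(corpus):
    """Yield only documents that pass both cleaning stages."""
    for doc in corpus:
        if stage1_keep(doc) and stage2_keep(doc):
            yield doc
```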
Knowledge data
The knowledge corpus is meticulously curated to ensure comprehensive coverage of academic disciplines. It primarily consists of academic exercises, textbooks, research papers, and other general educational literature. A significant portion of these materials is digitized through OCR processing, for which we have developed proprietary models optimized for academic content, particularly for handling mathematical formulas and special symbols.
We employ internal language models to annotate documents with multi-dimensional labels, including:
1. OCR quality metrics to assess recognition accuracy
1. Educational value indicators measuring pedagogical relevance
1. Document type classification (e.g., exercises, theoretical materials)
Based on these multi-dimensional annotations, we implement a sophisticated filtering and sampling pipeline. First and foremost, documents are filtered through OCR quality thresholds. Our OCR quality assessment framework places special attention on detecting and filtering out common OCR artifacts, particularly repetitive text patterns that often indicate recognition failures.
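One simple check in this spirit flags documents dominated by a single repeated n-gram, a common signature of OCR failure; the n-gram length and threshold below are illustrative, not the thresholds actually used.

```python
from collections import Counter

def repeated_ngram_ratio(text: str, n: int = 5) -> float:
    """Fraction of all n-grams accounted for by the single most frequent n-gram."""
    tokens = text.split()
    if len(tokens) < n:
        return 0.0
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    most_common_count = Counter(ngrams).most_common(1)[0][1]
    return most_common_count / len(ngrams)

def passes_ocr_quality(text: str, max_repeat_ratio: float = 0.2) -> bool:
    """Reject documents whose OCR output likely collapsed into repeated patterns."""
    return repeated_ngram_ratio(text) <= max_repeat_ratio
```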
Beyond basic quality control, we carefully evaluate the educational value of each document through our scoring system. Documents with high pedagogical relevance and knowledge depth are prioritized, while maintaining a balance between theoretical depth and instructional clarity. This helps ensure that our training corpus contains high-quality educational content that can effectively contribute to the model's knowledge acquisition.
Finally, to optimize the overall composition of our training corpus, the sampling strategy for different document types is empirically determined through extensive experimentation. We conduct isolated evaluations to identify document subsets that contribute most significantly to the model's knowledge acquisition capabilities. These high-value subsets are upsampled in the final training corpus. However, to maintain data diversity and ensure model generalization, we carefully preserve a balanced representation of other document types at appropriate ratios. This data-driven approach helps us optimize the trade-off between focused knowledge acquisition and broad generalization capabilities.
B.2 Multimodal Data
Our multi-modal pretraining corpus is designed to provide high-quality data that enables models to process and understand information from multiple modalities, including text, images, and videos. To this end, we have curated high-quality data from five categories: captioning, interleaving, OCR (Optical Character Recognition), knowledge, and general question answering.
When constructing our training corpus, we developed several multi-modal data processing pipelines to ensure data quality, encompassing filtering, synthesis, and deduplication. Establishing an effective multi-modal data strategy is crucial during the joint training of vision and language, as it both preserves the capabilities of the language model and facilitates alignment of knowledge across diverse modalities.
We provide a detailed description of these sources in this section, which is organized into the following categories:
Caption data
Our caption data provides the model with fundamental modality alignment and a broad range of world knowledge. By incorporating caption data, the multi-modal LLM gains wider world knowledge with high learning efficiency. We have integrated various open-source Chinese and English caption datasets like [41, 10] and also collected substantial in-house caption data from multiple sources. However, throughout the training process, we strictly limit the proportion of synthetic caption data to mitigate the risk of hallucination stemming from insufficient real-world knowledge.
For general caption data, we follow a rigorous quality-control pipeline that avoids duplication and maintains high image-text correlation. We also vary image resolution during pretraining so that the vision tower remains effective on both high- and low-resolution images.
Image-text interleaving data During the pretraining phase, the model benefits from interleaving data in several ways: it boosts multi-image comprehension, provides detailed knowledge about the given images, and strengthens longer multi-modal in-context learning. We also find that interleaving data contributes positively to maintaining the model's language abilities. Image-text interleaving data is therefore an important part of our training corpus. Our multi-modal corpus includes open-source interleaved datasets such as [64, 22], and we also constructed large-scale in-house data from resources such as textbooks, webpages, and tutorials. We further find that synthesizing interleaving data benefits the multi-modal LLM by preserving textual knowledge. To ensure that the knowledge associated with each image is sufficiently learned, all interleaving data pass through a data reordering procedure, in addition to the standard filtering, deduplication, and other quality-control pipelines, so that images and text are kept in the correct order.
OCR data Optical Character Recognition (OCR) is a widely adopted technique that converts text from images into an editable format. In k1.5, a robust OCR capability is deemed essential for better aligning the model with human values. Accordingly, our OCR data sources are diverse, ranging from open-source to in-house datasets, and encompassing both clean and augmented images.
In addition to the publicly available data, we have developed a substantial volume of in-house OCR datasets, covering multilingual text, dense text layouts, web-based content, and handwritten samples. Furthermore, following the principles outlined in OCR 2.0 [53], our model is also equipped to handle a variety of optical image types, including figures, tables, geometry diagrams, mermaid plots, and natural scene text. We apply extensive data augmentation techniques, such as rotation, distortion, color adjustments, and noise addition, to enhance the model's robustness. As a result, our model achieves a high level of proficiency in OCR tasks.
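A minimal sketch of this kind of augmentation, using Pillow and NumPy, is shown below; the specific parameter ranges are illustrative assumptions rather than the production settings.

```python
import random
import numpy as np
from PIL import Image, ImageEnhance

def augment_ocr_image(img: Image.Image) -> Image.Image:
    """Apply light rotation, color/contrast jitter, and additive noise
    to an RGB OCR training image (parameter ranges are illustrative)."""
    # Small random rotation; expand keeps the full rotated text visible.
    img = img.rotate(random.uniform(-5, 5), expand=True, fillcolor="white")
    # Contrast and color jitter.
    img = ImageEnhance.Contrast(img).enhance(random.uniform(0.8, 1.2))
    img = ImageEnhance.Color(img).enhance(random.uniform(0.8, 1.2))
    # Additive Gaussian noise.
    arr = np.asarray(img).astype(np.float32)
    arr += np.random.normal(0, 8, arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
```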
Knowledge data The concept of multi-modal knowledge data is analogous to the previously mentioned text pretraining data, except that here we focus on assembling a comprehensive repository of human knowledge from diverse sources to further enhance the model's capabilities. For example, carefully curated geometry data in our dataset is vital for developing visual reasoning skills, ensuring the model can interpret the abstract diagrams created by humans.
Our knowledge corpus adheres to a standardized taxonomy to balance content across various categories, ensuring diversity in data sources. Similar to text-only corpora, which gather knowledge from textbooks, research papers, and other academic materials, the multi-modal knowledge data employs both a layout parser and an OCR model to process content from these sources. We also include filtered data from internet-based and other external resources.
Because a significant portion of our knowledge corpus is sourced from internet-based materials, infographics can cause the model to focus solely on OCR-based information. In such cases, relying exclusively on a basic OCR pipeline may limit training effectiveness. To address this, we have developed an additional pipeline that better captures the purely textual information embedded within images.
General QA Data During the training process, we observed that incorporating a substantial volume of high-quality QA datasets into pretraining offers significant benefits. Specifically, we included rigorous academic datasets addressing tasks such as grounding, table/chart question answering, web agents, and general QA. In addition, we compiled a large amount of in-house QA data to further enhance the model's capabilities. To maintain balanced difficulty and diversity, we applied scoring models and meticulous manual categorization to our general question answering dataset, resulting in overall performance improvements.
B.3 Model Architecture
Kimi k-series models employ a variant of the Transformer decoder [49] that integrates multimodal capabilities alongside improvements in architecture and optimization strategies, illustrated in Figure 11. These advancements collectively support stable large-scale training and efficient inference, tailored specifically to large-scale reinforcement learning and the operational requirements of Kimi users.
Extensive scaling experiments indicate that most of the base model performance comes from improvements in the quality and diversity of the pretraining data. Specific details regarding model architecture scaling experiments lie beyond the scope of this report and will be addressed in future publications.
<details>
<summary>x12.png Details</summary>

### Visual Description
## Diagram: Transformer Model Input/Output Flow
### Overview
The image is a diagram illustrating the input and output flow of a Transformer model. It depicts two input types â Text Sequences and Interleave Image-text Sequences â feeding into a central "Transformer" block, which then outputs to "Large Scale Reinforcement Learning". The diagram uses icons to represent the input and output types and arrows to indicate the flow of information.
### Components/Axes
The diagram consists of the following components:
* **Text Sequences:** Represented by an icon of stacked lines, labeled "Text Sequences".
* **Interleave Image-text Sequences:** Represented by an icon depicting a mountain range with text, labeled "Interleave Image-text Sequences".
* **Transformer:** A large, gray rectangular block labeled "Transformer". This is the central processing unit.
* **Large Scale Reinforcement Learning:** Represented by an icon of a lightbulb with a head silhouette, labeled "Large Scale Reinforcement Learning".
* **Arrows:** Curved arrows indicate the flow of information. One set of arrows connects the two input types to the Transformer, and another set connects the Transformer to the Reinforcement Learning output.
### Detailed Analysis or Content Details
The diagram shows a two-pronged input into the Transformer model.
* The first input is "Text Sequences".
* The second input is "Interleave Image-text Sequences".
Both inputs converge on the "Transformer" block. The output of the Transformer is then directed to "Large Scale Reinforcement Learning". The arrows indicate a unidirectional flow of information from inputs to the Transformer and then from the Transformer to the output.
### Key Observations
The diagram highlights the Transformer's ability to process both text-only and combined image-text data. The use of Reinforcement Learning as the output suggests the Transformer is being used to train or optimize a reinforcement learning agent. The diagram does not provide any quantitative data or specific details about the Transformer's architecture or training process.
### Interpretation
This diagram illustrates a common architecture in modern AI, particularly in the field of multimodal learning. The Transformer model is positioned as a central component capable of handling diverse input types (text and image-text combinations). The output to Large Scale Reinforcement Learning suggests the model is being used to learn complex behaviors or policies through trial and error. The diagram emphasizes the Transformer's role as a versatile feature extractor that can be integrated into larger AI systems. The interleaving of image and text suggests the model is designed to understand relationships between visual and textual information, which is crucial for tasks like image captioning, visual question answering, and robotics.
</details>
Figure 11: Kimi k1.5 supports interleaved images and text as input, leveraging large-scale reinforcement learning to enhance the model's reasoning capabilities.
B.4 Training Stages
The Kimi k1.5 model is trained in three stages: the vision-language pretraining stage, the vision-language cooldown stage, and the long-context activation stage. Each stage of the Kimi k1.5 model's training focuses on a particular capability enhancement.
Vision-language pretraining stage
In this stage, the model is first trained solely on language data, establishing a robust language model foundation. It is then gradually introduced to interleaved vision-language data, acquiring multimodal capabilities. The vision tower is initially trained in isolation without updating the language model parameters; we then unfreeze the language model layers and ultimately increase the proportion of vision-text data to 30%. The final data mixtures and their respective weights were determined through ablation studies conducted on smaller models.
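In PyTorch terms, the staged freezing and unfreezing could look roughly as follows; the submodule names `vision_tower` and `language_model` are placeholders for this sketch rather than the actual model attributes.

```python
import torch.nn as nn

def set_trainable(module: nn.Module, flag: bool) -> None:
    """Freeze or unfreeze all parameters of a submodule."""
    for p in module.parameters():
        p.requires_grad = flag

def configure_stage(model: nn.Module, joint: bool) -> None:
    """joint=False: vision tower trained in isolation, language model frozen.
    joint=True: both parts trainable for joint vision-language training."""
    set_trainable(model.vision_tower, True)     # placeholder attribute name
    set_trainable(model.language_model, joint)  # placeholder attribute name
    # Separately, the data loader raises the vision-text share of each batch
    # toward roughly 30% as the joint stage progresses.
```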
Vision-language cooldown stage
The second stage serves as a cooldown phase, where the model is further trained on high-quality language and vision-language datasets to ensure superior performance. Through empirical investigation, we observed that incorporating synthetic data during the cooldown phase yields significant performance improvements, particularly in mathematical reasoning, knowledge-based tasks, and code generation. The English and Chinese components of the cooldown dataset are curated from high-fidelity subsets of the pre-training corpus. For the math, knowledge, and code domains, we employ a hybrid approach: utilizing selected pre-training subsets while augmenting them with synthetically generated content. Specifically, we leverage existing mathematical, knowledge, and code corpora as source material to generate question-answer pairs through a proprietary language model, implementing rejection sampling techniques to maintain quality standards [60, 45]. These synthesized QA pairs undergo comprehensive validation before being integrated into the cooldown dataset.
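A rough sketch of the rejection-sampling loop for QA synthesis is given below; `generate_qa` and `verify_answer` stand in for the proprietary generator and validation steps and are not the actual implementation.

```python
def generate_qa(source_text: str) -> tuple[str, str]:
    """Placeholder call to the generator LM: returns a (question, answer) pair
    derived from a source passage."""
    raise NotImplementedError

def verify_answer(question: str, answer: str) -> bool:
    """Placeholder checker, e.g. re-solving the question independently and
    comparing answers, or running an automated validator."""
    raise NotImplementedError

def synthesize(source_text: str, attempts: int = 8):
    """Rejection sampling: only QA pairs that pass verification are kept."""
    for _ in range(attempts):
        question, answer = generate_qa(source_text)
        if verify_answer(question, answer):
            yield question, answer
```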
Long-context activation stage
Finally, in the third stage, k1.5 is trained with upsampled long-context cooldown data, enabling it to process extended sequences and support tasks that demand longer context. To ensure strong long-text capabilities in the base model, we upsampled long-context data and used 40% full-attention data and 60% partial-attention data during long-context training. The full-attention data came partly from high-quality natural data and partly from synthetic long-context Q&A and summarization data; the partial-attention data came from uniform sampling of the cooldown data. The RoPE frequency [46] was set to 1,000,000. During this stage, we gradually increased the maximum sequence length from 4,096 to 32,768, and ultimately to 131,072.
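Expressed as a configuration, the schedule described above would look roughly like the following; the field names are illustrative and only the numeric values come from the text.

```python
# Illustrative configuration mirroring the long-context activation stage above.
LONG_CONTEXT_STAGE = {
    "rope_theta": 1_000_000,                  # RoPE base frequency
    "data_mixture": {
        "full_attention": 0.40,               # natural long data + synthetic long QA / summaries
        "partial_attention": 0.60,            # uniform sample of cooldown data
    },
    "max_seq_len_schedule": [4_096, 32_768, 131_072],  # progressive length extension
}
```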
Appendix C Evaluation Details
C.1 Text Benchmark
MMLU [13] covers 57 subjects in STEM, the humanities, social sciences, and more. It ranges in difficulty from an elementary level to an advanced professional level, and it tests both world knowledge and problem-solving ability.
IF-Eval [63] is a benchmark for evaluating large language models' ability to follow verifiable instructions. It contains 500+ prompts with instructions such as "write an article with more than 800 words". Due to a version shift, the IF-Eval number reported in Table 3 was derived from an intermediate model; we will update the scores based on the final model.
CLUEWSC [56] is a coreference resolution task in the CLUE benchmark, requiring models to determine whether a pronoun and a noun phrase in a sentence co-refer, with data drawn from Chinese fiction books.
C-EVAL [15] is a comprehensive Chinese evaluation suite for assessing advanced knowledge and reasoning abilities of foundation models. It includes 13,948 multiple-choice questions across 52 disciplines and four difficulty levels.
C.2 Reasoning Benchmark
HumanEval-Mul is a subset of MultiPL-E [5]. MultiPL-E extends the HumanEval and MBPP benchmarks to 18 languages that encompass a range of programming paradigms and popularity. We choose HumanEval translations in 8 mainstream programming languages (Python, Java, C++, C#, JavaScript, TypeScript, PHP, and Bash).
LiveCodeBench [17] serves as a comprehensive and contamination-free benchmark for assessing large language models (LLMs) in coding tasks. It features live updates to prevent data contamination, holistic evaluation across multiple coding scenarios, high-quality problems and tests, and balanced problem difficulty. We test the short-CoT model with questions from 2408-2411 (release v4), and the long-CoT model with questions from 2412-2502 (release v5).
AIME 2024 comprises the competition questions from the 2024 AIME. The AIME is a prestigious, invitation-only math contest for top high school students; it assesses advanced math skills and requires a solid mathematical foundation and strong logical reasoning.
MATH-500 [26] is a comprehensive mathematics benchmark that contains 500 problems on various mathematics topics including algebra, calculus, probability, and more. It tests both computational ability and mathematical reasoning; higher scores indicate stronger mathematical problem-solving capabilities.
Codeforces is a well-known online judge platform and serves as a popular testbed for evaluating long-CoT coding models. To achieve higher rankings in the Div2 and Div3 competitions, we apply majority voting to the code snippets generated by the k1.5 long-CoT model, using test cases that are also generated by the same model. The percentile of the Codeforces Elo rating was extracted from OpenAI's Day-12 talk: https://www.youtube.com/watch?v=SKBG1sqdyIU&ab_channel=OpenAI.
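A minimal sketch of majority voting over candidate programs with model-generated tests follows; the `run_program` helper, which would execute a candidate in a sandbox on one test input, is a placeholder for whatever execution harness is used.

```python
from collections import Counter

def run_program(code: str, test_input: str) -> str:
    """Placeholder: execute one candidate program in a sandbox on one test input
    and return its stdout."""
    raise NotImplementedError

def majority_vote(candidates: list[str], test_inputs: list[str]) -> str:
    """Group candidate programs by their outputs on the generated tests and
    return one program from the largest agreeing group."""
    signatures = {c: tuple(run_program(c, t) for t in test_inputs) for c in candidates}
    best_signature, _ = Counter(signatures.values()).most_common(1)[0]
    return next(c for c, s in signatures.items() if s == best_signature)
```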
C.3 Image Benchmark
MMMU [59] encompasses a carefully curated collection of 11.5K multimodal questions sourced from college exams, quizzes, and textbooks. These questions span six major academic fields: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering.
MATH-Vision (MATH-V) [52] is a carefully curated collection of 3,040 high-quality mathematical problems with visual contexts that are sourced from real math competitions. It covers 16 distinct mathematical disciplines and is graded across 5 levels of difficulty. This dataset offers a comprehensive and diverse set of challenges, making it ideal for evaluating the mathematical reasoning abilities of LMMs.
MathVista [29] is a benchmark that integrates challenges from a variety of mathematical and visual tasks, requiring models to exhibit fine-grained, deep visual understanding along with compositional reasoning to complete the tasks successfully.