# Self-Verifying Reflection Helps Transformers with CoT Reasoning
Abstract
Advanced large language models (LLMs) frequently reflect within their reasoning chains of thought (CoTs), self-verifying the correctness of current solutions and exploring alternatives. However, given recent findings that LLMs detect only limited errors in CoTs, how reflection contributes to empirical improvements remains unclear. To analyze this issue, we present a minimalistic reasoning framework that supports basic self-verifying reflection for small transformers without natural language, ensuring analytic clarity and reducing the cost of comprehensive experiments. Theoretically, we prove that self-verifying reflection guarantees improvements if verification errors are properly bounded. Experimentally, we show that tiny transformers, with only a few million parameters, benefit from self-verification in both training and reflective execution, reaching remarkable LLM-level performance on integer multiplication and Sudoku. Mirroring LLM results, we find that reinforcement learning (RL) improves in-distribution performance and incentivizes frequent reflection in tiny transformers, yet RL mainly optimizes shallow statistical patterns without faithfully reducing verification errors. In conclusion, integrating generative transformers with discriminative verification inherently facilitates CoT reasoning, regardless of scale and natural language.
1 Introduction
Numerous studies have explored the ability of large language models (LLMs) to reason through a chain of thought (CoT), an intermediate sequence leading to the final answer. While simple prompts can elicit CoT reasoning [13], subsequent works have further enhanced CoT quality through reflective thinking [10] and the use of verifiers [4]. Recently, reinforcement learning (RL) [33] has achieved notable success in advanced reasoning models, such as OpenAI-o1 [20] and DeepSeek-R1 [5], which show frequent reflective behaviors that self-verify the correctness of current solutions and explore alternatives, integrating generative processes with discriminative inference. However, researchers also report that the ability of these LLMs to detect errors is rather limited, and a large portion of reflection fails to yield correct solutions [11]. Given this weak verification ability, the experimental benefits of reflection and the emergence of high reflection frequency in RL require further explanation.
To address this challenge, we analyze two main questions in this paper: 1) what role self-verifying reflection plays in the training and execution of reasoning models, and 2) how reflective reasoning evolves under RL with verifiable outcome rewards [15]. However, the complexity of natural language and the prohibitive training cost of LLMs make it difficult to draw clear conclusions from theoretical abstraction and comprehensive experiments across settings. Inspired by Zeyuan et al. [2], we observe that task-specific reasoning and self-verifying reflection do not necessitate complex language. This allows us to investigate reflective reasoning through tiny transformer models [36], which provide efficient tools to understand self-verifying reflection through extensive experiments.
To enable tiny transformers to produce long reflective CoTs and ensure analytic simplicity, we introduce a minimalistic reasoning framework, which supports essential reasoning behaviors that are operable without natural language. In our study, the model self-verifies the correctness of each thought step; then, it may resample incorrect steps or trace back to previous steps. Based on this framework, we theoretically prove that self-verifying reflection improves reasoning accuracy if verification errors are properly bounded, which does not necessitate a strong verifier. Additionally, a trace-back mechanism that allows revisiting previous solutions conditionally improves performance if the problem requires a sufficiently large number of steps.
Our experiments evaluate 1M-, 4M-, and 16M-parameter transformers on integer multiplication [7] and Sudoku puzzles [3], which have simple definitions (and are thus operable by transformers without language) yet remain challenging even for LLM solvers. To maintain relevance to broader LLM research, the tiny transformers are trained from scratch through a pipeline similar to that used for LLM reasoners. Our main findings are as follows: 1) Learning to self-verify greatly facilitates the learning of forward reasoning. 2) Reflection improves reasoning accuracy as long as truly correct steps are not excessively verified as incorrect. 3) Resembling the results of DeepSeek-R1 [5], RL can incentivize reflection if the reasoner can effectively explore potential solutions. 4) However, RL fine-tuning increases performance mainly statistically, with limited improvements in generalizable problem-solving skills.
Overall, this paper contributes to the fundamental understanding of reflection in reasoning models by clarifying its effectiveness and synergy with RL. Our findings based on minimal reasoners imply a general benefit of reflection for more advanced models, which operate on a super-set of our simplified reasoning behaviors. In addition, our implementation also provides insights into the development of computationally efficient reasoning models.
2 Related works
CoT reasoning
Pretrained LLMs acquire the ability to produce CoTs from simple prompts [13, 38], which can be explained via the local dependencies [25] and probabilistic distribution [35] of natural-language reasoning. Many recent studies develop models targeted at reasoning, e.g., scaling test-time inference with external verifiers [4, 17, 18, 32] and distilling large general models into smaller specialized ones [34, 9]. In this paper, we train tiny transformers from scratch not only to generate CoTs but also to self-verify, i.e., detect errors in their own thoughts without external models.
RL fine-tuning for CoT reasoning
RL [33] has recently emerged as a key method for CoT reasoning [31, 40]. It optimizes the transformer model by favoring CoTs that yield high cumulative rewards, with PPO [29] and its variant GRPO [31] as two representative approaches. Central to RL fine-tuning are the reward models that guide policy optimization: 1) outcome reward models (ORMs), which assess final answers, and 2) process reward models (PRMs) [17], which evaluate intermediate reasoning steps. Recent advances in RL with verifiable rewards (RLVR) [5, 41] demonstrate that a simple ORM based solely on answer correctness can induce sophisticated reasoning behaviors.
Reflection in LLM reasoning
LLM reflection provides feedback on generated solutions [19] and may refine them accordingly [10]. Research shows that supervised learning from verbal reflection improves performance, even when the reflective feedback is omitted during execution [42]. Compared to generative verbal reflection, self-verification uses discriminative labels to indicate the correctness of reasoning steps, which supports reflective execution and is operable without linguistic knowledge. Recently, RL has been widely used to develop strong reflective abilities [14, 27, 20]. In particular, DeepSeek-R1 [5] shows that RLVR elicits frequent reflection, and this result has been reproduced in smaller LLMs [24]. In this paper, we further investigate how reflection evolves during RLVR by examining the change in verification errors.
Understanding LLMs through small transformers
Small transformers are helpful tools for understanding LLMs, owing to their architectural consistency with LLMs and the low development cost of massive experiments. For example, transformers smaller than 1B parameters provide insights into how data mixture and data diversity influence LLM training [39, 2]. They also contribute to the foundational understanding of CoT reasoning, such as length generalization [12], internalization of thoughts [6], and how CoTs inherently extend problem-solving ability [8, 16]. In this paper, we further use tiny transformers to better understand reflection in CoT reasoning.
3 Reflective reasoning for transformers
In this section, we develop transformers to perform simple reflective reasoning in long CoTs. Focusing on analytic clarity and broader implications, the design of our framework follows a minimalistic principle, providing only essential reasoning behaviors operable without linguistic knowledge. We leave more advanced reasoning frameworks optimized for small-scale models to future work. In the following, we first introduce the basic formulation of CoT reasoning; based on this formulation, we then introduce our simple reasoning framework for self-verifying reflection; afterwards, we describe how transformers are trained to reason within this framework.
3.1 Reasoning formulation
Figure 1: The illustration of MTP, where the transformer model $\pi$ reasons the answer $A$ of a query $Q$ through $T-1$ intermediate steps.
(a) Multiplication
(b) Sudoku
Figure 2: Example reasoning steps for multiplication and Sudoku, where the core planning is presented in the reasoning step ${R}_{t+1}$ .
CoT Reasoning as a Markov decision process
A general form of CoT reasoning is given as a tuple $({Q},\{{R}\},{A})$ , where ${Q}$ is the input query, $\{{R}\}=({R}_{1},...,{R}_{T-1})$ is the sequence of $T-1$ intermediate steps, and ${A}$ is the final answer. Following Wang [37], we formulate CoT reasoning as a Markov thought process (MTP). As shown in Figure 1, an MTP satisfies [37]:
$$
\displaystyle{R}_{t+1}\sim\pi(\cdot\mid{S}_{t}),\ {S}_{t+1}=\mathcal{T}({S}_{t},{R}_{t+1}), \tag{1}
$$
where ${S}_{t}$ is the $t$ -th reasoning state, $\pi$ is the planning policy (the transformer model), and $\mathcal{T}$ is the (usually deterministic) transition function. The initial state ${S}_{0}:=Q$ is given by the input query. In each reasoning step ${R}_{t+1}$ , the policy $\pi$ plans the next reasoning action that determines the state transition, which is then executed by $\mathcal{T}$ to obtain the next state. The process terminates when the step presents the answer, i.e., $A={R}_{T}$ . For clarity, a table of notations is presented in Appendix A.
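This update rule amounts to a plain rollout loop. The following is a minimal Python sketch of Eq. (1); the integer-countdown instance at the bottom is a toy of our own, purely to demonstrate the interface.

```python
def rollout_mtp(pi, transition, is_answer, query, max_steps=64):
    """Roll out an MTP (Eq. 1): S_0 := Q, then repeatedly sample
    R_{t+1} ~ pi(. | S_t) and apply S_{t+1} = T(S_t, R_{t+1})."""
    state = query                        # S_0 := Q
    for _ in range(max_steps):
        step = pi(state)                 # the policy plans the next step
        if is_answer(step):              # terminate when the step is the answer A
            return step
        state = transition(state, step)  # deterministic transition T
    return None                          # step budget exhausted

# Toy instance (hypothetical): the state is an integer, each step
# subtracts 1, and the answer is emitted once the state reaches 0.
toy_pi = lambda s: ("answer", s) if s == 0 else ("dec", 1)
toy_transition = lambda s, r: s - r[1]
toy_is_answer = lambda r: r[0] == "answer"
```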
An MTP is implemented by specifying the state representations and transition function $\mathcal{T}$ . Since we use tiny transformers that are weak in inferring long contexts, we suggest reducing the length of state representations, so that each state ${S}_{t}$ carries only necessary information for subsequent reasoning. Here, we present two examples to better illustrate how MTPs are designed for tiny transformers.
**Example 1 (An MTP for integer multiplication)**
*As shown in Figure 2(a), to reason the product of two integers $x,y\geq 0$, each state is an expression ${S}_{t}:=[x_{t}\times y_{t}+z_{t}]$ mathematically equal to $x\times y$, initialized as ${S}_{0}=[x\times y+0]$. On each step, $\pi$ plans $y_{t+1}$ by eliminating a non-zero digit of $y_{t}$ to $0$, and it then computes $z_{t+1}=z_{t}+x_{t}(y_{t}-y_{t+1})$. Consequently, $\mathcal{T}$ updates ${S}_{t+1}$ as $[x_{t+1}\times y_{t+1}+z_{t+1}]$ with $x_{t+1}=x_{t}$. Symmetrically, $\pi$ may also eliminate non-zero digits in $x_{t}$. Finally, $\pi$ yields $A=z_{t}$ as the answer once either $x_{t}$ or $y_{t}$ becomes $0$.*
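This MTP can be sketched in Python as follows, under the simplifying assumption that the policy always eliminates the lowest non-zero digit of $y_t$ (an arbitrary choice for illustration; the model may pick any digit). Note the invariant $x_t\times y_t+z_t=x\times y$ at every step.

```python
def multiply_step(x, y, z):
    """One planning step of the multiplication MTP: zero out the lowest
    non-zero digit of y and fold its contribution into the accumulator z,
    so that x*y + z stays invariant."""
    assert y > 0
    place = 1
    while (y // place) % 10 == 0:        # locate the lowest non-zero digit of y
        place *= 10
    digit = (y // place) % 10
    y_next = y - digit * place           # eliminate that digit to 0
    z_next = z + x * digit * place       # z_{t+1} = z_t + x_t * (y_t - y_{t+1})
    return x, y_next, z_next

def multiply_mtp(x, y):
    """Full rollout: the answer A = z_t once y_t reaches 0."""
    z = 0
    while y > 0:
        x, y, z = multiply_step(x, y, z)
    return z
```

For instance, starting from $[145\times 340+0]$, the first step yields $[145\times 300+5800]$, matching the example in Figure 2(a).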
**Example 2 (An MTP for Sudoku [3])**
*As shown in Figure 2(b), each Sudoku state is a $9\times 9$ game board. On each step, the model $\pi$ fills some blank cells to produce a new board, which is exactly the next state. The answer $A$ is a board with no blank cells.*
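For this MTP, the correctness of a single fill can be checked by Sudoku's rules alone. A minimal sketch of such a rule check (the `valid_fill` name and the 0-for-blank convention are our own, for illustration):

```python
def valid_fill(board, row, col, value):
    """Check whether writing `value` at (row, col) keeps a 9x9 Sudoku board
    consistent: the cell must be blank (0), and the value must not already
    appear in the same row, column, or 3x3 subgrid."""
    if board[row][col] != 0:
        return False
    for i in range(9):                   # row and column constraints
        if board[row][i] == value or board[i][col] == value:
            return False
    br, bc = 3 * (row // 3), 3 * (col // 3)
    for r in range(br, br + 3):          # 3x3 subgrid constraint
        for c in range(bc, bc + 3):
            if board[r][c] == value:
                return False
    return True
```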
3.2 The framework of self-verifying reflection
(a) Reflective MTP
(b) Reflective trace-back search (width $m=2$ )
Figure 3: Reflective reasoning based on MTP. “$\checkmark$” and “$\times$” are self-verification labels for positive and negative steps, respectively. Steps that are instantly verified as negative are highlighted in red. In RTBS, the dashed-line arrows back-propagate the negative labels, causing parental steps to be recursively rejected (orange). Green shows the steps that successfully lead to the answer.
Conceptually, reflection provides feedback for the proposed steps and may alter the subsequent reasoning accordingly. Reflection takes flexible forms in natural language (e.g., justifications and comprehensive evaluations), making it extremely costly to analyze. In this work, we propose to equip transformers with the simplest discriminative form of reflection, where the model self-verifies the correctness of each step and is allowed to retry those incorrect attempts. We currently do not consider the high-level revisory behavior that maps incorrect steps to correct ones, as we find learning such a mapping is challenging for tiny models and leads to no significant gain in practice. Specifically, we analyze two basic variants of reflective reasoning in this paper: the reflective MTP and the reflective trace-back search, as described below (see pseudo-code in Appendix D.1).
Reflective MTP (RMTP)
Given any MTP with a policy $\pi$ and transition $\mathcal{T}$ , we use a verifier $\mathcal{V}$ to produce a verification sequence after each reasoning step, denoted as ${V}_{t}\sim\mathcal{V}(\cdot\mid{R}_{t})$ . Such ${V}_{t}$ includes verification label(s): the positive “$\checkmark$” and the negative “$\times$”, signifying correct and incorrect reasoning of ${R}_{t}$ , respectively. Given the verified step ${\tilde{R}}_{t+1}:=({R}_{t+1},{V}_{t+1})$ that contains verification, we define $\tilde{\mathcal{T}}$ as the reflective transition function that rejects incorrect steps:
$$
{S}_{t+1}=\tilde{\mathcal{T}}({S}_{t},{\tilde{R}}_{t+1})=\tilde{\mathcal{T}}({S}_{t},({R}_{t+1},{V}_{t+1})):=\begin{cases}{S}_{t},&\text{``$\times$''}\in{V}_{t+1};\\
\mathcal{T}({S}_{t},{R}_{t+1}),&\text{otherwise.}\end{cases} \tag{2}
$$
In other words, if $\mathcal{V}$ detects any error (i.e., “$\times$”) in ${R}_{t+1}$ , the state remains unchanged so that $\pi$ may re-sample another attempt. Focusing on self-verification, we use a single model, the self-verifying policy $\tilde{\pi}:=\{\pi,\mathcal{V}\}$ , to serve simultaneously as the planning policy $\pi$ and the verifier $\mathcal{V}$ . Operating on tokens, $\tilde{\pi}$ outputs the verified step ${\tilde{R}}_{t}$ for each input state ${S}_{t}$ . In this way, $\tilde{\mathcal{T}}$ and $\tilde{\pi}$ constitute a new MTP called the RMTP, illustrated in Figure 3(a).
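The reflective transition of Eq. (2) reduces to a small branch. A minimal Python sketch, with the transition function left abstract and the string `"×"` standing in for the negative label (our convention, not the paper's tokenization):

```python
def reflective_transition(transition, state, verified_step, neg="×"):
    """Eq. (2): if the verification sequence V_{t+1} contains the negative
    label, keep S_{t+1} = S_t so the policy may resample; otherwise apply T."""
    step, labels = verified_step          # R~_{t+1} = (R_{t+1}, V_{t+1})
    if neg in labels:
        return state                      # rejected: state unchanged
    return transition(state, step)        # accepted: S_{t+1} = T(S_t, R_{t+1})
```

With a toy additive transition, a step labeled “$\times$” leaves the state untouched, while a step labeled only “$\checkmark$” advances it.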
Reflective trace-back search (RTBS)
Though RMTP allows instant rejection of incorrect steps, the quality of a step can sometimes be better determined by actually trying it. For example, a Sudoku solver occasionally makes tentative guesses and traces back if the subsequent reasoning fails. Inspired by o1-journey [26], a trace-back search that allows the reasoner to revisit previous states may be applied to explore solution paths in an MTP. We implement a simple RTBS by simulating depth-first search in the trajectory space. Let $m$ denote the RTBS width, i.e., the maximal number of attempts on each step. As illustrated in Figure 3(b), if $m$ proposed steps are rejected at a state ${S}_{t}$ , the negative label “$\times$” is propagated back to recursively reject the previous step ${R}_{t}$ . As a result, the state traces back to the closest ancestral state that has remaining attempt opportunities.
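The trace-back recursion can be sketched as a bounded depth-first search. In the following Python sketch, a failed subtree counts as one rejected attempt at the parent, so exhausting all $m$ attempts propagates the rejection upward; the toy subtraction instance at the bottom is hypothetical and only exercises the resampling and trace-back mechanics.

```python
def rtbs(pi, transition, verify, is_answer, query, m=2, max_depth=32):
    """Reflective trace-back search: at most m verified attempts per state;
    when all m attempts fail, the negative label propagates to the parent."""
    def search(state, depth):
        if depth > max_depth:
            return None
        for _ in range(m):                     # up to m attempts on this state
            step = pi(state)
            if not verify(state, step):        # instant "×": resample
                continue
            if is_answer(step):
                return step
            found = search(transition(state, step), depth + 1)
            if found is not None:
                return found
            # subtree failed: treat this step as rejected, retry or trace back
        return None
    return search(query, 0)

# Toy instance (hypothetical): subtract 1 or 2 per step until reaching 0;
# the deterministic counter makes the policy alternate its proposals.
_counts = {}
def toy_pi(state):
    if state == 0:
        return ("answer", 0)
    k = _counts.get(state, 0)
    _counts[state] = k + 1
    return ("sub", 2) if k % 2 == 0 else ("sub", 1)

def toy_verify(state, step):                   # reject steps going below 0
    return step[0] == "answer" or state - step[1] >= 0
```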
3.3 Training
Figure 4: The training workflow for transformers to perform CoT reasoning.
As shown in Figure 4, we train the tiny transformers from scratch using the same techniques as their LLM counterparts: pretraining, supervised fine-tuning (SFT), and RL fine-tuning. First, we use conventional pipelines to train a baseline model $\pi$ with only the planning ability in MTPs. During (I) pretraining, the CoT examples are treated as a textual corpus, from which sequences are randomly drawn to minimize the cross-entropy loss of next-token prediction. Then, in (II) non-reflective SFT, the model learns to map each state ${S}_{t}$ to the corresponding step ${R}_{t+1}$ by imitating examples.
Next, we employ (III) reflective SFT to integrate the planning policy $\pi$ with the knowledge of self-verification. To produce ground-truth verification labels, we use $\pi$ to sample non-reflective CoTs, in which the sampled steps are then labeled by an expert verifier (e.g., a rule-based process reward model). Reflective SFT learns to predict these labels from the states and the proposed steps, i.e., $({S}_{t},{R}_{t+1})\to{V}_{t+1}$. To prevent catastrophic forgetting, we also mix in the same CoT examples as in non-reflective SFT. This converts $\pi$ into a policy $\tilde{\pi}$ that can self-verify its reasoning steps.
Thus far, we have obtained the planning policy $\pi$ and the self-verifying policy $\tilde{\pi}$, which can be further strengthened through (IV) RL fine-tuning. As illustrated in Figure 4, RL fine-tuning iteratively executes $\pi$ ($\tilde{\pi}$) to collect experience CoTs through an MTP (RMTP), evaluates these CoTs with a reward model, and updates the policy to favor higher-reward solutions. Following the RLVR paradigm [15], we use binary outcome rewards (i.e., $1$ for correct answers and $0$ otherwise) computed by a rule-based answer checker $\operatorname{ORM}(Q,A)$. When training the self-verifying policy $\tilde{\pi}$, the RMTP treats the verification ${V}_{t}$ as part of the augmented step ${\tilde{R}}_{t}$, simulating R1-like training [5] where reflection and solution planning are jointly optimized. We mainly use GRPO [31] as the algorithm to optimize policies. Details of RL fine-tuning are elaborated in Appendix B.
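As a concrete illustration, the binary outcome reward and a GRPO-style group-relative advantage can be sketched in a few lines. This is a minimal sketch, not the released implementation; `check_mult_answer` and `grpo_advantages` are illustrative names.

```python
def check_mult_answer(x: int, y: int, answer: int) -> float:
    """Rule-based outcome reward ORM(Q, A) for Mult: 1.0 iff the product is correct."""
    return 1.0 if answer == x * y else 0.0

def grpo_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """GRPO-style advantages: normalize rewards within one query's sampled group."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    return [(r - mean) / (var ** 0.5 + eps) for r in rewards]

# Example: four sampled CoTs for the query 12 * 34, ending in these answers.
answers = [408, 420, 408, 400]
rewards = [check_mult_answer(12, 34, a) for a in answers]
advs = grpo_advantages(rewards)  # correct answers get positive advantages
```

Correct completions in a group receive positive advantages and incorrect ones negative, so the policy update favors higher-reward solutions without a learned value model.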
4 Theoretical results
This section establishes theoretical conditions under which self-verifying reflection (RMTP or RTBS in Section 3.2) enhances reasoning accuracy (the probability of deriving correct answers). The general relationship between verification ability and reasoning accuracy (discussed in Appendix C.1) is intractable for an arbitrary MTP, as its states and transitions can be arbitrarily specified. Therefore, to derive interpretable insights, we discuss a simplified prototype of reasoning that epitomizes the representative principle of CoTs: incrementally expressing complex relations by chaining a local relation in each step [25]. Specifically, given query $Q$ as the initial state, we view a CoT as a step-by-step process that reduces the complexity within states:
- We define $\mathcal{S}_{n}$ as the set of states with a complexity scale of $n$ . For simplicity, we assume that each step, if not rejected by reflection, reduces the complexity scale by $1$ . Therefore, the scale $n$ is the number of effective steps required to derive an answer.
- An answer $A$ is a state with a scale of $0$, i.e., $A\in\mathcal{S}_{0}$. Given an input query $Q$, the answers $\mathcal{S}_{0}$ are divided into positive (correct) answers $\mathcal{S}_{0}^{+}$ and negative (wrong) answers $\mathcal{S}_{0}^{-}$.
- States $\mathcal{S}_{n}$ ($n>0$) are divided into 1) positive states $\mathcal{S}_{n}^{+}$ that potentially lead to correct answers and 2) negative states $\mathcal{S}_{n}^{-}$ that lead only to incorrect answers through forward transitions.
Consider a self-verifying policy $\tilde{\pi}=\{\pi,\mathcal{V}\}$ to solve this simplified task. We describe its fundamental abilities using the following probabilities (whose meanings will be explained afterwards):
$$
\begin{aligned}
\mu &:= p_{{R}\sim\pi}\left(\mathcal{T}({S},{R})\in\mathcal{S}^{+}_{n-1}\mid{S}\in\mathcal{S}_{n}^{+}\right), \\
e_{+} &:= p_{{R},{V}\sim\tilde{\pi}}\left(\mathcal{T}({S},{R})\in\mathcal{S}^{-}_{n-1},\ \text{``$\times$''}\notin{V}\mid{S}\in\mathcal{S}_{n}^{+}\right), \\
e_{-} &:= p_{{R},{V}\sim\tilde{\pi}}\left(\mathcal{T}({S},{R})\in\mathcal{S}^{+}_{n-1},\ \text{``$\times$''}\in{V}\mid{S}\in\mathcal{S}_{n}^{+}\right), \\
f &:= p_{{R},{V}\sim\tilde{\pi}}\left(\text{``$\times$''}\in{V}\mid{S}\in\mathcal{S}_{n}^{-}\right). \tag{3}
\end{aligned}
$$
To elaborate, $\mu$ measures the planning ability, defined as the probability that $\pi$ plans a step leading to a positive next state, given that the current state is positive. For verification abilities, we measure the rates of two types of errors: $e_{+}$ (false positive rate) is the probability of accepting a step that leads to a negative state, and $e_{-}$ (false negative rate) is the probability of rejecting a step that leads to a positive state. Additionally, $f$ is the probability of rejecting any step on negative states, which provides the chance of tracing back to previous states. Given these factors, Figure 5 illustrates the state transitions in non-reflective (vanilla MTP) and reflective (RMTP and RTBS) reasoning.
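These factors can be plugged into a small Monte Carlo simulation of the transition diagram. The sketch below is illustrative (not the paper's code): it assumes a rejected step in the RMTP is simply resampled at the same state, and estimates both the non-reflective accuracy $\rho(n)=\mu^{n}$ and the corresponding RMTP accuracy.

```python
import random

def simulate_cot(n, mu, e_minus, e_plus, f, reflective, rng, max_steps=100_000):
    """One rollout of the simplified reasoning chain; True iff the answer is correct."""
    scale, positive = n, True
    for _ in range(max_steps):
        if scale == 0:
            return positive            # reached an answer (scale 0)
        if not reflective:             # vanilla MTP: every step is kept
            positive = positive and (rng.random() < mu)
            scale -= 1
        elif positive:
            r = rng.random()
            if r < mu * (1 - e_minus):                        # accepted, next state positive
                scale -= 1
            elif r < mu * (1 - e_minus) + (1 - mu) * e_plus:  # accepted, next state negative
                scale, positive = scale - 1, False
            # otherwise rejected (probability alpha): retry at the same state
        else:
            if rng.random() >= f:      # not rejected: continue down a negative path
                scale -= 1
            # otherwise rejected: retry (an RMTP cannot trace back)
    return False

def accuracy(reflective, n=4, mu=0.8, e_minus=0.1, e_plus=0.2, f=0.5,
             trials=20_000, seed=0):
    rng = random.Random(seed)
    wins = sum(simulate_cot(n, mu, e_minus, e_plus, f, reflective, rng)
               for _ in range(trials))
    return wins / trials
```

With $e_{-}+e_{+}=0.3\le 1$, the estimated RMTP accuracy exceeds the non-reflective estimate of roughly $\mu^{4}$, matching the intuition formalized in the next section.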
<details>
<summary>Figure 5(a) diagram</summary>

Transitions from scale $n$ in non-reflective reasoning: $\mathcal{S}_n^-\to\mathcal{S}_{n-1}^-$ with probability $1$; $\mathcal{S}_n^+\to\mathcal{S}_{n-1}^+$ with probability $\mu$; $\mathcal{S}_n^+\to\mathcal{S}_{n-1}^-$ with probability $1-\mu$.
</details>
(a) Non-reflective reasoning
<details>
<summary>Figure 5(b) diagram</summary>

Transitions from scale $n$ in reflective reasoning: from $\mathcal{S}_n^+$, a step is accepted toward $\mathcal{S}_{n-1}^+$ with probability $\mu(1-e_-)$, accepted toward $\mathcal{S}_{n-1}^-$ with probability $(1-\mu)e_+$, and rejected (retried) with probability $\alpha:=\mu e_-+(1-\mu)(1-e_+)$; from $\mathcal{S}_n^-$, a step is rejected with probability $f$ and otherwise leads to $\mathcal{S}_{n-1}^-$. A dashed arrow marks the trace-back move after $m$ attempts in RTBS.
</details>
(b) Reflective reasoning through an RMTP or RTBS
Figure 5: The diagram of state transitions starting from scale $n$ in the simplified reasoning, where transition probabilities are attached to solid arrows. In (b) reflective reasoning, the dashed arrow represents the trace-back move after $m$ attempts in RTBS.
For input problems with scale $n$ , we use $\rho(n)$ , $\tilde{\rho}(n)$ , and $\tilde{\rho}_{m}(n)$ to respectively denote the reasoning accuracy using no reflection, RMTP, and RTBS (with width $m$ ). Obviously, we have $\rho(n)=\mu^{n}$ . In contrast, the mathematical forms of $\tilde{\rho}(n)$ and $\tilde{\rho}_{m}(n)$ are more complicated and therefore left to Appendix C.2. Our main result provides simple conditions for the above factors $(\mu,e_{-},e_{+},f)$ to ensure an improved accuracy when reasoning through an RMTP or RTBS.
**Theorem 1**
*In the above simplified problem, consider a self-verifying policy $\tilde{\pi}$ where $\mu$ , $e_{-}$ , and $e_{+}$ are non-trivial (i.e. neither $0$ nor $1$ ). Let $\alpha:=\mu e_{-}+(1-\mu)(1-e_{+})$ denote the rejection probability on positive states. Given an infinite computation budget, for $n>0$ we have:
- $\tilde{\rho}(n)\ge\rho(n)$ if and only if $e_{-}+e_{+}\le 1$, where equalities hold simultaneously; furthermore, reducing either $e_{-}$ or $e_{+}$ strictly increases $\tilde{\rho}(n)$.
- $\tilde{\rho}_{m}(n)>\tilde{\rho}(n)$ for a sufficiently large $n$ if and only if $f>\alpha$ and $m>\frac{1}{1-\alpha}$ ; furthermore, such a gap of $\tilde{\rho}_{m}(n)$ over $\tilde{\rho}(n)$ increases strictly with $f$ .*
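The first condition admits a quick numeric sanity check. Under a simplifying assumption (a sketch, not the derivation in Appendix C.2) that with an infinite budget a rejected step is resampled until acceptance, each step from a positive state succeeds with probability $\mu(1-e_{-})/\big(\mu(1-e_{-})+(1-\mu)e_{+}\big)$, and the $e_{-}+e_{+}\lessgtr 1$ boundary follows directly:

```python
def rho_plain(n, mu):
    """Non-reflective accuracy in the simplified problem: mu ** n."""
    return mu ** n

def rho_rmtp(n, mu, e_minus, e_plus):
    """RMTP accuracy, assuming rejected steps are resampled until acceptance:
    conditioning on eventual acceptance gives the per-step success probability p."""
    p = mu * (1 - e_minus) / (mu * (1 - e_minus) + (1 - mu) * e_plus)
    return p ** n

# e-+e+ < 1 helps, e-+e+ = 1 ties (a verifier no better than chance),
# and e-+e+ > 1 hurts, as in the first claim of Theorem 1.
```

For instance, with $\mu=0.9$ and $n=8$, choosing $(e_{-},e_{+})=(0.1,0.3)$ improves over $\mu^{n}$, $(0.4,0.6)$ ties it exactly, and $(0.6,0.6)$ falls below it.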
Does reflection require a strong verifier? Theorem 1 shows that RMTP improves performance over the vanilla MTP as long as the verification errors $e_{+}$ and $e_{-}$ are properly bounded, which does not necessitate a strong verifier. In our simplified setting, this only requires the verifier $\mathcal{V}$ to be better than random guessing (which yields $e_{-}+e_{+}=1$). This also gives a trivial guarantee for RTBS, as an infinitely large width ($m\to+\infty$) essentially reduces RTBS to RMTP.
When does trace-back search facilitate reflection? Theorem 1 provides the conditions for RTBS to outperform RMTP for a sufficiently large $n$: 1) The width $m$ is large enough to ensure effective exploration. 2) $f>\alpha$ indicates that negative states are inherently discriminable from positive ones, leading to a higher rejection probability on negative states than on positive states (see Figure 5(b)). In other words, provided $f>\alpha$, RTBS is guaranteed to be more effective on complicated queries using a finite $m$. However, this also implies a risk of over-thinking on simple queries with a small $n$.
The derivation and additional details of Theorem 1 are provided in Appendix C.3. We also derive the number of steps it costs to find a correct solution in an RMTP. The following Proposition 1 (see proof in Appendix C.4) shows that a higher $e_{-}$ forces more steps to be rejected, increasing the solution cost. In contrast, although a higher $e_{+}$ reduces accuracy, it forces successful solutions to rely less on reflection, leading to fewer expected steps. Therefore, under the limited computational budgets of practice, a high false negative rate $e_{-}$ is worse than a high $e_{+}$.
**Proposition 1 (RMTP Reasoning Length)**
*For a simplified reasoning problem with scale $n$ , the expected number of steps $\bar{T}$ for $\tilde{\pi}$ to find a correct answer is $\bar{T}=\frac{n}{(1-\mu)e_{+}+\mu(1-e_{-})}$ . Especially, a correct answer will never be found if the denominator is $0$ .*
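The closed form can be checked with a quick Monte Carlo estimate. The sketch below is illustrative: it treats each proposed step as accepted independently with the per-step acceptance probability $(1-\mu)e_{+}+\mu(1-e_{-})$ and counts proposals until $n$ steps are accepted.

```python
import random

def expected_steps_mc(n, mu, e_minus, e_plus, trials=20_000, seed=0):
    """Monte Carlo estimate of the number of proposed steps until n are accepted."""
    p_accept = mu * (1 - e_minus) + (1 - mu) * e_plus
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        accepted = steps = 0
        while accepted < n:
            steps += 1
            if rng.random() < p_accept:
                accepted += 1
        total += steps
    return total / trials

n, mu, e_minus, e_plus = 6, 0.8, 0.2, 0.1
closed_form = n / ((1 - mu) * e_plus + mu * (1 - e_minus))  # Proposition 1
```

With these example values the acceptance probability is $0.66$, so the proposition predicts roughly $9.1$ expected steps, in line with the simulation.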
Appendix C.5 further extends our analysis to more realistic reasoning, where rejected attempts lead to a posterior drop of $\mu$ (or rise of $e_{-}$), indicating that the model may not generalize well to the current state. In this case, the bound on $e_{-}$ that ensures improvements becomes stricter than that in Theorem 1.
5 Experiments
We conduct comprehensive experiments to examine the reasoning performance of tiny transformers under various settings. We train simple causal-attention transformers [36] (implemented with LitGPT [1]) with 1M, 4M, and 16M parameters, through the pipelines described in Section 3.3. Details of training data, model architectures, tokenization, and hyperparameters are included in Appendix D. The source code is available at https://github.com/zwyu-ai/self-verifying-reflection-reasoning.
We test tiny transformers on two reasoning tasks: The integer multiplication task (Mult for short) computes the product of two integers $x$ and $y$; the Sudoku task fills numbers into the blank positions of a $9\times 9$ matrix, such that each row, column, and $3\times 3$ block is a permutation of $\{1,...,9\}$. For both tasks, we divide queries into 3 levels of difficulty: in-distribution (ID) Easy, ID Hard, and out-of-distribution (OOD) Hard. The models are trained on ID-Easy and ID-Hard problems, while additionally tested on OOD-Hard cases. We define the difficulty of a Mult query by the number $d$ of digits of the greater multiplicand, and that of a Sudoku puzzle by the number $b$ of blanks to be filled. Specifically, we have $1\le d\le 5$ or $9\le b<36$ for ID Easy, $6\le d\le 8$ or $36\le b<54$ for ID Hard, and $9\le d\le 10$ or $54\le b<63$ for OOD Hard.
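For concreteness, the two rule-based checks can be sketched as follows. The helper names are illustrative, not the released implementation.

```python
def sudoku_solved(grid):
    """Check that a 9x9 grid satisfies the Sudoku constraints: every row,
    column, and 3x3 block is a permutation of {1, ..., 9}."""
    digits = set(range(1, 10))
    rows = [set(row) for row in grid]
    cols = [{grid[r][c] for r in range(9)} for c in range(9)]
    blocks = [
        {grid[br + r][bc + c] for r in range(3) for c in range(3)}
        for br in (0, 3, 6) for bc in (0, 3, 6)
    ]
    return all(s == digits for s in rows + cols + blocks)

def mult_difficulty(x, y):
    """Difficulty level of a Mult query, from the digit count d of the
    greater multiplicand: ID Easy (d <= 5), ID Hard (6 <= d <= 8),
    OOD Hard (9 <= d <= 10)."""
    d = len(str(max(abs(x), abs(y))))
    if d <= 5:
        return "ID Easy"
    if d <= 8:
        return "ID Hard"
    return "OOD Hard"
```

A Sudoku answer checker of this form also serves as the rule-based $\operatorname{ORM}(Q,A)$ for outcome rewards, once it additionally confirms that the filled grid extends the query's given cells.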
Our full results are presented in Appendix E. As shown in Appendix E.1, these seemingly simple tasks pose challenges even for some well-known LLMs. Remarkably, through simple self-verifying reflection, our best 4M Sudoku model matches OpenAI o3-mini [21], and our best 16M Mult model outperforms DeepSeek-R1 [5] at ID difficulties.
5.1 Results of supervised fine-tuning
First, we conduct (I) pretraining, (II) non-reflective SFT, and (III) reflective SFT as described in Section 3.3. In reflective SFT, we consider learning two types of self-verification: 1) binary verification, a single binary label indicating the overall correctness of a planned step; and 2) detailed verification, a series of binary labels checking the correctness of each meaningful element in the step. The implementation of verification labels is elaborated in Appendix D.2.3. We present our full SFT results in Appendix E.2, covering 30 trained models and 54 tests. In the following, we discuss our main findings by visualizing representative results.
<details>
<summary>Figure 6 chart</summary>

Grouped bar chart: non-reflective accuracy (%) in Mult for 1M, 4M, and 16M models on ID-Easy, ID-Hard, and OOD-Hard problems, grouped by the type of verification learned in training (None, Binary, Detailed).
</details>
Figure 6: The accuracy of non-reflective execution of models in Mult. In each group, we compare training with various types of verification ("None" for no reflective SFT).
Does learning self-verification facilitate learning the planning policy? We compare our models under non-reflective execution, where self-verification is not actively used at test time. As shown in Figure 6, reflective SFT with binary verification brings remarkable improvements for the 1M and 4M models on ID-Easy and ID-Hard Mult problems, greatly reducing the gap among model sizes. Although detailed verification does not benefit ID problems as much as binary verification does, it significantly helps the 16M model solve OOD-Hard problems. Therefore, learning to self-verify benefits the learning of forward planning, increasing performance even when test-time reflection is disabled.
Since reflective SFT mixes the same CoT examples as used in non-reflective SFT, an explanation for this phenomenon is that learning to self-verify serves as a regularizer to the planning policy. This substantially improves the quality of hidden embeddings in transformers, which facilitates the learning of CoT examples. Binary verification is inherently a harder target to learn, which produces stronger regularizing effects than detailed verification. However, the complexity (length) of the verification should match the capacity of the model; otherwise, it could severely compromise the benefits of learning self-verification. For instance, learning binary verification and detailed verification fails to improve the 16M model and the 1M model, respectively.
<details>
<summary>Figure 7 chart</summary>

Grouped bar charts: accuracy (%) (top) and self-verification errors $e_-$/$e_+$ (%) (bottom) of 1M, 4M, and 16M models on ID-Hard Mult and Sudoku, comparing non-reflective, RMTP, and RTBS executions under binary and detailed verification.
</details>
Figure 7: Performance of reflective execution methods across different model sizes, including the accuracy (top) and the self-verification errors (bottom).
When do reflective executions improve reasoning accuracy? Figure 7 evaluates the non-reflective, RMTP, and RTBS executions of models solving ID-Hard problems. Apart from accuracy, the verification error rates (i.e., the false positive rate $e_{+}$ and false negative rate $e_{-}$ defined in Section 4) are measured using an oracle verifier. In these results, RMTP reasoning improves performance over non-reflective reasoning except for the 1M models (which fail on ID-Hard Sudoku). Smaller error rates (especially $e_{-}$) generally lead to larger improvements, whereas the high $e_{-}$ in binary verification severely compromises the performance of the 1M Mult model. Overall, reflection improves reasoning if the chance of rejecting correct steps ($e_{-}$) is sufficiently small.
In what task is the trace-back search helpful? As seen in Figure 7, though RTBS shows no advantage over RMTP in Mult, it outperforms RMTP in Sudoku, especially for the 4M model with detailed verification. This aligns with Theorem 1: the state of Sudoku (the $9\times 9$ matrix) must comply with explicitly verifiable rules, making incorrect states easy to discriminate from correct ones, whereas errors in Mult states can only be detected by recalculating all historical steps. Therefore, $f>\alpha$ is more likely to hold in Sudoku, which grants a higher chance of solving harder problems. This suggests that RTBS is more helpful than RMTP when incorrect states carry verifiable errors, which validates our theoretical results.
5.2 Results of reinforcement learning
Figure 8: Performance of the 4M and 16M models in Mult after GRPO, including accuracy and the verification error rates. As an ablation, we also include non-reflective models. The vertical arrows start from the baseline accuracy after SFT, presenting the relative change caused by GRPO.
As introduced in Section 3.3, we further apply GRPO to fine-tune the models after SFT. In particular, GRPO based on RMTP allows solution planning and verification to be jointly optimized in self-verifying policies. The full GRPO results are presented in Appendix E.3, and the main findings are summarized below. Overall, RL does enable most models to better solve ID problems, yet such improvements arise from a superficial shift in the distribution of known reasoning skills.
How does RL improve reasoning accuracy? Figure 8 presents the performance of 4M and 16M models in Mult after GRPO, where the differences from SFT results are visualized. GRPO effectively enhances accuracy in solving ID-Hard problems, yet the change in OOD performance is marginal. Therefore, RL can optimize ID performance, while failing to generalize to OOD cases.
Does RL truly enhance verification? From the change of verification errors in Figure 8, we find that the false negative rate $e_{-}$ decreases while the false positive rate $e_{+}$ increases. This suggests that models learn an optimistic bias: they avoid rejecting correct steps at the cost of accepting more incorrect ones, effectively bypassing verification. In other words, instead of truly improving the verifier (where $e_{-}$ and $e_{+}$ both decrease), RL mainly induces an error-type trade-off, shifting from false negatives ( $e_{-}$ ) to false positives ( $e_{+}$ ).
To explain this, we note that a high $e_{-}$ raises the computational cost (Proposition 1) and thus causes a significant performance loss under the limited budget of RL sampling, making reducing $e_{-}$ more rewarding than maintaining a low $e_{+}$ . Meanwhile, shifting the error type is easy to learn, achievable by adjusting only a few parameters in the output layer of the transformer.
Inspired by DeepSeek-R1 [5], we additionally examine how RL influences the frequency of reflective behavior. To simulate the natural distribution of human reasoning, we train models to perform optional detailed verification by adding examples of empty verification (in the same amount as the full verification) into reflective SFT. This allows the policy to optionally omit self-verification, usually with a higher probability than producing full verification, since empty verification is easier to learn. Consequently, we can measure the reflection frequency by counting the proportion of steps that include non-empty verification. Since models can implicitly omit binary verification by producing false positive labels, we do not explicitly examine the optional binary verification.
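The reflection-frequency measurement just described amounts to a simple count over the CoT; a minimal sketch, where the `(step, verification)` pair representation is a hypothetical stand-in for our step format:

```python
def reflection_frequency(steps):
    """Proportion of reasoning steps carrying a non-empty verification.

    Each element of `steps` is a (step, verification) pair, where
    `verification` is a possibly empty token sequence.
    """
    if not steps:
        return 0.0
    return sum(1 for _, v in steps if v) / len(steps)
```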
When does RL incentivize frequent reflection?
Figure 9 shows reflection frequency in Mult before and after GRPO, comparing exploratory ( $1.25$ ) and exploitative ( $1$ ) temperatures when sampling experience CoTs. With a temperature $1.25$ , GRPO elicits frequent reflection, especially on hard queries. However, reflection frequency remains low if using temperature $1$ . Additional results for other model sizes and Sudoku appear in Appendix E.3.3. In conclusion, RL can adapt reflection frequency to align with the exploratory ability of the planning policy $\pi$ , encouraging more reflection if the policy can potentially explore rewards. This helps explain why RL promotes frequent reflection in LLMs [5], as the flexibility of language naturally fosters exploratory reasoning.
Figure 9: The heat maps of reflection frequencies of the 4M transformer in multiplication before and after GRPO using temperatures $1$ and $1.25$ , tested with RMTP execution. The $i$ -th row and $j$ -th column shows the frequency (%) for problems $x\times y$ where $x$ has $j$ digits and $y$ has $i$ digits.
6 Conclusion and Discussion
In this paper, we provide a foundational analysis of self-verifying reflection in multi-step CoTs using small transformers. Through minimalistic prototypes of reflective reasoning (RMTP and RTBS), we demonstrate that self-verification benefits both training and execution. Compared to natural-language reasoning with LLMs, the proposed minimalistic framework performs effective reasoning and reflection using limited computational resources. We also show that RL fine-tuning can enhance performance on in-distribution problems and incentivize reflective thinking for exploratory reasoners. However, the improvements from RL rely on shallow patterns and do not yield generalizable new skills. Overall, we suggest that self-verifying reflection is inherently beneficial for CoT reasoning, yet its synergy with RL fine-tuning is limited to superficial statistics.
Limitations and future work
Although the current training pipeline enables tiny transformers to reason properly through reflective CoTs, their generalization ability remains low and is not improved by RL. Therefore, future work will extend reflection frameworks and explore novel training approaches. Given the positive effect of learning self-verification, a closer connection between generative and discriminative reasoning may be the key to addressing this challenge. Additionally, how our findings transfer from small transformers to natural-language LLMs needs further examination. However, the diversity of natural language and the high computational cost pose significant challenges to comprehensive evaluation, and our proposed framework does not sufficiently exploit the emergent linguistic ability of LLMs. To this end, we plan to investigate a more flexible self-verification framework with an efficient evaluator of natural-language reflection in future work.
Acknowledgments and Disclosure of Funding
We gratefully acknowledge Dr. Linyi Yang for providing partial computational resources.
References
- [1] Lightning AI "LitGPT", https://github.com/Lightning-AI/litgpt, 2023
- [2] Zeyuan Allen-Zhu and Yuanzhi Li "Physics of Language Models: Part 3.1, Knowledge Storage and Extraction" arXiv, 2024 DOI: 10.48550/arXiv.2309.14316
- [3] Eric C. Chi and Kenneth Lange "Techniques for Solving Sudoku Puzzles" arXiv, 2013 DOI: 10.48550/arXiv.1203.2295
- [4] Karl Cobbe et al. "Training Verifiers to Solve Math Word Problems" arXiv, 2021 arXiv: 2110.14168 [cs]
- [5] DeepSeek-AI et al. "DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning" arXiv, 2025 DOI: 10.48550/arXiv.2501.12948
- [6] Yuntian Deng, Yejin Choi and Stuart Shieber "From Explicit CoT to Implicit CoT: Learning to Internalize CoT Step by Step" arXiv, 2024 DOI: 10.48550/arXiv.2405.14838
- [7] Nouha Dziri et al. "Faith and Fate: Limits of Transformers on Compositionality" arXiv, 2023 DOI: 10.48550/arXiv.2305.18654
- [8] Guhao Feng et al. "Towards Revealing the Mystery behind Chain of Thought: A Theoretical Perspective" arXiv, 2023 DOI: 10.48550/arXiv.2305.15408
- [9] Yao Fu et al. "Specializing Smaller Language Models towards Multi-Step Reasoning" In Proceedings of the 40th International Conference on Machine Learning, PMLR, 2023, pp. 10421–10430
- [10] Alex Havrilla et al. "GLoRe: When, Where, and How to Improve LLM Reasoning via Global and Local Refinements" arXiv, 2024 DOI: 10.48550/arXiv.2402.10963
- [11] Yancheng He et al. "Can Large Language Models Detect Errors in Long Chain-of-Thought Reasoning?" arXiv, 2025 DOI: 10.48550/arXiv.2502.19361
- [12] Kaiying Hou et al. "Universal Length Generalization with Turing Programs" arXiv, 2024 DOI: 10.48550/arXiv.2407.03310
- [13] Takeshi Kojima et al. "Large Language Models Are Zero-Shot Reasoners" arXiv, 2023 DOI: 10.48550/arXiv.2205.11916
- [14] Aviral Kumar et al. "Training Language Models to Self-Correct via Reinforcement Learning" arXiv, 2024 arXiv: 2409.12917 [cs]
- [15] Nathan Lambert et al. "Tulu 3: Pushing Frontiers in Open Language Model Post-Training" arXiv, 2025 DOI: 10.48550/arXiv.2411.15124
- [16] Zhiyuan Li, Hong Liu, Denny Zhou and Tengyu Ma "Chain of Thought Empowers Transformers to Solve Inherently Serial Problems" arXiv, 2024 DOI: 10.48550/arXiv.2402.12875
- [17] Hunter Lightman et al. "Let's Verify Step by Step" arXiv, 2023 arXiv: 2305.20050 [cs]
- [18] Liangchen Luo et al. "Improve Mathematical Reasoning in Language Models by Automated Process Supervision" arXiv, 2024 arXiv: 2406.06592 [cs]
- [19] Aman Madaan et al. "Self-Refine: Iterative Refinement with Self-Feedback" arXiv, 2023 DOI: 10.48550/arXiv.2303.17651
- [20] OpenAI "Learning to Reason with LLMs", https://openai.com/index/learning-to-reason-with-llms/
- [21] OpenAI "OpenAI o3-mini System Card", https://openai.com/index/o3-mini-system-card
- [22] OpenAI et al. "GPT-4o System Card" arXiv, 2024 DOI: 10.48550/arXiv.2410.21276
- [23] Long Ouyang et al. "Training Language Models to Follow Instructions with Human Feedback" In Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022 arXiv: 2203.02155 [cs]
- [24] Jiayi Pan et al. "TinyZero", https://github.com/Jiayi-Pan/TinyZero, 2025. Accessed: 2025-01-24
- [25] Ben Prystawski, Michael Y. Li and Noah D. Goodman "Why Think Step by Step? Reasoning Emerges from the Locality of Experience" arXiv, 2023 arXiv: 2304.03843 [cs]
- [26] Yiwei Qin et al. "O1 Replication Journey: A Strategic Progress Report, Part 1" arXiv, 2024 DOI: 10.48550/arXiv.2410.18982
- [27] Yuxiao Qu, Tianjun Zhang, Naman Garg and Aviral Kumar "Recursive Introspection: Teaching Language Model Agents How to Self-Improve" arXiv, 2024 DOI: 10.48550/arXiv.2407.18219
- [28] John Schulman et al. "High-Dimensional Continuous Control Using Generalized Advantage Estimation" arXiv, 2018 arXiv: 1506.02438 [cs]
- [29] John Schulman et al. "Proximal Policy Optimization Algorithms" arXiv, 2017 arXiv: 1707.06347 [cs]
- [30] Rico Sennrich, Barry Haddow and Alexandra Birch "Neural Machine Translation of Rare Words with Subword Units" arXiv, 2016 DOI: 10.48550/arXiv.1508.07909
- [31] Zhihong Shao et al. "DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models" arXiv, 2024 arXiv: 2402.03300 [cs]
- [32] Charlie Snell, Jaehoon Lee, Kelvin Xu and Aviral Kumar "Scaling LLM Test-Time Compute Optimally Can Be More Effective than Scaling Model Parameters" arXiv, 2024 arXiv: 2408.03314 [cs]
- [33] Richard S. Sutton and Andrew G. Barto "Reinforcement Learning: An Introduction" Cambridge, Massachusetts: The MIT Press, 2018
- [34] Yijun Tian et al. "TinyLLM: Learning a Small Student from Multiple Large Language Models" arXiv, 2024 DOI: 10.48550/arXiv.2402.04616
- [35] Rasul Tutunov et al. "Why Can Large Language Models Generate Correct Chain-of-Thoughts?" arXiv, 2024 DOI: 10.48550/arXiv.2310.13571
- [36] Ashish Vaswani et al. "Attention Is All You Need" In Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017
- [37] Jun Wang "A Tutorial on LLM Reasoning: Relevant Methods behind ChatGPT O1" arXiv, 2025 DOI: 10.48550/arXiv.2502.10867
- [38] Jason Wei et al. "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" arXiv, 2023 DOI: 10.48550/arXiv.2201.11903
- [39] Sang Michael Xie et al. "DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining" arXiv, 2023 DOI: 10.48550/arXiv.2305.10429
- [40] An Yang et al. "Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement" arXiv, 2024 arXiv: 2409.12122 [cs]
- [41] Yang Yue et al. "Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?" arXiv, 2025 DOI: 10.48550/arXiv.2504.13837
- [42] Zhihan Zhang et al. "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning" arXiv, 2024 DOI: 10.48550/arXiv.2406.12050

Contents
- 1 Introduction
- 2 Related works
- 3 Reflective reasoning for transformers
  - 3.1 Reasoning formulation
  - 3.2 The framework of self-verifying reflection
  - 3.3 Training
- 4 Theoretical results
- 5 Experiments
  - 5.1 Results of supervised fine-tuning
  - 5.2 Results of reinforcement learning
- 6 Conclusion and Discussion
- A Notations
- B Details of reinforcement learning
  - B.1 Proximal policy optimization
  - B.2 Group-reward policy optimization
  - B.3 Technical Implementation
- C Theory
  - C.1 A general formulation of reasoning performance
    - C.1.1 Bellman equations in RMTP
    - C.1.2 Bellman equations in RTBS
  - C.2 Accuracy derivation in the simplified reasoning task
  - C.3 Derivation of Theorem 1
    - C.3.1 Proof of Proposition 3
    - C.3.2 Proof of Proposition 4
  - C.4 Derivation of RMTP reasoning cost
  - C.5 Considering posterior risks of rejected attempts
- D Implementation details
  - D.1 Algorithmic descriptions of reflective reasoning
  - D.2 Example CoT data
    - D.2.1 Multiplication CoT
    - D.2.2 Sudoku CoT
    - D.2.3 Verification of reasoning steps
  - D.3 Model architectures and tokenization
  - D.4 Hyperparameters
  - D.5 Computational resources
- E Supplementary results of experiments
  - E.1 Evaluation of LLMs
  - E.2 Results of supervised fine tuning
  - E.3 Results of GRPO
    - E.3.1 The verification errors after GRPO
    - E.3.2 The planning correctness rate after GRPO
    - E.3.3 Reflection frequency of optional detailed verification
  - E.4 Reflection frequency under controlled verification error rates
  - E.5 Results of PPO
Appendix A Notations
The notations used in the main paper are summarized in Table 1. Notations that appear only in the appendix are not included.
Table 1: Notations in the main paper.
| Notation | Meaning |
| --- | --- |
| ${Q}$ | The query of CoT reasoning |
| $\{{R}\}$ | The sequence of intermediate reasoning steps |
| ${R}_{t}$ | The $t$ -th intermediate step in CoT reasoning |
| ${A}$ | The answer of CoT reasoning. |
| $T$ | The number of steps (including the final answer) in an CoT |
| $\pi$ | The planning policy in MTP reasoning |
| ${s}_{t}$ | The $t$ -th state in CoT reasoning |
| $\mathcal{T}$ | The transition function in an MTP |
| "$\checkmark$" | The special token as the positive label of verification |
| "$\times$" | The special token as the negative label of verification |
| ${V}_{t}$ | The verification sequence for the proposed step ${R}_{t}$ . |
| $\mathcal{V}$ | The verifier such that ${V}_{t+1}\sim\mathcal{V}(Β·|{S}_{t},{R}_{t+1})$ |
| $\tilde{{R}_{t}}$ | The verified reasoning step, i.e. $({R}_{t},{V}_{t})$ |
| $\tilde{\mathcal{T}}$ | The reflective transition function in an RMTP |
| $\tilde{\mathcal{\pi}}$ | The self-verifying policy, i.e. $\{\pi,\mathcal{V}\}$ |
| $m$ | The RTBS width, i.e. maximal number of attempts on each state |
| $\mu$ | The probability of proposing a correct step on positive states |
| $e_{-}$ | The probability of instantly rejecting a correct step on positive states |
| $e_{+}$ | The probability of accepting an incorrect step on positive states |
| $f$ | The probability of instantly rejecting any step on negative states |
| $\alpha$ | The shorthand of $\mu e_{-}+(1-\mu)(1-e_{+})$ |
| $\rho(n)$ | The accuracy of non-reflective MTP reasoning |
| $\tilde{\rho}(n)$ | The accuracy of RMTP reasoning for queries with scale $n$ |
| $\tilde{\rho}_{m}(n)$ | The accuracy of RTBS reasoning with width $m$ for queries with scale $n$ |
Appendix B Details of reinforcement learning
This section introduces PPO and GRPO algorithms used in RL fine-tuning. We introduce PPO and GRPO under the context of MTP, which is described in Section 3.1. This also applies to RMTP reasoning in Section 3.2, as RMTP is a special MTP given the self-verifying policy $\tilde{\pi}$ and the reflective transition function $\tilde{\mathcal{T}}$ .
For any sequence ${X}$ of tokens, we additionally define the following notations: ${{X}^{[i]}}$ denotes the $i$ -th token, ${{X}^{[<i]}}$ ( ${{X}^{[β€ i]}}$ ) denotes the former $i-1$ ( $i$ ) tokens, and $|{X}|$ denotes the length (i.e., the number of tokens).
Both PPO and GRPO iteratively update the reasoning policy through online experience. Let $\pi_{\theta}$ denote a reasoning policy parameterized by $\theta$ . On each iteration, PPO and GRPO use a similar process to update $\theta$ :
1. Randomly draw queries from the task or training set, and apply the old policy $\pi_{\theta_{old}}$ to sample experience CoTs.
1. Use reward models to assign rewards to the experience CoTs. Let $\operatorname{ORM}$ and $\operatorname{PRM}$ be the outcome reward model and process reward model, respectively. For each CoT $({Q},{R}_{1},...,{R}_{T-1},A)$ , we obtain outcome rewards $r_{o}=\operatorname{ORM}(Q,A)$ and the process rewards $r_{t}=\operatorname{PRM}({S}_{t},{R}_{t+1})$ for $t=0,1,...,T-1$ (where ${R}_{T}=A$ ). In our case, we only use the outcome reward model and thus all process rewards are $0$ .
1. Then, $\theta$ is updated by maximizing an objective function based on the experience CoTs with the above rewards. In particular, PPO additionally needs to update a value approximator.
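The three steps above can be sketched as one iteration loop. All callables here are hypothetical stand-ins for the corresponding components (query sampler, CoT sampler, outcome reward model, and objective update):

```python
def rl_iteration(policy_old, draw_queries, sample_cot, orm, update):
    """One PPO/GRPO-style iteration following steps 1-3 above."""
    # 1. Draw queries and sample experience CoTs with the old policy.
    queries = draw_queries()
    cots = [sample_cot(policy_old, q) for q in queries]
    # 2. Assign rewards; only the outcome reward model is used in our
    #    setting, so all process rewards are implicitly zero.
    rewards = [orm(q, cot[-1]) for q, cot in zip(queries, cots)]
    # 3. Update parameters by maximizing the surrogate objective.
    return update(cots, rewards)
```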
B.1 Proximal policy optimization
PPO [29] is a classic RL algorithm widely used in various applications. It includes a value model $v$ to approximate the value function, namely the expected cumulated rewards:
$$
v({S}_{t},{{R}_{t}^{[<i]}})=\mathbb{E}_{\pi}\left(r_{o}+\sum_{k=t}^{T}r_{k}\right) \tag{7}
$$
Let $q_{t,i}(\theta)=\frac{\pi_{\theta}\left({{R}_{t}^{[i]}}\middle|{S}_{t},{{R}_{t}^{[<i]}}\right)}{\pi_{\theta_{old}}\left({{R}_{t}^{[i]}}\middle|{S}_{t},{{R}_{t}^{[<i]}}\right)}$ be the relative likelihood of the $i$ -th token in the $t$ -th step, and $\pi_{ref}$ be the reference model (e.g., the policy before RL-tuning). Then, the PPO algorithm maximizes
$$
\displaystyle J_{PPO}(\theta)= \displaystyle\mathbb{E}_{{Q}\sim P({Q}),\{{R}\}\sim\pi_{\theta_{old}}}\frac{1}{\sum_{t=1}^{T}|{R}_{t}|}\sum_{t=1}^{T}\sum_{i=1}^{|{R}_{t}|} \displaystyle\left\{\min\left[q_{t,i}(\theta)\hat{A}_{t,i},\operatorname{clip}\left(q_{t,i}(\theta),1-\varepsilon,1+\varepsilon\right)\hat{A}_{t,i}\right]-\beta\mathbb{D}_{KL}\left[\pi_{\theta}\|\pi_{ref}\right]\right\}. \tag{8}
$$
Here, $\hat{A}_{t,i}$ is the advantage of the $i$ -th token in step $t$ , computed using the value model $v$ . For example, $\hat{A}_{t,i}=v({S}_{t},{{R}_{t}^{[<i]}},{{R}_{t}^{[i]}})-v({S}_{t},{{R}_{t}^{[<i]}})$ is a simple way to estimate advantage. In practice, advantages can be estimated using the general advantage estimation (GAE) [28].
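As a reference for how GAE [28] turns value estimates into advantages, here is a minimal sketch of the estimator; the discount and decay values are illustrative defaults, not tied to our implementation:

```python
def gae(rewards, values, gamma=1.0, lam=0.95):
    """Generalized advantage estimation.

    `values` holds one entry per position plus the final state
    (use 0.0 for terminal states).
    """
    advantages = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        # One-step TD residual, accumulated backward as an
        # exponentially weighted sum over future residuals.
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running
        advantages[t] = running
    return advantages
```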
The value model $v$ is implemented using the same architecture as the reasoner except for the output layer, which is replaced by a linear function that outputs a scalar value. The value model is initialized using the same parameters as the reasoner, apart from the output layer. Assuming that $v$ is parameterized by $\omega$ , we learn $v$ by minimizing the temporal-difference error:
$$
J_{v}(\omega)=\mathbb{E}_{{Q}\sim P({Q}),{R}\sim\pi_{\theta_{old}}}\sum_{t=1}^{T}\sum_{i=1}^{|{R}_{t}|}\left(v_{\omega}({S}_{t},{{R}_{t}^{[<i]}})-v_{\omega_{old}}({S}_{t+1})\right)^{2}. \tag{9}
$$
Although PPO has proven effective in training LLMs [23], we opt not to use it for training tiny transformers due to the difficulty of learning the value function. Since the value model $v$ is also a tiny transformer, its limited capacity severely compromises the precision of value approximation, leading to unreliable advantage estimation.
B.2 Group-reward policy optimization
PPO requires learning an additional value model, which can be expensive and unstable. Alternatively, GRPO [31] directly computes the advantages using relative rewards within a group of solutions. For each query ${Q}$ , it samples a group of $G$ solutions:
$$
\{{R}_{g}\}=({R}_{g,1},\ldots,{R}_{g,T_{g}-1},A_{g})\sim\pi_{\theta_{old}},\qquad\text{for}\ g=1,\ldots,G. \tag{10}
$$
In this group, each solution $\{{R}_{g}\}$ contains $T_{g}$ steps, where the answer $A_{g}$ is considered the final step ${R}_{g,T_{g}}$ . Using the reward models, we obtain process rewards $\boldsymbol{r}_{p}:=\{(r_{g,1},...,r_{g,T_{g}})\}_{g=1}^{G}$ and outcome rewards $\boldsymbol{r}_{o}:=\{r_{g,o}\}_{g=1}^{G}$ . Then, GRPO computes the normalized rewards, given by:
$$
\tilde{r}_{g,t}=\frac{r_{g,t}-\operatorname{mean}\boldsymbol{r}_{p}}{\operatorname{std}\boldsymbol{r}_{p}},\ \tilde{r}_{g,o}=\frac{r_{g,o}-\operatorname{mean}\boldsymbol{r}_{o}}{\operatorname{std}\boldsymbol{r}_{o}} \tag{11}
$$
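Since our setting uses only outcome rewards, the normalization in Eq. 11 reduces to whitening each group's outcome rewards. A minimal sketch (the zero-variance guard is an implementation choice, not part of Eq. 11):

```python
def normalize_group_rewards(outcome_rewards):
    """Whiten a group's outcome rewards: (r - mean) / std, as in Eq. 11."""
    g = len(outcome_rewards)
    mean = sum(outcome_rewards) / g
    std = (sum((r - mean) ** 2 for r in outcome_rewards) / g) ** 0.5
    std = std or 1.0  # avoid dividing by zero when all rewards are equal
    return [(r - mean) / std for r in outcome_rewards]
```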
Afterwards, the advantage of step $t$ in the $g$ -th solution of the group is $\hat{A}_{g,t}=\tilde{r}_{g,o}+\sum_{k=t}^{T_{g}}\tilde{r}_{g,k}$ . Let $q_{g,t,i}(\theta)=\frac{\pi_{\theta}\left({{R}_{g,t}^{[i]}}\middle|{S}_{g,t},{{R}_{g,t}^{[<i]}}\right)}{\pi_{\theta_{old}}\left({{R}_{g,t}^{[i]}}\middle|{S}_{g,t},{{R}_{g,t}^{[<i]}}\right)}$ be the relative likelihood of the $i$ -th token in the $t$ -th step of the $g$ -th solution. Then, the GRPO objective is to maximize the following:
$$
\displaystyle J_{GRPO}(\theta)= \displaystyle\mathbb{E}_{{Q}\sim P({Q}),\{{R}_{g}\}\sim\pi_{\theta_{old}}}\frac{1}{G}\sum_{g=1}^{G}\frac{1}{\sum_{t=1}^{T_{g}}|{R}_{g,t}|}\sum_{t=1}^{T_{g}}\sum_{i=1}^{|{R}_{g,t}|} \displaystyle\left\{\min\left[q_{g,t,i}(\theta)\hat{A}_{g,t},\operatorname{clip}\left(q_{g,t,i}(\theta),1-\varepsilon,1+\varepsilon\right)\hat{A}_{g,t}\right]-\beta\mathbb{D}_{KL}\left[\pi_{\theta}\|\pi_{ref}\right]\right\} \tag{12}
$$
B.3 Technical Implementation
We made two technical modifications that make RL more suitable in our case, described in the following.
First, in RMTP, we mask off the advantage of rejected steps, while the advantage of self-verification labels is retained. This prevents the algorithm from increasing the likelihood of rejected steps, allowing the planning policy $\pi$ to be properly optimized. In practice, we find this modification facilitates the training of models that perform mandatory detailed verification. Otherwise, RL could make the reasoner excessively rely on reflection, leading to CoTs that are unnecessarily long.
Second, we employ an early-truncating strategy when sampling trajectories in training. If the model has already made a clear error at some step (detected using an oracle process reward model), we truncate the trajectory as it is impossible to find a correct answer. This avoids unnecessarily punishing later steps due to previous deviations, as some later steps may be locally correct in their own context. Empirically, we find this modification reduces the training time required to reach the same performance, while the difference in final performance is marginal.
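The early-truncating strategy can be sketched as a filter over a sampled trajectory; `oracle_prm` is a hypothetical stand-in for the oracle process reward model described above:

```python
def truncate_at_first_error(trajectory, oracle_prm):
    """Cut a sampled trajectory at the first clearly erroneous step.

    `trajectory` is a list of (state, step) pairs; `oracle_prm(state, step)`
    returns True when the step is acceptable in its context.
    """
    for i, (state, step) in enumerate(trajectory):
        if not oracle_prm(state, step):
            # Keep the erroneous step so it is still penalized,
            # but drop all later steps.
            return trajectory[: i + 1]
    return trajectory
```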
Appendix C Theory
C.1 A general formulation of reasoning performance
Let $\mathcal{S}$ denote the state space and $\mathcal{A}$ denote the answer space. We use $\mathcal{A}_{{Q}}\subseteq\mathcal{A}$ to denote the set of correct answers for an input query ${Q}$ . Given any thought state ${S}$ , the accuracy, namely the probability of finding a correct answer, is denoted as
$$
\rho_{{Q}}({S})=p_{({R}_{t+1},{R}_{t+2},\ldots,{A})\sim\pi}({A}\in\mathcal{A}_{{Q}}\mid{S}_{t}={S}) \tag{13}
$$
C.1.1 Bellman equations in RMTP
By considering the reasoning correctness as a binary outcome reward, we may use Bellman equations [33] to provide a general formulation of the reasoning performance for arbitrary MTPs (including RMTPs). For simplicity, we use ${S}$ , ${R}$ , and ${S}^{\prime}$ to respectively denote the state, step, and next state in a transition.
Initially, in the absence of a trace-back mechanism, the accuracy $\rho_{{Q}}({S})$ can be interpreted as the value function when considering the MTP as a goal-directed decision process. For simplicity, we denote the transition probability drawn from the reasoning dynamics $\mathcal{T}$ as $p({S^{\prime}}\mid{S},{R})$ . In non-reflective reasoning, the state transition probability $p({S^{\prime}}\mid{S})$ can be expressed as:
$$
p({S^{\prime}}|{S})=\sum_{{R}}p({S^{\prime}}|{S},{R})\pi({R}|{S}) \tag{14}
$$
When using RMTP execution, assuming that $\xi({S},{R}):=p_{{V}\sim\mathcal{V}}(\text{``}\times\text{''}\in{V}\mid{S},{R})$ represents the probability of rejecting the step ${R}$ , we have:
$$
p({S^{\prime}}|{S})=\begin{cases}\sum_{R}\pi({R}|{S})(1-\xi({S},{R}))p({S^{\prime}}|{S},{R}),&\text{if }{S^{\prime}}\neq{S}\\
\sum_{R}\pi({R}|{S})\left((1-\xi({S},{R}))p({S^{\prime}}|{S},{R})+\xi({S},{R})\right),&\text{if }{S^{\prime}}={S}\end{cases} \tag{15}
$$
Consequently, the Bellman equation follows:
$$
\rho_{{Q}}({S})=\begin{cases}1,&\text{if }{S}\in\mathcal{A}_{{Q}}\\
0,&\text{if }{S}\in\mathcal{A}\setminus\mathcal{A}_{{Q}}\\
\sum_{{S^{\prime}}}\rho_{{Q}}({S^{\prime}})p({S^{\prime}}\mid{S}),&\text{if }{S}\in\mathcal{S}\setminus\mathcal{A}\end{cases} \tag{16}
$$
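To make Eq. 16 concrete, we can solve it numerically on a toy chain task where state $i$ needs $n-i$ more correct steps, a correct step is proposed with probability $\mu$ , and an accepted incorrect step is absorbing with accuracy $0$ . This is a hypothetical instance for illustration only, using the error rates $e_{-}$ and $e_{+}$ from Section 4:

```python
def rmtp_chain_accuracy(n, mu, e_minus, e_plus, sweeps=1000):
    """Value iteration on the RMTP Bellman equation (Eq. 16) for a toy chain."""
    accept_correct = mu * (1 - e_minus)              # advance to the next state
    reject = mu * e_minus + (1 - mu) * (1 - e_plus)  # self-loop of Eq. 15
    rho = [0.0] * n + [1.0]                          # rho = 1 at the correct answer
    for _ in range(sweeps):
        for i in range(n):
            # Accepted incorrect steps lead to states with accuracy 0,
            # so they drop out of the Bellman backup.
            rho[i] = accept_correct * rho[i + 1] + reject * rho[i]
    return rho[0]
```

At the fixed point, each state satisfies $\rho(i)=\frac{\mu(1-e_{-})}{1-\mu e_{-}-(1-\mu)(1-e_{+})}\rho(i+1)$ , which is consistent with the closed-form accuracy derived for the simplified task in Appendix C.2.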
C.1.2 Bellman equations in RTBS
Let $m$ denote the number of attempts at each state, and let $\phi({S})$ represent the failure probability (i.e., the probability of tracing back after $m$ rejected steps) at state ${S}$ . The probability of needing to retry a proposed step due to instant rejection or recursive rejection is given by:
$$
\epsilon({S})=\sum_{{R}}\pi({R}|{S})\left(\xi({S},{R})+\sum_{{S^{\prime}}}(1-\xi({S},{R}))p({S^{\prime}}|{S},{R})\phi({S^{\prime}})\right) \tag{17}
$$
The failure probability is then given by $\phi({S})=\epsilon^{m}({S})$ . When there are $k$ attempts remaining at the current state ${S}$ , we denote the accuracy as $\rho_{Q}({S},k)$ , given by:
$$
\rho_{Q}({S},k)=\begin{cases}\epsilon({S})\rho_{Q}({S},k-1)+\sum_{{R}}\pi({R}|{S})(1-\xi({S},{R}))\sum_{{S^{\prime}}}p({S^{\prime}}|{S},{R})\rho_{Q}({S^{\prime}}),&k>0\\
0,&k=0\end{cases} \tag{18}
$$
It follows that $\rho_{Q}({S})=\rho_{Q}({S},m)$ . This leads to a recursive formulation that ultimately results in the following equations for each ${S}\in\mathcal{S}$ :
$$
\epsilon({S})=\sum_{{R}}\pi({R}|{S})\left(\xi({S},{R})+\sum_{{S^{\prime}}}(1-\xi({S},{R}))p({S^{\prime}}|{S},{R})\epsilon^{m}({S^{\prime}})\right),\qquad\rho_{Q}({S})=\frac{1-\epsilon^{m}({S})}{1-\epsilon({S})}\sum_{{R}}\sum_{{S^{\prime}}}\rho_{Q}({S^{\prime}})\pi({R}|{S})(1-\xi({S},{R}))p({S^{\prime}}|{S},{R}). \tag{19}
$$
C.2 Accuracy derivation in the simplified reasoning task
In the following, we derive the accuracy of reflective reasoning with and without the trace-back search, given the simplified reasoning task in Section 4. For each proposed step on a correct state, we define several probabilities to simplify notations: $\alpha:=\mu e_{-}+(1-\mu)(1-e_{+})$ is the probability of being instantly rejected; $\beta=\mu(1-e_{-})$ is the probability of being correct and accepted; $\gamma=(1-\mu)e_{+}$ is the probability of being incorrect but accepted. Note that $\beta$ here no longer refers to the KL-divergence factor in Appendix B.
**Proposition 2**
*The RMTP accuracy $\tilde{\rho}(n)$ for problems with a scale of $n$ is
$$
\tilde{\rho}(n)=\left(\frac{\beta}{1-\alpha}\right)^{n} \tag{21}
$$
Let $m$ be the width of RTBS. Let $\delta_{m}(n)$ and $\epsilon_{m}(n)$ be the probability of a proposed step being rejected (either instantly or recursively) on a correct state and an incorrect state of scale $n$ , respectively. We have $\delta_{m}(0)=\epsilon_{m}(0)=0$ and the following recursive equations for $n>0$ :
$$
\delta_{m}(n)=\alpha+\beta(\delta_{m}(n-1))^{m}+\gamma(\epsilon_{m}(n-1))^{m}, \tag{22}
$$
$$
\epsilon_{m}(n)=f+(1-f)\left(\epsilon_{m}(n-1)\right)^{m}. \tag{23}
$$
Then, the RTBS accuracy $\tilde{\rho}_{m}(n)$ for problems with a scale of $n$ is given by
$$
\tilde{\rho}_{m}(n)=\prod_{t=1}^{n}\sigma_{m}(t),\qquad\text{where }\sigma_{m}(t)=\beta\sum_{i=0}^{m-1}(\delta_{m}(t))^{i}=\frac{1-(\delta_{m}(t))^{m}}{1-\delta_{m}(t)}\beta. \tag{24}
$$
In addition, $\delta_{m}(n)$ , $\epsilon_{m}(n)$ , and $\sigma_{m}(n)$ all monotonically increase and converge with respect to $n$ .*
*Proof*
We first consider reasoning through RTBS. Let $\phi_{m}(n)$ and $\psi_{m}(n)$ denote the probabilities of failure (reaching the maximum number of attempts) in correct and incorrect states, respectively. Let $\tilde{\rho}_{i|m}(n)$ indicate the accuracy after $i$ attempts at the current sub-problem of scale $n$ . Therefore, we have $\tilde{\rho}_{m}(n)=\tilde{\rho}_{0|m}(n)$ and $\tilde{\rho}_{m|m}(n)=0$ . At a correct state, we have the following possible cases:
- A correct step is proposed and instantly accepted with probability $\beta=\mu(1-e_{-})$ . In this case, the next state has a scale of $n-1$ , which is correctly solved with probability $\tilde{\rho}_{0|m}(n-1)$ and fails (i.e., is recursively rejected) with probability $\phi_{m}(n-1)$ .
- A correct step is proposed and instantly rejected with probability $\mu e_{-}$ .
- An incorrect step is proposed and instantly accepted with probability $\gamma=(1-\mu)e_{+}$ . In this scenario, the next state has a scale of $n-1$ , which fails with probability $\psi_{m}(n-1)$ .
- An incorrect step is proposed and instantly rejected with probability $(1-\mu)(1-e_{+})$ .

Thus, we have a probability of $\alpha=\mu e_{-}+(1-\mu)(1-e_{+})$ to instantly reject the step, and a probability of $\beta\phi_{m}(n-1)+\gamma\psi_{m}(n-1)$ to recursively reject the step. Therefore, the overall probability of rejecting an attempt on correct states is:
$$
\delta_{m}(n)=\alpha+\beta\phi_{m}(n-1)+\gamma\psi_{m}(n-1). \tag{25}
$$
Since failure occurs after $m$ rejections, we have:
$$
\phi_{m}(n)=\left(\delta_{m}(n)\right)^{m} \tag{26}
$$
At an incorrect state, we have a probability $f$ to instantly reject a step. Otherwise, we accept the step, and the next state fails with probability $\psi_{m}(n-1)$ . Therefore, the overall probability of rejecting an attempt for incorrect states is:
$$
\epsilon_{m}(n)=f+(1-f)\psi_{m}(n-1). \tag{27}
$$
Similarly, we obtain:
$$
\psi_{m}(n)=\left(\epsilon_{m}(n)\right)^{m} \tag{28}
$$
By substituting Equations (26) and (28) into Equations (25) and (27), we obtain Equations (22) and (23). If an attempt is rejected (either instantly or recursively), we initiate another attempt, which solves the problem with a probability of $\tilde{\rho}_{i+1|m}(n)$ . Therefore, we have the recursive form of the accuracy, given by:
$$
\tilde{\rho}_{i|m}(n)=\beta\tilde{\rho}_{0|m}(n-1)+\delta_{m}(n)\tilde{\rho}_{i+1|m}(n) \tag{29}
$$
Thus, we can expand $\tilde{\rho}_{m}(n)$ as:
$$
\begin{aligned}
\tilde{\rho}_{m}(n)&=\tilde{\rho}_{0|m}(n)=\beta\tilde{\rho}_{m}(n-1)+\delta_{m}(n)\tilde{\rho}_{1|m}(n)\\
&=\cdots\\
&=\left(\beta+\delta_{m}(n)\beta+\delta_{m}^{2}(n)\beta+\cdots+\delta_{m}^{m-1}(n)\beta\right)\tilde{\rho}_{m}(n-1)=\sigma_{m}(n)\tilde{\rho}_{m}(n-1)
\end{aligned} \tag{30}
$$
Note that $n=0$ indicates that the state is exactly the outcome, which means $\tilde{\rho}_{m}(0)=1$ . Then, Equation (24) follows from the recursive form in Equation (30). For reflective reasoning without trace-back, we can simply replace $\delta_{m}(n)$ with $\alpha$ in $\sigma_{m}(n)$ , as only instant rejections are allowed. We then set $m\to\infty$ , leading to Equation (21).
Monotonicity
We first prove the monotonic increase of $\epsilon_{m}(n)$ . Equation (23) gives $\epsilon_{m}(n)=f+(1-f)(\epsilon_{m}(n-1))^{m}$ and $\epsilon_{m}(n+1)=f+(1-f)(\epsilon_{m}(n))^{m}$ for each $n\geq 1$ . Therefore, if $\epsilon_{m}(n)\geq\epsilon_{m}(n-1)$ , we have:
$$
\epsilon_{m}(n+1)=f+(1-f)(\epsilon_{m}(n))^{m}\geq f+(1-f)(\epsilon_{m}(n-1))^{m}=\epsilon_{m}(n). \tag{32}
$$
Additionally, it is clear that $\epsilon_{m}(1)=f\geq 0=\epsilon_{m}(0)$ . Using mathematical induction, we conclude that $\epsilon_{m}(n+1)\geq\epsilon_{m}(n)$ for all $n\geq 0$ . The monotonicity of $\delta_{m}(n)$ can be proven similarly, and the monotonicity of $\sigma_{m}(n)$ is evident from that of $\delta_{m}(n)$ . Since $\delta_{m}(n)$ and $\epsilon_{m}(n)$ are probabilities, they are bounded in $[0,1]$ and thus converge monotonically. ∎
Illustration of accuracy curves
Using the recursive formulae in Proposition 2, we are able to implement a program to compute the reasoning accuracy in the simplified reasoning problem in Section 4 and thereby visualize the accuracy curves of various reasoning algorithms. For example, Figure 10 presents the reasoning curves given $\mu=0.8$ , $e_{-}=0.3$ , $e_{+}=0.2$ , and $f=0.8$ , which lead to $\alpha=0.4<f$ . For this example, we may observe the following patterns: 1) An overly small width $m$ in RTBS leads to poor performance; and 2) by choosing $m$ properly, $\tilde{\rho}_{m}(n)$ remains stable when $nββ$ . These observations are formally described and proved in Appendix C.3.
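The recursive formulae of Proposition 2 translate directly into code. The following is a minimal Python sketch (our own code, with function names we chose; not the paper's implementation) that computes the RMTP and RTBS accuracy curves for the parameters of Figure 10:

```python
# Minimal sketch of the accuracy recursions in Proposition 2 (our code).
def rmtp_accuracy(n, mu, em, ep):
    """RMTP accuracy rho~(n) from Eq. (21)."""
    alpha = mu * em + (1 - mu) * (1 - ep)   # instant rejection on a correct state
    beta = mu * (1 - em)                    # correct step, accepted
    return (beta / (1 - alpha)) ** n

def rtbs_accuracy(n, m, mu, em, ep, f):
    """RTBS accuracy rho~_m(n) from Eqs. (22)-(24)."""
    alpha = mu * em + (1 - mu) * (1 - ep)
    beta, gamma = mu * (1 - em), (1 - mu) * ep
    delta = eps = 0.0                       # delta_m(0) = eps_m(0) = 0
    rho = 1.0                               # rho~_m(0) = 1
    for _ in range(n):
        # Eqs. (22) and (23); the right-hand sides use the previous values.
        delta, eps = alpha + beta * delta**m + gamma * eps**m, f + (1 - f) * eps**m
        # sigma_m(t) = beta * (1 + delta + ... + delta^(m-1)), Eq. (24)
        rho *= beta * sum(delta**i for i in range(m))
    return rho

# Parameters of Figure 10: alpha = 0.4, so the condition of Theorem 1
# (m > 1/(1-alpha) = 5/3 and f > alpha) holds for every m >= 2.
mu, em, ep, f = 0.8, 0.3, 0.2, 0.8
curves = {m: rtbs_accuracy(50, m, mu, em, ep, f) for m in range(1, 7)}
```

With these parameters, `rtbs_accuracy(50, m, ...)` exceeds `rmtp_accuracy(50, ...)` for $m\geq 2$ , while the non-reflective baseline is simply $\mu^{n}$ .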
Figure 10: The accuracy curves of non-reflective MTP $\rho(n)$ , RMTP $\tilde{\rho}(n)$ , and RTBS $\tilde{\rho}_{m}(n)$ , using $\mu=0.8$ , $e_{-}=0.3$ , $e_{+}=0.2$ , and $f=0.8$ .
Furthermore, in Figure 10 we see that a small $m$ stabilizes the drop of $\tilde{\rho}_{m}(n)$ when $n$ is large, yet it also makes $\tilde{\rho}_{m}(n)$ drop sharply in the area where $n$ is small. This indicates the potential of using an adaptive width in RTBS, where $m$ is set small when the current subproblem (state) requires a large number $n$ of steps to solve, and $m$ increases when $n$ is reduced by previous reasoning steps. Since this paper currently focuses on the minimalistic reflection framework, we expect to explore such an extension in future work.
C.3 Derivation of Theorem 1
Theorem 1 is obtained by merging the following Proposition 3 and Proposition 4, which also provide supplementary details on the non-trivial assumptions of factors $\mu$ , $e_{-}$ , and $e_{+}$ . Proposition 4 also shows that there exists an ideal range of the RTBS width $m$ that stabilizes the drop of $\tilde{\rho}_{m}(n)$ as $n\to\infty$ .
**Proposition 3 (RMTP Validity conditions)**
*For all $n\geq 0$ , we have $\tilde{\rho}(n)\geq\rho(n)\iff e_{-}+e_{+}\leq 1$ . Additionally, if $\mu>0$ and $e_{-}<1$ , then for all $n\geq 1$ we have that $\tilde{\rho}(n)=\rho(n)\iff e_{-}+e_{+}=1$ , and $\tilde{\rho}(n)$ decreases strictly with both $e_{-}$ and $e_{+}$ .*
**Proposition 4 (RTBS Validity Condition)**
*Assuming $0<\mu<1$ , $e_{-}<1$ , and $e_{+}>0$ , then
$$
\lim_{n\to\infty}\frac{\tilde{\rho}_{m}(n)}{\tilde{\rho}(n)}>1\iff\left(m>\frac{1}{1-\alpha}\ \text{and}\ f>\alpha\right). \tag{33}
$$
Furthermore, we have
- $\lim_{n\to\infty}\frac{\tilde{\rho}_{m}(n)}{\tilde{\rho}_{m}(n-1)}=1$ if $m\in[\frac{1}{\mu(1-e_{-})},\frac{1}{1-f}]$ .
- $\tilde{\rho}_{m}(n)$ increases strictly with $f$ for all $n\geq 2$ .*
The proofs of Propositions 3 and 4 are given in Appendix C.3.1 and Appendix C.3.2, respectively; both are based on the derivations in Appendix C.2.
C.3.1 Proof of Proposition 3
In any case, we have $\tilde{\rho}(0)=\rho(0)=1$ .
If $\mu=0$ or $e_{-}=1$ , we clearly have $\tilde{\rho}(n)=\rho(n)=0$ for $n\geq 1$ .
If $\mu>0$ and $e_{-}<1$ , we can transform $\tilde{\rho}(n)$ (given in Proposition 2) as:
$$
\tilde{\rho}(n)=\left(\frac{1}{1+\frac{e_{+}}{1-e_{-}}(\mu^{-1}-1)}\right)^{n}=\left(\frac{\mu(1-e_{-})}{\mu(1-e_{+}-e_{-})+e_{+}}\right)^{n}. \tag{34}
$$
This shows that $\tilde{\rho}(n)$ strictly decreases with both $e_{+}$ and $e_{-}$ , and $\tilde{\rho}(n)=\mu^{n}\iff e_{+}+e_{-}=1$ . Therefore, we also have $\tilde{\rho}(n)>\mu^{n}\iff e_{+}+e_{-}<1$ , where $\mu^{n}=\rho(n)$ is the non-reflective accuracy.
The Proposition is proved by combining all the above cases.
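As a quick numeric sanity check of the transformation in Equation (34) (the code and parameter values below are ours, chosen for illustration):

```python
# Check that Eq. (21) and the rewritten form in Eq. (34) agree, and that the
# boundary case e_- + e_+ = 1 collapses the RMTP accuracy to mu**n.
mu, em, ep, n = 0.8, 0.3, 0.2, 7
alpha = mu * em + (1 - mu) * (1 - ep)
form_21 = (mu * (1 - em) / (1 - alpha)) ** n                 # Eq. (21)
form_34 = (mu * (1 - em) / (mu * (1 - ep - em) + ep)) ** n   # Eq. (34)
assert abs(form_21 - form_34) < 1e-12

em2, ep2 = 0.4, 0.6                                          # e_- + e_+ = 1
alpha2 = mu * em2 + (1 - mu) * (1 - ep2)
assert abs((mu * (1 - em2) / (1 - alpha2)) ** n - mu ** n) < 1e-12
```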
C.3.2 Proof of Proposition 4
The assumptions $0<\mu<1$ , $e_{-}<1$ , and $e_{+}>0$ ensure that $\beta>0$ and $\gamma>0$ . Proposition 2 establishes the monotone convergence of $\delta_{m}(n)$ , $\epsilon_{m}(n)$ , and $\sigma_{m}(n)$ . For simplicity, we denote $\delta:=\lim_{n\to\infty}\delta_{m}(n)$ , $\epsilon:=\lim_{n\to\infty}\epsilon_{m}(n)$ , and $\sigma:=\lim_{n\to\infty}\sigma_{m}(n)$ . From Equations (22) and (23), we have:
$$
\delta=\alpha+\beta\delta^{m}+\gamma\epsilon^{m}, \tag{35}
$$
$$
\epsilon=f+(1-f)\epsilon^{m}. \tag{36}
$$
Note that $\epsilon=\delta=1$ gives the trivial solution of the above equations. However, there may exist another solution with $\delta<1$ or $\epsilon<1$ under certain circumstances. Since $\epsilon_{m}(0)=0$ and $\delta_{m}(0)=0$ , the limits $\epsilon$ and $\delta$ take the smallest solution within $[0,1]$ . In the following, we first discuss when such a non-trivial solution exists.
**Lemma 1**
*For any $m\geq 1$ , if $0\leq p<\frac{m-1}{m}$ , then $x=p+(1-p)x^{m}$ has a unique solution $x_{*}\in[p,1)$ , which strictly increases with $p$ . Otherwise, if $\frac{m-1}{m}\leq p\leq 1$ , the only solution in $[0,1]$ is $x_{*}=1$ .*
*Proof*
Define $F(x):=p+(1-p)x^{m}-x$ . We find:
$$
F^{\prime}(x)=m(1-p)x^{m-1}-1.
$$
It is observed that $F^{\prime}(x)$ increases monotonically with $x$ . Additionally, we have $F^{\prime}(0)=-1<0$ , $F^{\prime}(1)=m(1-p)-1$ , and $F(1)=0$ . We only consider the scenario where $p>0$ , since for $p=0$ , $x_{*}=0\in[0,1)$ is obviously the unique solution.

If $0<p<\frac{m-1}{m}$ , we have $1-p>\frac{1}{m}$ . This implies $F^{\prime}(1)>0$ . Combined with $F^{\prime}(0)<0$ , there exists $\xi\in(0,1)$ such that $F^{\prime}(\xi)=0$ . As a result, $F(x)$ strictly decreases in $[0,\xi]$ and increases in $[\xi,1)$ . Therefore, we have $F(\xi)<F(1)=0$ . Since $F(p)=(1-p)p^{m}>0$ , we know that there exists a unique $x_{*}\in(p,\xi)$ such that $F(x_{*})=0$ .

If $\frac{m-1}{m}\leq p\leq 1$ , we have $1-p\leq\frac{1}{m}$ and $F^{\prime}(1)\leq 0$ . In this case, $F^{\prime}(x)<0$ in $[0,1)$ due to the monotonicity of $F^{\prime}(x)$ . Thus, $F(x)>F(1)=0$ for all $x\in[0,1)$ . Therefore, $x_{*}=1$ is the only solution within $[0,1]$ .

Now, we prove the monotonic increase of $x_{*}$ when $0\leq p<\frac{m-1}{m}$ . We have:
$$
\frac{\mathrm{d}x_{*}}{\mathrm{d}p}=1+m(1-p)x_{*}^{m-1}\frac{\mathrm{d}x_{*}}{\mathrm{d}p}-x_{*}^{m}, \tag{37}
$$
$$
\frac{\mathrm{d}x_{*}}{\mathrm{d}p}=\frac{1-x_{*}^{m}}{1-m(1-p)x_{*}^{m-1}}=\frac{1-x_{*}^{m}}{-F^{\prime}(x_{*})}. \tag{38}
$$
The previous discussion shows that $x_{*}\in[p,\xi)$ for some $\xi$ such that $F^{\prime}(\xi)=0$ . Given that $F^{\prime}(x)$ increases monotonically, we have $F^{\prime}(x_{*})<0$ and thus $\frac{\mathrm{d}x_{*}}{\mathrm{d}p}>0$ . ∎
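Lemma 1 can also be observed numerically: iterating $x\mapsto p+(1-p)x^{m}$ from $x_{0}=0$ climbs monotonically to the smallest fixed point in $[0,1]$ . The iteration scheme below is our own addition, not part of the proof:

```python
# Fixed-point iteration for x = p + (1 - p) * x**m, started from 0; the map is
# increasing and takes value p at 0, so the iterates converge upward to the
# smallest root in [0, 1].
def smallest_root(p, m, iters=20000):
    x = 0.0
    for _ in range(iters):
        x = p + (1 - p) * x ** m
    return x

m = 4  # threshold (m - 1) / m = 0.75
below = [smallest_root(p, m) for p in (0.1, 0.4, 0.7)]   # roots strictly below 1
above = smallest_root(0.8, m)                            # only root is x = 1
```

The computed roots for $p$ below the threshold increase with $p$ and stay below 1, matching the monotonicity statement, while $p=0.8$ drives the iteration to the trivial root $x_{*}=1$ .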
**Lemma 2**
*Assume $p\geq 0$ , $q>0$ , and $p+q\leq 1$ . Then, the equation $x=p+qx^{m}$ has a unique solution $x_{*}\in[0,1)$ , which increases monotonically with $p\in[0,1-q]$ .*
*Proof*
Define $F(x):=p+qx^{m}-x$ . Since $F(0)\geq 0$ and $F(1)<0$ , there exists a solution $x_{*}\in[0,1)$ . Since $F$ is convex, we know there is at most one other solution. Clearly, the other solution appears in $(1,+\infty)$ , since $F(x)\to+\infty$ as $x\to+\infty$ . Therefore, $F(x)=0$ must have a unique solution $x_{*}$ in $[0,1)$ . Additionally, $x_{*}$ must appear to the left of the minimum of $F$ , which yields $F^{\prime}(x_{*})<0$ . Using the Implicit Function Theorem, we write:
$$
\frac{\mathrm{d}x_{*}}{\mathrm{d}{p}}=\frac{1}{1-mqx_{*}^{m-1}}=-\frac{1}{F^{\prime}(x_{*})}>0 \tag{39}
$$
Thus, we conclude that $x_{*}$ increases monotonically with $p$ . ∎
Applying Lemma 1 to Equation (36), we find that $\epsilon=1$ if and only if $f\geq\frac{m-1}{m}$ ; otherwise, $\epsilon<1$ strictly increases with $f$ . Therefore, $f<\frac{m-1}{m}$ implies $\epsilon<1$ , leading to $(\alpha+\gamma\epsilon^{m})+\beta<1$ . Using Lemma 2 in Equation (35), we have $f<\frac{m-1}{m}\implies\delta<1$ . Conversely, $f\geq\frac{m-1}{m}$ yields $\epsilon=1$ , and thus $f,\alpha+\gamma\geq\frac{m-1}{m}\implies\delta=1$ .
First, we consider the special case where $\delta=1$ , which occurs if both $f,\alpha+\gamma\geq\frac{m-1}{m}$ , namely $m\leq\min\{\frac{1}{1-f},\frac{1}{\beta}\}$ . In this case, we write $\sigma=(1+\delta+\cdots+\delta^{m-1})\beta=m\beta$ . Therefore, we have:
$$
\lim_{n\to\infty}\frac{\tilde{\rho}_{m}(n)}{\tilde{\rho}(n)}>1\iff\sigma>\frac{\beta}{1-\alpha}.
$$
This leads to the validity condition that $\frac{1}{1-\alpha}<m\leq\min\{\frac{1}{1-f},\frac{1}{\beta}\}$ .
Next, we consider the case where $\delta<1$ , which occurs when $f<\frac{m-1}{m}$ or $\alpha+\gamma<\frac{m-1}{m}$ . This leads to $\beta\geq\frac{1}{m}>0$ , and we can write:
$$
\delta^{m}=\frac{1}{\beta}\left(\delta-\alpha-\gamma\epsilon^{m}\right),\qquad\sigma=\frac{1-\delta^{m}}{1-\delta}\beta=\frac{(1-\delta^{m})\beta}{(1-\alpha)-(\beta\delta^{m}+\gamma\epsilon^{m})}. \tag{40}
$$
Then, we can derive:
$$
\lim_{n\to\infty}\frac{\tilde{\rho}_{m}(n)}{\tilde{\rho}(n)}>1\iff\sigma>\frac{\beta}{1-\alpha}. \tag{41}
$$
Since we have assumed $\delta<1$ , we have $\epsilon=1>\delta$ if $f\geq\frac{m-1}{m}$ ; otherwise, if $f=\alpha<\frac{m-1}{m}$ , then $\delta=\epsilon$ solves Equation (35). Additionally, from Lemmas 1 and 2, we know that a higher $\alpha$ increases $(\alpha+\gamma\epsilon^{m})$ , which eventually raises $\delta$ above $\epsilon$ ; conversely, a lower $\alpha$ causes $\delta$ to drop below $\epsilon$ . To summarize, we have the following conditions when $\delta<1$ :
$$
\begin{aligned}
1\geq\epsilon>\delta&\iff\left(\alpha+\gamma<\frac{m-1}{m}\leq f\right)\text{ or }\left(\alpha<f<\frac{m-1}{m}\right)\\
&\iff\left(\frac{1}{\beta}<m\leq\frac{1}{1-f}\right)\text{ or }\left(\alpha<f\text{ and }m>\frac{1}{1-f}\right)
\end{aligned} \tag{42}
$$
Combining the conditions when $\delta=1$ , we have:
$$
\begin{aligned}
\lim_{n\to\infty}\frac{\tilde{\rho}_{m}(n)}{\tilde{\rho}(n)}>1&\iff\left(\frac{1}{1-\alpha}<m\leq\min\left\{\frac{1}{1-f},\frac{1}{\beta}\right\}\right)\text{ or }\left(\frac{1}{\beta}<m\leq\frac{1}{1-f}\right)\text{ or }\left(\alpha<f\text{ and }m>\frac{1}{1-f}\right)\\
&\iff m>\frac{1}{1-\alpha}\text{ and }f>\alpha
\end{aligned} \tag{44}
$$
Thus far, we have obtained Equation (33). Now, we start proving the two additional statements in Proposition 4.
First, we prove that $\frac{1}{\beta}\leq m\leq\frac{1}{1-f}$ ensures that $\sigma=1$ . If $\delta=1$ , we have $m\leq\frac{1}{\beta}\leq\frac{1}{1-f}$ . In this case, $\sigma=m\beta$ , and thus $\sigma=1$ when $m=\frac{1}{\beta}$ . Alternatively, if $\delta<1$ , we have $m>\min\{\frac{1}{\beta},\frac{1}{1-f}\}$ . We can express:
$$
\sigma=\frac{1-\delta^{m}}{1-\delta}\beta=\frac{\beta-(\delta-\alpha-\gamma\epsilon^{m})}{1-\delta}=1-\gamma\frac{1-\epsilon}{(1-\delta)(1-f)} \tag{46}
$$
Using Lemma 2, we know that $\delta$ increases with $\alpha+\gamma\epsilon^{m}$ , which increases with $\epsilon$ . Therefore, we have $\frac{\mathrm{d}\delta}{\mathrm{d}\epsilon}>0$ . Then, we obtain
$$
\frac{\mathrm{d}\sigma}{\mathrm{d}\epsilon}=\beta\sum_{i=1}^{m-1}i\delta^{i-1}\frac{\mathrm{d}\delta}{\mathrm{d}\epsilon}>0, \tag{47}
$$
$$
\begin{aligned}
\sigma&=\frac{1-\delta^{m}}{1-\delta}\beta=\frac{\beta-(\delta-\alpha-\gamma\epsilon^{m})}{1-\delta}=\frac{\alpha+\beta+\gamma-(1-\epsilon^{m})\gamma-\delta}{1-\delta}\\
&=\frac{1-\delta-\gamma\left(1-\frac{\epsilon-f}{1-f}\right)}{1-\delta}
\end{aligned} \tag{48}
$$
Therefore, $\sigma$ increases with $\epsilon$ and reaches its maximum value of $1$ when $\epsilon=1$ . As a result, we conclude that $\frac{1}{\beta}\leq m\leq\frac{1}{1-f}$ ensures that $\sigma=1$ . Combining $\beta=\mu(1-e_{-})$ and $\sigma=\lim_{n\to\infty}\sigma_{m}(n)=\lim_{n\to\infty}\frac{\tilde{\rho}_{m}(n)}{\tilde{\rho}_{m}(n-1)}$ , we have proved that $\lim_{n\to\infty}\frac{\tilde{\rho}_{m}(n)}{\tilde{\rho}_{m}(n-1)}=1$ if $m\in[\frac{1}{\mu(1-e_{-})},\frac{1}{1-f}]$ .
Next, we prove the monotonicity of $\tilde{\rho}_{m}(n)$ with respect to $f$ . To prove this, we first prove the monotonicity of $\epsilon_{m}(t)$ for all $t$ with respect to $f$ .
**Lemma 3**
*For $t>0$ , $\epsilon_{m}(t)$ as defined in Equation (23) increases strictly with $f$ .*
*Proof*
We regard
$$
\epsilon_{m}(0;f)\equiv 0,\qquad\epsilon_{m}(t;f)=f+(1-f)\bigl[\epsilon_{m}(t-1;f)\bigr]^{m} \tag{49}
$$
as a function of $f$ on $[0,1]$ . When $t=1$ we have
$$
\epsilon_{m}(1;f)=f+(1-f)\bigl[\epsilon_{m}(0;f)\bigr]^{m}=f, \tag{50}
$$
so $\frac{\partial\epsilon_{m}(1;f)}{\partial f}=1>0$ . Thus $\epsilon_{m}(1;f)$ is strictly increasing with $f$ . Further, assume for some $k\geq 1$ that
$$
0\leq\epsilon_{m}(k;f)<1\quad\text{and}\quad\frac{\partial\epsilon_{m}(k;f)}{\partial f}>0\quad\forall f\in[0,1]. \tag{51}
$$
Differentiate the recursion for $t=k+1$ :
$$
\epsilon_{m}(k+1;f)=f+(1-f)\bigl[\epsilon_{m}(k;f)\bigr]^{m}, \tag{52}
$$
$$
\frac{\partial\epsilon_{m}(k+1;f)}{\partial f}=1-\bigl[\epsilon_{m}(k;f)\bigr]^{m}+(1-f)\,m\bigl[\epsilon_{m}(k;f)\bigr]^{m-1}\,\frac{\partial\epsilon_{m}(k;f)}{\partial f}. \tag{53}
$$
By the inductive hypothesis, $\epsilon_{m}(k;f)<1$ implies $\bigl[\epsilon_{m}(k;f)\bigr]^{m}<1$ . Therefore, we have
$$
1-\bigl[\epsilon_{m}(k;f)\bigr]^{m}>0. \tag{54}
$$
Since $1-f\geq 0$ , $m\geq 1$ , $\bigl[\epsilon_{m}(k;f)\bigr]^{m-1}\geq 0$ , and $\frac{\partial\epsilon_{m}(k;f)}{\partial f}>0$ , the second term is non-negative. Hence
$$
\frac{\partial\epsilon_{m}(k+1;f)}{\partial f}>0, \tag{55}
$$
showing that $\epsilon_{m}(k+1;f)$ is strictly increasing. This completes the induction. ∎
Given Lemma 3, we can also prove the monotonicity of $\delta_{m}(t)$ by induction: it is easy to see that $\delta_{m}(2)=\alpha+\beta\alpha^{m}+\gamma f^{m}$ increases strictly with $f$ . Then, for $t>2$ , assuming that $\delta_{m}(t-1)$ increases strictly with $f$ and using Lemma 3, we know from Equation (22) that $\delta_{m}(t)$ increases strictly with $f$ .
Therefore, we have $\delta_{m}(1)=\alpha$ and that $\delta_{m}(t)$ strictly increases with $f$ for $t\geq 2$ . According to Equation (24), the above monotonicity of $\delta_{m}(t)$ makes it obvious that $\sigma_{m}(t)$ increases with $f$ for all $t$ . This gives the corollary that $\tilde{\rho}_{m}(n)$ increases with $f$ for all $n\geq 2$ .
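The monotonicity in $f$ can also be observed numerically. The sketch below (our code, using the recursions of Proposition 2) evaluates $\tilde{\rho}_{m}(n)$ for several verification rates $f$ :

```python
# rho~_m(n) via Eqs. (22)-(24); a higher f (rejecting more proposed steps on
# incorrect states) should only help, per the last statement of Proposition 4.
def rtbs_accuracy(n, m, mu, em, ep, f):
    alpha = mu * em + (1 - mu) * (1 - ep)
    beta, gamma = mu * (1 - em), (1 - mu) * ep
    delta = eps = 0.0
    rho = 1.0
    for _ in range(n):
        delta, eps = alpha + beta * delta**m + gamma * eps**m, f + (1 - f) * eps**m
        rho *= beta * sum(delta**i for i in range(m))   # sigma_m(t), Eq. (24)
    return rho

vals = [rtbs_accuracy(20, 3, 0.8, 0.3, 0.2, f) for f in (0.2, 0.5, 0.8)]
```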
C.4 Derivation of RMTP reasoning cost
In this section, we derive how many steps it costs to find a correct solution in RMTP and thereby prove Proposition 1.
*Proof of Proposition 1*
The probability of accepting the correct step after the $i$ -th attempt is given by $\alpha^{i-1}\beta$ . Assuming a maximum number of attempts $m$ , the number of attempts consumed at each step satisfies:
$$
\Pr(i\ \text{attempts}\mid\text{correct})=\frac{\alpha^{i-1}\beta}{\beta+\alpha\beta+\cdots+\alpha^{m-1}\beta}=\frac{(1-\alpha)\alpha^{i-1}}{1-\alpha^{m}} \tag{56}
$$
Therefore, the average number of attempts required for a correct reasoning step is given by
$$
A_{m}=\sum_{i=1}^{m}i\cdot\frac{(1-\alpha)\alpha^{i-1}}{1-\alpha^{m}}=\frac{1-\alpha}{1-\alpha^{m}}\sum_{i=1}^{m}i\alpha^{i-1} \tag{57}
$$
To simplify the summation expression $\sum_{i=1}^{m}i\alpha^{i-1}$ (where $0<\alpha<1$ ), we can use the method of telescoping series. Let $S=\sum_{i=1}^{m}i\alpha^{i-1}$ . We calculate $\alpha S$ :
$$
\alpha S=\sum_{i=1}^{m}i\alpha^{i} \tag{58}
$$
Thus,
$$
S-\alpha S=\sum_{i=1}^{m}i\alpha^{i-1}-\sum_{i=1}^{m}i\alpha^{i} \tag{59}
$$
This gives us
$$
(1-\alpha)S=1+\alpha+\alpha^{2}+\cdots+\alpha^{m-1}-m\alpha^{m}=\frac{1-\alpha^{m}}{1-\alpha}-m\alpha^{m} \tag{60}
$$
Rearranging, we have
$$
\sum_{i=1}^{m}i\alpha^{i-1}=\frac{1-\alpha^{m}-(1-\alpha)m\alpha^{m}}{(1-\alpha)^{2}} \tag{61}
$$
Thus, the average number of attempts can be further expressed as:
$$
\begin{aligned}
A_{m}&=\frac{1-\alpha}{1-\alpha^{m}}\cdot\frac{1-\alpha^{m}-(1-\alpha)m\alpha^{m}}{(1-\alpha)^{2}}\\
&=\frac{1-\alpha^{m}-(1-\alpha)m\alpha^{m}}{(1-\alpha)(1-\alpha^{m})}=\frac{1}{1-\alpha}-m\frac{\alpha^{m}}{1-\alpha^{m}}
\end{aligned} \tag{62}
$$
Considering the limit as $m\to\infty$ , it can be shown using limit properties that $\lim_{m\to\infty}m\frac{\alpha^{m}}{1-\alpha^{m}}=0$ . If the correct solution is obtained (i.e., correct steps are accepted at each step), the average number of steps taken is given by
$$
\bar{T}=nA_{\infty}=\frac{n}{1-\alpha}=\frac{n}{1-\mu e_{-}-(1-\mu)(1-e_{+})}=\frac{n}{(1-\mu)e_{+}+\mu(1-e_{-})} \tag{66}
$$
∎
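The closed form in Equation (62) can be cross-checked against the defining sum (the code and the sample values of $\alpha$ are ours):

```python
# A_m from the closed form in Eq. (62) vs. the expectation computed directly
# from the truncated geometric distribution in Eq. (56).
def a_m_closed(alpha, m):
    return 1 / (1 - alpha) - m * alpha**m / (1 - alpha**m)

def a_m_direct(alpha, m):
    probs = [(1 - alpha) * alpha**(i - 1) / (1 - alpha**m) for i in range(1, m + 1)]
    return sum(i * p for i, p in zip(range(1, m + 1), probs))

for alpha in (0.2, 0.4, 0.6):
    assert abs(a_m_closed(alpha, 12) - a_m_direct(alpha, 12)) < 1e-12
# Taking m large recovers the limit 1 / (1 - alpha) used in Eq. (66).
assert abs(a_m_closed(0.4, 400) - 1 / 0.6) < 1e-9
```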
C.5 Considering posterior risks of rejected attempts
Our previous analysis is based on a coarse binary partition of states (correct and incorrect) for each scale, which enhances clarity yet does not capture real-world complexity. Therefore, we can introduce stronger constraints by taking into account the posterior distribution of states in $\mathcal{S}_{n}^{+}$ after multiple attempts. For example, if the model has produced several incorrect attempts at state ${S}$ (or rejected several correct attempts), it is more likely that the current state has not been well generalized by the model. Consequently, the chances of making subsequent errors increase. In this case, $\mu$ is likely to decrease with the number of attempts, while $e_{-}$ increases with the number of attempts. Thus, the probability of accepting the correct action will decrease as the number of attempts increases.
Therefore, we consider the scenario where the probabilities of error increase while the correctness rate $\mu$ drops after each attempt. We define $e_{i+},e_{i-},\mu_{i}$ , etc., to represent the probabilities related to the $i$ -th attempt, corresponding to the calculations of $\alpha_{i},\beta_{i},\gamma_{i}$ , etc. We have that $\beta_{i}=\mu_{i}(1-e_{i-})$ is monotonically decreasing, and $\gamma_{i}=(1-\mu_{i})e_{i+}$ is monotonically increasing with the index $i$ of the attempt. In this case, the derivation is similar to that of Proposition 2. Therefore, we skip all unnecessary details and present the results directly.
**Proposition 5**
*Given the above posterior risks, the RMTP accuracy $\tilde{\rho}(n)$ for problems with a scale of $n$ is
$$
\tilde{\rho}(n)=\left(\beta_{1}+\sum_{i=2}^{\infty}\beta_{i}\prod_{j=1}^{i-1}\alpha_{j}\right)^{n} \tag{67}
$$
Let $m$ be the width of RTBS. Let $\delta_{i|m}(n)$ denote the probability of a proposed step being rejected (either instantly or recursively) at the $i$ -th attempt on a correct state, and $\epsilon_{m}(n)$ follows the same definition as in Proposition 2. We have $\delta_{i|m}(0)=\epsilon_{m}(0)=0$ and the following recursive equations for $n>0$ :
$$
\delta_{i|m}(n)=\alpha_{i}+\beta_{i}\prod_{j=1}^{m}\delta_{j|m}(n-1)+\gamma_{i}(\epsilon_{m}(n-1))^{m},\qquad i=1,\cdots,m, \tag{68}
$$
$$
\epsilon_{m}(n)=f+(1-f)\left(\epsilon_{m}(n-1)\right)^{m}. \tag{69}
$$
Then, the RTBS accuracy $\tilde{\rho}_{m}(n)$ for problems of scale $n$ is given by:
$$
\tilde{\rho}_{m}(n)=\prod_{t=1}^{n}\sigma_{m}(t),\qquad\text{where }\sigma_{m}(t)=\beta_{1}+\sum_{j=2}^{m}\beta_{j}\prod_{i=1}^{j-1}\delta_{i|m}(t). \tag{70}
$$
In addition, $\delta_{i|m}(n)$ , $\epsilon_{m}(n)$ , and $\sigma_{m}(n)$ all monotonically increase and converge with respect to $n$ .*
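The recursions of Proposition 5 can be sketched in code as well (our code, with names we chose); with constant per-attempt probabilities they reduce to Proposition 2, which gives a convenient consistency check:

```python
# RTBS accuracy under attempt-dependent probabilities, following the
# recursions of Proposition 5.
def rtbs_accuracy_posterior(n, m, alphas, betas, gammas, f):
    """alphas, betas, gammas: length-m lists indexed by the attempt i."""
    deltas = [0.0] * m                     # delta_{i|m}(0) = 0
    eps = 0.0                              # eps_m(0) = 0
    rho = 1.0
    for _ in range(n):
        prod_prev = 1.0
        for d in deltas:                   # prod_j delta_{j|m}(n-1)
            prod_prev *= d
        deltas = [alphas[i] + betas[i] * prod_prev + gammas[i] * eps**m
                  for i in range(m)]       # Eq. (68)
        eps = f + (1 - f) * eps**m         # Eq. (69)
        sigma, pref = betas[0], 1.0
        for j in range(1, m):              # sigma_m(t) from Eq. (70)
            pref *= deltas[j - 1]
            sigma += betas[j] * pref
        rho *= sigma
    return rho
```

Feeding the same $\alpha,\beta,\gamma$ for every attempt reproduces the accuracy of Proposition 2, while a decaying $\beta_{i}$ lowers it, reflecting the posterior risk.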
In this new setting, it becomes much more challenging to derive an exact validity condition that remains clear and understandable. However, it is still useful to derive a bound that sufficiently guarantees the effectiveness of reflection. In the following, we show that the sufficient condition becomes much stricter than that in Proposition 3.
**Proposition 6**
*Assume $\mu_{1}<1$ and let $k=\inf_{i}\frac{\beta_{i+1}}{\beta_{i}}$ be the lower bound of the decay rate of the probability of accepting the correct step across multiple attempts. Then, a sufficient condition for $\tilde{\rho}(n)>\rho(n)$ is:
$$
\frac{e_{1-}}{k(1-\mu_{1})}+\sup_{i}e_{i+}<1 \tag{71}
$$*
*Proof*
Considering $\alpha_{i}=\mu_{i}e_{i-}+(1-\mu_{i})(1-e_{i+})$ , let $\underline{\alpha}=(1-\mu_{1})(1-\sup_{i}e_{i+})$ be its lower bound (recall that $\mu_{i}\leq\mu_{1}$ ). It can be seen that
$$
\beta_{1}+\alpha_{1}\beta_{2}+\alpha_{1}\alpha_{2}\beta_{3}+\cdots+\beta_{m}\prod_{i=1}^{m-1}\alpha_{i}\geq\sum_{j=1}^{m}(\underline{\alpha}k)^{j-1}\beta_{1}=\beta_{1}\frac{1-(\underline{\alpha}k)^{m}}{1-\underline{\alpha}k} \tag{72}
$$
As $m\to\infty$ , the sufficient condition for reflection validity is:
$$
\begin{aligned}
\frac{\beta_{1}}{1-\underline{\alpha}k}>\mu_{1}&\iff\frac{1-e_{1-}}{1-k(1-\mu_{1})(1-\sup_{i}e_{i+})}>1\\
&\iff k(1-\mu_{1})(1-\sup_{i}e_{i+})>e_{1-}\\
&\iff(1-\mu_{1})(1-\sup_{i}e_{i+})>\frac{e_{1-}}{k}\\
&\iff\frac{e_{1-}}{k(1-\mu_{1})}+\sup_{i}e_{i+}<1
\end{aligned} \tag{73}
$$
∎
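The geometric bound in Equation (72) can be spot-checked for a sample decreasing sequence (all numbers below are our illustrative choices, not values from the paper):

```python
# Termwise check of Eq. (72): beta_j >= k**(j-1) * beta_1 and alpha_i >=
# alpha_lo imply the left-hand sum dominates the geometric series on the right.
m = 8
k, alpha_lo = 0.9, 0.3
betas = [0.5 * k**j for j in range(m)]                  # decay rate exactly k
alphas = [0.3, 0.35, 0.4, 0.33, 0.5, 0.31, 0.45, 0.6]   # all >= alpha_lo

lhs, pref = betas[0], 1.0
for j in range(1, m):
    pref *= alphas[j - 1]                               # prod of alpha_1..alpha_{j}
    lhs += betas[j] * pref

rhs = betas[0] * (1 - (alpha_lo * k) ** m) / (1 - alpha_lo * k)
assert lhs >= rhs
```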
Appendix D Implementation details
This section describes the details of the training datasets, model architectures, and hyper-parameters used in experiments. Our implementation derives the model architectures, pretraining, and SFT from LitGPT [1] (version 0.4.12) under Apache License 2.0.
D.1 Algorithmic descriptions of reflective reasoning
Algorithms 1 and 2 present the pseudo-code of reasoning execution through RMTP and RTBS, respectively. In practice, we introduce a reflection budget $M$ to avoid infinite iteration. If reflective reasoning fails to find a solution within $M$ steps, the algorithm falls back to non-reflective reasoning.
To implement RTBS, we maintain a stack to store the reversed list of parental states, allowing them to be restored if needed. Different from our theoretical analysis, our practical implementation does not limit the number of attempts on the input query $Q$ (as long as the total budget $M$ is not used up) as $Q$ has no parent (i.e. the stack is empty).
Algorithm 1 Reflective reasoning through RMTP
0: the query ${Q}$ , the augmented policy $\tilde{\pi}=\{\pi,\mathcal{V}\}$ , transition function $\mathcal{T}$ , and reflective reasoning budget $M$
1: $t\leftarrow 0,{S}_{t}\leftarrow Q$
2: repeat
3: Infer $R_{t+1}\sim\pi(\cdot\mid{S}_{t})$
4: $Reject\leftarrow\text{False}$
5: if $t\leq M$ then
6: Infer ${V}_{t+1}\sim\mathcal{V}(\cdot\mid{S}_{t},{R}_{t+1})$
7: $Reject\leftarrow\text{True}$ if “✗” $\in{V}_{t+1}$
8: if $Reject=\text{True}$ then
9: ${S}_{t+1}\leftarrow{S}_{t}$
10: else
11: ${S}_{t+1}\leftarrow\mathcal{T}({S}_{t},{R}_{t+1})$
12: if ${R}_{t+1}$ is the answer then
13: $T\leftarrow t+1,{A}\leftarrow{R}_{t+1}$
14: else
15: $t\leftarrow t+1$
16: until the answer $A$ is produced
17: return $A$
Algorithm 2 Reflective trace-back search
0: the query ${Q}$ , the augmented policy $\tilde{\pi}=\{\pi,\mathcal{V}\}$ , transition function $\mathcal{T}$ , search width $m$ , and reflective reasoning budget $M$
1: $t\leftarrow 0,{S}_{t}\leftarrow Q$
2: $i\leftarrow 0$ {The index of attempts}
3: Initialize an empty stack $L$
4: repeat
5: Infer $R_{t+1}\sim\pi(\cdot\mid{S}_{t})$
6: $i\leftarrow i+1$
7: $Reject\leftarrow\text{False}$
8: if $t\leq M$ then
9: Infer ${V}_{t+1}\sim\mathcal{V}(\cdot\mid{S}_{t},{R}_{t+1})$
10: $Reject\leftarrow\text{True}$ if “✗” $\in{V}_{t+1}$
11: if $Reject=\text{True}$ then
12: if $i<m$ then
13: ${S}_{t+1}\leftarrow{S}_{t}$
14: else
15: {Find a parent state that has a remaining number of attempts.}
16: repeat
17: Pop $(s_{k},j)$ from $L$
18: ${S}_{t+1}\leftarrow s_{k},i\leftarrow j$
19: until $i<m$ or $L$ is empty
20: else
21: Push $({S}_{t},i)$ into $L$
22: ${S}_{t+1}\leftarrow\mathcal{T}({S}_{t},{R}_{t+1}),i\leftarrow 0$
23: if ${R}_{t+1}$ is the answer then
24: $T\leftarrow t+1,{A}\leftarrow{R}_{t+1}$
25: else
26: $t\leftarrow t+1$
27: until the answer $A$ is produced
28: return $A$
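As a concrete reference, the trace-back search of Algorithm 2 can be sketched in Python on a toy task. The countdown environment, the stochastic policy, and the verifier below are stand-ins we invented for illustration, not the paper's trained models:

```python
import random

def rtbs(query, policy, verifier, transition, is_answer, m, M):
    """Reflective trace-back search (a sketch of Algorithm 2).
    verifier(s, r) returns True to reject the proposed step r."""
    s, i, budget = query, 0, M
    stack = []                            # parental states as (state, attempts)
    while True:
        r = policy(s)
        i += 1
        reject = budget > 0 and verifier(s, r)
        budget -= 1                       # after M steps, fall back to no reflection
        if reject:
            if i >= m:                    # width exhausted: trace back
                while stack and i >= m:
                    s, i = stack.pop()
            # otherwise stay at s and retry
        else:
            stack.append((s, i))          # remember the parent and its attempt count
            s, i = transition(s, r), 0
            if is_answer(s):
                return s

# Toy countdown task: a state is (steps_left, still_correct); a proposal r is
# "correct" with probability mu, and an accepted wrong step poisons the state.
rng = random.Random(0)
mu, em, ep = 0.8, 0.1, 0.1
policy = lambda s: rng.random() < mu
verifier = lambda s, r: (rng.random() < em) if r else (rng.random() < 1 - ep)
transition = lambda s, r: (s[0] - 1, s[1] and r)
answer = rtbs((10, True), policy, verifier, transition,
              lambda s: s[0] == 0, m=3, M=10_000)
```

As in the practical implementation described above, the root query gets unlimited attempts: with an empty stack, the trace-back loop simply leaves the current state in place.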
D.2 Example CoT data
We implement predefined programs to generate examples of CoTs and self-verification. Figure 11 presents example reasoning steps (correct) for non-reflective training and an example of detailed verification for reflective training. In our practical implementation, the reasoning steps include additional tokens, such as preprocessing and formatting, to assist planning and transition. To simplify the transition function $\mathcal{T}$ , the example steps also spell out exactly how the states are supposed to be updated, which removes the task-specific prior in $\mathcal{T}$ .
(Image x14.png: one Mult reasoning step within the context window, shown as three color-coded sections: the state $S_{t}$ (`<state>` 145 * 86093 +101500 `</state>`), the planning and update step $R_{t+1}$ (eliminating a digit via `left * 8`, cumulating intermediate products, `get 11701500`, and the updated state 145 * 6093 +11701500), and the detailed verification (a `<reflect>` block with one check label per elemental computation and per state entry).)
(a) Mult
(Image x15.png: one Sudoku reasoning step within the context window, shown as four color-coded sections: the state $S_{t}$ (the $9\times 9$ matrix), a preprocessing section listing the determined numbers of each row, column, and block together with the reducible positions, the planning and update step producing the new matrix, and the detailed verification $V_{t+1}$ (a `<reflect>` block of check labels).)
(b) Sudoku
Figure 11: Example reasoning steps with detailed verification for integer multiplication and Sudoku.
D.2.1 Multiplication CoT
Each state is an expression $x_{t}\times y_{t}+r_{t}$, where $x_{t}$ and $y_{t}$ are the remaining values of the two multiplicands, and $r_{t}$ is the cumulative result. For an input query $x\times y$, the expert reasoner assigns $x_{1}=x$, $y_{1}=y$, and $r_{1}=0$.
On each step, the reasoner plans a number $u\in\{1,...,9\}$ to eliminate in $x_{t}$ (or $y_{t}$). Specifically, it computes $\delta=u\times y_{t}$ (or $\delta=u\times x_{t}$). Next, it finds the digits in $x_{t}$ (or $y_{t}$) that are equal to $u$ and sets them to $0$ in $x_{t+1}$ (or $y_{t+1}$). For each digit set to $0$, the reasoner cumulates $\delta\times 10^{i}$ into $r_{t}$, where $i$ is the position of the digit (starting from $0$ for the unit digit). An example of a reasoning step is shown in Figure 11(a). Such steps are repeated until either $x_{T}$ or $y_{T}$ becomes $0$; the answer is then $r_{T}$.
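For illustration (a sketch of ours, not the authors' implementation), this expert procedure can be written for the case of always eliminating a digit of $x_{t}$:

```python
def mult_step(x, y, r):
    """One expert step on the state x * y + r: eliminate one digit value u of x.

    The step must preserve the value of the expression x * y + r.
    """
    digits = [int(d) for d in str(x)]
    u = next(d for d in digits if d != 0)      # plan: a digit value u in {1,...,9}
    delta = u * y                              # delta = u * y_t
    for i, d in enumerate(digits):
        if d == u:
            digits[i] = 0                      # set matching digits to 0
            power = len(digits) - 1 - i        # 0 for the unit digit
            r += delta * 10 ** power           # cumulate delta * 10^i into r
    return int("".join(map(str, digits))), y, r

def expert_multiply(x, y):
    """Repeat steps until x becomes 0; the answer is the cumulative result r."""
    r = 0
    while x != 0:
        x, y, r = mult_step(x, y, r)
    return r
```

Each step removes $u\times 10^{i}$ from $x_{t}$ and adds $\delta\times 10^{i}=u\times y_{t}\times 10^{i}$ to $r_{t}$, so the expression value is invariant across steps.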
D.2.2 Sudoku CoT
Each state is a $9\times 9$ matrix representing the partial solution, where blanks are represented by $0$. On each step, the reasoner preprocesses the state by listing the determined numbers of each row, column, and block. Given this information, the model fills the blank positions that have only one valid candidate. If no blank can be reduced this way, the model randomly guesses at a blank position with the fewest candidates. This process continues until no blanks (i.e., zeros) remain in the matrix.
An example of a reasoning step is shown on the right of Figure 11(b). The planned updates (i.e., which positions are filled with which numbers) are intrinsically included in the new matrix, which is directly taken as the next state by the transition function $\mathcal{T}$.
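As an illustrative sketch (ours, representing the grid as a $9\times 9$ list of lists with 0 for blanks), the single-candidate reduction can be implemented as:

```python
def candidates(grid, r, c):
    """Digits still valid for the blank at (r, c) under row/column/block rules."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return set(range(1, 10)) - used

def reduce_step(grid):
    """Fill every blank that has exactly one candidate; report whether any was filled."""
    filled = False
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                cand = candidates(grid, r, c)
                if len(cand) == 1:
                    grid[r][c] = cand.pop()
                    filled = True
    return filled
```

When `reduce_step` fills nothing, the expert falls back to guessing at a blank with the fewest candidates, as described above.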
D.2.3 Verification of reasoning steps
Binary Verification
Binary verification labels are generated by a rule-based checker applied to each reasoning step. In Multiplication, the checker simply tests whether the next state $x_{t+1}\times y_{t+1}+r_{t+1}$ is equal in value to the current state $x_{t}\times y_{t}+r_{t}$. In Sudoku, it checks whether any existing numbers in the old matrix are modified and whether the new matrix has duplicated numbers in any row, column, or block.
Detailed Verification
In Multiplication, we output a label for each elemental computation (addition or unit-pair product), indicating whether it is computed correctly. In Sudoku, we output a label for each position in the new matrix, signifying whether the number violates the rules of Sudoku (i.e., conflicts with another number in the same row, column, or block) or is inconsistent with the previous matrix. These labels are organized in the same format as the CoT data. Examples of detailed verification for correct steps are shown in Figure 11(b). If a step contains errors, we replace the corresponding "$\checkmark$" with "$\times$".
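The binary checks described above can be sketched as follows (our illustration; states are simplified to plain tuples and matrices):

```python
def binary_verify_mult(state, next_state):
    """Binary check for one Mult step: the expression x * y + r must be preserved."""
    x, y, r = state
    x2, y2, r2 = next_state
    return x2 * y2 + r2 == x * y + r

def binary_verify_sudoku(old, new):
    """Binary check for one Sudoku step: filled cells are unchanged and no
    row, column, or block of the new matrix contains a duplicated number."""
    for r in range(9):
        for c in range(9):
            if old[r][c] != 0 and new[r][c] != old[r][c]:
                return False                                    # existing number modified
    units = [[(r, c) for c in range(9)] for r in range(9)]      # rows
    units += [[(r, c) for r in range(9)] for c in range(9)]     # columns
    units += [[(3 * br + i, 3 * bc + j) for i in range(3) for j in range(3)]
              for br in range(3) for bc in range(3)]            # blocks
    for unit in units:
        vals = [new[r][c] for r, c in unit if new[r][c] != 0]
        if len(vals) != len(set(vals)):
            return False                                        # duplicated number
    return True
```

For example, the step in Figure 11(a) passes the Mult check, since $145\times 86093+101500=145\times 6093+11701500$.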
D.3 Model architectures and tokenization
Table 2: The model architectures used in our implementation.
| Task | Mult | | | Sudoku | | |
| --- | --- | --- | --- | --- | --- | --- |
| Model size | 1M | 4M | 16M | 1M | 4M | 16M |
| Vocabulary size | 128 | 128 | 128 | 128 | 128 | 128 |
| Embedding size | 128 | 256 | 512 | 128 | 256 | 512 |
| Number of layers | 5 | 5 | 5 | 5 | 5 | 5 |
| Number of attention heads | 4 | 8 | 8 | 4 | 8 | 8 |
Our model architecture uses multi-head causal attention with LayerNorm [36], with the implementation provided by LitGPT [1]. Table 2 specifies the architecture settings of the transformer models with 1M, 4M, and 16M parameters.
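These model sizes are consistent with a standard back-of-the-envelope parameter count (our sketch, assuming a 4x MLP expansion and ignoring LayerNorm and bias parameters):

```python
def approx_transformer_params(embed_size, n_layers=5, vocab_size=128):
    """Rough parameter count: token embedding (vocab * d) plus, per layer,
    attention projections (4 d^2) and a 4x-expansion MLP (8 d^2)."""
    per_layer = 12 * embed_size ** 2
    return vocab_size * embed_size + n_layers * per_layer

for d in (128, 256, 512):
    print(d, approx_transformer_params(d))  # roughly 1.0M, 4.0M, and 15.8M parameters
```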
Tokenizers
We employ the byte-pair encoding algorithm [30] to train tokenizers on CoT reasoning examples for the tiny transformers. Special tokens for reflection and reasoning structure (e.g., identifiers for the beginning and end positions of states and reasoning steps) are manually added to the vocabulary. Since the vocabulary size is small (128 in our experiments), the learned vocabulary is limited to elemental characters and high-frequency formatting words.
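A toy version of the training loop (ours; production BPE implementations differ in detail) illustrates how a small vocabulary ends up containing characters, frequent subwords, and the manually added special tokens:

```python
from collections import Counter

def train_bpe(corpus, vocab_size, special_tokens=()):
    """Toy byte-pair encoding: repeatedly merge the most frequent adjacent
    pair of symbols until the vocabulary reaches `vocab_size`."""
    words = [list(w) for w in corpus]
    vocab = set(special_tokens) | {ch for w in words for ch in w}
    while len(vocab) < vocab_size:
        pairs = Counter()
        for w in words:
            pairs.update(zip(w, w[1:]))      # count adjacent symbol pairs
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]  # most frequent pair
        vocab.add(a + b)
        for w in words:                      # apply the merge everywhere
            i = 0
            while i < len(w) - 1:
                if (w[i], w[i + 1]) == (a, b):
                    w[i:i + 2] = [a + b]
                else:
                    i += 1
    return vocab
```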
D.4 Hyperparameters
Table 3 presents the hyper-parameters used in training and testing the tiny-transformer models. In the following sections, we describe how these parameters are involved in our implementation.
Table 3: The main hyper-parameters used in this work.
| Task | Mult | | | Sudoku | | |
| --- | --- | --- | --- | --- | --- | --- |
| Model size | 1M | 4M | 16M | 1M | 4M | 16M |
| Training CoT examples: $N_{CoT}$ | 32K | | | 36K | | |
| Total pretraining tokens: $N_{pre\_tok}$ | 1B | | | | | |
| Pretraining batch size: $B_{pre}$ | 128 | | | | | |
| Pretraining learning rate: $\eta_{pre}$ | 0.001 $\to$ 0.00006 | | | | | |
| SFT batch size: $B_{SFT}$ | 128 | | | | | |
| SFT learning rate: $\eta_{SFT}$ | 0.001 $\to$ 0.00006 | | | | | |
| Non-reflective SFT epochs: $E_{SFT}$ | 5 | | | | | |
| Reflective sampling temperature: Solving $\tau_{refl:s}$ | 0.75 | | | | | |
| Reflective sampling temperature: Proposing $\tau_{refl:p}$ | 1 | 1.25 | 1.5 | 1 | 1.25 | 1.5 |
| Reflective SFT epochs: $E_{RSFT}$ | 3 | | | | | |
| PPO replay buffer size: $N_{PPO:buf}$ | 512 | | | | | |
| GRPO replay buffer size: $N_{GRPO:buf}$ | 1024 | | | | | |
| RL sampling interval: $E_{RL:int}$ | 4 | | | | | |
| RL sampling temperature: Planning $\tau_{RL:\pi}$ | 1.25 | 1 | 1.25 | 1.25 | | |
| RL sampling temperature: Feedback $\tau_{RL:\pi_{f}}$ | 1 | | | | | |
| RL clipping factor: $\varepsilon$ | 0.1 | | | | | |
| RL KL-divergence factor: $\beta$ | 0.1 | | | | | |
| GRPO group size: $G$ | 8 | | | | | |
| RL total epochs: $E_{RL}$ | 512 | | | | | |
| RL learning rate: $\eta_{RL}$ | 0.00005 $\to$ 0.00001 | | | | | |
| PPO warm-up epochs: $E_{PPO:warmup}$ | 64 | | | | | |
| Testing first-attempt temperature: $\tau_{\pi:first}$ | 0 | | | 1 | | |
| Testing revision temperature: $\tau_{\pi:rev}$ | 1 | | | | | |
| Testing verification temperature: $\tau_{\pi_{f}}$ | 0 | | | | | |
| Testing non-reflective steps $T$ : | 32 | | | | | |
| Testing reflective steps $\tilde{T}$ : | 64 | | | | | |
| RTBS width: $m$ | 4 | | | | | |
Non-reflective training
Pretraining and SFT utilize a dataset of $N_{CoT}$ CoT examples generated by an expert reasoning program. Pretraining treats these CoT examples as plain text and minimizes the cross-entropy loss for next-token prediction, using batch size $B_{pre}$ and learning rate $\eta_{pre}$; it terminates after a total of $N_{pre\_tok}$ predicted tokens. Non-reflective SFT uses the same dataset as pretraining: it maximizes the likelihood of predicting example outputs (reasoning steps) from prompts (reasoning states), using batch size $B_{SFT}$ and learning rate $\eta_{SFT}$, for a total of $E_{SFT}$ epochs.
Reflective SFT
To perform reflective SFT, we use the model after non-reflective training to sample trajectories for each input query in the training set. Reflective sampling involves two decoding temperatures: the lower solving temperature $\tau_{refl:s}$ is used to walk along the solution path, while the higher proposing temperature $\tau_{refl:p}$ is used to generate diverse steps, which are added to the reflective dataset. The verification examples, which include binary or detailed labels, are then generated by an expert verifier program. Reflective SFT runs for $E_{RSFT}$ epochs, using the same batch size and learning rate as non-reflective SFT.
Reinforcement learning
We use the online RL algorithms described in Appendix B, including PPO and GRPO. These algorithms maintain an experience replay buffer storing $N_{PPO:buf}$ and $N_{GRPO:buf}$ example trajectories, respectively. After every $E_{RL:int}$ epochs trained on the buffer, the buffer is refreshed by sampling new trajectories, using temperature $\tau_{RL:\pi}$ for planning steps and temperature $\tau_{RL:\pi_{f}}$ for reflective feedback. According to Equations 8 and 12, the hyper-parameters in both the PPO and GRPO objectives include the clipping factor $\varepsilon$ and the KL-divergence factor $\beta$. Additionally, GRPO defines $G$ as the number of trajectories in a group. We run the RL algorithms for $E_{RL}$ epochs with learning rate $\eta_{RL}$. PPO involves $E_{PPO:warmup}$ warm-up epochs at the beginning of training, during which only the value model is optimized.
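Schematically, both objectives share a clipped surrogate and a KL penalty; the following plain-Python sketch (ours, not the exact objectives of Equations 8 and 12) illustrates how $\varepsilon$ and $\beta$ enter the per-token loss:

```python
import math

def clipped_rl_loss(logp_new, logp_old, logp_ref, advantages, eps=0.1, beta=0.1):
    """Sketch of a PPO/GRPO-style loss: clipped surrogate plus a KL penalty.

    Inputs are per-token log-probabilities under the current, sampling, and
    reference policies, and per-token advantage estimates."""
    total, n = 0.0, len(advantages)
    for ln, lo, lr, adv in zip(logp_new, logp_old, logp_ref, advantages):
        ratio = math.exp(ln - lo)
        clipped = min(max(ratio, 1 - eps), 1 + eps)
        surrogate = min(ratio * adv, clipped * adv)   # clipped surrogate term
        log_r = lr - ln
        kl = math.exp(log_r) - log_r - 1              # a common per-token KL estimate
        total += surrogate - beta * kl
    return -total / n                                 # minimize the negative objective
```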
Testing
During testing, we execute the reasoner using three decoding temperatures: $\tau_{\pi:first}$ for the first planning attempt, $\tau_{\pi:rev}$ for the revised planning attempt after being rejected, and $\tau_{\pi_{f}}$ for self-verification. We use low temperatures to improve accuracy for more deterministic decisions, such as self-verifying feedback and the first attempt in Mult. We use higher temperatures for exploratory decisions, such as planning in Sudoku and revised attempts in Mult. We set the non-reflective reasoning budget to $T$ steps and the reflective reasoning budget to $\tilde{T}$ steps. If the reflective budget is exhausted, the reasoner reverts to non-reflective reasoning. We set the search width of RTBS to $m$ .
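Temperature decoding itself follows the standard recipe (our sketch): logits are divided by $\tau$ before the softmax, with $\tau=0$ treated as greedy argmax:

```python
import math, random

def sample_token(logits, temperature):
    """Temperature decoding: tau = 0 is greedy; higher tau is more exploratory."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    probs = [math.exp(s - m) for s in scaled]   # numerically stable softmax weights
    r = random.random() * sum(probs)
    for i, p in enumerate(probs):
        r -= p
        if r <= 0:
            return i
    return len(logits) - 1
```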
D.5 Computational resources
Since our models are very small, it is entirely feasible to reproduce all our results on any PC (even a laptop) with a standard NVIDIA GPU. Using our hyper-parameters, the maximum GPU memory used for training the 1M, 4M, and 16M models is approximately 4GB, 12GB, and 16GB, respectively, and can easily be reduced further with smaller batch sizes. To run multiple experiments simultaneously, we utilize cloud servers with a total of 5 GPUs (one NVIDIA RTX-3090 and four NVIDIA A10 GPUs).
For each model size and task, a complete pipeline (non-reflective training, reflective training, and RL) takes about two days on a single GPU. This includes 1-2 hours for non-reflective training, 8-12 hours for data collection for reflective training, 1-2 hours for reflective SFT, 6-12 hours for RL, and 6-12 hours for testing.
Appendix E Supplementary results of experiments
In this section, we present supplementary results from our experiments: 1) we assess the reasoning accuracy of various large language models on integer multiplication and Sudoku tasks; 2) we report the accuracy outcomes of models after implementing different supervised fine-tuning strategies; 3) we provide full results of reasoning accuracy after GRPO; 4) we additionally provide the results of PPO, which is weaker than GRPO in reflective reasoning.
E.1 Evaluation of LLMs
In this section, we provide the reasoning accuracy of LLMs on Mult and Sudoku, including GPT-4o [22], OpenAI o3-mini [21], and DeepSeek-R1 [5]. Since GPT-4o is not a CoT-specialized model, we use the magic prompt "let's think step-by-step" [13] to elicit CoT reasoning. For o3-mini and DeepSeek-R1, we prompt only with a natural description of the queries. As shown in Table 4, among these LLMs, OpenAI o3-mini produces the highest accuracy in both tasks.
To illustrate how well tiny transformers can do in these tasks, we also present the best performance (results selected from Tables 5 and 7) of our 1M, 4M, and 16M models for each difficulty level, showing performance close to, or even better than, some of the LLM reasoners. For example, according to our GRPO results (see Table 7), our best 4M Sudoku reasoner (RTBS with optional detailed verification) performs on par with OpenAI o3-mini, and our best 16M Mult reasoner (with binary verification) outperforms DeepSeek-R1 at ID difficulties. Note that this paper mainly focuses on fundamental analysis and does not intend to compete with general-purpose LLM reasoners, which could certainly attain better accuracy if specially trained on our tasks. Such a comparison is inherently unfair due to the massive gap in resource costs and data scale. The purpose of these results is to show how challenging these tasks can be, providing a conceptual notion of how well a tiny model can perform.
Table 4: The accuracy (%) of GPT-4o, OpenAI o3-mini, and DeepSeek-R1 in integer multiplication and Sudoku, compared with the best performance of our 1M (1M*), 4M (4M*), and 16M (16M*) transformers. The "OOD-Hard" label for LLMs only refers to the same difficulty as used in testing our tiny transformers, as such questions may have been seen in the training of LLMs.
| Task | Difficulty | GPT-4o | o3-mini | DeepSeek-R1 | 1M* | 4M* | 16M* |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Mult | ID-Easy | 73.2 | 100 | 96.8 | 96.2 | 98.7 | 99.7 |
| | ID-Hard | 32.6 | 97.2 | 77.0 | 52.7 | 77.0 | 81.1 |
| | OOD-Hard | 18.6 | 96.4 | 61.4 | 3.7 | 5.8 | 9.4 |
| Sudoku | ID-Easy | 40.7 | 99.6 | 90.4 | 33.9 | 97.2 | 99.8 |
| | ID-Hard | 2.8 | 52.8 | 4.4 | 0.4 | 58.1 | 72.2 |
| | OOD-Hard | 0.0 | 0.0 | 0.0 | 0.0 | 6.9 | 14.4 |
E.2 Results of supervised fine tuning
Table 5 includes our complete results of reasoning accuracy after non-reflective and reflective SFT. As discussed in Section 3.1, our implementation uses Reduced states that maintain only the useful information for tiny transformers. To justify this, we also test the vanilla Complete implementation, where each state ${S}_{t}=({Q},{R}_{1},\ldots,{R}_{t-1})$ includes the full history of past reasoning steps. As a baseline, the Direct thought without intermediate steps is also presented.
Table 5: The reasoning accuracy (%) for 1M, 4M, and 16M transformers after SFT.
| Model | Task | Difficulty | Direct | Complete | Reduced | Binary: None | Binary: RMTP | Binary: RTBS | Detailed: None | Detailed: RMTP | Detailed: RTBS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1M | Mult | ID Easy | 21.8 | 10.6 | 23.6 | 95.8 | 94.5 | 93.4 | 22.0 | 33.4 | 24.2 |
| | | ID Hard | 3.0 | 1.4 | 2.0 | 52.7 | 44.6 | 35.5 | 2.2 | 4.8 | 2.8 |
| | | OOD Hard | 1.4 | 0.3 | 1.0 | 3.7 | 2.2 | 1.2 | 1.0 | 0.8 | 0.4 |
| | Sudoku | ID Easy | 2.8 | 0 | 1.4 | 33.0 | 32.4 | 2.4 | 17.4 | 18.7 | 19.4 |
| | | ID Hard | 0 | 0 | 0 | 0.3 | 0.1 | 0 | 0.1 | 0 | 0 |
| | | OOD Hard | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 4M | Mult | ID Easy | 15.6 | 17.2 | 92.0 | 97.7 | 97.6 | 97.3 | 94.5 | 93.8 | 93.3 |
| | | ID Hard | 1.7 | 1.9 | 37.3 | 56.9 | 62.2 | 53.0 | 43.4 | 47.6 | 42.4 |
| | | OOD Hard | 1.2 | 1.0 | 2.2 | 2.9 | 1.8 | 1.1 | 3.7 | 3.3 | 2.7 |
| | Sudoku | ID Easy | 13.0 | 3.9 | 52.2 | 92.1 | 96.8 | 96.0 | 54.4 | 81.9 | 88.5 |
| | | ID Hard | 0.1 | 0 | 3.3 | 40.9 | 46.3 | 53.3 | 5.2 | 16.9 | 45.7 |
| | | OOD Hard | 0 | 0 | 0 | 0.4 | 4.0 | 6.7 | 0.0 | 1.1 | 2.0 |
| 16M | Mult | ID Easy | 15.1 | 59.2 | 99.2 | 98.8 | 98.9 | 98.8 | 99.2 | 99.5 | 98.5 |
| | | ID Hard | 1.6 | 9.6 | 65.9 | 65.2 | 76.7 | 74.9 | 65.9 | 76.4 | 73.5 |
| | | OOD Hard | 1.2 | 1.0 | 2.5 | 1.1 | 1.3 | 1.3 | 9.2 | 9.4 | 7.2 |
| | Sudoku | ID Easy | 35.8 | 15.9 | 95.7 | 97.1 | 97.9 | 92.5 | 93.0 | 99.0 | 99.7 |
| | | ID Hard | 0.4 | 0 | 48.8 | 50.1 | 53.1 | 54.8 | 46.9 | 57.9 | 70.7 |
| | | OOD Hard | 0 | 0 | 0.4 | 0.9 | 4.4 | 6.0 | 0.7 | 8.2 | 14.4 |
Reducing the redundancy of states in long CoTs benefits tiny transformers. The left three columns in Table 5 compare the above thought implementations for non-reflective models. We see that both direct and complete thoughts fail to reach acceptable performance even at ID-Easy difficulty. This demonstrates the importance of avoiding long-context inference by reducing redundancy in the representation of states. Considering the huge performance gap, we exclude the complete and direct implementations from our main discussion.
Estimated errors of self-verification
For RMTP and RTBS executions, we employ oracle verifiers to maintain test-time statistics of the average $e_{-}$ and $e_{+}$ (see the definitions in Section 4) over reasoning states. The results are shown in Table 6, where we also present the difference by which reflective reasoning raises performance over non-reflective reasoning. We only count the errors of the first attempts on reasoning states to avoid positive bias, as the reasoner may be trapped in some state and repeat the same error for many steps.
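Schematically (our sketch; the mapping of the two rates onto $e_{-}$ and $e_{+}$ follows the definitions in Section 4), these test-time statistics can be maintained as:

```python
def verification_error_rates(records):
    """Estimate verification error rates from oracle-labeled first attempts.

    `records` holds one (step_is_correct, step_was_rejected) pair per first
    attempt on a reasoning state. Returns the false-rejection rate (correct
    steps that were rejected) and the false-acceptance rate (incorrect steps
    that were accepted)."""
    correct = [rej for ok, rej in records if ok]
    incorrect = [rej for ok, rej in records if not ok]
    false_rejection = sum(correct) / len(correct) if correct else 0.0
    false_acceptance = sum(not r for r in incorrect) / len(incorrect) if incorrect else 0.0
    return false_rejection, false_acceptance
```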
Table 6: The percentage (%) of test-time verification errors (i.e., $e_{-}$ and $e_{+}$ ) after reflective SFT. Additionally, we compute $\Delta$ as the difference of how much reflective reasoning raises the performance over non-reflective reasoning, i.e. RMTP (RTBS) accuracy minus non-reflective accuracy.
| Model | Task | Difficulty | Binary RMTP: $e_{-}$ | $e_{+}$ | $\Delta$ | Binary RTBS: $e_{-}$ | $e_{+}$ | $\Delta$ | Detailed RMTP: $e_{-}$ | $e_{+}$ | $\Delta$ | Detailed RTBS: $e_{-}$ | $e_{+}$ | $\Delta$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1M | Mult | ID Easy | 3.8 | 32.9 | $-1.3$ | 4.4 | 33.0 | $-2.4$ | 4.9 | 6.9 | $+11.4$ | 18.3 | 7.4 | $+2.2$ |
| | | ID Hard | 16.4 | 19.3 | $-8.1$ | 3.6 | 22.5 | $-17.2$ | 0.9 | 2.2 | $+2.6$ | 14.5 | 2.4 | $+0.6$ |
| | | OOD Hard | 37.6 | 19.4 | $-1.5$ | 6.0 | 14.9 | $-2.5$ | 13.6 | 10.2 | $-0.2$ | 13.2 | 24.4 | $-0.6$ |
| | Sudoku | ID Easy | 9.9 | 35.2 | $-0.6$ | 31.1 | 43.9 | $-30.6$ | 87.1 | 0.1 | $+1.3$ | 85.1 | 0.1 | $+2$ |
| | | ID Hard | 21.1 | 31.0 | $-0.2$ | 33.1 | 28.6 | $-0.3$ | 82.8 | 0 | $-0.1$ | 79.4 | 0 | $-0.1$ |
| | | OOD Hard | 60.3 | 7.5 | $0$ | 60.2 | 13.4 | $0$ | 87.9 | 0 | $0$ | 84.5 | 0 | $0$ |
| 4M | Mult | ID Easy | 25.1 | 5.9 | $-0.1$ | 58.1 | 8.9 | $-0.4$ | 30.4 | 3.7 | $-0.7$ | 28.7 | 7.5 | $-1.2$ |
| | | ID Hard | 2.4 | 23.6 | $+5.3$ | 26.0 | 30.8 | $-3.9$ | 3.3 | 25.1 | $+4.2$ | 10.0 | 29.3 | $-1.0$ |
| | | OOD Hard | 7.5 | 42.9 | $-1.1$ | 18.0 | 61.7 | $-1.8$ | 5.9 | 28.1 | $-0.4$ | 10.9 | 28.2 | $-1.0$ |
| | Sudoku | ID Easy | 39.5 | 9.5 | $+4.7$ | 40.4 | 11.5 | $+3.9$ | 23.8 | 0.1 | $+27.5$ | 46.7 | 0.3 | $+34.1$ |
| | | ID Hard | 41.3 | 1.9 | $+5.4$ | 56.0 | 6.7 | $+12.4$ | 17.3 | 0.2 | $+11.7$ | 22.1 | 0.3 | $+40.5$ |
| | | OOD Hard | 78.5 | 0.8 | $+3.6$ | 70.6 | 0.6 | $+6.3$ | 31.5 | 0.1 | $+1.1$ | 35.9 | 0.1 | $+2$ |
| 16M | Mult | ID Easy | 11.3 | 8.6 | $+0.1$ | 6.1 | 9.4 | $+0.0$ | 15.7 | 2.1 | $+0.3$ | 3.8 | 2.9 | $-0.7$ |
| | | ID Hard | 1.4 | 13.9 | $+11.5$ | 1.8 | 16.9 | $+9.7$ | 2.5 | 7.0 | $+10.5$ | 4.4 | 7.2 | $+7.6$ |
| | | OOD Hard | 1.3 | 86.4 | $+0.2$ | 1.5 | 88.2 | $+0.2$ | 8.5 | 18.3 | $+0.2$ | 11.7 | 19.7 | $-2$ |
| | Sudoku | ID Easy | 40.1 | 3.3 | $+0.8$ | 10.1 | 4.7 | $-4.6$ | 6.6 | 1.7 | $+6$ | 9.1 | 6.4 | $+6.7$ |
| | | ID Hard | 50.5 | 4.3 | $+3$ | 37.2 | 9.4 | $+4.7$ | 15.4 | 0.1 | $+11.0$ | 10.6 | 0.6 | $+23.8$ |
| | | OOD Hard | 75.2 | 4.2 | $+3.5$ | 65.0 | 3.1 | $+5.1$ | 28.3 | 0.1 | $+7.5$ | 24.8 | 0.0 | $+13.7$ |
Our full results provide more evidence for the findings discussed in Section 5.1:
- Learning to self-verify enhances non-reflective execution for 9 out of 12 models (2 verification types, 3 model sizes, and 2 tasks), such that accuracy does not decrease in any difficulty and increases in at least one difficulty.
- RMTP improves performance over non-reflective execution for all 4M and 16M models. However, RMTP based on binary verification fails to benefit the 1M models, which suffer from a high $e_{-}$ .
- 4M and 16M Sudoku models greatly benefit from RTBS, especially using detailed verification.
E.3 Results of GRPO
The complete results of the models after GRPO are given in Table 7. For convenient comparison, Table 8 presents the differences in accuracy between Table 7 and Table 5, i.e., the changes in accuracy caused by GRPO.
Table 7: The accuracy (%) of the 1M, 4M, and 16M transformers after GRPO.
| Model | Task | Difficulty | None | Binary: None | Binary: RMTP | Binary: RTBS | Detailed: None | Detailed: RMTP | Detailed: RTBS | Optional Detailed: None | Optional Detailed: RMTP | Optional Detailed: RTBS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1M | Mult | ID Easy | 52.6 | 96.2 | 95.9 | 95.7 | 53.0 | 49.5 | 45.1 | 48.6 | 47.7 | 48.8 |
| | | ID Hard | 11.6 | 50.0 | 44.0 | 42.0 | 11.4 | 9.7 | 8.1 | 12.2 | 12.7 | 12.6 |
| | | OOD Hard | 1.1 | 2.5 | 1.9 | 1.6 | 1.0 | 0.9 | 0.4 | 1.2 | 1.3 | 1.2 |
| | Sudoku | ID Easy | 1.3 | 33.9 | 29.2 | 4.5 | 17.6 | 20.7 | 18.7 | 23.0 | 23.0 | 22.6 |
| | | ID Hard | 0 | 0.4 | 0 | 0.2 | 0 | 0.1 | 0 | 0.1 | 0.1 | 0 |
| | | OOD Hard | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 4M | Mult | ID Easy | 98.0 | 98.6 | 98.7 | 98.8 | 98.2 | 98.0 | 98.4 | 98.2 | 98.4 | 98.6 |
| | | ID Hard | 65.6 | 73.6 | 77.0 | 76.7 | 63.0 | 64.3 | 63.2 | 63.9 | 66.8 | 66.1 |
| | | OOD Hard | 2.3 | 2.7 | 2.7 | 2.3 | 5.8 | 5.3 | 5.3 | 3.3 | 3.2 | 3.3 |
| | Sudoku | ID Easy | 58.7 | 93.8 | 97.2 | 96.7 | 57.8 | 85.3 | 92.2 | 77.0 | 94 | 98.2 |
| | | ID Hard | 3.2 | 43.9 | 53.8 | 58.1 | 5.6 | 24.7 | 47.7 | 21.4 | 37.7 | 61.3 |
| | | OOD Hard | 0 | 0.4 | 4.9 | 6.9 | 0 | 0.4 | 2.0 | 0 | 1.8 | 4.2 |
| 16M | Mult | ID Easy | 99.8 | 99.2 | 99.2 | 99.1 | 99.7 | 99.6 | 99.4 | 99.2 | 99.4 | 99.3 |
| | | ID Hard | 77.2 | 75.2 | 81.1 | 79.6 | 76.3 | 77.8 | 77.6 | 75.9 | 78.4 | 77.7 |
| | | OOD Hard | 1.8 | 1.3 | 1.8 | 1.8 | 8.4 | 8.2 | 7.4 | 6.0 | 5.5 | 5.6 |
| | Sudoku | ID Easy | 96.3 | 97.6 | 98.8 | 94.6 | 93.3 | 98.8 | 99.8 | 88.7 | 97.6 | 99.0 |
| | | ID Hard | 51.3 | 51.7 | 58.0 | 62.3 | 46.7 | 60.4 | 72.2 | 42.2 | 57.3 | 70.9 |
| | | OOD Hard | 0.7 | 0.7 | 6.0 | 7.8 | 0.2 | 6.7 | 12.0 | 0.2 | 6.7 | 11.1 |
Table 8: The difference in accuracy (%) of the 1M, 4M, and 16M transformers after GRPO. Positive values mean that GRPO raises the accuracy of the models above SFT.
| Model | Task | Difficulty | None | Binary: None | Binary: RMTP | Binary: RTBS | Detailed: None | Detailed: RMTP | Detailed: RTBS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1M | Mult | ID Easy | $+29.0$ | $+0.4$ | $+1.4$ | $+2.3$ | $+31.0$ | $+16.1$ | $+20.9$ |
| | | ID Hard | $+9.6$ | $-2.7$ | $-0.6$ | $+6.5$ | $+9.2$ | $+4.9$ | $+5.3$ |
| | | OOD Hard | $+0.1$ | $-1.2$ | $-0.3$ | $+0.4$ | $0.0$ | $+0.1$ | $0.0$ |
| | Sudoku | ID Easy | $-0.1$ | $+0.9$ | $-3.2$ | $+2.1$ | $+0.2$ | $+2.0$ | $-0.7$ |
| | | ID Hard | $0.0$ | $+0.1$ | $-0.1$ | $+0.2$ | $-0.1$ | $+0.1$ | $0.0$ |
| | | OOD Hard | $0.0$ | $0.0$ | $0.0$ | $0.0$ | $0.0$ | $0.0$ | $0.0$ |
| 4M | Mult | ID Easy | $+6.0$ | $+0.9$ | $+1.1$ | $+1.5$ | $+3.7$ | $+4.2$ | $+5.1$ |
| | | ID Hard | $+28.3$ | $+16.7$ | $+14.8$ | $+23.7$ | $+19.6$ | $+16.7$ | $+20.8$ |
| | | OOD Hard | $+0.1$ | $-0.2$ | $+0.9$ | $+1.2$ | $+2.1$ | $+2.0$ | $+2.6$ |
| | Sudoku | ID Easy | $+6.5$ | $+1.7$ | $+0.4$ | $+0.7$ | $+3.4$ | $+3.4$ | $+3.7$ |
| | | ID Hard | $-0.1$ | $+3.0$ | $+7.5$ | $+4.8$ | $+0.4$ | $+7.8$ | $+2.0$ |
| | | OOD Hard | $0.0$ | $+0.4$ | $+4.9$ | $+6.9$ | $0$ | $-0.7$ | $0$ |
| 16M | Mult | ID Easy | $+0.6$ | $+0.4$ | $+0.3$ | $+0.3$ | $+0.5$ | $+0.1$ | $+0.9$ |
| | | ID Hard | $+11.3$ | $+10.0$ | $+4.4$ | $+4.7$ | $+10.4$ | $+1.4$ | $+4.1$ |
| | | OOD Hard | $-0.7$ | $+0.2$ | $+0.5$ | $+0.5$ | $-0.8$ | $-1.2$ | $+0.2$ |
| | Sudoku | ID Easy | $+0.6$ | $+0.5$ | $+0.9$ | $+2.1$ | $+0.3$ | $-0.2$ | $+0.1$ |
| | | ID Hard | $+2.5$ | $+1.6$ | $+4.9$ | $+7.5$ | $-0.2$ | $+2.5$ | $+1.5$ |
| | | OOD Hard | $+0.3$ | $-0.2$ | $+1.6$ | $+1.8$ | $-0.5$ | $-1.5$ | $-2.4$ |
Reflection usually extends the limit of RL. For reflective models, GRPO samples experience CoTs through RMTP, where self-verification $\mathcal{V}$ and the forward policy $\pi$ are jointly optimized in the form of a self-verifying policy $\tilde{\pi}$. By comparing the RMTP results with the non-reflective model (the "None" column) in Table 7, we find that GRPO usually converges to higher accuracy on ID-Hard problems under RMTP. This shows that including reflection in long CoTs extends the limit of RL, compared to exploiting only a planning policy.
Interestingly, optional detailed verification generally demonstrates higher performance after GRPO than mandatory verification. A probable explanation is that a mandatory verification may cause the reasoner to overly rely on reflection, which stagnates the learning of the planning policy.
Overall, our full results provide more evidence to better support our findings discussed in Section 5.2:
- RL enhances 24 out of 42 ID-Hard results in Table 8 by no less than 3% (measured in absolute difference). However, only 8 out of 42 OOD-Hard results are improved by no less than 1%.
- In Table 9, an increase of $e_{+}$ is observed in 20 out of 25 cases where $e_{-}$ decreases by more than 5% (measured in absolute difference).
E.3.1 The verification errors after GRPO
Furthermore, we present the estimated verification errors after GRPO in Table 9, in order to investigate how self-verification evolves during RL. Our main observation is that if a model has a high $e_{-}$ before GRPO, then GRPO tends to reduce $e_{-}$ while also increasing $e_{+}$. This change in verification errors is a rather superficial (lazy) way to obtain improvements. If the model faithfully improved verification through RL, both types of errors would decrease simultaneously; such a case occurs only at the ID-Easy difficulty or when $e_{-}$ is already low after SFT. This highlights a potential regression of self-verification ability after RL.
Table 9: The percentage (%) of test-time verification errors (i.e., $e_{-}$ and $e_{+}$) after GRPO. Each entry shows the absolute change followed by the resulting value, where the arrows "↑" (increase) and "↓" (decrease) indicate the change compared to the results after SFT (Table 6).
| Model | Task | Difficulty | Binary RMTP: $e_{-}$ | $e_{+}$ | Binary RTBS: $e_{-}$ | $e_{+}$ | Detailed RMTP: $e_{-}$ | $e_{+}$ | Detailed RTBS: $e_{-}$ | $e_{+}$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1M | Mult | ID Easy | 16.5↑ 20.3 | 1.6↓ 31.3 | 3.3↓ 1.1 | 13.7↓ 19.3 | 3.5↓ 1.4 | 4.6↓ 2.3 | 17.2↓ 1.1 | 5.7↓ 1.7 |
| | | ID Hard | 41.2↑ 57.6 | 6.8↓ 12.5 | 7.0↑ 10.6 | 1.5↑ 24.0 | 54.6↑ 55.5 | 16.4↑ 18.6 | 42.1↑ 56.6 | 19.1↑ 21.5 |
| | | OOD Hard | 17.6↓ 20.0 | 17.6↓ 1.8 | 40.2↑ 46.2 | 5.0↓ 9.9 | 53.9↑ 67.5 | 12.4↑ 22.6 | 57.3↑ 70.5 | 3.5↑ 27.9 |
| | Sudoku | ID Easy | 1.2↑ 11.1 | 1.7↑ 36.9 | 13.1↓ 18.0 | 4.0↑ 47.9 | 0.1↑ 87.2 | 0.0↓ 0.0 | 0.0↑ 87.1 | 0.4↑ 0.5 |
| | | ID Hard | 2.9↑ 24.0 | 0.9↓ 30.1 | 5.5↓ 27.6 | 0.4↑ 29.0 | 0.1↑ 83.7 | 0.0↓ 0.0 | 0.1↑ 80.3 | 0.0↓ 0.0 |
| | | OOD Hard | 3.1↑ 63.4 | 0.8↑ 8.3 | 4.5↓ 55.7 | 0.9↑ 14.3 | 0.5↑ 88.4 | 0.0↓ 0.0 | 0.6↑ 85.1 | 0.0↓ 0.0 |
| 4M | Mult | ID Easy | 2.5↓ 22.6 | 4.8↓ 1.1 | 30.2↓ 27.9 | 7.1↓ 1.8 | 21.1↓ 9.3 | 3.0↓ 0.7 | 5.0↓ 23.7 | 6.8↓ 0.7 |
| | | ID Hard | 53.1↑ 55.5 | 21.8↓ 1.8 | 30.6↑ 56.6 | 28.5↓ 2.3 | 20.9↑ 24.2 | 20.1↓ 5.0 | 22.0↑ 32.0 | 24.8↓ 4.5 |
| | | OOD Hard | 60.0↑ 67.5 | 24.3↓ 18.6 | 52.5↑ 70.5 | 40.2↓ 21.5 | 55.7↑ 61.6 | 20.9↓ 7.2 | 53.4↑ 64.3 | 22.9↓ 5.3 |
| | Sudoku | ID Easy | 7.9↓ 31.6 | 3.2↓ 6.3 | 2.8↑ 43.2 | 7.7↓ 3.8 | 11.5↓ 12.3 | 1.3↑ 1.4 | 27.1↓ 19.6 | 1.9↑ 2.2 |
| | | ID Hard | 28.7↑ 70.0 | 0.5↓ 1.4 | 2.2↑ 58.2 | 4.7↓ 2.0 | 4.5↓ 12.8 | 1.6↑ 1.8 | 9.2↓ 12.9 | 0.1↓ 0.2 |
| | | OOD Hard | 6.9↑ 85.4 | 0.7↓ 0.1 | 11.0↑ 81.6 | 0.2↓ 0.4 | 2.1↓ 29.4 | 0↓ 0.1 | 5.9↓ 30.0 | 0.1↑ 0.2 |
| 16M | Mult | ID Easy | 4.2↓ 7.1 | 7.2↓ 1.4 | 0.4↓ 5.7 | 7.8↓ 1.6 | 8.1↓ 7.6 | 1.9↓ 0.2 | 7.8↑ 11.6 | 2.7↓ 0.2 |
| | | ID Hard | 7.9↑ 9.3 | 12.8↓ 1.1 | 6.7↑ 8.5 | 15.6↓ 1.3 | 22.7↑ 25.2 | 3.9↓ 3.1 | 18.6↑ 23.0 | 4.3↓ 2.9 |
| | | OOD Hard | 79.0↑ 80.3 | 47.1↓ 39.3 | 89.2↑ 90.7 | 46.4↓ 41.8 | 46.0↑ 54.5 | 12.5↓ 5.8 | 46.4↑ 58.1 | 14.7↓ 5.0 |
| | Sudoku | ID Easy | 24.5↑ 64.6 | 6.2↑ 9.5 | 25.6↑ 35.7 | 8.5↑ 13.2 | 2.1↑ 8.7 | 0.6↓ 1.1 | 3.3↓ 5.8 | 5.4↓ 1.0 |
| | | ID Hard | 25.3↑ 75.8 | 0.0↓ 4.3 | 16.4↑ 53.6 | 2.0↓ 7.4 | 0.5↑ 15.9 | 0.0↓ 0.1 | 2.4↑ 13.0 | 0.4↑ 1.0 |
| | | OOD Hard | 7.9↑ 83.1 | 3.5↓ 0.7 | 12.7↑ 77.7 | 2.1↓ 1.0 | 7.8↑ 36.1 | 0.0↓ 0.1 | 6.8↑ 31.6 | 0.1↑ 0.1 |
E.3.2 The planning correctness rate after GRPO
Table 10: The planning correctness rate ($\mu$) before and after GRPO. Each result is reported as $\mu_{\text{SFT}} \to \mu_{\text{GRPO}}$.
| Task | Verification | Size | ID Easy | ID Hard | OOD Hard |
| --- | --- | --- | --- | --- | --- |
| Mult | Detailed | 1M | $70.2 \to 81.7$ | $54.4 \to 59.5$ | $42.9 \to 41.9$ |
| | | 4M | $98.3 \to 99.5$ | $68.4 \to 79.1$ | $35.0 \to 38.0$ |
| | | 16M | $99.7 \to 99.9$ | $80.0 \to 85.9$ | $47.9 \to 43.4$ |
| | Binary | 1M | $98.8 \to 99.1$ | $81.2 \to 80.3$ | $42.7 \to 38.6$ |
| | | 4M | $99.3 \to 99.7$ | $77.6 \to 89.9$ | $57.1 \to 48.1$ |
| | | 16M | $99.4 \to 99.8$ | $79.6 \to 85.1$ | $75.2 \to 44.8$ |
| Sudoku | Detailed | 1M | $34.1 \to 33.0$ | $13.2 \to 12.4$ | $9.0 \to 8.6$ |
| | | 4M | $85.0 \to 86.8$ | $65.2 \to 72.0$ | $70.1 \to 70.3$ |
| | | 16M | $98.6 \to 98.1$ | $92.5 \to 94.0$ | $84.9 \to 83.9$ |
| | Binary | 1M | $59.1 \to 60.3$ | $36.6 \to 36.1$ | $19.5 \to 19.9$ |
| | | 4M | $97.3 \to 97.8$ | $80.2 \to 81.4$ | $74.5 \to 70.9$ |
| | | 16M | $99.0 \to 99.2$ | $88.5 \to 85.1$ | $68.4 \to 64.6$ |
We also report how GRPO influences the step-wise planning ability, measured by $\mu$ (defined in Section 4), across tasks, verification types, and model sizes. As shown in Table 10, GRPO increases the planning correctness rate $\mu$ in most ID cases, except for the Sudoku models with binary verification. This indicates that proposed steps become more likely to be correct, which further reduces the overall penalty of false-positive verification and makes an optimistic verification bias (a high $e_{+}$ in exchange for a low $e_{-}$) even more rewarding. By contrast, the planning ability shows almost no improvement on OOD problems.
E.3.3 Reflection frequency of optional detailed verification
To show how GRPO adapts the reflection frequency for optional detailed verification, Figure 12 presents the reflection frequency of the 1M and 16M multiplication transformers before and after GRPO; the reflection frequency of the 4M model was shown earlier in Section 5.2. Similarly, Figure 13 presents the reflection frequency of the 1M, 4M, and 16M models in Sudoku.
According to the results in Table 5, reflective execution does not improve performance for the 1M model, implying that it is weak at exploring correct solutions. Consequently, GRPO provides little incentive for the 1M model to reflect. In contrast, it greatly encourages reflection for the 4M and 16M models, because they explore more effectively than the 1M model. These results align with the discussion in Section 5.2: RL adapts the reflection frequency based on how well the proposing policy can explore higher rewards.
*[Heatmaps of reflection frequency (%) over the number of x's digits (horizontal, 1–10) and the number of y's digits (vertical, 1–10), before (left) and after (right) GRPO.]*
(a) 1M
*[Heatmaps of reflection frequency (%) over the number of x's digits (horizontal, 1–10) and the number of y's digits (vertical, 1–10), before (left) and after (right) GRPO.]*
(b) 16M
Figure 12: Heatmaps of the reflection frequency (%) of the 1M and 16M multiplication models before and after GRPO, which uses a sampling temperature of $1.25$. All models are tested using RMTP execution.
*[Histograms of reflection frequency over the number of blanks (9–54), before (left) and after (right) GRPO; a red dashed line marks 54 blanks.]*
(a) 1M
*[Histograms of reflection frequency over the number of blanks (9–54), before (left) and after (right) GRPO; a red dashed line marks 54 blanks.]*
(b) 4M
*[Histograms of reflection frequency over the number of blanks (9–54), before (left) and after (right) GRPO; a red dashed line marks 54 blanks.]*
(c) 16M
Figure 13: Histograms of the reflection frequency of the 1M, 4M, and 16M Sudoku models before and after GRPO, which uses a sampling temperature of $1.25$. All models are tested using RMTP execution.
E.4 Reflection frequency under controlled verification error rates
To investigate how the verification error rates ($e_{-}$ and $e_{+}$) influence the reflection frequency in GRPO, we ran a controlled experiment in which the error rates were fixed by intervening with expert verifications: each time the transformer generated a non-empty verification, we replaced the verification sequence with an expert verification into which randomized noise was injected to achieve the prescribed false-negative rate $e_{-}$ and false-positive rate $e_{+}$.
We used the 4M Mult model and ran GRPO (sampling temperature $=1.25$) for 25 epochs in the in-distribution setting, then measured the fraction of steps at which the model invoked non-empty reflection (the "reflection frequency"). In particular, we examine how the reflection frequency changes given a low $e_{-}=0.1$ versus a high $e_{-}=0.4$; in both cases, we set $e_{+}=0.1$. The results are as follows:
- Using a low $e_{-}=0.1$ , the reflection frequency increases to $59.8\%$ after 25 GRPO epochs.
- Using a high $e_{-}=0.4$, the reflection frequency drops to $0.0\%$ after 25 GRPO epochs. That is, the model learns to abandon reflection entirely.
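The intervention can be sketched as follows. This is an illustrative reconstruction (the function name and interface are ours, not those of the released code): a boolean expert verdict per step is flipped at the prescribed rates.

```python
import random

def noisy_expert_verdict(step_is_correct: bool, e_minus: float, e_plus: float,
                         rng: random.Random) -> bool:
    """Return an accept/reject verdict with controlled error rates.

    A correct step is (wrongly) rejected with probability e_minus;
    an incorrect step is (wrongly) accepted with probability e_plus.
    """
    if step_is_correct:
        return rng.random() >= e_minus   # false negative with probability e_minus
    return rng.random() < e_plus         # false positive with probability e_plus

# Sanity check: empirical error rates should match the prescribed ones.
rng = random.Random(0)
n = 100_000
fn = sum(not noisy_expert_verdict(True, 0.4, 0.1, rng) for _ in range(n)) / n
fp = sum(noisy_expert_verdict(False, 0.4, 0.1, rng) for _ in range(n)) / n
print(f"empirical e-: {fn:.3f}, empirical e+: {fp:.3f}")  # near 0.4 and 0.1
```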
Discussion.
When the verifier rejects many correct steps (high $e_{-}$), the model learns to avoid invoking reflection, driving the observed reflection frequency to nearly $0\%$. Conversely, when $e_{-}$ is low (with the same $e_{+}$), reflection becomes beneficial and the model increases its use of reflection (here to about $60\%$). Intuitively, reducing excessive false negatives shortens CoTs and makes reflection more rewarding; when $e_{-}$ is large, the model can instead collapse to a no-reflection policy (corresponding to the extreme $e_{-}=0$, $e_{+}=1$), thereby avoiding costly rejections. This experiment demonstrates that the model learns to reduce $e_{-}$ by strategically bypassing verification.
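To see why a high $e_{-}$ makes abandoning reflection attractive, consider a toy rejection-sampling model of a single reasoning step: a proposal is correct with probability $\mu$, a correct proposal is rejected with probability $e_{-}$, an incorrect one is accepted with probability $e_{+}$, and rejected proposals are regenerated. This simplification is ours and is not the training objective used in the paper:

```python
def reflection_tradeoff(mu: float, e_minus: float, e_plus: float):
    """Per-step correctness and expected number of proposals under rejection sampling.

    mu:      probability a proposed step is correct
    e_minus: probability a correct step is rejected (false negative)
    e_plus:  probability an incorrect step is accepted (false positive)
    """
    p_accept = mu * (1 - e_minus) + (1 - mu) * e_plus
    correctness = mu * (1 - e_minus) / p_accept   # P(accepted step is correct)
    cost = 1 / p_accept                           # expected proposals per kept step
    return correctness, cost

# No-reflection baseline: everything is accepted (e- = 0, e+ = 1).
print(reflection_tradeoff(0.8, 0.0, 1.0))   # (0.8, 1.0)
# A reliable verifier improves correctness at a modest length cost.
print(reflection_tradeoff(0.8, 0.1, 0.1))
# Many false negatives double the expected CoT length per step,
# which is what makes dropping reflection rewarding under RL.
print(reflection_tradeoff(0.8, 0.4, 0.1))
```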
E.5 Results of PPO
As discussed in Appendix B.1, we prefer GRPO over PPO for tiny transformers, since the value model in PPO increases computational cost and introduces additional approximation bias into the advantage estimates.
Table 11 presents the reasoning accuracy after PPO, and Table 12 gives the difference relative to the SFT results in Table 5. Our results show that PPO is much weaker than GRPO. Although PPO effectively improves the non-reflective models, the performance of reflective reasoning deteriorates after PPO. A plausible explanation is that self-verification within reasoning steps makes the value function more complex, which tiny transformers struggle to approximate. Overall, we suggest that GRPO is the more suitable algorithm for optimizing reflective reasoning in tiny transformers.
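For reference, the two algorithms differ mainly in how advantages are computed: GRPO z-scores the rewards within a group of rollouts of the same problem, while PPO subtracts a learned value baseline. A minimal sketch of the group-relative advantage in the standard GRPO formulation (our implementation details may differ):

```python
import statistics

def grpo_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """Group-relative advantages: z-score each rollout's reward within its group.

    No value model is needed, which avoids the extra cost and approximation
    bias that a PPO critic introduces for tiny transformers.
    """
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Four rollouts of the same problem: two solved (reward 1), two failed (reward 0).
print(grpo_advantages([1.0, 1.0, 0.0, 0.0]))  # ≈ [1.0, 1.0, -1.0, -1.0]
```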
Table 11: The accuracy (%) of the 1M, 4M, and 16M transformers after PPO.
| Size | Task | Difficulty | None | Binary (None) | Binary (RMTP) | Binary (RTBS) | Detailed (None) | Detailed (RMTP) | Detailed (RTBS) | Optional Detailed (None) | Optional Detailed (RMTP) | Optional Detailed (RTBS) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1M | Mult | ID Easy | 39.6 | 96.5 | 94.1 | 90.6 | 28.3 | 30.1 | 27.2 | 37.9 | 49.0 | 44.4 |
| | | ID Hard | 7.8 | 49.6 | 43.7 | 32.2 | 2.4 | 3.1 | 2.4 | 5.9 | 9.6 | 7.3 |
| | | OOD Hard | 1.1 | 2.6 | 1.8 | 1.2 | 0.7 | 0.8 | 0.7 | 1.0 | 1.0 | 0.8 |
| | Sudoku | ID Easy | 1.7 | 36.1 | 33.7 | 5.6 | 17.3 | 20.6 | 20.1 | 23.8 | 21.9 | 20.1 |
| | | ID Hard | 0 | 0.4 | 1.0 | 0 | 0 | 0.1 | 0 | 0 | 0 | 0 |
| | | OOD Hard | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 4M | Mult | ID Easy | 97.7 | 95.5 | 98.6 | 93.8 | 96.6 | 95.7 | 94.9 | 97.2 | 96.9 | 94.6 |
| | | ID Hard | 63.0 | 52.8 | 68.6 | 54.7 | 54.0 | 54.6 | 45.5 | 58.7 | 61.7 | 56.8 |
| | | OOD Hard | 2.2 | 3.1 | 2.9 | 1.6 | 5.3 | 3.9 | 2.2 | 4.4 | 3.3 | 3.7 |
| | Sudoku | ID Easy | 56.4 | 88.4 | 97.3 | 97.6 | 49.3 | 82.1 | 80.6 | 76.2 | 94.1 | 97.3 |
| | | ID Hard | 0 | 28.6 | 47.4 | 47.7 | 0 | 15.1 | 35.9 | 15.2 | 35.3 | 55.6 |
| | | OOD Hard | 0 | 0.2 | 1.6 | 3.3 | 3.1 | 0.4 | 0.9 | 0 | 1.1 | 2.7 |
| 16M | Mult | ID Easy | 99.3 | 99.0 | 99.0 | 98.2 | 98.5 | 98.7 | 97.8 | 99.0 | 99.5 | 99.2 |
| | | ID Hard | 64.8 | 62.9 | 75.7 | 71.9 | 63.2 | 68.6 | 65.6 | 65.1 | 77.1 | 74.6 |
| | | OOD Hard | 1.9 | 1.0 | 1.2 | 1.1 | 9.1 | 8.1 | 7.5 | 5.4 | 5.6 | 5.4 |
| | Sudoku | ID Easy | 96.5 | 91.8 | 97.3 | 96.7 | 87.6 | 98.1 | 98.9 | 94.5 | 96.7 | 97.1 |
| | | ID Hard | 49.0 | 41.0 | 51.4 | 52.7 | 34.7 | 55.7 | 66.3 | 47.8 | 53.8 | 53.0 |
| | | OOD Hard | 0.6 | 0 | 2.4 | 4.0 | 0 | 1.1 | 2.0 | 0 | 3.8 | 2.9 |
Table 12: The difference in accuracy (%) of the 1M, 4M, and 16M transformers after PPO relative to SFT (Table 5). Positive values mean that PPO raises the accuracy of the models above SFT.
| Size | Task | Difficulty | | | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1M | Mult | ID Easy | $+16.0$ | $+0.7$ | $-0.4$ | $-2.8$ | $+6.3$ | $-3.3$ | $+3.0$ |
| | | ID Hard | $+5.8$ | $-3.1$ | $-0.9$ | $-3.3$ | $+0.2$ | $-1.7$ | $-0.4$ |
| | | OOD Hard | $+0.1$ | $-1.1$ | $-0.4$ | $+0.0$ | $-0.3$ | $+0.0$ | $+0.3$ |
| | Sudoku | ID Easy | $+0.3$ | $+3.1$ | $+1.3$ | $+3.2$ | $-0.1$ | $+1.9$ | $+0.7$ |
| | | ID Hard | $+0.0$ | $+0.1$ | $+0.9$ | $+0.0$ | $-0.1$ | $+0.1$ | $+0.0$ |
| | | OOD Hard | $+0.0$ | $+0.0$ | $+0.0$ | $+0.0$ | $+0.0$ | $+0.0$ | $+0.0$ |
| 4M | Mult | ID Easy | $+5.7$ | $-2.2$ | $+1.0$ | $-3.5$ | $+2.1$ | $+1.9$ | $+1.6$ |
| | | ID Hard | $+25.7$ | $-4.1$ | $+6.4$ | $+1.7$ | $+10.6$ | $+7.0$ | $+3.1$ |
| | | OOD Hard | $+0.0$ | $+0.2$ | $+1.1$ | $+0.5$ | $+1.6$ | $+0.6$ | $-0.5$ |
| | Sudoku | ID Easy | $+4.2$ | $-3.7$ | $+0.5$ | $+1.6$ | $-5.1$ | $+0.2$ | $-7.9$ |
| | | ID Hard | $-3.3$ | $-12.3$ | $+1.1$ | $-5.6$ | $-5.2$ | $-1.8$ | $-9.8$ |
| | | OOD Hard | $+0.0$ | $+0.2$ | $+1.6$ | $+3.3$ | $+2.7$ | $-3.6$ | $-5.8$ |
| 16M | Mult | ID Easy | $+0.1$ | $+0.2$ | $+0.1$ | $-0.6$ | $-0.7$ | $-0.8$ | $-0.7$ |
| | | ID Hard | $-1.1$ | $-2.3$ | $-1.0$ | $-3.0$ | $-2.7$ | $-7.8$ | $-7.9$ |
| | | OOD Hard | $-0.6$ | $-0.1$ | $-0.1$ | $-0.2$ | $-0.1$ | $-1.3$ | $+0.3$ |
| | Sudoku | ID Easy | $+0.8$ | $-5.3$ | $-0.6$ | $+4.2$ | $-5.4$ | $-0.9$ | $-0.8$ |
| | | ID Hard | $+0.2$ | $-9.1$ | $-1.7$ | $-2.1$ | $-12.2$ | $-2.2$ | $-4.4$ |
| | | OOD Hard | $+0.2$ | $-0.9$ | $-2.0$ | $-2.0$ | $-0.7$ | $-7.1$ | $-12.4$ |
NeurIPS Paper Checklist
1. Claims
Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
Answer: [Yes]
Justification: Our title, abstract, and introduction clearly state our main claim that transformers can benefit from self-verifying reflection. Our theoretical and experimental results support this claim.
Guidelines:
- The answer NA means that the abstract and introduction do not include the claims made in the paper.
- The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
- The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
- It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.
2. Limitations
Question: Does the paper discuss the limitations of the work performed by the authors?
Answer: [Yes]
Justification: We mention limitations in the conclusion.
Guidelines:
- The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
- The authors are encouraged to create a separate "Limitations" section in their paper.
- The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
- The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
- The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
- The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
- If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
- While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.
3. Theory assumptions and proofs
Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
Answer: [Yes]
Justification: The main paper describes the assumptions of our theoretical results. The proof is provided in the appendix.
Guidelines:
- The answer NA means that the paper does not include theoretical results.
- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
- All assumptions should be clearly stated or referenced in the statement of any theorems.
- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in the appendix or supplemental material.
- Theorems and Lemmas that the proof relies upon should be properly referenced.
4. Experimental result reproducibility
Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
Answer: [Yes]
Justification: We include the necessary information to reproduce our results in the appendix, such as hyper-parameters, model architecture, data examples, and detailed implementation.
Guidelines:
- The answer NA means that the paper does not include experiments.
- If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
- If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
- Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
- While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example
1. If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
2. If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
3. If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
4. We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.
1. Open access to data and code
1. Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
1. Answer: [Yes]
1. Justification: Full code is in the supplementary materials. No data is provided, as it is generated by the code. "README.md" introduces the commands to perform the complete pipeline and reproduce our results. We will open-source our code once it is formally accepted.
1. Guidelines:
- The answer NA means that paper does not include experiments requiring code.
- Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
- The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
- The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
- At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.
1. Experimental setting/details
1. Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
1. Answer: [Yes]
1. Justification: Most relevant hyperparameters and experimental details are in the appendix. Full settings are clearly defined in our code.
1. Guidelines:
- The answer NA means that the paper does not include experiments.
- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
- The full details can be provided either with the code, in appendix, or as supplemental material.
1. Experiment statistical significance
1. Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
1. Answer: [No]
1. Justification: It is too expensive to run multiple instances of our experiments, which include training 78 models under various settings (sizes, tasks, verification types, etc.). Each model is tested using at most 3 different executions. Given our limited resources, computing error bars would take several months. Since our paper focuses on analysis rather than best performance or precise evaluation, we consider it acceptable not to include error bars.
1. Guidelines:
- The answer NA means that the paper does not include experiments.
- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
- The assumptions made should be given (e.g., Normally distributed errors).
- It should be clear whether the error bar is the standard deviation or the standard error of the mean.
- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.
- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
- If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.
1. Experiments compute resources
1. Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
1. Answer: [Yes]
1. Justification: We describe the computational resources used in the appendix. Since our models are very small, our experiments can be reproduced on a single NVIDIA GPU.
1. Guidelines:
- The answer NA means that the paper does not include experiments.
- The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).
1. Code of ethics
1. Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
1. Answer: [Yes]
1. Justification: To the best of our knowledge, this research does not involve human subjects or pose negative societal impacts.
1. Guidelines:
- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).
1. Broader impacts
1. Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
1. Answer: [N/A]
1. Justification: This paper focuses on fundamental analysis of reasoning and is not tied to any particular application.
1. Guidelines:
- The answer NA means that there is no societal impact of the work performed.
- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).
1. Safeguards
1. Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
1. Answer: [N/A]
1. Justification: This paper poses no such risks.
1. Guidelines:
- The answer NA means that the paper poses no such risks.
- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.
1. Licenses for existing assets
1. Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
1. Answer: [Yes]
1. Justification: Assets used in this paper are cited in the paper. The appendix mentions the version of the asset and the license.
1. Guidelines:
- The answer NA means that the paper does not use existing assets.
- The authors should cite the original paper that produced the code package or dataset.
- The authors should state which version of the asset is used and, if possible, include a URL.
- The name of the license (e.g., CC-BY 4.0) should be included for each asset.
- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
- If this information is not available online, the authors are encouraged to reach out to the asset's creators.
1. New assets
1. Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
1. Answer: [N/A]
1. Justification: This paper does not release any new assets besides our code.
1. Guidelines:
- The answer NA means that the paper does not release new assets.
- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
- The paper should discuss whether and how consent was obtained from people whose asset is used.
- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.
1. Crowdsourcing and research with human subjects
1. Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
1. Answer: [N/A]
1. Justification: This paper does not involve crowdsourcing nor research with human subjects.
1. Guidelines:
- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.
1. Institutional review board (IRB) approvals or equivalent for research with human subjects
1. Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
1. Answer: [N/A]
1. Justification: The paper does not involve crowdsourcing nor research with human subjects.
1. Guidelines:
- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
1. Declaration of LLM usage
1. Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.
1. Answer: [N/A]
1. Justification: Although this research is related to LLM reasoning, we focus on tiny transformers. The appendix includes an evaluation of LLMs, yet these results do not impact our core methodology or originality.
1. Guidelines:
- The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
- Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.