# Token Assorted: Mixing Latent and Text Tokens for Improved Language Model Reasoning
**Authors**: DiJia Su, Hanlin Zhu, Yingchen Xu, Jiantao Jiao, Yuandong Tian, Qinqing Zheng
Abstract
Large Language Models (LLMs) excel at reasoning and planning when trained on chain-of-thought (CoT) data, where the step-by-step thought process is explicitly outlined by text tokens. However, this results in lengthy inputs where many words support textual coherence rather than core reasoning information, and processing these inputs consumes substantial computation resources. In this work, we propose a hybrid representation of the reasoning process, where we partially abstract away the initial reasoning steps using latent discrete tokens generated by a VQ-VAE, significantly reducing the length of reasoning traces. We explore the use of latent trace abstractions in two scenarios: 1) training the model from scratch for the Keys-Finding Maze problem, and 2) fine-tuning LLMs on this hybrid data with an extended vocabulary including unseen latent tokens, for both logical and mathematical reasoning problems. To facilitate effective learning, we introduce a simple training procedure that randomly mixes latent and text tokens, which enables fast adaptation to new latent tokens. Our approach consistently outperforms the baseline methods on various benchmarks, such as Math (+4.2%, Llama-3.2-1B), GSM8K (+4.1%, Llama-3.2-3B), and Fresh-Gaokao-Math-2023 (+13.3%, Llama-3.1-8B), with an average 17% reduction in the length of reasoning traces.
1 Introduction
Reasoning capabilities are increasingly recognized as a critical component of Artificial General Intelligence (AGI) systems. Recent research has demonstrated that Large Language Models (LLMs) can exhibit sophisticated reasoning and planning abilities using chain-of-thought (CoT) methodologies, including prompting LLMs with examples where complex problems are broken down into explicit reasoning steps (Wei et al., 2022b; Chen et al., 2022a; Yao et al., 2024). More recently, a number of studies have further shown that when models are trained to articulate the intermediate steps of a reasoning process (Nye et al., 2021b; Lehnert et al., 2024), they achieve significantly higher accuracy. The effectiveness of this approach has been demonstrated across multiple domains, including mathematical problem-solving (Yue et al., 2023; Gandhi et al., 2024; Yu et al., 2023; Su et al., 2025; Tong et al., 2024), logical inference (Lin et al., 2024; Dziri et al., 2024), multistep planning tasks (Lehnert et al., 2024; Su et al., 2024), etc.
However, training with explicit reasoning traces in text space comes with notable computational costs (Deng et al., 2023, 2024), as the models must process lengthy input sequences. In fact, much of the text serves primarily to maintain linguistic coherence rather than to convey core reasoning information. Several works have attempted to mitigate this issue. For example, Hao et al. (2024) investigate reasoning in continuous latent space as a means of compressing the reasoning trace, and Deng et al. (2024) explore internalizing the intermediate steps through iterative CoT elimination; see Section 2 for more examples. Nonetheless, these approaches rely on multi-stage training procedures that resemble curriculum learning, which still incur significant computational costs, and their final performance falls behind that of models trained with complete reasoning traces.
To tackle this challenge, we propose to use discrete latent tokens to abstract the initial steps of the reasoning traces. These latent tokens, obtained through a vector-quantized variational autoencoder (VQ-VAE), provide a compressed representation of the reasoning process by condensing surface-level details. More precisely, we replace the text tokens with their corresponding latent abstractions from left to right, up to a preset location, leaving the remaining tokens unchanged. We then fine-tune LLMs on reasoning traces composed of such assorted tokens, allowing the models to learn from both abstract representations of the thinking process and detailed textual descriptions. One technical challenge for fine-tuning is that the vocabulary is now extended and contains unseen latent tokens. To facilitate quick adaptation to these new tokens, we employ a randomized replacement strategy: during training, we randomly vary the number of text tokens substituted by latent tokens for each sample. Our experiments confirm that this simple strategy leads to straightforward accommodation of unseen latent tokens.
We conduct a comprehensive evaluation of our approach on a diverse range of benchmarks spanning multiple domains. Specifically, we assess its performance on multistep planning tasks (Keys-Finding Maze) and logical reasoning benchmarks (ProntoQA (Saparov & He, 2022), ProsQA (Hao et al., 2024)), where we train T5 or GPT-2 models from scratch. In addition, we fine-tune different sizes of Llama-3.1 and Llama-3.2 models using our approach and evaluate them on a number of mathematical reasoning benchmarks, including GSM8K (Cobbe et al., 2021a), Math (Hendrycks et al., 2021), and OlympiadBench-Math (He et al., 2024); see Section 4.2 for more details. Across all these tasks and model architectures, our models consistently outperform baseline models trained with text-only reasoning traces, demonstrating the effectiveness of compressing the reasoning process with assorted tokens.
2 Related Work
Explicit Chain-of-Thought Prompting.
The first line of work on Chain-of-Thought (CoT) uses traditional prompting with text tokens (Wei et al., 2022a; Nye et al., 2021a). Research has demonstrated that adding few-shot CoT examples to the input prompt, or even prompting zero-shot, improves the model's question-answering performance (Chen et al., 2022b; Kojima et al., 2022; Chung et al., 2024). To further improve reasoning performance, there has been research effort into prompting with self-consistency (Wang et al., 2022), where the model is prompted to generate multiple responses and the best one is selected by majority voting. Research has also shown that considering the top-$k$ alternative tokens at the beginning of decoding can improve the model's reasoning capability (Wang & Zhou, 2024). On top of these empirical results, there has been research on the theoretical understanding of why CoT improves the model's performance, through the lens of expressivity (Feng et al., 2024; Li et al., 2024) or training dynamics (Zhu et al., 2024). In a nutshell, CoT increases the model's effective depth because the generated output is fed back into the input. CoT is also important for LLMs to perform multi-hop reasoning, according to the analysis of training dynamics (Zhu et al., 2024).
Learning with CoT Data.
In addition to the success of CoT prompting, an emerging line of work has explored training LLMs on data with high-quality reasoning traces, for example, the works of Nye et al. (2021b); Azerbayev et al. (2023); Lehnert et al. (2024); Su et al. (2024); Yu et al. (2024); Yang et al. (2024); Deng et al. (2023, 2024). There is also a surge of interest in synthesizing datasets with diverse intermediate steps for solving problems in various domains, see, e.g., the works of Kim et al. (2023); Tong et al. (2024); Yu et al. (2023); Yue et al. (2023); Lozhkov et al. (2024). Wen et al. (2024) also theoretically study how training with reasoning traces can improve the sample complexity of certain tasks.
LLM Reasoning in Latent Space.
There has been research investigating LLM reasoning in latent space. Hao et al. (2024) propose to use the last hidden state of a language model as the next input embedding, allowing the model to continue reasoning within a continuous latent space. The authors show that this approach effectively captures multiple reasoning paths simultaneously, mimicking a breadth-first-search strategy. Goyal et al. (2023) propose to insert learnable pause tokens into the original text in order to delay the generation; as a result, the model can leverage additional computation before providing the final answer. Parallel to this, Pfau et al. (2024) explore filler tokens, which are used to solve computational tasks that are otherwise unattainable without intermediate token generation. In addition, Liu et al. (2024) propose a latent coprocessor method that operates on the transformer's key-value cache to improve LLM performance. Nevertheless, none of these methods have shown good performance when integrated into modern-sized LLMs and tested on real-world LLM datasets rather than synthetic ones. Also, Wang et al. (2023) propose to use planning tokens at the start of generation. Orthogonal to these works, Pagnoni et al. (2024) propose a tokenization-free architecture that encodes input bytes into continuous patch representations, which are then used to train a latent Transformer, and Barrault et al. (2024) perform autoregressive sentence prediction in an embedding space. While these two works both leverage continuous latent spaces, our work focuses on the direct use of discrete latent tokens.
3 Methodology
Figure 3.1: An example illustrating our replacement strategy. With chunk size $L=16$ and compression rate $r=16$ , we encode 32 textual CoT tokens into 2 discrete latent tokens from left to right. The other CoT tokens will remain in their original forms.
In this section, we describe our methodology to enable LLMs to reason with discrete latent tokens. The notations are summarized in Appendix B. Let $X=P\oplus C\oplus S$ denote a sample input, where $P=(p_{1},p_{2},...,p_{t_{p}})$ are the prompt tokens, $C=(c_{1},c_{2},...,c_{t_{c}})$ are the reasoning step (chain-of-thought) tokens, $S=(s_{1},s_{2},...,s_{t_{s}})$ are the solution tokens, and $\oplus$ denotes concatenation. Our training procedure consists of two stages:
1. Learning latent discrete tokens to abstract the reasoning steps, where we train a model to convert $C$ into a sequence of latent tokens $Z=(z_{1},z_{2},...,z_{t_{z}})$ such that $t_{z}<t_{c}$ . The compression rate $r=t_{c}/t_{z}$ controls the level of abstraction.
1. Training the LLM with a partial, high-level abstraction of the reasoning steps, where we construct a modified input ${\widetilde{X}}$ by replacing the first $m$ tokens of $C$ with the corresponding latent abstractions:
$$
{\widetilde{X}}=P\oplus[z_{1},\ldots,z_{\frac{m}{r}},c_{m+1},\ldots,c_{t_{c}}]\oplus S. \tag{1}
$$
Figure 3.1 illustrates this replacement strategy. We randomize the value of $m$ during training.
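To make Equation (1) concrete, below is a minimal Python sketch of the construction, assuming token sequences are plain lists of integer ids; `encode_to_latent` is a hypothetical wrapper around the VQ-VAE encoder of Section 3.1 that maps one chunk of $r$ text tokens to a single latent token id, and `<boLatent>`/`<eoLatent>` are the special delimiter tokens introduced in Section 3.2.

```python
from typing import Callable, List

def build_hybrid_sample(
    prompt: List[int],
    cot: List[int],
    solution: List[int],
    m: int,                                        # number of leftmost CoT tokens to abstract
    r: int,                                        # compression rate (text tokens per latent code)
    encode_to_latent: Callable[[List[int]], int],  # hypothetical VQ-VAE encoder wrapper
    bo_latent: int,                                # id of the <boLatent> delimiter
    eo_latent: int,                                # id of the <eoLatent> delimiter
) -> List[int]:
    """Construct X~ = P + [z_1, ..., z_{m/r}, c_{m+1}, ..., c_{t_c}] + S per Equation (1)."""
    assert m % r == 0 and m <= len(cot)
    # Compress the leftmost m CoT tokens into m/r discrete latent tokens.
    latents = [encode_to_latent(cot[i : i + r]) for i in range(0, m, r)]
    # Wrap the latents in delimiters; keep the remaining CoT tokens as raw text.
    return prompt + [bo_latent] + latents + [eo_latent] + cot[m:] + solution
```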
3.1 Learning Latent Abstractions
We employ a vector-quantized variational autoencoder (VQ-VAE) (Van Den Oord et al., 2017) type of architecture to map CoT tokens $C$ into discrete latent tokens $Z$ . To enhance abstraction performance, our VQ-VAE is trained on the whole input sequence $X$ , but only applied to $C$ in the next stage. Following Jiang et al. (2022, 2023), we split $X$ into chunks of length $L$ and encode each chunk into $\frac{L}{r}$ latent codes, where $r$ is a preset compression rate. More precisely, our architecture consists of the following five components:
- ${\mathcal{E}}:$ a codebook containing $|{\mathcal{E}}|$ vectors in ${\mathbb{R}}^{d}$ .
- ${f_{\text{enc}}}:{\mathcal{V}}^{L}\mapsto{\mathbb{R}}^{d\times\frac{L}{r}}$ that encodes a sequence of $L$ text tokens to $\frac{L}{r}$ latent embedding vectors $\bar{X}=\bar{x}_{1},...,\bar{x}_{\frac{L}{r}}$ , where ${\mathcal{V}}$ is the vocabulary of text tokens.
- $q:{\mathbb{R}}^{d}\mapsto{\mathcal{E}}$ : the quantization operator that replaces the encoded embedding $\bar{x}$ by its nearest neighbor in ${\mathcal{E}}$ : $q(\bar{x})=\operatorname*{argmin}_{e_{i}\in{\mathcal{E}}}\left\|e_{i}-\bar{x}\right\|^{2}_{2}$ .
- $g:{\mathcal{V}}^{K}\mapsto{\mathbb{R}}^{d}$ that maps $K$ text tokens to a $d$-dimensional embedding vector. We use $g$ to generate a continuous embedding of the prompt $P$ .
- ${f_{\text{dec}}}:{\mathbb{R}}^{d\times\frac{L}{r}}\times{\mathbb{R}}^{d}\mapsto{\mathcal{V}}^{L}$ that decodes latent embeddings back to text tokens, conditioned on the prompt embedding.
In particular, each continuous vector $e\in{\mathcal{E}}$ in the codebook has an associated latent token $z$ , which we use to construct the latent reasoning steps $Z$ . To decode a latent token $z$ , we look up the corresponding embedding $e\in{\mathcal{E}}$ and feed it to ${f_{\text{dec}}}$ .
Figure 3.2: A graphical illustration of our VQ-VAE. ${f_{\text{enc}}}$ encodes the text tokens into latent embeddings, which are quantized to their nearest neighbors in the codebook. ${f_{\text{dec}}}$ decodes those quantized embeddings back to text tokens. When applying the VQ-VAE to compress the text tokens, the discrete latent tokens $Z$ are simply the indices of the corresponding embeddings in the codebook.
For simplicity, we assume the lengths of the input $X$ and the prompt $P$ are exactly $L$ and $K$ , respectively. Similar to Van Den Oord et al. (2017), we use an objective $\mathcal{L}$ composed of three terms:
$$
\mathcal{L}(X)=\underbrace{\log p\big(X\mid{f_{\text{dec}}}(q(\bar{X})\mid g(P))\big)}_{\text{reconstruction loss}}+\sum_{i=1}^{L/r}\Big(\underbrace{\|{\texttt{sg}}[\bar{X}_{i}]-q(\bar{X}_{i})\|_{2}^{2}}_{\text{VQ loss}}+\underbrace{\beta\|\bar{X}_{i}-{\texttt{sg}}[q(\bar{X}_{i})]\|_{2}^{2}}_{\text{commitment loss}}\Big), \tag{2}
$$
where $\bar{X}={f_{\text{enc}}}(X)$ , ${\texttt{sg}}[\cdot]$ is the stop-gradient operator, and $\beta$ is a hyperparameter controlling the strength of the commitment loss. The VQ loss and the commitment loss ensure that the encoder outputs remain close to the codebook, while the reconstruction loss concerns the decoding efficacy. As is standard for VQ-VAEs, since the quantization operator $q(\cdot)$ is non-differentiable, we copy the gradient of the loss with respect to the decoder input $q(\bar{X})$ unaltered to the encoder output $\bar{X}$ (the straight-through estimator). Figure 3.2 illustrates our architecture. In practice, we use a causal Transformer for both ${f_{\text{enc}}}$ and ${f_{\text{dec}}}$ ; the model details are discussed in Appendix A.
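For reference, here is a minimal PyTorch sketch of the quantization step and the objective above. It is a simplified version in which the reconstruction term is written as a token-level cross-entropy over the decoder's logits; all module and variable names are illustrative, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def quantize(x_bar: torch.Tensor, codebook: torch.Tensor):
    """x_bar: (L/r, d) encoder outputs; codebook: (|E|, d) code embeddings."""
    dists = torch.cdist(x_bar, codebook)   # pairwise distances to all code vectors
    z = dists.argmin(dim=-1)               # discrete latent tokens (indices into E)
    e = codebook[z]                        # quantized embeddings q(x_bar)
    # Straight-through estimator: the forward pass uses e, while the backward
    # pass copies the gradient from the decoder input to the encoder output.
    e_st = x_bar + (e - x_bar).detach()    # feed e_st (plus g(P)) into f_dec
    return z, e, e_st

def vqvae_loss(logits: torch.Tensor, targets: torch.Tensor,
               x_bar: torch.Tensor, e: torch.Tensor, beta: float = 0.25):
    """logits: (L, |V|) decoder outputs; targets: (L,) original text tokens."""
    recon = F.cross_entropy(logits, targets)   # reconstruction loss
    vq = F.mse_loss(e, x_bar.detach())         # ||sg[x_bar] - q(x_bar)||^2, moves codebook
    commit = F.mse_loss(x_bar, e.detach())     # ||x_bar - sg[q(x_bar)]||^2, moves encoder
    return recon + vq + beta * commit
```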
Thus far, we obtain a latent representation that is both semantically meaningful and conducive to reconstruction, setting the stage for the subsequent training phase, where the LLM is trained to perform reasoning with abstractions.
3.2 Reasoning with Discrete Latent Tokens
In this second stage, we apply the trained VQ-VAE to form modified samples ${\widetilde{X}}$ with latent abstractions as in Equation 1, and then train an LLM to perform next-token prediction. Below, we outline the major design choices that are key to our model’s performance, and ablate them in Section 4.3.
Partial Replacement. Unlike previous planning works (Jiang et al., 2022, 2023) that project the whole input sequence onto a compact latent space, we only replace $m<t_{c}$ CoT tokens with their latent abstractions, leaving the remaining tokens unchanged. We delimit the latent tokens by injecting special <boLatent> and <eoLatent> tokens to encapsulate them.
Left-to-Right (AR) Replacement. We replace the leftmost $m$ tokens of $C$ , rather than subsampling tokens at different locations.
Mixing Samples with Varying Values of $m$ . When fine-tuning an existing LLM on a reasoning dataset with latent tokens, one notable challenge is dealing with the extended vocabulary. As the LLM is pretrained on trillions of tokens, it is very hard for it to quickly adapt to tokens (and corresponding embeddings) beyond the original vocabulary. Previous works that aim to replace or eliminate CoT tokens (Deng et al., 2024; Hao et al., 2024) employ a multi-stage curriculum training approach, where these operations are gradually applied to the entire input sequence. In the context of our approach, this would mean increasing the value of $m$ in each stage until it reaches a preset cap. However, such a training procedure is complex and computationally inefficient, and it requires dedicated optimization tuning. In this work, we employ a simple single-stage training approach where the value of $m$ is randomly set for each sample. Surprisingly, this not only makes our training more efficient, but also leads to enhanced performance.
Note that we use a VQ-VAE of only about 50M parameters, adding minimal parameter overhead. In addition, it is used only once, during data preparation (to convert training data into discrete latent codes), not during LLM training or inference. During inference, the LLM directly generates latent tokens without any use of the VQ-VAE.
4 Experiments
We empirically evaluate our approach on two categories of benchmarks:
1. Synthetic datasets including the Keys-Finding Maze, ProntoQA (Saparov & He, 2022), and ProsQA (Hao et al., 2024), where we pretrain T5 or GPT-2 models from scratch using the method in Section 3;
1. Real-world mathematical reasoning problems, where we fine-tune Llama models (Dubey et al., 2024) on the MetaMathQA (Yu et al., 2023) or the Dart-MATH (Tong et al., 2024) dataset, and then test on the in-domain datasets Math and GSM8K, along with out-of-domain datasets including Fresh-Gaokao-Math-2023, DeepMind-Math, College-Math, OlympiadBench-Math, and TheoremQA.
The detailed setup is introduced in Section 4.1.
We compare our approach to the following baselines:
1. Sol-Only: the model is trained on samples that contain only questions and solutions, without any reasoning steps;
1. CoT: the model is trained on samples with complete CoT tokens;
1. iCoT (Deng et al., 2024): a method that utilizes curriculum learning to gradually eliminate the need for CoT tokens in reasoning;
1. Pause Token (Goyal et al., 2023): a method that injects learnable pause tokens into the sample during training, in order to offer extra computation before producing the final answer.
4.1 Benchmarks
4.1.1 Synthetic Benchmarks
Keys-Finding Maze is a complex navigation environment designed to evaluate an agent’s planning capabilities. The agent is randomly positioned within a maze comprising four $3\times 3$ interconnected rooms, with the objective of reaching a randomly placed goal destination. To successfully reach the destination, the agent must collect keys (colored green, red, and blue) that correspond to matching colored doors. These keys are randomly distributed among the rooms, requiring the agent to develop sophisticated planning strategies for key acquisition and door traversal. The agent is only allowed to carry one key at a time. This environment poses a substantial cognitive challenge, as the agent must identify which keys are necessary for reaching the destination and optimize the order of key collection and door unlocking to establish the most efficient path to the goal. Following Lehnert et al. (2024); Su et al. (2024), we generate intermediate search traces using the nondeterministic A* algorithm (Hart et al., 1968). The dataset contains 100k training samples. See Section A.2 for more information and graphical illustrations.
ProntoQA (Saparov & He, 2022) is a dataset consisting of 9,000 logical reasoning problems derived from ontologies, i.e., formal representations of relationships between concepts. Each problem in the dataset is constructed to have exactly one correct proof or reasoning path. One distinctive feature of this dataset is its consistent grammatical and logical structure, which enables researchers to systematically analyze and evaluate how LLMs approach reasoning tasks.
ProsQA (Hao et al., 2024) is a more difficult benchmark built on top of ProntoQA. It contains 17,886 logical problems curated from randomly generated directed acyclic graphs. It has a larger number of distracting reasoning paths in the ontology, and thus requires more complex reasoning and planning capabilities.
4.1.2 Mathematical Reasoning
We fine-tune pretrained LLMs using the MetaMathQA (Yu et al., 2023) or the Dart-MATH (Tong et al., 2024) dataset. MetaMathQA is a curated dataset that augments the existing Math (Hendrycks et al., 2021) and GSM8K (Cobbe et al., 2021b) datasets through various forms of question bootstrapping, such as (i) rephrasing the question and generating the reasoning path, and (ii) generating backward questions, self-verification questions, FOBAR questions (Jiang et al., 2024), etc. This dataset contains 395k samples in total, of which 155k are bootstrapped from Math and the remaining 240k come from GSM8K. We rerun the MetaMath data pipeline using Llama-3.1-405B-Inst to generate the responses. Dart-MATH (Tong et al., 2024) also synthesizes responses for questions in Math and GSM8K, with a focus on difficult questions via difficulty-aware rejection tuning. For evaluation, we test the models on the original Math and GSM8K datasets, which are in-domain, as well as the following out-of-domain benchmarks:
- College-Math (Tang et al., 2024) consists of 2,818 college-level math problems taken from 9 textbooks. These problems cover 7 different areas, such as linear algebra and differential equations, and are designed to evaluate how well the language model can handle complicated mathematical reasoning problems in different fields of study.
- DeepMind-Math (Saxton et al., 2019) consists of 1,000 problems based on the national school math curriculum for students up to 16 years old. It examines basic mathematics and reasoning skills across different topics.
- OlympiadBench-Math (He et al., 2024) is the text-only English subset of OlympiadBench, focusing on advanced mathematical reasoning. It contains 675 highly difficult math problems from competitions.
- TheoremQA (Chen et al., 2023) contains 800 problems focused on applying mathematical theorems to solve problems in STEM fields such as math, physics, and engineering.
- Fresh-Gaokao-Math-2023 (Tang et al., 2024) contains 30 math questions from Gaokao, the National College Entrance Examination of China, a national standardized test that plays a crucial role in the college admissions process.
4.2 Main Results
We employ a consistent strategy for training VQ-VAE and replacing CoT tokens with latent discrete codes across all our experiments, as outlined below. The specific model architecture and key hyperparameters used for LLM training are presented alongside the results for each category of benchmarks. All the other details are deferred to Appendix A.
VQ-VAE Training
For each benchmark, we train a VQ-VAE for 100k steps using the Adam optimizer, with a learning rate of $10^{-5}$ and a batch size of 32. We use a codebook of size 1024 and compress every chunk of $L=16$ tokens into a single latent token (i.e., the compression rate is $r=16$ ).
Randomized Latent Code Replacement
We introduce a stochastic procedure for partially replacing CoT tokens with latent codes. Specifically, we define a set of predetermined numbers $\mathcal{M}=\{0,72,128,160,192,224,256\}$ . For each training example, we first sample $m_{\max}\in\mathcal{M}$ and then sample an integer $m\in\{0,16,32,...,m_{\max}\}$ uniformly at random. The first $m$ CoT tokens are replaced by their corresponding latent discrete codes, while the remaining ones stay as raw text. This stochastic replacement mechanism exposes the model to a wide range of latent-text mixtures, enabling it to effectively learn from varying degrees of latent abstraction.
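A minimal sketch of this sampling scheme follows; the replacement itself reuses the construction sketched after Equation (1).

```python
import random

L = 16                                  # chunk size (tokens per latent code)
M = [0, 72, 128, 160, 192, 224, 256]    # predetermined caps for m

def sample_m() -> int:
    """Draw the number of leftmost CoT tokens to replace for one sample."""
    m_max = random.choice(M)
    # m is drawn uniformly from the multiples of L up to m_max.
    return random.choice(range(0, m_max + 1, L))
```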
| Method | Keys-Finding Maze Acc. | Keys-Finding Maze # Tokens | ProntoQA Acc. | ProntoQA # Tokens | ProsQA Acc. | ProsQA # Tokens |
| --- | --- | --- | --- | --- | --- | --- |
| Sol-Only | 3 | 645 | 93.8 | 3.0 | 76.7 | 8.2 |
| CoT | 43 | 1312.0 | 98.8 | 92.5 | 77.5 | 49.4 |
| Latent (ours) | 62.8 ( $\uparrow$ +19.8) | 374.6 | 100 ( $\uparrow$ +1.2) | 7.7 | 96.2 ( $\uparrow$ +18.7) | 10.9 |
Table 4.1: Our latent approach surpasses the other baselines on Keys-Finding Maze, ProntoQA, and ProsQA by a large margin. We use top-$k$ ( $k=10$ ) decoding for Keys-Finding Maze and greedy decoding for ProntoQA and ProsQA. In terms of token efficiency, our latent approach also generates much shorter reasoning traces than the CoT baseline, closely tracking or even outperforming the Sol-Only approach. ( $\uparrow$ +: performance gain compared with the second-best result.)
| Model / Method | Math | GSM8K | Gaokao-Math-2023 | DM-Math | College-Math | Olympia-Math | TheoremQA | Avg. (All Datasets) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Llama-3.2-1B** | | | | | | | | |
| Sol-Only | 4.7 | 6.8 | 0.0 | 10.4 | 5.3 | 1.3 | 3.9 | 4.6 |
| CoT | 10.5 | 42.7 | 10.0 | 3.4 | 17.1 | 1.5 | 9.8 | 14.1 |
| iCoT | 8.2 | 10.5 | 3.3 | 11.3 | 7.6 | 2.1 | 10.7 | 7.7 |
| Pause Token | 5.1 | 5.3 | 2.0 | 1.4 | 0.5 | 0.0 | 0.6 | 2.1 |
| Latent (ours) | 14.7 ( $\uparrow$ +4.2) | 48.7 ( $\uparrow$ +6.0) | 10.0 | 14.6 ( $\uparrow$ +3.3) | 20.5 ( $\uparrow$ +3.4) | 1.8 | 11.3 ( $\uparrow$ +0.6) | 17.8 ( $\uparrow$ +3.7) |
| **Llama-3.2-3B** | | | | | | | | |
| Sol-Only | 6.1 | 8.1 | 3.3 | 14.0 | 7.0 | 1.8 | 6.8 | 6.7 |
| CoT | 21.9 | 69.7 | 16.7 | 27.3 | 30.9 | 2.2 | 11.6 | 25.2 |
| iCoT | 12.6 | 17.3 | 3.3 | 16.0 | 14.2 | 4.9 | 13.9 | 11.7 |
| Pause Token | 25.2 | 53.7 | 4.1 | 7.4 | 11.8 | 0.7 | 1.0 | 14.8 |
| Latent (ours) | 26.1 ( $\uparrow$ +4.2) | 73.8 ( $\uparrow$ +4.1) | 23.3 ( $\uparrow$ +6.6) | 27.1 | 32.9 ( $\uparrow$ +2.0) | 4.2 | 13.5 | 28.1 ( $\uparrow$ +2.9) |
| **Llama-3.1-8B** | | | | | | | | |
| Sol-Only | 11.5 | 11.8 | 3.3 | 17.4 | 13.0 | 3.8 | 6.7 | 9.6 |
| CoT | 32.9 | 80.1 | 16.7 | 39.3 | 41.9 | 7.3 | 15.8 | 33.4 |
| iCoT | 17.8 | 29.6 | 16.7 | 20.3 | 21.3 | 7.6 | 14.8 | 18.3 |
| Pause Token | 39.6 | 79.5 | 6.1 | 25.4 | 25.1 | 1.3 | 4.0 | 25.9 |
| Latent (ours) | 37.2 | 84.1 ( $\uparrow$ +4.0) | 30.0 ( $\uparrow$ +13.3) | 41.3 ( $\uparrow$ +2.0) | 44.0 ( $\uparrow$ +2.1) | 10.2 ( $\uparrow$ +2.6) | 18.4 ( $\uparrow$ +2.6) | 37.9 ( $\uparrow$ +4.5) |
Table 4.2: Our latent approach outperforms the baselines on various types of mathematical reasoning benchmarks. The models are fine-tuned on the MetaMathQA (Yu et al., 2023) dataset. Math and GSM8K are in-domain datasets, since they are used to generate MetaMathQA, while the others are out-of-domain. ( $\uparrow$ +: performance gain compared with the second-best result.)
| Model / Method | Math | GSM8K | Gaokao-Math-2023 | DM-Math | College-Math | Olympia-Math | TheoremQA | Avg. (All Datasets) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Llama-3.2-1B** | | | | | | | | |
| Sol-Only | 4.7 | 6.8 | 0.0 | 10.4 | 5.3 | 1.3 | 3.9 | 4.6 |
| CoT | 646.1 | 190.3 | 842.3 | 578.7 | 505.6 | 1087.0 | 736.5 | 655.2 |
| iCoT | 328.4 | 39.8 | 354.0 | 170.8 | 278.7 | 839.4 | 575.4 | 369.5 |
| Pause Token | 638.8 | 176.4 | 416.1 | 579.9 | 193.8 | 471.9 | 988.1 | 495 |
| Latent (ours) | 501.6 ( $\downarrow$ -22%) | 181.3 ( $\downarrow$ -5%) | 760.5 ( $\downarrow$ -11%) | 380.1 ( $\downarrow$ -34%) | 387.3 ( $\downarrow$ -23%) | 840.0 ( $\downarrow$ -22%) | 575.5 ( $\downarrow$ -22%) | 518 ( $\downarrow$ -21%) |
| **Llama-3.2-3B** | | | | | | | | |
| Sol-Only | 6.1 | 8.1 | 3.3 | 14.0 | 7.0 | 1.8 | 6.8 | 6.7 |
| CoT | 649.9 | 212.1 | 823.3 | 392.8 | 495.9 | 1166.7 | 759.6 | 642.9 |
| iCoT | 344.4 | 60.7 | 564.0 | 154.3 | 224.9 | 697.6 | 363.6 | 344.2 |
| Pause Token | 307.9 | 162.3 | 108.9 | 251.5 | 500.96 | 959.5 | 212.8 | 354.7 |
| Latent (ours) | 516.7 ( $\downarrow$ -20%) | 198.8 ( $\downarrow$ -6%) | 618.5 ( $\downarrow$ -25%) | 340.0 ( $\downarrow$ -13%) | 418.0 ( $\downarrow$ -16%) | 832.8 ( $\downarrow$ -29%) | 670.2 ( $\downarrow$ -12%) | 513.6 ( $\downarrow$ -20%) |
| **Llama-3.1-8B** | | | | | | | | |
| Sol-Only | 11.5 | 11.8 | 3.3 | 17.4 | 13.0 | 3.8 | 6.7 | 9.6 |
| CoT | 624.3 | 209.5 | 555.9 | 321.8 | 474.3 | 1103.3 | 760.1 | 578.5 |
| iCoT | 403.5 | 67.3 | 444.8 | 137.0 | 257.1 | 797.1 | 430.9 | 362.5 |
| Pause Token | 469.4 | 119.0 | 752.6 | 413.4 | 357.3 | 648.2 | 600.1 | 480 |
| Latent (ours) | 571.9 ( $\downarrow$ -9%) | 193.9 ( $\downarrow$ -8%) | 545.8 ( $\downarrow$ -2%) | 292.1 ( $\downarrow$ -10%) | 440.3 ( $\downarrow$ -8%) | 913.7 ( $\downarrow$ -17%) | 637.2 ( $\downarrow$ -16%) | 513.7 ( $\downarrow$ -10%) |
Table 4.3: The average number of tokens in the generated responses. Compared with the CoT baseline, our latent approach achieves a $17\%$ reduction in response length on average, while surpassing it in final performance according to Table 4.2. The iCoT method generates shorter responses than our approach, yet performs significantly worse (see Table 4.2). ( $\downarrow$ -: trace-length reduction rate compared with CoT.)
4.2.1 Synthetic Benchmarks
Hyperparameters and Evaluation Metric
For our experiments on the ProntoQA and ProsQA datasets, we fine-tune the pretrained GPT-2 model (Radford et al., 2019) for 16k steps, using a learning rate of $10^{-4}$ with linear warmup over 100 steps and a batch size of 128. To evaluate the models, we use greedy decoding and check the exact match with the ground truth.
For Keys-Finding Maze, due to its specific vocabulary, we train a T5 model (Raffel et al., 2020) from scratch for 100k steps with a learning rate of $7.5\times 10^{-4}$ and a batch size of 1024. We evaluate the models by the 1-Feasible-10 metric: for each evaluation task, we randomly sample 10 responses with top-$k$ ( $k$ =10) decoding and check whether any of them is feasible and reaches the goal location.
Results
As shown in Table 4.1, our latent approach performs better than the baselines on both the Keys-Finding Maze and ProntoQA tasks. Notably, the absolute improvement is 19.8 points on the Keys-Finding Maze problem, and we reach 100% accuracy on the relatively easy ProntoQA dataset. On the more difficult ProsQA, where the CoT baseline obtains only 77.5% accuracy, the latent approach achieves an 18.7-point performance gain.
| Model / Strategy | Math | GSM8K | Fresh-Gaokao-Math-2023 | DeepMind-Mathematics | College-Math | Olympia-Math | TheoremQA | Avg. (All Datasets) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Llama-3.2-1B** | | | | | | | | |
| All-Replace | 6.7 | 4.2 | 0.0 | 11.8 | 6.0 | 2.1 | 8.5 | 5.6 |
| Curriculum-Replace | 7.1 | 9.8 | 3.3 | 13.0 | 7.9 | 2.4 | 10.5 | 7.8 |
| Poisson-Replace | 13.9 | 49.5 | 10.0 | 12.2 | 18.9 | 2.3 | 9.0 | 15.1 |
| Latent-AR (ours) | 14.7 | 48.7 | 10.0 | 14.6 | 20.5 | 1.8 | 11.3 | 17.8 |
| **Llama-3.2-3B** | | | | | | | | |
| All-Replace | 10.7 | 12.8 | 10.0 | 19.4 | 12.8 | 5.3 | 11.8 | 11.8 |
| Curriculum-Replace | 10.2 | 14.9 | 3.3 | 16.8 | 12.9 | 3.9 | 14.4 | 10.9 |
| Poisson-Replace | 23.6 | 65.9 | 13.3 | 17.9 | 28.9 | 2.9 | 11.2 | 20.5 |
| Latent-AR (ours) | 26.1 | 73.8 | 23.3 | 27.1 | 32.9 | 4.2 | 13.5 | 28.1 |
| **Llama-3.1-8B** | | | | | | | | |
| All-Replace | 15.7 | 19.9 | 6.7 | 21.1 | 19.5 | 5.0 | 17.5 | 15.0 |
| Curriculum-Replace | 14.6 | 23.1 | 13.3 | 20.3 | 18.7 | 3.9 | 16.6 | 15.8 |
| Poisson-Replace | 37.9 | 83.6 | 16.6 | 42.7 | 44.7 | 9.9 | 19.1 | 36.3 |
| Latent-AR (ours) | 37.2 | 84.1 | 30.0 | 41.3 | 44.0 | 10.2 | 18.4 | 37.9 |
Table 4.4: Our latent token replacement strategy significantly outperforms the alternative choices: All-Replace (where all the textual CoT tokens are replaced by latent tokens at once), Curriculum-Replace (where we gradually replace the text tokens of the entire CoT subsequence with latent tokens over the course of training), and Poisson-Replace (where individual chunks of text tokens are replaced with probability 0.5).
4.2.2 Mathematical Reasoning
Hyperparameters and Evaluation Metrics
We consider three different sizes of LLMs from the Llama herd: Llama-3.2-1B, Llama-3.2-3B, and Llama-3.1-8B. For all models, we fine-tune on the MetaMathQA dataset for 1 epoch. To maximize training efficiency, we use a batch size of 32 with sequence packing at length 4096. We experiment with learning rates $10^{-5}, 2.5\times 10^{-5}, 5\times 10^{-5}, 10^{-4}$ and select the one with the lowest validation error. The final choices are $10^{-5}$ for the 8B model and $2.5\times 10^{-5}$ for the others. For all experiments, we use greedy decoding for evaluation.
Accuracy Comparison
Table 4.2 presents the results. Our latent approach consistently outperforms all the baselines across nearly all tasks, for models of different sizes. On tasks where we do not observe an improvement, our approach remains comparable to the best performance. The gains are more pronounced on specific datasets such as Gaokao-Math-2023. On average, we observe a +4.5-point improvement for the 8B model, a +2.9-point improvement for the 3B model, and a +3.7-point improvement for the 1B model.
Token Efficiency Comparison
Alongside the accuracy, we also report the number of tokens in the generated responses in Table 4.3, which is the dominating factor of inference cost. Our first observation is that, for all approaches, the model size has little influence on the length of the generated responses. Overall, the CoT method outputs the longest responses, while the Sol-Only method outputs the fewest tokens, since it is trained to generate the answer directly. The iCoT method also generates short responses (a $42.8\%$ reduction compared to CoT), as the CoT data has been iteratively eliminated during its training procedure. However, this comes at the cost of significantly degraded model performance compared with CoT, as shown in Table 4.2. Our latent approach shows an average $17\%$ reduction in token count compared with CoT while surpassing it in prediction accuracy.
4.3 Ablation & Understanding Studies
Replacement Strategies
Our latent approach partially replaces the leftmost $m$ CoT tokens, where the value of $m$ varies for each sample. We call this replacement strategy AR-Replace. Here we consider three alternative strategies:
1. All-Replace: all the text CoT tokens are replaced by the latent tokens.
1. Curriculum-Replace: the entire CoT subsequence is gradually replaced over the course of training, similar to the training procedure used by iCoT and COCONUT (Hao et al., 2024). We train the model for 8 epochs. Starting from the original dataset, in each epoch we construct a new training dataset where we further replace the leftmost 16 textual CoT tokens with a discrete latent token.
1. Poisson-Replace: instead of replacing tokens from left to right, we conduct a Poisson sampling process to select the CoT tokens to be replaced: we split the reasoning traces into chunks of 16 consecutive text tokens, and each chunk is independently replaced by its latent token with probability 0.5 (see the sketch after this list).
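For contrast with the left-to-right scheme, below is a minimal sketch of the Poisson-Replace chunk sampling, reusing the hypothetical `encode_to_latent` helper from the earlier sketch and omitting the delimiter tokens for brevity.

```python
import random
from typing import Callable, List

def poisson_replace(cot: List[int], r: int,
                    encode_to_latent: Callable[[List[int]], int],
                    p: float = 0.5) -> List[int]:
    """Independently replace each r-token chunk of the CoT with probability p."""
    out: List[int] = []
    for i in range(0, len(cot), r):
        chunk = cot[i : i + r]
        if len(chunk) == r and random.random() < p:
            out.append(encode_to_latent(chunk))   # chunk -> one latent token
        else:
            out.extend(chunk)                     # keep the raw text tokens
    return out
```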
Table 4.4 reports the results. Our AR-Replace strategy demonstrates strong performance, outperforming the alternative strategies by a large margin. Our intuition is as follows. When all the textual tokens are removed, the model struggles to align the latent tokens with the linguistic and semantic structures it learned during pretraining.
In contrast, partial replacement offers the model a bridge connecting the text and latent spaces: the remaining text tokens serve as anchors, helping the model interpret and integrate the latent representations more effectively. Interestingly, the curriculum learning strategy does not bridge the two spaces well, as All-Replace and Curriculum-Replace exhibit similar performance. This is consistent with our observation that iCoT performs remarkably worse than CoT on mathematical reasoning problems. Poisson-Replace performs marginally worse than our AR-Replace strategy on the 1B and 8B models, but significantly worse on the 3B model. Our intuition is that a fixed replacement pattern (starting from the beginning, left to right) is easier for the model to learn; the gap might be due to the limited fine-tuning dataset size and model capacity.
Attention Weights Analysis
To understand why injecting latent tokens enhances the model’s reasoning performance, we randomly selected two questions from the Math and College-Math datasets, generated responses, and then analyzed the attention weights over the input prompt tokens:
1. What is the positive difference between 120% of 30 and 130% of 20?
1. Mark has $50 in his bank account. He earns $10 per day at his work. If he wants to buy a bike that costs $300, how many days does Mark have to save his money?
Specifically, we take the last attention layer, average the attention weights over the attention heads, and show the relative intensity over the prompt tokens. Averaging the attention weights across heads gives a single lower-triangular matrix; we take the column sum of this matrix to get an aggregated attention weight for each token, and then normalize the weights by their average to obtain the relative intensity. A one-line pseudocode is: column_sum(avg(attention_matrices)) / avg(column_sum(avg(attention_matrices))). We compare the averaged attention weights of our model with those of the CoT model in Figure 4.1. Interestingly, our model learns to pay stronger attention to numbers and words representing mathematical operations. Both Figure 1(a) and Figure 1(b) show that the latent model focuses more on the numbers, such as 120, 30, and 130 for the first question. For the second question, our latent model shows larger attention weights on numbers including 50, 10, and 300, and also on tokens semantically related to mathematical operations, such as earns (implying addition) and costs (implying subtraction). This suggests that, by partially compressing the reasoning trace into a mix of latent and text tokens, we allow the model to effectively focus on important tokens that build the internal logical flow. See Section C.1 for the exact responses generated by our approach and the CoT baseline.
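The one-line pseudocode above corresponds to the following minimal PyTorch sketch, assuming `attn` holds the last layer's attention weights with shape (num_heads, seq_len, seq_len); variable names are illustrative.

```python
import torch

def relative_intensity(attn: torch.Tensor) -> torch.Tensor:
    """attn: (num_heads, T, T) attention weights from the last layer."""
    avg = attn.mean(dim=0)    # average over heads -> lower-triangular (T, T) matrix
    col = avg.sum(dim=0)      # column sum: total attention each token receives
    return col / col.mean()   # normalize by the average -> relative intensity
```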
(a) Prompt: What is the positive difference between 120% of 30 and 130% of 20?
(b) Prompt: Mark has $50 in his bank account. He earns $10 per day at his work. If he wants to buy a bike that costs $300, how many days does Mark have to save his money?
Figure 4.1: Compared with the CoT model, our latent approach has higher attention weights on numbers and on text tokens representing mathematical operations.
Table 4.5: The table below illustrates a clear trend on the Llama-3.2-3B model: increasing the compression rate reduces the number of generated tokens due to higher data compression. Notably, even a compression rate of 2 shows improvements over the CoT baseline. Furthermore, there appears to be an optimal 'sweet spot' where the data is neither overly compressed (rate = 32) nor minimally compressed (rate = 2), optimizing both efficiency and accuracy.
| Method (Llama-3.2-3B) | Metric | Math | GSM8K | Fresh-Gaokao-Math-2023 | DeepMind-Mathematics | College-Math | Olympia-Math | TheoremQA | Avg. (All Datasets) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CoT (baseline) | Acc. | 21.9 | 69.7 | 16.7 | 27.3 | 30.9 | 2.2 | 11.6 | 25.2 |
| CoT (baseline) | # of Tokens | 649.9 | 212.1 | 823.3 | 392.8 | 495.9 | 1166.7 | 759.6 | 642.9 |
| Latent $r=2$ | Acc. | 24.3 | 71.7 | 16.7 | 25.4 | 32.0 | 4.7 | 14.8 | 27.08 |
| Latent $r=2$ | # of Tokens | 586.0 | 207.6 | 739.6 | 415.3 | 471 | 1036 | 714 | 595.6 |
| Latent $r=16$ | Acc. | 26.1 | 73.8 | 23.3 | 27.1 | 32.9 | 4.2 | 13.5 | 28.1 |
| Latent $r=16$ | # of Tokens | 516.7 | 198.8 | 618.5 | 340.0 | 418.0 | 832.8 | 670.2 | 513.6 |
| Latent $r=32$ | Acc. | 25.2 | 71.5 | 23.3 | 26.3 | 33.3 | 4.9 | 14.1 | 27.9 |
| Latent $r=32$ | # of Tokens | 496.5 | 183.3 | 577.3 | 311.0 | 395.2 | 821.0 | 585.6 | 481.4 |
4.4 Ablations on the Latent $r$ parameters
Throughout this paper, we have used a compression rate of $r=16$. In this section, we ablate how varying $r$ affects performance on the downstream math tasks.
To this end, we vary this parameter on the Llama-3.2-3B model. The results are summarized in Table 4.5, with a graphical illustration in Figure 4.2. A key takeaway is that our latent approach comes out ahead of the CoT baseline for all $r$ settings, in terms of both fewer tokens (better efficiency) and higher accuracy. This is a strong signal that the shift to a latent representation is itself fundamentally beneficial. In addition, as $r$ increases, each latent token encodes more information (higher compression); as a result, the average number of generated tokens decreases with $r$. In terms of accuracy, however, the average first increases from 25.2 (CoT) to 27.1 at $r=2$, further rises to 28.1 at $r=16$, and then drops to 27.9 at $r=32$. This indicates a sweet spot at $r=16$: the data is neither overly compressed (which implies information loss) nor under-compressed (which implies the information is not encoded abstractly enough). This study reveals an interesting trade-off between accuracy and token efficiency in our latent approach; $r=16$ appears to strike an optimal balance between compact representation and the preservation of task-critical information.
Figure 4.2: A graphical illustration of the trade-off the compression rate $r$ induces between accuracy and token efficiency on the Llama-3.2-3B model.
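To make the compression arithmetic concrete, the following minimal sketch (our illustration, not the authors' released code) shows how a trace of $t_c$ text tokens shrinks when its first $m$ tokens are replaced by $m/r$ latent tokens, following the notation of Appendix B; the token lists and code ids are illustrative placeholders:

```python
def mix_trace(trace_tokens, latent_codes, m, r):
    """Replace the first m text tokens of a trace with m // r latent tokens.

    Sketch only: `latent_codes` stands in for the VQ-VAE codes of the
    replaced span, and m is chosen as a multiple of r.
    """
    assert m % r == 0
    return latent_codes[: m // r] + trace_tokens[m:]

# Example: a 640-token trace with m = 320 and r = 16 becomes
# 320 // 16 + 320 = 340 tokens long.
trace = list(range(640))
codes = list(range(10_000, 10_020))
assert len(mix_trace(trace, codes, m=320, r=16)) == 340
```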
4.5 Additional Examples and Interpretability Results
We provide four additional example responses for questions in the Math and TheoremQA datasets in Appendix D. In Appendix F, we compare all the approaches when the model is trained on the DART-MATH (Tong et al., 2024) dataset, where similar trends are observed.
We also provide interpretable examples in Appendix E.
5 Conclusion
We present a novel approach to improving the reasoning capabilities of LLMs by compressing the initial steps of the reasoning traces using discrete latent tokens obtained from a VQ-VAE. By integrating both abstract representations and textual details of the reasoning process into training, our approach enables LLMs to capture essential reasoning information with improved token efficiency. Furthermore, by randomizing the number of text tokens to be compressed during training, we unlock fast adaptation to unseen latent tokens. Our comprehensive evaluation demonstrates its effectiveness across multiple domains, outperforming standard methods that rely on complete textual reasoning traces.
Impact Statement
This paper presents a method to enhance the reasoning capability of Large Language Models (LLMs) by combining latent and text tokens in the reasoning trace. In terms of societal impact, while reasoning with (opaque) latent tokens may trigger safety concerns, our approach provides a VQ-VAE decoder that can decode the latent tokens into a human-readable format, mitigating such concerns.
References
- Azerbayev et al. (2023) Azerbayev, Z., Schoelkopf, H., Paster, K., Santos, M. D., McAleer, S., Jiang, A. Q., Deng, J., Biderman, S., and Welleck, S. Llemma: An open language model for mathematics. arXiv preprint arXiv:2310.10631, 2023.
- Barrault et al. (2024) Barrault, L., Duquenne, P.-A., Elbayad, M., Kozhevnikov, A., Alastruey, B., Andrews, P., Coria, M., Couairon, G., Costa-jussà, M. R., Dale, D., et al. Large concept models: Language modeling in a sentence representation space. arXiv preprint arXiv:2412.08821, 2024.
- Chen et al. (2022a) Chen, W., Ma, X., Wang, X., and Cohen, W. W. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588, 2022a.
- Chen et al. (2022b) Chen, W., Ma, X., Wang, X., and Cohen, W. W. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588, 2022b.
- Chen et al. (2023) Chen, W., Yin, M., Ku, M., Lu, P., Wan, Y., Ma, X., Xu, J., Wang, X., and Xia, T. Theoremqa: A theorem-driven question answering dataset. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 7889–7901, 2023.
- Chung et al. (2024) Chung, H. W., Hou, L., Longpre, S., Zoph, B., Tay, Y., Fedus, W., Li, Y., Wang, X., Dehghani, M., Brahma, S., et al. Scaling instruction-finetuned language models. Journal of Machine Learning Research, 25(70):1–53, 2024.
- Cobbe et al. (2021a) Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021a.
- Cobbe et al. (2021b) Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021b.
- Deng et al. (2023) Deng, Y., Prasad, K., Fernandez, R., Smolensky, P., Chaudhary, V., and Shieber, S. Implicit chain of thought reasoning via knowledge distillation. arXiv preprint arXiv:2311.01460, 2023.
- Deng et al. (2024) Deng, Y., Choi, Y., and Shieber, S. From explicit cot to implicit cot: Learning to internalize cot step by step. arXiv preprint arXiv:2405.14838, 2024.
- Dubey et al. (2024) Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., Letman, A., Mathur, A., Schelten, A., Yang, A., Fan, A., et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
- Dziri et al. (2024) Dziri, N., Lu, X., Sclar, M., Li, X. L., Jian, L., Lin, B. Y., West, P., Bhagavatula, C., Bras, R. L., Hwang, J. D., Sanyal, S., Welleck, S., Ren, X., Ettinger, A., Harchaoui, Z., and Choi, Y. Faith and fate: Limits of transformers on compositionality. Advances in Neural Information Processing Systems, 36, 2024.
- Feng et al. (2024) Feng, G., Zhang, B., Gu, Y., Ye, H., He, D., and Wang, L. Towards revealing the mystery behind chain of thought: a theoretical perspective. Advances in Neural Information Processing Systems, 36, 2024.
- Gandhi et al. (2024) Gandhi, K., Lee, D., Grand, G., Liu, M., Cheng, W., Sharma, A., and Goodman, N. D. Stream of search (sos): Learning to search in language. arXiv preprint arXiv:2404.03683, 2024.
- Goyal et al. (2023) Goyal, S., Ji, Z., Rawat, A. S., Menon, A. K., Kumar, S., and Nagarajan, V. Think before you speak: Training language models with pause tokens. arXiv preprint arXiv:2310.02226, 2023.
- Hao et al. (2024) Hao, S., Sukhbaatar, S., Su, D., Li, X., Hu, Z., Weston, J., and Tian, Y. Training large language models to reason in a continuous latent space. arXiv preprint arXiv:2412.06769, 2024.
- Hart et al. (1968) Hart, P. E., Nilsson, N. J., and Raphael, B. A formal basis for the heuristic determination of minimum cost paths. IEEE transactions on Systems Science and Cybernetics, 4(2):100–107, 1968.
- He et al. (2024) He, C., Luo, R., Bai, Y., Hu, S., Thai, Z. L., Shen, J., Hu, J., Han, X., Huang, Y., Zhang, Y., et al. Olympiadbench: A challenging benchmark for promoting agi with olympiad-level bilingual multimodal scientific problems. arXiv preprint arXiv:2402.14008, 2024.
- Hendrycks et al. (2021) Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., and Steinhardt, J. Measuring mathematical problem solving with the math dataset. NeurIPS, 2021.
- Jiang et al. (2024) Jiang, W., Shi, H., Yu, L., Liu, Z., Zhang, Y., Li, Z., and Kwok, J. Forward-backward reasoning in large language models for mathematical verification. In Findings of the Association for Computational Linguistics ACL 2024, pp. 6647–6661, 2024.
- Jiang et al. (2022) Jiang, Z., Zhang, T., Janner, M., Li, Y., Rocktäschel, T., Grefenstette, E., and Tian, Y. Efficient planning in a compact latent action space. arXiv preprint arXiv:2208.10291, 2022.
- Jiang et al. (2023) Jiang, Z., Xu, Y., Wagener, N., Luo, Y., Janner, M., Grefenstette, E., Rocktäschel, T., and Tian, Y. H-gap: Humanoid control with a generalist planner. arXiv preprint arXiv:2312.02682, 2023.
- Kim et al. (2023) Kim, S., Joo, S. J., Kim, D., Jang, J., Ye, S., Shin, J., and Seo, M. The cot collection: Improving zero-shot and few-shot learning of language models via chain-of-thought fine-tuning. arXiv preprint arXiv:2305.14045, 2023.
- Kojima et al. (2022) Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., and Iwasawa, Y. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199–22213, 2022.
- Lehnert et al. (2024) Lehnert, L., Sukhbaatar, S., Su, D., Zheng, Q., McVay, P., Rabbat, M., and Tian, Y. Beyond a*: Better planning with transformers via search dynamics bootstrapping. In First Conference on Language Modeling, 2024. URL https://openreview.net/forum?id=SGoVIC0u0f.
- Li et al. (2024) Li, Z., Liu, H., Zhou, D., and Ma, T. Chain of thought empowers transformers to solve inherently serial problems, 2024. URL https://arxiv.org/abs/2402.12875.
- Lin et al. (2024) Lin, B. Y., Bras, R. L., and Choi, Y. Zebralogic: Benchmarking the logical reasoning ability of language models, 2024. URL https://huggingface.co/spaces/allenai/ZebraLogic.
- Liu et al. (2024) Liu, L., Pfeiffer, J., Wu, J., Xie, J., and Szlam, A. Deliberation in latent space via differentiable cache augmentation. 2024. URL https://arxiv.org/abs/2412.17747.
- Lozhkov et al. (2024) Lozhkov, A., Ben Allal, L., Bakouch, E., von Werra, L., and Wolf, T. Finemath: the finest collection of mathematical content, 2024. URL https://huggingface.co/datasets/HuggingFaceTB/finemath.
- Nye et al. (2021a) Nye, M., Andreassen, A. J., Gur-Ari, G., Michalewski, H., Austin, J., Bieber, D., Dohan, D., Lewkowycz, A., Bosma, M., Luan, D., et al. Show your work: Scratchpads for intermediate computation with language models. arXiv preprint arXiv:2112.00114, 2021a.
- Nye et al. (2021b) Nye, M., Andreassen, A. J., Gur-Ari, G., Michalewski, H., Austin, J., Bieber, D., Dohan, D., Lewkowycz, A., Bosma, M., Luan, D., et al. Show your work: Scratchpads for intermediate computation with language models. arXiv preprint arXiv:2112.00114, 2021b.
- Pagnoni et al. (2024) Pagnoni, A., Pasunuru, R., Rodriguez, P., Nguyen, J., Muller, B., Li, M., Zhou, C., Yu, L., Weston, J., Zettlemoyer, L., Ghosh, G., Lewis, M., Holtzman, A., and Iyer, S. Byte latent transformer: Patches scale better than tokens. 2024. URL https://arxiv.org/abs/2412.09871.
- Pfau et al. (2024) Pfau, J., Merrill, W., and Bowman, S. R. Let’s think dot by dot: Hidden computation in transformer language models. arXiv preprint arXiv:2404.15758, 2024.
- Radford et al. (2019) Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
- Raffel et al. (2020) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67, 2020. URL http://jmlr.org/papers/v21/20-074.html.
- Saparov & He (2022) Saparov, A. and He, H. Language models are greedy reasoners: A systematic formal analysis of chain-of-thought. arXiv preprint arXiv:2210.01240, 2022.
- Saxton et al. (2019) Saxton, D., Grefenstette, E., Hill, F., and Kohli, P. Analysing mathematical reasoning abilities of neural models. arXiv preprint arXiv:1904.01557, 2019.
- Su et al. (2024) Su, D., Sukhbaatar, S., Rabbat, M., Tian, Y., and Zheng, Q. Dualformer: Controllable fast and slow thinking by learning with randomized reasoning traces. arXiv preprint arXiv:2410.09918, 2024.
- Su et al. (2025) Su, D., Gu, A., Xu, J., Tian, Y., and Zhao, J. Galore 2: Large-scale llm pre-training by gradient low-rank projection. arXiv preprint arXiv:2504.20437, 2025.
- Tang et al. (2024) Tang, Z., Zhang, X., Wang, B., and Wei, F. Mathscale: Scaling instruction tuning for mathematical reasoning. arXiv preprint arXiv:2403.02884, 2024.
- Tong et al. (2024) Tong, Y., Zhang, X., Wang, R., Wu, R., and He, J. Dart-math: Difficulty-aware rejection tuning for mathematical problem-solving. arXiv preprint arXiv:2407.13690, 2024.
- Van Den Oord et al. (2017) Van Den Oord, A., Vinyals, O., et al. Neural discrete representation learning. Advances in neural information processing systems, 30, 2017.
- Wang & Zhou (2024) Wang, X. and Zhou, D. Chain-of-thought reasoning without prompting. 2024. URL https://arxiv.org/abs/2402.10200.
- Wang et al. (2022) Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., Narang, S., Chowdhery, A., and Zhou, D. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
- Wang et al. (2023) Wang, X., Caccia, L., Ostapenko, O., Yuan, X., Wang, W. Y., and Sordoni, A. Guiding language model reasoning with planning tokens. arXiv preprint arXiv:2310.05707, 2023.
- Wei et al. (2022a) Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q. V., Zhou, D., et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824–24837, 2022a.
- Wei et al. (2022b) Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q. V., Zhou, D., et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824–24837, 2022b.
- Wen et al. (2024) Wen, K., Zhang, H., Lin, H., and Zhang, J. From sparse dependence to sparse attention: Unveiling how chain-of-thought enhances transformer sample efficiency. arXiv preprint arXiv:2410.05459, 2024.
- Yang et al. (2024) Yang, A., Yang, B., Zhang, B., Hui, B., Zheng, B., Yu, B., Li, C., Liu, D., Huang, F., Wei, H., et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
- Yao et al. (2024) Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T., Cao, Y., and Narasimhan, K. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36, 2024.
- Yu et al. (2023) Yu, L., Jiang, W., Shi, H., Yu, J., Liu, Z., Zhang, Y., Kwok, J. T., Li, Z., Weller, A., and Liu, W. Metamath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284, 2023.
- Yu et al. (2024) Yu, P., Xu, J., Weston, J., and Kulikov, I. Distilling system 2 into system 1. arXiv preprint arXiv:2407.06023, 2024.
- Yue et al. (2023) Yue, X., Qu, X., Zhang, G., Fu, Y., Huang, W., Sun, H., Su, Y., and Chen, W. Mammoth: Building math generalist models through hybrid instruction tuning. arXiv preprint arXiv:2309.05653, 2023.
- Zhu et al. (2024) Zhu, H., Huang, B., Zhang, S., Jordan, M., Jiao, J., Tian, Y., and Russell, S. Towards a theoretical understanding of the 'reversal curse' via training dynamics. arXiv preprint arXiv:2405.04669, 2024.
Appendix A Experiment Details
A.1 VQ-VAE Model Details
The codebook size $|\mathcal{E}|$ is 64 for ProntoQA and ProsQA, 512 for the Keys-Finding Maze, and 1024 for the math reasoning problems. For both the encoder ${f_{\text{enc}}}$ and the decoder ${f_{\text{dec}}}$, we use a 2-layer transformer with 4 heads, where the embedding size is 512 and the block size is 512. We set the maximum sequence length to 2048 for the synthetic dataset experiments and 256 for the math reasoning experiments.
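For reference, the configuration above can be summarized as follows; this is a sketch with our own field names, not the authors' code:

```python
from dataclasses import dataclass

@dataclass
class VQVAEConfig:
    codebook_size: int       # |E|: 64 (ProntoQA/ProsQA), 512 (maze), 1024 (math)
    embed_dim: int = 512     # embedding size of the 2-layer transformers
    n_layers: int = 2        # both encoder and decoder are 2-layer transformers
    n_heads: int = 4
    block_size: int = 512
    max_seq_len: int = 256   # 2048 for the synthetic dataset experiments

math_cfg = VQVAEConfig(codebook_size=1024)
maze_cfg = VQVAEConfig(codebook_size=512, max_seq_len=2048)
```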
A.2 Keys-Finding Maze
A.2.1 Environment Details
In this section, we introduce our synthetic keys-finding maze environment. Figure A.1 shows an example maze that consists of $m \times m$ rooms, where the size of each room is $n \times n$ ($m=3$ and $n=5$). The goal of the agent (represented by the black circle) is to reach the gold diamond using the minimum number of steps. The agent cannot cross walls. Also, there are three doors (represented by squares) of different colors (i.e., red, green, and blue), which are closed initially. The agent has to pick up a key to open the door of the same color. Note that the agent cannot carry more than one key at a time.
Figure A.2 shows an example optimal trajectory for the maze in Figure A.1. The agent first picks up the blue key and opens the blue door to obtain the red key. Then the agent navigates to the red door and opens it. Finally, the agent is able to reach the objective.
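The movement rules above reduce to a few lines of logic; the sketch below is our toy rendering (wall blocking, same-color doors, a single carried key that is consumed on opening), not the environment's actual implementation:

```python
def try_move(nxt, walls, keys, doors, held):
    """Return (new_pos, new_held_key) after a legal move, or None if illegal.

    `keys` and `doors` map grid cells to colors; `held` is the color of the
    carried key, or None. Toy sketch of the rules described above.
    """
    if nxt in walls:
        return None                    # the agent cannot cross walls
    if nxt in doors:
        if held != doors[nxt]:
            return None                # a closed door needs the same-color key
        del doors[nxt]                 # the door opens...
        return nxt, None               # ...and the key is used up
    if nxt in keys and held is None:   # at most one key can be carried
        return nxt, keys.pop(nxt)
    return nxt, held

# e.g., opening the blue door at (11, 7) while holding the blue key:
assert try_move((11, 7), set(), {}, {(11, 7): "blue"}, "blue") == ((11, 7), None)
```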
Figure A.1: An example of the keys-finding maze environment.
(a) Phase 1
(b) Phase 2
(c) Phase 3
(d) Phase 4
Figure A.2: An (optimal) trajectory of the maze in Figure A.1. Phase 1: the agent picks up the blue key; Phase 2: the agent opens the blue door to obtain the red key; Phase 3: the agent carries the red key to the red door; Phase 4: the agent opens the red door and reaches the objective.
A.2.2 Dataset Details
Our dataset consists of 100k training data points, 500 validation data points, and 300 data points for testing. For each data point, the structure of the prompt and response is as follows:
- [Prompt]: maze_size: $M \times M$, agent: $(x_{a0},y_{a0})$, walls: $(x_{1},y_{1}),(x_{2},y_{2}),\ldots$, objective: $(x_{o},y_{o})$, keys: [red_key]: $(x_{rk},y_{rk}),\ldots$, doors: [red_door]: $(x_{rd},y_{rd}),\ldots$
- [Response]: create-node $(x_{a1},y_{a1},f_{a1},h_{a1})$, create-node $(x_{a2},y_{a2},f_{a2},h_{a2})$, $\ldots$, agent: $(x_{a1},y_{a1}),(x_{a2},y_{a2}),\ldots,(x_{aT},y_{aT})$
Below, we show the prompt and response for an example training data point.
Prompt
initial_state: maze_size: 19x19 wall: (0,0), (0,1), (0,2), (0,3), (0,4), (0,5), (0,6), (0,7), (0,8), (0,9), (0,10), (0,11), (0,12), (0,13), (0,14), (0,15), (0,16), (0,17), (0,18), (1,0), (1,6), (1,12), (1,18), (2,0), (2,6), (2,12), (2,18), (3,0), (3,6), (3,12), (3,18), (4,0), (4,6), (4,12), (4,18), (5,0), (5,6), (5,12), (5,18), (6,0), (6,1), (6,3), (6,4), (6,5), (6,6), (6,7), (6,8), (6,9), (6,10), (6,11), (6,12), (6,13), (6,14), (6,15), (6,16), (6,17), (6,18), (7,0), (7,12), (7,18), (8,0), (8,6), (8,12), (8,18), (9,0), (9,6), (9,12), (9,18), (10,0), (10,6), (10,12), (10,18), (11,0), (11,6), (11,12), (11,18), (12,0), (12,1), (12,2), (12,3), (12,4), (12,6), (12,8), (12,9), (12,10), (12,11), (12,12), (12,13), (12,14), (12,15), (12,16), (12,17), (12,18), (13,0), (13,12), (13,18), (14,0), (14,6), (14,12), (14,18), (15,0), (15,6), (15,12), (15,18), (16,0), (16,6), (16,12), (16,18), (17,0), (17,6), (17,12), (17,18), (18,0), (18,1), (18,2), (18,3), (18,4), (18,5), (18,6), (18,7), (18,8), (18,9), (18,10), (18,11), (18,12), (18,13), (18,14), (18,15), (18,16), (18,17), (18,18), agent: (11,5), objective: (1,1), [red_key]: (16,1), [green_key]: (16,9), [blue_key]: (7,2), [red_door]: (12,5), [green_door]: (6,2), [blue_door]: (12,7)
Response (CoT and Answer)
create-node: (11,5), heuristics: 14, cost: 0 create-node: (11,4), heuristics: 13, cost: 1 create-node: (11,3), heuristics: 12, cost: 2 create-node: (10,3), heuristics: 11, cost: 3 create-node: (9,3), heuristics: 10, cost: 4 create-node: (9,2), heuristics: 9, cost: 5 create-node: (8,2), heuristics: 8, cost: 6 create-node: (7,2), heuristics: 7, cost: 7 create-node: (7,2), heuristics: 7, cost: 8 create-node: (7,3), heuristics: 8, cost: 9 create-node: (7,4), heuristics: 9, cost: 10 create-node: (7,5), heuristics: 10, cost: 11 create-node: (7,6), heuristics: 11, cost: 12 create-node: (7,7), heuristics: 12, cost: 13 create-node: (8,7), heuristics: 13, cost: 14 create-node: (9,7), heuristics: 14, cost: 15 create-node: (10,7), heuristics: 15, cost: 16 create-node: (11,7), heuristics: 16, cost: 17 create-node: (11,7), heuristics: 16, cost: 18 create-node: (12,7), heuristics: 17, cost: 19 create-node: (13,7), heuristics: 18, cost: 20 create-node: (14,7), heuristics: 19, cost: 21 create-node: (14,8), heuristics: 20, cost: 22 create-node: (15,8), heuristics: 21, cost: 23 create-node: (15,9), heuristics: 22, cost: 24 create-node: (16,9), heuristics: 23, cost: 25 create-node: (16,9), heuristics: 23, cost: 26 create-node: (16,8), heuristics: 22, cost: 27 create-node: (16,7), heuristics: 21, cost: 28 create-node: (15,7), heuristics: 20, cost: 29 create-node: (14,7), heuristics: 19, cost: 30 create-node: (13,7), heuristics: 18, cost: 31 create-node: (12,7), heuristics: 17, cost: 32 create-node: (11,7), heuristics: 16, cost: 33 create-node: (10,7), heuristics: 15, cost: 34 create-node: (9,7), heuristics: 14, cost: 35 create-node: (8,7), heuristics: 13, cost: 36 create-node: (7,7), heuristics: 12, cost: 37 create-node: (7,6), heuristics: 11, cost: 38 create-node: (7,5), heuristics: 10, cost: 39 create-node: (7,4), heuristics: 9, cost: 40 create-node: (7,3), heuristics: 8, cost: 41 create-node: (7,2), heuristics: 7, cost: 42 create-node: (7,2), heuristics: 7, cost: 43 create-node: (6,2), heuristics: 6, cost: 44 create-node: (5,2), heuristics: 5, cost: 45 create-node: (5,1), heuristics: 4, cost: 46 create-node: (4,1), heuristics: 3, cost: 47 create-node: (3,1), heuristics: 2, cost: 48 create-node: (2,1), heuristics: 1, cost: 49 create-node: (1,1), heuristics: 0, cost: 50 agent: (11,5), current_key: none, remaining_key: [red_key]: [green_key]: [blue_key]: remaining_door: [red_door]: [green_door]: [blue_door]: agent: (11,4), current_key: none, remaining_key: [red_key]: [green_key]: [blue_key]: remaining_door: [red_door]: [green_door]: [blue_door]: agent: (11,3), current_key: none, remaining_key: [red_key]: [green_key]: [blue_key]: remaining_door: [red_door]: [green_door]: [blue_door]: agent: (10,3), current_key: none, remaining_key: [red_key]: [green_key]: [blue_key]: remaining_door: [red_door]: [green_door]: [blue_door]: agent: (9,3), current_key: none, remaining_key: [red_key]: [green_key]: [blue_key]: remaining_door: [red_door]: [green_door]: [blue_door]: agent: (9,2), current_key: none, remaining_key: [red_key]: [green_key]: [blue_key]: remaining_door: [red_door]: [green_door]: [blue_door]: agent: (8,2), current_key: none, remaining_key: [red_key]: [green_key]: [blue_key]: remaining_door: [red_door]: [green_door]: [blue_door]: agent: (7,2), current_key: none, remaining_key: [red_key]: [green_key]: [blue_key]: remaining_door: [red_door]: [green_door]: [blue_door]: agent: (7,2), current_key: [blue_key], remaining_key: [red_key]: [green_key]: remaining_door: [red_door]: [green_door]: [blue_door]: agent: (7,3), 
current_key: [blue_key], remaining_key: [red_key]: [green_key]: remaining_door: [red_door]: [green_door]: [blue_door]: agent: (7,4), current_key: [blue_key], remaining_key: [red_key]: [green_key]: remaining_door: [red_door]: [green_door]: [blue_door]: agent: (7,5), current_key: [blue_key], remaining_key: [red_key]: [green_key]: remaining_door: [red_door]: [green_door]: [blue_door]: agent: (7,6), current_key: [blue_key], remaining_key: [red_key]: [green_key]: remaining_door: [red_door]: [green_door]: [blue_door]: agent: (7,7), current_key: [blue_key], remaining_key: [red_key]: [green_key]: remaining_door: [red_door]: [green_door]: [blue_door]: agent: (8,7), current_key: [blue_key], remaining_key: [red_key]: [green_key]: remaining_door: [red_door]: [green_door]: [blue_door]: agent: (9,7), current_key: [blue_key], remaining_key: [red_key]: [green_key]: remaining_door: [red_door]: [green_door]: [blue_door]: agent: (10,7), current_key: [blue_key], remaining_key: [red_key]: [green_key]: remaining_door: [red_door]: [green_door]: [blue_door]: agent: (11,7), current_key: [blue_key], remaining_key: [red_key]: [green_key]: remaining_door: [red_door]: [green_door]: [blue_door]: agent: (11,7), current_key: none, remaining_key: [red_key]: [green_key]: remaining_door: [red_door]: [green_door]: agent: (12,7), current_key: none, remaining_key: [red_key]: [green_key]: remaining_door: [red_door]: [green_door]: agent: (13,7), current_key: none, remaining_key: [red_key]: [green_key]: remaining_door: [red_door]: [green_door]: agent: (14,7), current_key: none, remaining_key: [red_key]: [green_key]: remaining_door: [red_door]: [green_door]: agent: (14,8), current_key: none, remaining_key: [red_key]: [green_key]: remaining_door: [red_door]: [green_door]: agent: (15,8), current_key: none, remaining_key: [red_key]: [green_key]: remaining_door: [red_door]: [green_door]: agent: (15,9), current_key: none, remaining_key: [red_key]: [green_key]: remaining_door: [red_door]: [green_door]: agent: (16,9), current_key: none, remaining_key: [red_key]: [green_key]: remaining_door: [red_door]: [green_door]: agent: (16,9), current_key: [green_key], remaining_key: [red_key]: remaining_door: [red_door]: [green_door]: agent: (16,8), current_key: [green_key], remaining_key: [red_key]: remaining_door: [red_door]: [green_door]: agent: (16,7), current_key: [green_key], remaining_key: [red_key]: remaining_door: [red_door]: [green_door]: agent: (15,7), current_key: [green_key], remaining_key: [red_key]: remaining_door: [red_door]: [green_door]: agent: (14,7), current_key: [green_key], remaining_key: [red_key]: remaining_door: [red_door]: [green_door]: agent: (13,7), current_key: [green_key], remaining_key: [red_key]: remaining_door: [red_door]: [green_door]: agent: (12,7), current_key: [green_key], remaining_key: [red_key]: remaining_door: [red_door]: [green_door]: agent: (11,7), current_key: [green_key], remaining_key: [red_key]: remaining_door: [red_door]: [green_door]: agent: (10,7), current_key: [green_key], remaining_key: [red_key]: remaining_door: [red_door]: [green_door]: agent: (9,7), current_key: [green_key], remaining_key: [red_key]: remaining_door: [red_door]: [green_door]: agent: (8,7), current_key: [green_key], remaining_key: [red_key]: remaining_door: [red_door]: [green_door]: agent: (7,7), current_key: [green_key], remaining_key: [red_key]: remaining_door: [red_door]: [green_door]: agent: (7,6), current_key: [green_key], remaining_key: [red_key]: remaining_door: [red_door]: [green_door]: agent: (7,5), current_key: 
[green_key], remaining_key: [red_key]: remaining_door: [red_door]: [green_door]: agent: (7,4), current_key: [green_key], remaining_key: [red_key]: remaining_door: [red_door]: [green_door]: agent: (7,3), current_key: [green_key], remaining_key: [red_key]: remaining_door: [red_door]: [green_door]: agent: (7,2), current_key: [green_key], remaining_key: [red_key]: remaining_door: [red_door]: [green_door]: agent: (7,2), current_key: none, remaining_key: [red_key]: remaining_door: [red_door]: agent: (6,2), current_key: none, remaining_key: [red_key]: remaining_door: [red_door]: agent: (5,2), current_key: none, remaining_key: [red_key]: remaining_door: [red_door]: agent: (5,1), current_key: none, remaining_key: [red_key]: remaining_door: [red_door]: agent: (4,1), current_key: none, remaining_key: [red_key]: remaining_door: [red_door]: agent: (3,1), current_key: none, remaining_key: [red_key]: remaining_door: [red_door]: agent: (2,1), current_key: none, remaining_key: [red_key]: remaining_door: [red_door]: agent: (1,1), current_key: none, remaining_key: [red_key]: remaining_door: [red_door]:
The prompt describes the maze in a structured language. The maze size is $M=m(n+1)+1$ (e.g., in Figure A.1, the maze size is $M=19$). The positions of the walls are $(x_{1},y_{1}),(x_{2},y_{2}),\ldots$, and so on. The position of the agent at time step $t$ is $(x_{at},y_{at})$, where $t=0$ corresponds to the initial position. The position of the objective is $(x_{o},y_{o})$, and the positions of the key and door of color $c$ (where $c=r,g,b$) are $(x_{ck},y_{ck})$ and $(x_{cd},y_{cd})$, respectively. The response describes an optimal path (i.e., one with the minimal total number of time steps $T$) for the agent to reach the objective.
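As a quick sanity check of the maze-size formula, the example in Figure A.1 has $m=3$ rooms per side of size $n=5$:

```python
def maze_size(m: int, n: int) -> int:
    """M = m(n + 1) + 1: m rooms of size n per side, plus the shared walls."""
    return m * (n + 1) + 1

assert maze_size(3, 5) == 19  # matches the 19x19 maze in the example prompt
```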
A.2.3 Model Details
Following Su et al. (2024) and Lehnert et al. (2024), we employ a similar encoder-decoder transformer architecture with rotary embeddings and no dropout. Our model consists of 6 layers with 3 attention heads, and the embedding size is 64.
A.3 ProntoQA and ProsQA
We use the pretrained GPT-2 model, which has the following parameters:
| Number of Layers (Transformer Blocks) | 12 |
| --- | --- |
| Hidden Size (Embedding Size) | 768 |
| Number of Attention Heads | 12 |
| Vocabulary Size | 50,257 |
| Total Number of Parameters | 117 million |
Table A.1: Hyperparameters of the pretrained GPT-2 model used for ProntoQA and ProsQA.
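Since the latent tokens are unseen by the pretrained model, the vocabulary and embedding matrix must be extended before fine-tuning. Below is a hedged sketch using the Hugging Face transformers API; the token strings follow the <Latent-code-*> format shown in our examples and the codebook size 64 matches Appendix A.1, but the authors' exact setup may differ:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# 64 latent codes (the ProntoQA/ProsQA codebook size) plus the
# begin/end-of-latent delimiters.
latent_tokens = ["<boLatent>", "<eoLatent>"] + [
    f"<Latent-code-{i}>" for i in range(64)
]
tokenizer.add_tokens(latent_tokens)
model.resize_token_embeddings(len(tokenizer))  # new rows are freshly initialized
```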
A.4 LLM experiments
We use the Llama Cookbook codebase (https://github.com/meta-llama/llama-cookbook) to fine-tune the Llama models.
As described in Section 4.2, we use a batch size of 32 with a sequence-packing length of 4096. We experiment with learning rates $10^{-5}$, $2.5 \times 10^{-5}$, $5 \times 10^{-5}$, and $10^{-4}$, and select the one with the lowest validation error. The final choices are $10^{-5}$ for Llama-3.1-8B and $2.5 \times 10^{-5}$ for Llama-3.2-1B and Llama-3.2-3B.
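The selection rule is simply the argmin of validation error over the sweep; a minimal sketch, where `val_error` would be filled in by one fine-tuning run per rate:

```python
def pick_learning_rate(val_error: dict) -> float:
    """Return the swept learning rate with the lowest validation error."""
    return min(val_error, key=val_error.get)

sweep = [1e-5, 2.5e-5, 5e-5, 1e-4]
# val_error maps each rate in `sweep` to the validation error measured
# after one fine-tuning run at that rate.
```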
Appendix B Notations
Table B.1 summarizes the notations we used throughout the paper.
| $X=P\oplus C\oplus S$ | input text sample, where $\oplus$ denotes concatenation |
| --- | --- |
| $P$ | prompt of length $t_{p}$ |
| $p_{i}$ | the $i$-th token of the prompt (in text) |
| $C$ | reasoning trace of length $t_{c}$ |
| $c_{i}$ | the $i$-th token of the trace (in text) |
| $S$ | solution of length $t_{s}$ |
| $s_{i}$ | the $i$-th token of the solution (in text) |
| $Z$ | the complete latent reasoning trace of length $t_{z}$ |
| $z_{i}$ | the $i$-th token of the latent trace |
| $r=t_{c}/t_{z}$ | compression rate |
| $m$ | number of trace tokens to be replaced by latent tokens during training |
| ${\widetilde{X}}$ | modified input with mixed text and latent tokens |
| $\mathcal{E}$ | codebook of the VQ-VAE |
| $e_{i}$ | the $i$-th vector in the codebook, which corresponds to the $i$-th latent token |
| $d$ | dimension of the $e_{i}$'s |
| ${\mathcal{V}}$ | vocabulary of text tokens |
| $L$ | chunk size |
| ${f_{\text{enc}}}(\cdot)$ | encodes a chunk of $L$ text tokens into $\frac{L}{r}$ embedding vectors |
| $\bar{X}=\bar{x}_{1},\ldots,\bar{x}_{\frac{L}{r}}$ | embedding vectors of $X$ output by ${f_{\text{enc}}}(\cdot)$ |
| $q(\cdot)$ | quantization operator that replaces, e.g., $\bar{x}_{1}$ by its nearest neighbor in ${\mathcal{E}}$: $q(\bar{x}_{1})=\operatorname*{argmin}_{e_{i}\in{\mathcal{E}}}\left\|e_{i}-\bar{x}_{1}\right\|^{2}$ |
| $g(\cdot)$ | maps the prompt to a $d$-dimensional embedding vector |
| ${f_{\text{dec}}}(\cdot,\cdot)$ | decodes $L/r$ quantized embedding vectors in ${\mathcal{E}}$ back to text tokens, conditioning on the prompt embedding generated by $g(\cdot)$ |
Table B.1: Mathematical notations used throughout the paper.
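The quantization operator $q(\cdot)$ translates directly into a nearest-neighbor lookup. The PyTorch sketch below implements the standard VQ-VAE lookup (Van Den Oord et al., 2017); it is our transcription of the formula in Table B.1, not necessarily the authors' exact implementation:

```python
import torch

def quantize(x_bar: torch.Tensor, codebook: torch.Tensor):
    """q(.): map each encoder output to its nearest codebook vector.

    x_bar: (L/r, d) encoder outputs; codebook: (|E|, d) embeddings.
    Returns the latent token ids and the quantized vectors.
    """
    dists = torch.cdist(x_bar, codebook) ** 2  # squared Euclidean distances
    ids = dists.argmin(dim=-1)                 # argmin over the codebook
    return ids, codebook[ids]
```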
Appendix C Details of Attention Weights Analysis
C.1 Generated Responses
Question 1
Prompt: What is the positive difference between $120\%$ of 30 and $130\%$ of 20?
Response of Our Latent Approach: <boLatent> <Latent-code-278> <Latent-code-416> <eoLatent> % of 30 and $130\%$ of 20. First, let’s calculate $120\%$ of 30: $120\%$ of 30 = 1.2 x 30 = 36. Next, let’s calculate $130\%$ of 20: $130\%$ of 20 = 1.3 x 20 = 26. Now, let’s find the positive difference between 36 and 26: 36 - 26 = 10
The answer is 10.
Response of Baseline (CoT): To find the positive difference between $120\%$ of 30 and $130\%$ of 20, we need to calculate each value first. $120\%$ of 30 is calculated as 1.2 * 30 = 36. $130\%$ of 20 is calculated as 1.3 * 20 = 26. Now, we need to find the positive difference between these two values. To do this, we subtract the smaller value from the larger value: 36 - 26 = 10.
The final answer is 10.
Question 2
Prompt: Mark has $50 in his bank account. He earns $10 per day at his work. If he wants to buy a bike that costs $300, how many days does Mark have to save his money?
Response of Our Latent Approach: <boLatent> <Latent-code-123> <eoLatent> need to calculate how much more money he needs to buy the bike and then divide that amount by his daily earnings. The bike costs $300, and Mark already has $50. So, he needs: $300 (bike cost) - $50 (current savings) = $250 (more money needed)
Mark earns $10 per day. To find out how many days he needs to save, we divide the amount he needs by his daily earnings: $250 (more money needed) / $10 (daily earnings) = 25 days. The answer is 25.
Response of Baseline (CoT): To find out how many days Mark needs to save his money, we need to subtract the cost of the bike from the amount of money he has, then divide the result by the amount he earns per day. Mark has $50 and wants to buy a bike that costs $300. First, subtract the cost of the bike from the amount of money Mark has: $50 - $300 = -$250 Since Mark can’t have a negative amount of money, we need to convert the negative amount to a positive one by adding the cost of the bike to the amount of money Mark has: $50 + $300 = $350 Now, divide the result by the amount Mark earns per day: $350 / $10 = 35
The final answer is 35.
Appendix D Other Text Generation Examples
Example Generation of Our Latent Approach
Prompt: Evaluate the expression $a^{3}· a^{2}$ if $a=5$ Generated Text: <boLatent> <Latent-code-130> <eoLatent> We can use the rule of exponents that states $a^{m}· a^{n}=a^{m+n}$ . So, $a^{3}· a^{2}=a^{3+2}=a^{5}$ . Now, we substitute $a=5$ into the expression: $5^{5}$ . $5^{5}=5· 5· 5· 5· 5=3125$ . The answer is 3125.
Example Generation of Our Latent Approach
Prompt: Compute $\dbinom{15}{2}$ . Generated Text: <boLatent> <Latent-code-888> <Latent-code-744> <Latent-code-196> <Latent-code-92> <Latent-code-557> <eoLatent> $\frac{15!}{2!(15-2)!}=\frac{15!}{2!13!}$ To simplify this expression, we can cancel out the common factors in the numerator and denominator: $\dbinom{15}{2}=\frac{15× 14}{2}=105$ . The answer is 105.
Example Generation of Our Latent Approach
Question: How many vertical asymptotes does the graph of $y=\frac{2}{x^{2}+x-6}$ have? Answer: <boLatent> <Latent-code-773> <Latent-code-514> <eoLatent> the denominator, which is $x^{2}+x-6$. We can factor this quadratic expression as $(x+3)(x-2)$. Now, we can see that the denominator will be equal to zero when either $(x+3)=0$ or $(x-2)=0$. Solving for these values, we get $x=-3$ or $x=2$. Therefore, the vertical asymptotes of the graph of $y=\frac{2}{x^{2}+x-6}$ are $x=-3$ and $x=2$. Since there are two distinct values of $x$ that make the denominator equal to zero, the graph has 2 vertical asymptotes. The answer is 2.
Example Generation of Our Latent Approach
Question: If there exists an ordered numbering of the nodes such that for each node there are no links going to a lower-numbered node, then there are no directed cycles in a directed graph. True or false? Answer: <boLatent> <Latent-code-526> <eoLatent> a directed graph. A directed cycle is a sequence of nodes in a graph, where each node is connected to the next node by a directed edge, and the last node is connected back to the first node by a directed edge. Now, let’s consider the given condition: ”If there exists an ordered numbering of the nodes such that for each node there are no links going to a lower-numbered node, then there are no directed cycles in a directed graph.” This condition essentially means that if we number the nodes in a specific way, and there are no links going to a lower-numbered node, then there are no directed cycles in the graph. To see why this is true, let’s consider what would happen if there were a directed cycle in the graph. If there were a directed cycle, then there would be a sequence of nodes where each node is connected to the next node by a directed edge, and the last node is connected back to the first node by a directed edge. However, if we number the nodes in the way described, and there are no links going to a lower-numbered node, then this directed cycle would be impossible. This is because the last node in the cycle would have to be connected to a lower-numbered node, which would violate the condition. Therefore, if we number the nodes in this way, then there can be no directed cycles in the graph. The answer is True.
Appendix E Interpretability Examples
To examine the interpretability of the latent codes, we use the decoder from the trained VQ-VAE to convert the latent codes back to the text space. In general, they are interpretable. We bold the part of the text decoded from the latent codes, which is delimited by the <boLatent> … <eoLatent> tags. Overall, we find the latent tokens capture long, verbose sentences that may trigger useful follow-up thinking patterns.
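Concretely, decoding follows the notation of Table B.1: each latent token id is mapped to its codebook vector and passed to $f_{\text{dec}}$ conditioned on the prompt embedding $g(P)$. Below is a hedged sketch, where `codebook`, `f_dec`, and `prompt_emb` stand in for the trained components:

```python
def decode_latents(latent_ids, codebook, f_dec, prompt_emb):
    """Map latent token ids back to text tokens via the VQ-VAE decoder."""
    vectors = [codebook[i] for i in latent_ids]  # look up e_i for each latent id
    return f_dec(vectors, prompt_emb)            # decode, conditioned on g(P)
```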
Example Generation of Our Latent Approach
Prompt: In a 8 fluid ounce bottle of Vitamin Water, there are 125 calories. How many calories would be contained in a 12 fluid ounce bottle? Express your answer in decimal form. Generated Text: <boLatent> To convert the number from in in a 8 ounces <eoLatent> we need to find the ratio of calories to fluid ounces in the 8 fluid ounce bottle and then multiply it by 12. First, let’s find the ratio of calories to fluid ounces in the 8 fluid ounce bottle: Calories per fluid ounce = Total calories / Total fluid ounces = 125 calories / 8 fluid ounces = 15.625 calories per fluid ounce Now, let’s multiply this ratio by 12 fluid ounces to find the total calories in the 12 fluid ounce bottle: Total calories in 12 fluid ounces = Calories per fluid ounce * Total fluid ounces = 15.625 calories per fluid ounce * 12 fluid ounces = 187.5 calories The answer is 187.5.
Here, in the example above, the decoded latent text ”To convert the number from in in a 8 ounces” captures the core concept of proportional conversion from the 8-ounce bottle to the 12-ounce bottle. Despite its grammatical mistakes, it still precisely states the mathematical operation needed to solve the problem.
Example Generation of Our Latent Approach
Prompt: What is the greatest common divisor of 315 and 108? Generated Text: <boLatent> We find the greatest of CD of::63 common find <eoLatent> we can use the Euclidean algorithm. Here’s how to do it: 1. Divide 315 by 108: 315 = 108 × 2 + 99 2. Divide 108 by 99: 108 = 99 × 1 + 9 3. Divide 99 by 9: 99 = 9 × 11 + 0 Since the remainder is 0, the GCD is the divisor in the last step, which is 9. The answer is 9.
Here, in the example above, the decoded latent text ”We find the greatest of CD of::63 common find” abstracts the initiation of the Euclidean algorithm, leading directly into the steps that determine the GCD.
Appendix F Additional Experiments
We present results of different approaches for fine-tuning a Llama-3.1-8B model on the DART-MATH (Tong et al., 2024) dataset. The observations are similar to those we presented in Section 4.2.
| Llama-3.1-8B (DART-MATH) | MATH | GSM8K | Fresh-Gaokao-Math-2023 | DeepMind-Mathematics | College-Math | Olympia-Math | TheoremQA | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Sol-Only | 13.3 | 16.4 | 0.0 | 18.2 | 15.9 | 4.7 | 16.9 | 12.2 |
| CoT | 43.1 | 84.5 | 30.7 | 47.8 | 45.7 | 10.1 | 21.2 | 40.4 |
| iCoT | 35.2 | 61.8 | 30.0 | 30.6 | 37.6 | 8.3 | 19.5 | 31.8 |
| Latent (Ours) | 43.2 ($\uparrow$ +0.1) | 83.9 | 33.3 ($\uparrow$ +2.6) | 44.7 | 47.1 ($\uparrow$ +1.4) | 13.3 ($\uparrow$ +3.2) | 20.3 | 40.8 ($\uparrow$ +0.4) |
Table F.1: Our approach surpasses the iCoT and Sol-Only baselines when trained on the DART-MATH dataset (Tong et al., 2024), while marginally outperforming the CoT baseline.
| Llama-3.1-8B (DART-MATH), # of tokens | MATH | GSM8K | Fresh-Gaokao-Math-2023 | DeepMind-Mathematics | College-Math | Olympia-Math | TheoremQA | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Sol-Only | 10.9 | 8.1 | 10.2 | 8.4 | 11.2 | 16.1 | 16.13 | 11.6 |
| CoT | 522.7 | 181.0 | 628.8 | 343.2 | 486.3 | 893.7 | 648.3 | 529.1 |
| iCoT | 397.1 | 118.6 | 440.8 | 227.9 | 321.9 | 614.4 | 485.7 | 372.3 |
| Latent (Ours) | 489.1 ($\downarrow$ 6.4%) | 163.5 ($\downarrow$ 9.7%) | 462.1 ($\downarrow$ 26.5%) | 265.6 ($\downarrow$ 22.6%) | 396.3 ($\downarrow$ 18.5%) | 801.3 ($\downarrow$ 10.3%) | 591.3 | 452.7 ($\downarrow$ 16%) |
Table F.2: The average number of tokens in the generated responses. Our approach generates shorter reasoning traces than the CoT baseline. $\downarrow$: trace-length reduction rate compared with CoT.