# SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration
Abstract
Speculative decoding (SD) has emerged as a widely used paradigm to accelerate LLM inference without compromising quality. It works by first employing a compact model to draft multiple tokens efficiently and then using the target LLM to verify them in parallel. While this technique has achieved notable speedups, most existing approaches necessitate either additional parameters or extensive training to construct effective draft models, thereby restricting their applicability across different LLMs and tasks. To address this limitation, we explore a novel plug-and-play SD solution with layer-skipping, which skips intermediate layers of the target LLM to form the compact draft model. Our analysis reveals that LLMs exhibit great potential for self-acceleration through layer sparsity and that this sparsity is task-specific. Building on these insights, we introduce \method, an on-the-fly self-speculative decoding algorithm that adaptively selects intermediate layers of LLMs to skip during inference. \method does not require auxiliary models or additional training, making it a plug-and-play solution for accelerating LLM inference across diverse input data streams. Our extensive experiments across a wide range of models and downstream tasks demonstrate that \method achieves a $1.3×$ $\sim$ $1.6×$ speedup while preserving the original distribution of the generated text. We release our code at https://github.com/hemingkx/SWIFT.
1 Introduction
Large Language Models (LLMs) have exhibited outstanding capabilities in handling various downstream tasks (OpenAI, 2023; Touvron et al., 2023a; b; Dubey et al., 2024). However, their token-by-token generation necessitated by autoregressive decoding poses efficiency challenges, particularly as model sizes increase. To address this, speculative decoding (SD) has been proposed as a promising solution for lossless LLM inference acceleration (Xia et al., 2023; Leviathan et al., 2023; Chen et al., 2023). At each decoding step, SD first employs a compact draft model to efficiently predict multiple tokens as speculations for future decoding steps of the target LLM. These tokens are then validated by the target LLM in parallel, ensuring that the original output distribution remains unchanged.
Recent advancements in SD have pushed the boundaries of the latency-accuracy trade-off by exploring various strategies (Xia et al., 2024), including incorporating lightweight draft modules into LLMs (Cai et al., 2024; Ankner et al., 2024; Li et al., 2024a; b), employing fine-tuning strategies to facilitate efficient LLM drafting (Kou et al., 2024; Yi et al., 2024; Elhoushi et al., 2024), and aligning draft models with the target LLM (Liu et al., 2023a; Zhou et al., 2024; Miao et al., 2024). Despite their promising efficacy, these approaches require additional modules or extensive training, which limits their broad applicability across different model types and causes significant inconvenience in practice. To tackle this issue, another line of research has proposed the Jacobi-based drafting (Santilli et al., 2023; Fu et al., 2024) to facilitate plug-and-play SD. As illustrated in Figure 1 (a), these methods append pseudo tokens to the input prompt, enabling the target LLM to generate multiple tokens as drafts in a single decoding step. However, the Jacobi-decoding paradigm misaligns with the autoregressive pretraining objective of LLMs, resulting in suboptimal acceleration effects.
Figure 1: Illustration of prior solution and ours for plug-and-play SD. (a) Jacobi-based drafting appends multiple pseudo tokens to the input prompt, enabling the target LLM to generate multiple tokens as drafts in a single step. (b) \method adopts sparsity-based drafting, which exploits the inherent sparsity in LLMs to facilitate efficient drafting. This work is the first exploration of plug-and-play SD using sparsity-based drafting.
In this work, we introduce a novel research direction for plug-and-play SD: sparsity-based drafting, which leverages the inherent sparsity in LLMs to enable efficient drafting (see Figure 1 (b)). Specifically, we exploit a straightforward yet practical form of LLM sparsity – layer sparsity – to accelerate inference. Our approach is based on two key observations: 1) LLMs possess great potential for self-acceleration through layer sparsity. Contrary to the conventional belief that layer selection must be carefully optimized (Zhang et al., 2024), we found, surprisingly, that drafting with uniformly skipped layers can still achieve a notable $1.2×$ speedup, providing a strong foundation for plug-and-play SD. 2) Layer sparsity is task-specific. We observed that each task requires its own optimal set of skipped layers, and applying the same layer configuration across different tasks causes substantial performance degradation. For example, the speedup drops from $1.47×$ to $1.01×$ when transferring the configuration optimized for a storytelling task to a reasoning task.
Building on these observations, we introduce \method, the first on-the-fly self-speculative decoding algorithm that adaptively optimizes the set of skipped layers in the target LLM during inference, facilitating the lossless acceleration of LLMs across diverse input data streams. \method integrates two key innovations: (1) a context-based layer set optimization mechanism that leverages LLM-generated context to efficiently identify the optimal set of skipped layers for the current input stream, and (2) a confidence-aware inference acceleration strategy that maximizes the use of draft tokens, improving both speculation accuracy and verification efficiency. These innovations allow \method to strike a favorable balance in the latency-accuracy trade-off of SD, providing a new plug-and-play solution for lossless LLM inference acceleration without the need for auxiliary models or additional training, as summarized in Table 1.
We conduct experiments using LLaMA-2 and CodeLLaMA models across multiple tasks, including summarization, code generation, mathematical reasoning, and storytelling. \method achieves a $1.3×$ $\sim$ $1.6×$ wall-clock speedup over conventional autoregressive decoding. Notably, in the greedy setting, \method consistently maintains a $98\%$ $\sim$ $100\%$ token acceptance rate across the LLaMA-2 series, indicating the high alignment potential of this paradigm. Further analysis validates the effectiveness of \method across diverse data streams and its compatibility with various LLM backbones.
Our key contributions are:
1. We performed an empirical analysis of LLM acceleration via layer sparsity, revealing both the potential for LLM self-acceleration through layer sparsity and its task-specific nature, underscoring the necessity of adaptive self-speculative decoding during inference.
1. Building on these insights, we introduce \method, the first plug-and-play self-speculative decoding algorithm that optimizes the set of skipped layers in the target LLM on the fly, enabling lossless acceleration of LLM inference across diverse input data streams.
1. We conducted extensive experiments across various models and tasks, demonstrating that \method consistently achieves a $1.3×$ $\sim$ $1.6×$ speedup without any auxiliary model or training, while theoretically guaranteeing the preservation of the generated text’s distribution.
2 Related Work
Speculative Decoding (SD)
Due to the sequential nature of autoregressive decoding, LLM inference is constrained by memory-bound computations (Patterson, 2004; Shazeer, 2019), with the primary latency bottleneck arising not from arithmetic computations but from memory reads/writes of LLM parameters (Pope et al., 2023). To mitigate this issue, speculative decoding (SD) employs a compact draft model to predict multiple decoding steps, which the target LLM then validates in parallel (Xia et al., 2023; Leviathan et al., 2023; Chen et al., 2023). Recent SD variants have sought to enhance efficiency by incorporating additional modules (Kim et al., 2023; Sun et al., 2023; Du et al., 2024; Li et al., 2024a; b) or introducing new training objectives (Liu et al., 2023a; Kou et al., 2024; Zhou et al., 2024; Gloeckle et al., 2024). However, these approaches necessitate extra parameters or extensive training, limiting their applicability across different models. Another line of research has explored plug-and-play SD methods with Jacobi decoding (Santilli et al., 2023; Fu et al., 2024), which predict multiple steps in parallel by appending pseudo tokens to the input and refining them iteratively. As shown in Table 1, our work complements these efforts by investigating a novel plug-and-play SD method with layer-skipping, which exploits the inherent sparsity of LLM layers to accelerate inference. The approaches most closely related to ours are Self-SD (Zhang et al., 2024) and LayerSkip (Elhoushi et al., 2024), which also skip intermediate layers of LLMs to form the draft model. However, both methods require a time-consuming offline optimization or training process, making them neither plug-and-play nor easily generalizable across different models and tasks.
| Method | Drafting | AM | Plug-and-Play | Greedy | Sampling | Token Tree | Speedup |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Eagle (Li et al., 2024a; b) | Draft Heads | Yes | ✗ | ✓ | ✓ | ✓ | - |
| Rest (He et al., 2024) | Context Retrieval | Yes | ✗ | ✓ | ✓ | ✓ | - |
| Self-SD (Zhang et al., 2024) | Layer Skipping | No | ✗ | ✓ | ✓ | ✗ | - |
| Parallel (Santilli et al., 2023) | Jacobi Decoding | No | ✓ | ✓ | ✗ | ✗ | $0.9×$ $\sim$ $1.0×$ |
| Lookahead (Fu et al., 2024) | Jacobi Decoding | No | ✓ | ✓ | ✓ | ✓ | $1.2×$ $\sim$ $1.4×$ |
| \method (Ours) | Layer Skipping | No | ✓ | ✓ | ✓ | ✓ | $1.3×$ $\sim$ $1.6×$ |
Table 1: Comparison of \method with existing SD methods. “AM” denotes whether the method requires auxiliary modules such as additional parameters or data stores. “Greedy”, “Sampling”, and “Token Tree” denote whether the method supports greedy decoding, multinomial sampling, and token tree verification, respectively. \method is the first plug-and-play layer-skipping SD method, which is orthogonal to Jacobi-based methods such as Lookahead (Fu et al., 2024).
Efficient LLMs Utilizing Sparsity
LLMs are powerful but often over-parameterized (Hu et al., 2022). To address this issue, various methods have been proposed to accelerate inference by leveraging different forms of LLM sparsity. One promising research direction is model compression, which includes approaches such as quantization (Dettmers et al., 2022; Frantar et al., 2023; Ma et al., 2024), parameter pruning (Liu et al., 2019; Hoefler et al., 2021; Liu et al., 2023b), and knowledge distillation (Touvron et al., 2021; Hsieh et al., 2023; Gu et al., 2024). These approaches exploit model redundancy by compressing LLMs into more compact forms, thereby decreasing memory usage and computational overhead during inference. Our proposed method, \method, focuses specifically on sparsity within LLM layer computations, providing a more streamlined approach to efficient LLM inference that builds upon recent advances in layer skipping (Corro et al., 2023; Zhu et al., 2024; Jaiswal et al., 2024; Liu et al., 2024). Unlike these existing layer-skipping methods, which may lead to information loss and performance degradation, \method leverages layer sparsity to enable lossless acceleration of LLM inference.
3 Preliminaries
3.1 Self-Speculative Decoding
Unlike most SD methods that require additional parameters, self-speculative decoding (Self-SD) was the first to use part of the LLM itself as a compact draft model (Zhang et al., 2024). In each decoding step, this approach skips intermediate layers of the LLM to efficiently generate draft tokens; these tokens are then validated in parallel by the full-parameter LLM to ensure that the output distribution of the target LLM remains unchanged. The primary challenge of Self-SD lies in determining which layers, and how many, should be skipped during the drafting stage – referred to as the skipped layer set – which is formulated as an optimization problem. Formally, given the input data $\mathcal{X}$ and the target LLM $\mathscr{M}_{T}$ with $L$ layers (including both attention and MLP layers), Self-SD aims to identify the optimal skipped layer set $\bm{z}$ that minimizes the average inference time per token:
$$
\bm{z}^{*}=\underset{\bm{z}}{\arg\min}\frac{\sum_{\bm{x}\in\mathcal{X}}f\left(\bm{x}\mid\bm{z};\bm{\theta}_{\mathscr{M}_{T}}\right)}{\sum_{\bm{x}\in\mathcal{X}}|\bm{x}|},\quad\text{ s.t. }\bm{z}\in\{0,1\}^{L}, \tag{1}
$$
where $f(·)$ is a black-box function that returns the inference latency of sample $\bm{x}$ , $\bm{z}_{i}∈\{0,1\}$ denotes whether layer $i$ of the target LLM is skipped when drafting, and $|\bm{x}|$ represents the sample length. Self-SD addresses this problem through a Bayesian optimization process (Jones et al., 1998). Before inference, this process iteratively selects new inputs $\bm{z}$ based on a Gaussian process (Rasmussen & Williams, 2006) and evaluates Eq (1) on the training set of $\mathcal{X}$ . After a specified number of iterations, the best $\bm{z}$ is considered an approximation of $\bm{z}^{*}$ and is held fixed for inference.
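To make the role of the binary vector $\bm{z}$ concrete, the following minimal Python sketch shows how a drafting forward pass might bypass the sub-layers marked as skipped. The function and variable names (`draft_forward`, `sublayers`, `skip_mask`) are illustrative placeholders rather than the Self-SD implementation.

```python
import torch

def draft_forward(hidden: torch.Tensor, sublayers, skip_mask):
    """Drafting forward pass that bypasses sub-layers where skip_mask[i] == 1.

    `sublayers` is the flattened list of the target LLM's attention/MLP blocks
    (L entries overall), and `skip_mask` plays the role of z in Eq. (1).
    """
    for i, sublayer in enumerate(sublayers):
        if skip_mask[i] == 1:
            continue  # skipped sub-layer: activations pass through unchanged
        hidden = hidden + sublayer(hidden)  # residual connection of a Transformer block
    return hidden
```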
While Self-SD has proven effective, its reliance on a time-intensive Bayesian optimization process poses certain limitations. For each task, Self-SD must sequentially evaluate all selected training samples during every iteration to optimize Eq (1). Moreover, the computational burden of Bayesian optimization escalates substantially with the number of iterations. As a result, processing just eight CNN/Daily Mail (Nallapati et al., 2016) samples for 1000 Bayesian iterations requires nearly 7.5 hours for LLaMA-2-13B and 20 hours for LLaMA-2-70B on an NVIDIA A6000 server. These computational demands restrict the generalizability of Self-SD across different models and tasks.
3.2 Experimental Observations
This subsection delves into Self-SD, exploring the plug-and-play potential of this layer-skipping SD paradigm for lossless LLM inference acceleration. Our key findings are detailed below.
Figure 2: (a) LLMs possess self-acceleration potential via layer sparsity. By utilizing drafts from the top- $k$ candidates, we found that uniformly skipping half of the layers during drafting yields a notable $1.2×$ speedup. (b) Layer sparsity is task-specific. Each task requires its own optimal set of skipped layers, and applying the skipped layer configuration from one task to another can lead to substantial performance degradation. “X LS” represents the skipped layer set optimized for task X.
3.2.1 LLMs Possess Self-Acceleration Potential via Layer Sparsity
We begin by investigating the potential of behavior alignment between the target LLM and its layer-skipping variant. Unlike previous work (Zhang et al., 2024) that focused solely on greedy draft predictions, we leverage potential draft candidates from top- $k$ predictions, as detailed in Section 4.2. We conducted experiments using LLaMA-2-13B across the CNN/Daily Mail (Nallapati et al., 2016), GSM8K (Cobbe et al., 2021), and TinyStories (Eldan & Li, 2023) datasets, applying a uniform layer-skipping pattern with $k$ set to 10. The experimental results, illustrated in Figure 2 (a), demonstrate a $30\%$ average improvement in the token acceptance rate when leveraging top- $k$ predictions, with over $90\%$ of draft tokens accepted by the target LLM. Consequently, whereas Self-SD achieves a maximum speedup of only $1.01×$ in this setting, the layer-skipping SD paradigm with a uniform skipping pattern yields an average wall-clock speedup of $1.22×$ over conventional autoregressive decoding. This finding challenges the prevailing belief that the selection of skipped layers must be meticulously curated, suggesting instead that LLMs possess greater potential for self-acceleration through inherent layer sparsity.
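As a concrete illustration of this relaxed acceptance criterion, the sketch below counts a position as accepted when the target LLM's greedy token appears among the draft model's top-$k$ predictions. It is a simplified measurement routine under these assumptions, not the exact evaluation code used for Figure 2 (a).

```python
import torch

def topk_acceptance_rate(draft_logits: torch.Tensor, target_tokens: torch.Tensor, k: int = 10) -> float:
    """Fraction of positions where the target LLM's greedy token lies in the
    draft model's top-k candidates (draft_logits: [seq_len, vocab])."""
    topk_ids = draft_logits.topk(k, dim=-1).indices            # [seq_len, k]
    hits = (topk_ids == target_tokens.unsqueeze(-1)).any(-1)   # [seq_len]
    return hits.float().mean().item()
```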
3.2.2 Layer Sparsity is Task-specific
We further explore the following research question: Is the skipped layer set optimized for one specific task applicable to other tasks? To address this, we conducted domain shift experiments using LLaMA-2-13B on the CNN/Daily Mail, GSM8K, TinyStories, and WMT16 DE-EN datasets. The experimental results, depicted in Figure 2 (b), reveal two critical findings: 1) Each task requires its own optimal skipped layer set. As illustrated in Figure 2 (b), the highest speedup performance is consistently achieved by the skipped layer configuration specifically optimized for each task. The detailed configuration of these layers is presented in Appendix A, demonstrating that the optimal configurations differ across tasks. 2) Applying the static skipped layer configuration across different tasks can lead to substantial efficiency degradation. For example, the speedup decreases from $1.47×$ to $1.01×$ when the optimized skipped layer set from a storytelling task is applied to a mathematical reasoning task, indicating that the optimized skipped layer set for one specific task does not generalize effectively to others.
These findings lay the groundwork for our plug-and-play solution within layer-skipping SD. Section 3.2.1 provides a strong foundation for real-time skipped layer selection, suggesting that additional optimization using training data may be unnecessary; Section 3.2.2 highlights the limitations of static layer-skipping patterns for dynamic input data streams across various tasks, underscoring the necessity for adaptive layer optimization during inference. Building on these insights, we present our on-the-fly self-speculative decoding method for efficient and adaptive layer set optimization.
4 SWIFT: On-the-Fly Self-Speculative Decoding
We introduce \method, the first plug-and-play self-speculative decoding approach that optimizes the skipped layer set of the target LLM on the fly, facilitating lossless LLM acceleration across diverse input data streams. As shown in Figure 3, \method divides LLM inference into two distinct phases: (1) context-based layer set optimization (§ 4.1), which aims to identify the optimal skipped layer set given the input stream, and (2) confidence-aware inference acceleration (§ 4.2), which employs the determined configuration to accelerate LLM inference.
Figure 3: Timeline of \method inference. N denotes the maximum generation length per instance.
4.1 Context-based Layer Set Optimization
Layer set optimization is a critical challenge in self-speculative decoding, as it determines which layers of the target LLM should be skipped to form the draft model (see Section 3.1). Unlike prior methods that rely on time-intensive offline optimization, our work emphasizes on-the-fly layer set optimization, which poses a greater challenge to the latency-accuracy trade-off: the optimization must be efficient enough to avoid delays during inference while ensuring accurate drafting of subsequent decoding steps. To address this, we propose an adaptive optimization mechanism that balances efficiency with drafting accuracy. Our method minimizes overhead by performing only a single forward pass of the draft model per step to validate potential skipped layer set candidates. The core innovation is the use of LLM-generated tokens (i.e., prior context) as ground truth, allowing for simultaneous validation of the draft model’s accuracy in predicting future decoding steps.
In the following subsections, we illustrate the detailed process of this optimization phase for each input instance, which includes context accumulation (§ 4.1.1) and layer set optimization (§ 4.1.2).
4.1.1 Context Accumulation
Given an input instance in the optimization phase, the draft model is initialized by uniformly skipping layers in the target LLM. This initial layer-skipping pattern is maintained to accelerate inference until a specified number of LLM-generated tokens, referred to as the context window, has been accumulated. Upon reaching this window length, the inference transitions to layer set optimization.
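For illustration, a uniform initial configuration could be constructed as in the sketch below; the exact spacing of skipped sub-layers (and whether boundary layers are protected) is an assumption rather than the precise \method initialization, and the 0.45 default mirrors the skip ratio reported for the 13B models in Section 5.3.

```python
import numpy as np

def uniform_skip_mask(num_sublayers: int, skip_ratio: float = 0.45) -> np.ndarray:
    """Initial draft configuration: spread the skipped sub-layers evenly
    across the model until the target skip ratio is reached."""
    num_skip = int(num_sublayers * skip_ratio)
    skip_ids = np.linspace(0, num_sublayers - 1, num=num_skip, dtype=int)
    mask = np.zeros(num_sublayers, dtype=int)
    mask[skip_ids] = 1  # 1 = skip this sub-layer when drafting
    return mask
```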
Figure 4: Layer set optimization process in \method. During the optimization stage, \method performs an optimization step prior to each LLM decoding step to adjust the skipped layer set, which involves: (a) Efficient layer set optimization. \method integrates random search with interval Bayesian optimization to propose layer set candidates; (b) Parallel candidate evaluation. \method uses LLM-generated tokens (i.e., prior context) as ground truth, enabling simultaneous validation of the proposed candidates. The best-performing layer set is selected to accelerate the current decoding step.
4.1.2 Layer Set Optimization
During this stage, as illustrated in Figure 4, we integrate an optimization step before each LLM decoding step to refine the skipped layer set, which comprises two substeps:
Efficient Layer Set Suggestion
This substep aims to suggest a potential layer set candidate. Formally, given a target LLM $\mathscr{M}_{T}$ with $L$ layers, our goal is to identify an optimal skipped layer set $\bm{z}∈\{0,1\}^{L}$ to form the compact draft model. Unlike Zhang et al. (2024), which relies entirely on a time-consuming Bayesian optimization process, we introduce an efficient strategy that combines random search with Bayesian optimization. In this approach, random sampling efficiently handles most of the exploration. Specifically, given a fixed skipping ratio $r$ , \method applies Bayesian optimization at regular intervals of $\beta$ optimization steps (e.g., $\beta=25$ ) to suggest the next layer set candidate, while random search is employed during other optimization steps.
$$
\bm{z}=\left\{\begin{array}{ll}\operatorname{Bayesian\_Optimization}(\bm{l})&\text{ if }o\text{ \% }\beta=0\\ \operatorname{Random\_Search}(\bm{l})&\text{ otherwise }\end{array}\right., \tag{2}
$$
where $1≤ o≤ S$ is the current optimization step, $S$ denotes the maximum number of optimization steps, and $\bm{l}$ denotes the search space of all $\binom{L}{rL}$ possible combinations of skipped layers.
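A minimal sketch of this hybrid suggestion strategy is given below, assuming a helper `bayes_suggest` that wraps the Gaussian-process proposal (e.g., via the BayesianOptimization package); the helper name and its interface are hypothetical.

```python
import numpy as np

def suggest_skip_set(step: int, num_sublayers: int, skip_ratio: float,
                     beta: int = 25, bayes_suggest=None, rng=np.random) -> np.ndarray:
    """Propose the next skipped-layer set z following Eq. (2): Bayesian optimization
    every beta-th optimization step, plain random search otherwise."""
    if bayes_suggest is not None and step % beta == 0:
        return bayes_suggest()  # Gaussian-process-guided proposal (placeholder)
    num_skip = int(num_sublayers * skip_ratio)
    skip_ids = rng.choice(num_sublayers, size=num_skip, replace=False)
    mask = np.zeros(num_sublayers, dtype=int)
    mask[skip_ids] = 1
    return mask
```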
Parallel Candidate Evaluation
\method leverages LLM-generated context to simultaneously validate the candidate draft model’s performance in predicting future decoding steps. Formally, given an input sequence $\bm{x}$ and the previously generated tokens within the context window, denoted as $\bm{y}=\{y_{1},...,y_{\gamma}\}$ , the draft model $\mathscr{M}_{D}$ , which skips the designated layers $\bm{z}$ of the target LLM, is employed to predict these context tokens in parallel:
$$
y^{\prime}_{i}=\arg\max_{y}\log P\left(y\mid\bm{x},\bm{y}_{<i};\bm{\theta}_{\mathscr{M}_{D}}\right),\quad 1\leq i\leq\gamma, \tag{3}
$$
where $\gamma$ represents the context window. The cached key-value pairs in the target LLM $\mathscr{M}_{T}$ are reused by $\mathscr{M}_{D}$ , presumably aligning $\mathscr{M}_{D}$ ’s distribution with $\mathscr{M}_{T}$ and reducing the redundant computation. The matchness score is defined as the exact match ratio between $\bm{y}$ and $\bm{y}^{\prime}$ :
$$
\texttt{matchness}=\frac{\sum_{i}\mathbb{I}\left(y_{i}=y^{\prime}_{i}\right)}{\gamma},\quad 1\leq i\leq\gamma, \tag{4}
$$
where $\mathbb{I}(·)$ denotes the indicator function. This score serves as the objective of the optimization process, reflecting $\mathscr{M}_{D}$ ’s accuracy in predicting future decoding steps. As shown in Figure 4, the matchness score at each step is integrated into the Gaussian process model to guide Bayesian optimization, and the highest-scoring layer set candidate is retained to form the draft model.
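The sketch below illustrates one way to score a candidate with a single teacher-forced forward pass, assuming a Hugging Face-style causal LM whose skipped layers have already been configured; the variable names and interface are illustrative.

```python
import torch

@torch.no_grad()
def evaluate_candidate(draft_model, input_ids: torch.Tensor, context_ids: torch.Tensor) -> float:
    """Matchness of a skipped-layer-set candidate (Eqs. 3-4): how often the draft
    model reproduces the gamma most recent LLM-generated tokens in parallel."""
    full = torch.cat([input_ids, context_ids], dim=-1)        # [1, prompt_len + gamma]
    logits = draft_model(full).logits                          # [1, seq_len, vocab]
    gamma = context_ids.shape[-1]
    # logits at position t predict token t + 1, hence the shift by one
    preds = logits[0, -gamma - 1:-1].argmax(-1)                # greedy y'_1 .. y'_gamma
    return (preds == context_ids[0]).float().mean().item()    # Eq. (4)
```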
As illustrated in Figure 3, the process of context accumulation and layer set optimization alternates for each instance until a termination condition is met – either the maximum number of optimization steps is reached or the best candidate remains unchanged over multiple iterations. Once the optimization phase concludes, the inference process transitions to the confidence-aware inference acceleration phase, where the optimized draft model is employed to speed up LLM inference.
4.2 Confidence-aware Inference Acceleration
Figure 5: Confidence-aware inference process of \method. (a) The drafting terminates early if the confidence score drops below threshold $\epsilon$ . (b) Draft candidates are dynamically selected based on confidence and then verified in parallel by the target LLM.
During the acceleration phase, the optimization step is removed. \method applies the best-performing layer set to form the compact draft model and decodes following the draft-then-verify paradigm. Specifically, at each decoding step, given the input $\bm{x}$ and previous LLM outputs $\bm{y}$ , the draft model $\mathscr{M}_{D}$ predicts future LLM decoding steps in an autoregressive manner:
$$
y^{\prime}_{j}=\arg\max_{y}\log P\left(y\mid\bm{x},\bm{y},\bm{y}^{\prime}_{<j};\bm{\theta}_{\mathscr{M}_{D}}\right), \tag{5}
$$
where $1≤ j≤ N_{D}$ is the current draft step, $N_{D}$ denotes the maximum draft length, $\bm{y}^{\prime}_{<j}$ represents previous draft tokens, and $P(·)$ denotes the probability distribution of the next draft token. The KV cache of the target LLM $\mathscr{M}_{T}$ and preceding draft tokens $\bm{y}^{\prime}_{<j}$ is reused to reduce the computational cost.
Let $p_{j}=\max P(·)$ denote the probability of the top-1 draft prediction $y^{\prime}_{j}$ , which can be regarded as a confidence score. Recent research (Li et al., 2024b; Du et al., 2024) shows that this score is highly correlated with the likelihood that the draft token $y^{\prime}_{j}$ will pass verification – higher confidence scores indicate a greater chance of acceptance. Therefore, following previous studies (Zhang et al., 2024; Du et al., 2024), we leverage the confidence score to prune unnecessary draft steps and select valuable draft candidates, improving both speculation accuracy and verification efficiency.
As shown in Figure 5, we integrate \method with two confidence-aware inference strategies (these strategies are also applied during the optimization phase, where the current best layer set is used to form the draft model and accelerate the corresponding LLM decoding step): 1) Early-stopping Drafting. The autoregressive drafting process halts if the confidence $p_{j}$ falls below a specified threshold $\epsilon$ , avoiding wasted computation on subsequent draft steps. 2) Dynamic Verification. Each $y^{\prime}_{j}$ is dynamically extended with its top- $k$ draft predictions for parallel verification to enhance speculation accuracy, with $k$ determined by the confidence score $p_{j}$ . Concretely, $k$ is set to 10, 5, 3, and 1 for $p_{j}$ in the ranges of $(0,0.5]$ , $(0.5,0.8]$ , $(0.8,0.95]$ , and $(0.95,1]$ , respectively. All draft candidates are linearized into a single sequence and verified in parallel by the target LLM using a special causal attention mask (see Figure 5 (b)).
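The following sketch summarizes both strategies. It assumes a Hugging Face-style interface, omits the KV-cache reuse of the actual implementation, and uses an illustrative threshold and maximum draft length rather than the paper's tuned values.

```python
import torch

def confidence_to_k(p: float) -> int:
    """Number of top-k candidates kept for dynamic verification, given confidence p."""
    if p <= 0.5:
        return 10
    if p <= 0.8:
        return 5
    if p <= 0.95:
        return 3
    return 1

@torch.no_grad()
def draft_with_early_stop(draft_model, prefix_ids: torch.Tensor,
                          max_draft_len: int = 25, eps: float = 0.7):
    """Autoregressive drafting (Eq. 5) that halts once top-1 confidence drops below eps;
    each step also records top-k candidates for parallel (tree) verification."""
    tokens, candidates = [], []
    ids = prefix_ids
    for _ in range(max_draft_len):
        probs = draft_model(ids).logits[0, -1].softmax(-1)   # next-token distribution
        p, y = probs.max(-1)
        tokens.append(y.item())
        candidates.append(probs.topk(confidence_to_k(p.item())).indices)
        if p.item() < eps:   # early stopping: low confidence, hand over to verification
            break
        ids = torch.cat([ids, y.view(1, 1)], dim=-1)
    return tokens, candidates
```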
5 Experiments
5.1 Experimental Setup
Implementation Details
We mainly evaluate \method on the LLaMA-2 (Touvron et al., 2023b) and CodeLLaMA (Rozière et al., 2023) series across various tasks, including summarization, mathematical reasoning, storytelling, and code generation. The evaluation datasets include CNN/Daily Mail (CNN/DM) (Nallapati et al., 2016), GSM8K (Cobbe et al., 2021), TinyStories (Eldan & Li, 2023), and HumanEval (Chen et al., 2021). The maximum generation lengths on CNN/DM, GSM8K, and TinyStories are set to 64, 64, and 128, respectively. We conduct 1-shot evaluation for CNN/DM and TinyStories, and 5-shot evaluation for GSM8K. We report pass@1 and pass@10 for HumanEval. We randomly sample 1000 instances from the test set of each dataset except HumanEval. The maximum generation lengths for HumanEval and all analyses are set to 512. During optimization, we employ both random search and Bayesian optimization (https://github.com/bayesian-optimization/BayesianOptimization) to suggest skipped layer set candidates. Following prior work, we adopt speculative sampling (Leviathan et al., 2023) as our acceptance strategy with a batch size of 1. Detailed setups are provided in Appendix B.1 and B.2.
Baselines
In our main experiments, we compare \method to two existing plug-and-play methods: Parallel Decoding (Santilli et al., 2023) and Lookahead Decoding (Fu et al., 2024), both of which employ Jacobi decoding for efficient LLM drafting. It is important to note that \method, as a layer-skipping SD method, is orthogonal to these Jacobi-based SD methods, and integrating \method with them could further boost inference efficiency. We exclude other SD methods from our comparison as they necessitate additional modules or extensive training, which limits their generalizability.
Evaluation Metrics
We report two widely-used metrics for \method evaluation: mean generated length $M$ (Stern et al., 2018) and token acceptance rate $\alpha$ (Leviathan et al., 2023). Detailed descriptions of these metrics can be found in Appendix B.3. In addition to these metrics, we report the actual decoding speed (tokens/s) and wall-time speedup ratio compared with vanilla autoregressive decoding. The acceleration of \method theoretically guarantees the preservation of the target LLMs’ output distribution, making it unnecessary to evaluate the generation quality. However, to provide a point of reference, we present the evaluation scores for code generation tasks.
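As a rough sketch of how these quantities can be read off decoding logs (the exact definitions are given in Appendix B.3), one might compute them as follows; the per-step bookkeeping inputs are hypothetical.

```python
def speculative_metrics(accepted_per_step, drafted_per_step):
    """Empirical mean generated length M (tokens emitted per target-LLM verification
    pass, counting the bonus token) and token acceptance rate alpha."""
    steps = len(drafted_per_step)
    M = (sum(accepted_per_step) + steps) / steps
    alpha = sum(accepted_per_step) / max(sum(drafted_per_step), 1)
    return M, alpha
```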
| Model | Method | CNN/DM M | Speedup | GSM8K M | Speedup | TinyStories M | Speedup | Tokens/s | Speedup |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LLaMA-2-13B | Vanilla | 1.00 | 1.00 $×$ | 1.00 | 1.00 $×$ | 1.00 | 1.00 $×$ | 20.10 | 1.00 $×$ |
| | Parallel | 1.04 | 0.95 $×$ | 1.11 | 0.99 $×$ | 1.06 | 0.97 $×$ | 19.49 | 0.97 $×$ |
| | Lookahead | 1.38 | 1.16 $×$ | 1.50 | 1.29 $×$ | 1.62 | 1.37 $×$ | 25.46 | 1.27 $×$ |
| | \method | 4.34 | 1.37 $×$ † | 3.13 | 1.31 $×$ † | 8.21 | 1.53 $×$ † | 28.26 | 1.41 $×$ |
| LLaMA-2-13B-Chat | Vanilla | 1.00 | 1.00 $×$ | 1.00 | 1.00 $×$ | 1.00 | 1.00 $×$ | 19.96 | 1.00 $×$ |
| | Parallel | 1.06 | 0.96 $×$ | 1.08 | 0.97 $×$ | 1.10 | 0.98 $×$ | 19.26 | 0.97 $×$ |
| | Lookahead | 1.35 | 1.15 $×$ | 1.57 | 1.31 $×$ | 1.66 | 1.40 $×$ | 25.69 | 1.29 $×$ |
| | \method | 3.54 | 1.28 $×$ | 2.95 | 1.25 $×$ | 7.42 | 1.50 $×$ † | 26.80 | 1.34 $×$ |
| LLaMA-2-70B | Vanilla | 1.00 | 1.00 $×$ | 1.00 | 1.00 $×$ | 1.00 | 1.00 $×$ | 4.32 | 1.00 $×$ |
| | Parallel | 1.05 | 0.95 $×$ | 1.07 | 0.97 $×$ | 1.05 | 0.96 $×$ | 4.14 | 0.96 $×$ |
| | Lookahead | 1.36 | 1.15 $×$ | 1.54 | 1.30 $×$ | 1.59 | 1.35 $×$ | 5.45 | 1.26 $×$ |
| | \method | 3.85 | 1.43 $×$ † | 2.99 | 1.39 $×$ † | 6.17 | 1.62 $×$ † | 6.41 | 1.48 $×$ |
Table 2: Comparison between \method and prior plug-and-play methods. We report the mean generated length M, speedup ratio, and average decoding speed (tokens/s) under greedy decoding. † indicates results with a token acceptance rate $\alpha$ above 0.98. More details are provided in Appendix C.1.
| Task | Method | M | α | Acc. | Speedup | M | α | Acc. | Speedup |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| HumanEval (pass@1) | Vanilla | 1.00 | - | 0.311 | 1.00 $×$ | 1.00 | - | 0.372 | 1.00 $×$ |
| | \method | 4.75 | 0.98 | 0.311 | 1.40 $×$ | 3.79 | 0.88 | 0.372 | 1.46 $×$ |
| HumanEval (pass@10) | Vanilla | 1.00 | - | 0.628 | 1.00 $×$ | 1.00 | - | 0.677 | 1.00 $×$ |
| | \method | 3.55 | 0.93 | 0.628 | 1.29 $×$ | 2.79 | 0.90 | 0.683 | 1.30 $×$ |
Table 3: Experimental results of \method on code generation tasks. We report the mean generated length M, acceptance rate $\alpha$ , accuracy (Acc.), and speedup ratio for comparison. We use greedy decoding for pass@1 and random sampling with a temperature of 0.6 for pass@10.
5.2 Main Results
Table 2 presents the comparison between \method and previous plug-and-play methods on text generation tasks. The experimental results demonstrate the following findings: (1) \method shows superior efficiency over prior methods, achieving consistent speedups of $1.3×$ $\sim$ $1.6×$ over vanilla autoregressive decoding across various models and tasks. (2) The efficiency of \method is driven by the high behavior consistency between the target LLM and its layer-skipping draft variant. As shown in Table 2, \method produces a mean generated length M of 5.01, with a high token acceptance rate $\alpha$ ranging from $90\%$ to $100\%$ . Notably, for the LLaMA-2 series, this acceptance rate remains stable at $98\%$ $\sim$ $100\%$ , indicating that nearly all draft tokens are accepted by the target LLM. (3) Compared with 13B models, LLaMA-2-70B achieves higher speedups with a larger layer skip ratio ( $0.45$ $→$ $0.5$ ), suggesting that larger-scale LLMs exhibit greater layer sparsity. This underscores \method ’s potential to deliver even greater speedups as LLM scales continue to grow. A detailed analysis of this finding is presented in Section 5.3, while additional experimental results for LLaMA-70B models, including LLaMA-3-70B, are presented in Appendix C.2.
Table 3 shows the evaluation results of \method on code generation tasks. \method achieves speedups of $1.3×$ $\sim$ $1.5×$ over vanilla autoregressive decoding, demonstrating its effectiveness under both greedy decoding and random sampling. Additionally, speculative sampling theoretically guarantees that \method maintains the original output distribution of the target LLM. This is empirically validated by the task performance metrics in Table 3: aside from a slight variation in the pass@10 metric for CodeLLaMA-34B, \method achieves performance identical to that of autoregressive decoding.
5.3 In-depth Analysis
(Figure 6, left: matchness and speedup versus the number of processed instances; right: latency breakdown per decoding step – Optimize 0.24 ms (0.8%), Draft 19.93 ms (64.4%), Verify 8.80 ms (28.4%), Others 1.98 ms (6.4%), Total 30.95 ms.)
Figure 6: Illustration and latency breakdown of \method inference. As the left figure shows, after the context-based layer set optimization phase, the overall speedup of \method steadily increases, reaching the average instance speedup during the acceleration phase. The additional optimization steps account for only $\bf{0.8\%}$ of the total inference latency, as illustrated in the right figure.
Illustration of Inference
As described in Section 4, \method divides the LLM inference process into two distinct phases: optimization and acceleration. Figure 6 (left) illustrates the detailed acceleration effect of \method during LLM inference. Specifically, the optimization phase begins at the start of inference, where an optimization step is performed before each decoding step to adjust the skipped layer set forming the draft model. As shown in Figure 6, in this phase, the matchness score of the draft model rises sharply from 0.45 to 0.73 during the inference of the first instance. This score then gradually increases to 0.98, which triggers the termination of the optimization process. Subsequently, the inference transitions to the acceleration phase, during which the optimization step is removed, and the draft model remains fixed to accelerate LLM inference. As illustrated, the instance speedup increases with the matchness score, reaching an average of $1.53×$ in the acceleration phase. The overall speedup gradually rises as more tokens are generated, eventually approaching the average instance speedup. This dynamic reflects a key feature of \method: the efficiency of \method improves with increasing input length and the number of instances.
Breakdown of Computation
Figure 6 (right) presents the computation breakdown of different modules in \method on 1000 CNN/DM samples using LLaMA-2-13B. The results demonstrate that the optimization step takes only $\bf{0.8\%}$ of the overall inference process, indicating the efficiency of our strategy. Compared with Self-SD (Zhang et al., 2024), which requires a time-consuming optimization process (e.g., 7.5 hours for LLaMA-2-13B on CNN/DM), \method achieves a nearly 180 $×$ reduction in optimization time, facilitating on-the-fly inference acceleration. Moreover, the results show that the drafting stage of \method consumes the majority of inference latency. This is consistent with our results on mean generated length in Tables 2 and 3, which show that nearly $80\%$ of output tokens are generated by the efficient draft model, demonstrating the effectiveness of our \method framework.
Figure 7: Comparison between \method and Self-SD in handling dynamic data input streams. Unlike Self-SD, which suffers from efficiency reduction during distribution shift, \method maintains stable acceleration performance with an acceptance rate exceeding 0.9.
Dynamic Input Data Streams
We further validate the effectiveness of \method in handling dynamic input data streams. We select CNN/DM, GSM8K, Alpaca (Taori et al., 2023), WMT14 DE-EN, and Natural Questions (Kwiatkowski et al., 2019) for evaluation on summarization, reasoning, instruction following, translation, and question answering tasks, respectively. For each task, we randomly sample 500 instances from the test set and concatenate them task-by-task to form the input stream. The experimental results are presented in Figure 7. As demonstrated, Self-SD is sensitive to domain shifts, with its average token acceptance rate dropping from $92\%$ to $68\%$ . Consequently, it suffers from a severe speedup reduction, from $1.33×$ to an average of $1.05×$ , under domain shifts. In contrast, \method exhibits promising adaptation capability across domains, with an average token acceptance rate of $96\%$ , leading to a consistent $1.3×$ $\sim$ $1.6×$ speedup.
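For clarity, the snippet below illustrates how such a dynamic stream can be assembled; the placeholder prompts and dataset handling are illustrative only and do not reproduce our exact preprocessing.

```python
import random

# Placeholder prompts standing in for the five test sets; in practice these would
# be loaded from CNN/DM, GSM8K, Alpaca, WMT14 DE-EN, and Natural Questions.
task_prompts = {
    "summarization": ["Summarize the following article: ..."],
    "reasoning": ["Q: Natalia sold clips to 48 of her friends ... A:"],
    "instruction": ["Write a short poem about autumn."],
    "translation": ["Translate to English: ..."],
    "qa": ["Who wrote The Old Man and the Sea?"],
}

random.seed(0)
input_stream = []
for task, prompts in task_prompts.items():  # instances concatenated task-by-task
    sampled = random.sample(prompts, k=min(500, len(prompts)))
    input_stream.extend((task, p) for p in sampled)
# The resulting stream switches domain every 500 instances; a fixed layer set
# degrades at each switch, whereas \method keeps adapting its skipped layers.
```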
Figure 8: In-depth analysis of \method, which includes: (a) Flexible optimization strategy. The maximum optimization iteration $S$ and Bayesian interval $\beta$ can be flexibly adjusted to accommodate different input data types. (b) Scaling law. The speedup and optimal layer skip ratio of \method increase with larger model sizes, indicating that larger LLMs exhibit greater layer sparsity.
Flexible Optimization & Scaling Law
Figure 8 (a) presents the flexibility of \method in handling various input types by adjusting the maximum optimization step $S$ and the Bayesian interval $\beta$ . For inputs with fewer instances, reducing $S$ enables an earlier transition to the acceleration phase, while increasing $\beta$ reduces the overhead of the optimization phase, enhancing speedups during the initial stages of inference. With sufficient input data, \method can explore more optimization paths, further enhancing the overall speedup. Figure 8 (b) illustrates the scaling law of \method: as the model size increases, both the optimal layer-skip ratio and the overall speedup improve, indicating that larger LLMs exhibit greater layer sparsity. This finding highlights the potential of \method for accelerating LLMs of larger sizes (e.g., 175B), which we leave for future investigation.
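As a purely illustrative example of this trade-off (the field names and values below are hypothetical, not prescribed settings), one might configure the optimization budget differently for short and long input streams:

```python
# With few instances, a smaller S and/or a larger beta shifts effort away from
# layer-set optimization toward early acceleration; with a long input stream,
# a larger budget lets more candidate layer sets be explored.
short_stream_config = {"max_opt_steps": 500, "bayes_interval": 50}   # S, beta
long_stream_config = {"max_opt_steps": 1000, "bayes_interval": 25}
```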
Figure 9: Speedups of \method on LLM backbones and their instruction-tuned variants.
Other LLM Backbones
Beyond LLaMA, we assess the effectiveness of \method on additional LLM backbones. Specifically, we include Yi-34B (Young et al., 2024) and DeepSeek-Coder-33B (Guo et al., 2024) along with their instruction-tuned variants for text and code generation tasks, respectively. The speedup results of \method are illustrated in Figure 9, demonstrating that \method achieves efficiency improvements ranging from $26\%$ to $54\%$ on these LLM backbones. Further experimental details are provided in Appendix C.3.
6 Conclusion
In this work, we introduce \method, an on-the-fly self-speculative decoding algorithm that adaptively selects certain intermediate layers of LLMs to skip during inference. The proposed method does not require additional training or auxiliary models, making it a plug-and-play solution for accelerating LLM inference across diverse input data streams. Extensive experiments conducted across various LLMs and tasks demonstrate that \method achieves over a $1.3×$ $\sim$ $1.6×$ speedup while preserving the distribution of the generated text. Furthermore, our in-depth analysis highlights the effectiveness of \method in handling dynamic input data streams and its seamless integration with various LLM backbones, showcasing the great potential of this paradigm for practical LLM inference acceleration.
Ethics Statement
The datasets used in our experiments are publicly released and labeled through interaction with humans in English. In this process, user privacy is protected, and no personal information is contained in the dataset. The scientific artifacts that we used are available for research with permissive licenses. The use of these artifacts in this paper is consistent with their intended purpose.
Acknowledgements
We thank all anonymous reviewers for their valuable comments during the review process. The work described in this paper was supported by Research Grants Council of Hong Kong (PolyU/15207122, PolyU/15209724, PolyU/15207821, PolyU/15213323) and PolyU internal grants (BDWP).
Reproducibility Statement
All the results in this work are reproducible. We provide all the necessary code in the Supplementary Material to replicate our results. The repository includes environment configurations, scripts, and other relevant materials. We discuss the experimental settings in Section 5.1 and Appendix C, including implementation details such as models, datasets, inference setup, and evaluation metrics.
References
- Ankner et al. (2024) Zachary Ankner, Rishab Parthasarathy, Aniruddha Nrusimha, Christopher Rinard, Jonathan Ragan-Kelley, and William Brandon. Hydra: Sequentially-dependent draft heads for medusa decoding. CoRR, abs/2402.05109, 2024. doi: 10.48550/ARXIV.2402.05109. URL https://doi.org/10.48550/arXiv.2402.05109.
- Bae et al. (2023) Sangmin Bae, Jongwoo Ko, Hwanjun Song, and Se-Young Yun. Fast and robust early-exiting framework for autoregressive language models with synchronized parallel decoding. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 5910–5924, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.362. URL https://aclanthology.org/2023.emnlp-main.362.
- Cai et al. (2024) Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D. Lee, Deming Chen, and Tri Dao. Medusa: Simple LLM inference acceleration framework with multiple decoding heads. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=PEpbUobfJv.
- Chen et al. (2023) Charlie Chen, Sebastian Borgeaud, Geoffrey Irving, Jean-Baptiste Lespiau, Laurent Sifre, and John Jumper. Accelerating large language model decoding with speculative sampling. CoRR, abs/2302.01318, 2023. doi: 10.48550/arXiv.2302.01318. URL https://doi.org/10.48550/arXiv.2302.01318.
- Chen et al. (2021) Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harrison Edwards, et al. Evaluating large language models trained on code. CoRR, abs/2107.03374, 2021. URL https://arxiv.org/abs/2107.03374.
- Cobbe et al. (2021) Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. CoRR, abs/2110.14168, 2021. URL https://arxiv.org/abs/2110.14168.
- Corro et al. (2023) Luciano Del Corro, Allie Del Giorno, Sahaj Agarwal, Bin Yu, Ahmed Awadallah, and Subhabrata Mukherjee. Skipdecode: Autoregressive skip decoding with batching and caching for efficient LLM inference. CoRR, abs/2307.02628, 2023. doi: 10.48550/ARXIV.2307.02628. URL https://doi.org/10.48550/arXiv.2307.02628.
- Dettmers et al. (2022) Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. Gpt3.int8(): 8-bit matrix multiplication for transformers at scale. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/c3ba4962c05c49636d4c6206a97e9c8a-Abstract-Conference.html.
- Du et al. (2024) Cunxiao Du, Jing Jiang, Yuanchen Xu, Jiawei Wu, Sicheng Yu, Yongqi Li, Shenggui Li, Kai Xu, Liqiang Nie, Zhaopeng Tu, and Yang You. Glide with a cape: A low-hassle method to accelerate speculative decoding. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=mk8oRhox2l.
- Dubey et al. (2024) Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, et al. The llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783.
- Eldan & Li (2023) Ronen Eldan and Yuanzhi Li. Tinystories: How small can language models be and still speak coherent english? CoRR, abs/2305.07759, 2023. doi: 10.48550/ARXIV.2305.07759. URL https://doi.org/10.48550/arXiv.2305.07759.
- Elhoushi et al. (2024) Mostafa Elhoushi, Akshat Shrivastava, Diana Liskovich, Basil Hosmer, Bram Wasti, Liangzhen Lai, Anas Mahmoud, Bilge Acun, Saurabh Agarwal, Ahmed Roman, Ahmed Aly, Beidi Chen, and Carole-Jean Wu. LayerSkip: Enabling early exit inference and self-speculative decoding. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 12622–12642, Bangkok, Thailand, August 2024. Association for Computational Linguistics. URL https://aclanthology.org/2024.acl-long.681.
- Frantar et al. (2023) Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. OPTQ: Accurate quantization for generative pre-trained transformers. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=tcbBPnfwxS.
- Fu et al. (2024) Yichao Fu, Peter Bailis, Ion Stoica, and Hao Zhang. Break the sequential dependency of LLM inference using lookahead decoding. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=eDjvSFOkXw.
- Gloeckle et al. (2024) Fabian Gloeckle, Badr Youbi Idrissi, Baptiste Rozière, David Lopez-Paz, and Gabriel Synnaeve. Better & faster large language models via multi-token prediction. CoRR, abs/2404.19737, 2024. doi: 10.48550/ARXIV.2404.19737. URL https://doi.org/10.48550/arXiv.2404.19737.
- Gu et al. (2024) Yuxian Gu, Li Dong, Furu Wei, and Minlie Huang. MiniLLM: Knowledge distillation of large language models. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=5h0qf7IBZZ.
- Guo et al. (2024) Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Y. Wu, Y. K. Li, Fuli Luo, Yingfei Xiong, and Wenfeng Liang. Deepseek-coder: When the large language model meets programming - the rise of code intelligence. CoRR, abs/2401.14196, 2024. doi: 10.48550/ARXIV.2401.14196. URL https://doi.org/10.48550/arXiv.2401.14196.
- He et al. (2024) Zhenyu He, Zexuan Zhong, Tianle Cai, Jason Lee, and Di He. REST: Retrieval-based speculative decoding. In Kevin Duh, Helena Gomez, and Steven Bethard (eds.), Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pp. 1582–1595, Mexico City, Mexico, June 2024. Association for Computational Linguistics. URL https://aclanthology.org/2024.naacl-long.88.
- Hoefler et al. (2021) Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, and Alexandra Peste. Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks. J. Mach. Learn. Res., 22(241):1–124, 2021.
- Hooper et al. (2023) Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Hasan Genc, Kurt Keutzer, Amir Gholami, and Yakun Sophia Shao. SPEED: speculative pipelined execution for efficient decoding. CoRR, abs/2310.12072, 2023. doi: 10.48550/ARXIV.2310.12072. URL https://doi.org/10.48550/arXiv.2310.12072.
- Hsieh et al. (2023) Cheng-Yu Hsieh, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alex Ratner, Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister. Distilling step-by-step! outperforming larger language models with less training data and smaller model sizes. In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki (eds.), Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pp. 8003–8017. Association for Computational Linguistics, 2023. doi: 10.18653/V1/2023.FINDINGS-ACL.507. URL https://doi.org/10.18653/v1/2023.findings-acl.507.
- Hu et al. (2022) Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. URL https://openreview.net/forum?id=nZeVKeeFYf9.
- Jaiswal et al. (2024) Ajay Jaiswal, Bodun Hu, Lu Yin, Yeonju Ro, Shiwei Liu, Tianlong Chen, and Aditya Akella. Ffn-skipllm: A hidden gem for autoregressive decoding with adaptive feed forward skipping. CoRR, abs/2404.03865, 2024. doi: 10.48550/ARXIV.2404.03865. URL https://doi.org/10.48550/arXiv.2404.03865.
- Jones et al. (1998) Donald R. Jones, Matthias Schonlau, and William J. Welch. Efficient global optimization of expensive black-box functions. J. Glob. Optim., 13(4):455–492, 1998. doi: 10.1023/A:1008306431147. URL https://doi.org/10.1023/A:1008306431147.
- Kim et al. (2023) Sehoon Kim, Karttikeya Mangalam, Suhong Moon, Jitendra Malik, Michael W. Mahoney, Amir Gholami, and Kurt Keutzer. Speculative decoding with big little decoder. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine (eds.), Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023. URL http://papers.nips.cc/paper_files/paper/2023/hash/7b97adeafa1c51cf65263459ca9d0d7c-Abstract-Conference.html.
- Kim et al. (2024) Taehyeon Kim, Ananda Theertha Suresh, Kishore A Papineni, Michael Riley, Sanjiv Kumar, and Adrian Benton. Accelerating blockwise parallel language models with draft refinement. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=KT6F5Sw0eg.
- Kou et al. (2024) Siqi Kou, Lanxiang Hu, Zhezhi He, Zhijie Deng, and Hao Zhang. Cllms: Consistency large language models. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=8uzBOVmh8H.
- Kwiatkowski et al. (2019) Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466, 2019. doi: 10.1162/tacl_a_00276. URL https://aclanthology.org/Q19-1026.
- Leviathan et al. (2023) Yaniv Leviathan, Matan Kalman, and Yossi Matias. Fast inference from transformers via speculative decoding. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pp. 19274–19286. PMLR, 2023. URL https://proceedings.mlr.press/v202/leviathan23a.html.
- Li et al. (2024a) Yuhui Li, Fangyun Wei, Chao Zhang, and Hongyang Zhang. Eagle: Speculative sampling requires rethinking feature uncertainty, 2024a.
- Li et al. (2024b) Yuhui Li, Fangyun Wei, Chao Zhang, and Hongyang Zhang. EAGLE-2: faster inference of language models with dynamic draft trees. CoRR, abs/2406.16858, 2024b. doi: 10.48550/ARXIV.2406.16858. URL https://doi.org/10.48550/arXiv.2406.16858.
- Liu et al. (2023a) Xiaoxuan Liu, Lanxiang Hu, Peter Bailis, Ion Stoica, Zhijie Deng, Alvin Cheung, and Hao Zhang. Online speculative decoding. CoRR, abs/2310.07177, 2023a. doi: 10.48550/ARXIV.2310.07177. URL https://doi.org/10.48550/arXiv.2310.07177.
- Liu et al. (2024) Yijin Liu, Fandong Meng, and Jie Zhou. Accelerating inference in large language models with a unified layer skipping strategy. CoRR, abs/2404.06954, 2024. doi: 10.48550/ARXIV.2404.06954. URL https://doi.org/10.48550/arXiv.2404.06954.
- Liu et al. (2019) Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.net/forum?id=rJlnB3C5Ym.
- Liu et al. (2023b) Zichang Liu, Jue Wang, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava, Ce Zhang, Yuandong Tian, Christopher Re, et al. Deja vu: Contextual sparsity for efficient llms at inference time. In International Conference on Machine Learning, pp. 22137–22176. PMLR, 2023b.
- Ma et al. (2024) Shuming Ma, Hongyu Wang, Lingxiao Ma, Lei Wang, Wenhui Wang, Shaohan Huang, Li Dong, Ruiping Wang, Jilong Xue, and Furu Wei. The era of 1-bit llms: All large language models are in 1.58 bits. CoRR, abs/2402.17764, 2024. doi: 10.48550/ARXIV.2402.17764. URL https://doi.org/10.48550/arXiv.2402.17764.
- Miao et al. (2024) Xupeng Miao, Gabriele Oliaro, Zhihao Zhang, Xinhao Cheng, Zeyu Wang, Zhengxin Zhang, Rae Ying Yee Wong, Alan Zhu, Lijie Yang, Xiaoxiang Shi, Chunan Shi, Zhuoming Chen, Daiyaan Arfeen, Reyna Abhyankar, and Zhihao Jia. Specinfer: Accelerating large language model serving with tree-based speculative inference and verification. In Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3, ASPLOS ’24, pp. 932–949, New York, NY, USA, 2024. Association for Computing Machinery. ISBN 9798400703867. doi: 10.1145/3620666.3651335. URL https://doi.org/10.1145/3620666.3651335.
- Nallapati et al. (2016) Ramesh Nallapati, Bowen Zhou, Cícero Nogueira dos Santos, Çaglar Gülçehre, and Bing Xiang. Abstractive text summarization using sequence-to-sequence rnns and beyond. In Yoav Goldberg and Stefan Riezler (eds.), Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016, Berlin, Germany, August 11-12, 2016, pp. 280–290. ACL, 2016. doi: 10.18653/V1/K16-1028. URL https://doi.org/10.18653/v1/k16-1028.
- OpenAI (2023) OpenAI. GPT-4 technical report. CoRR, abs/2303.08774, 2023. doi: 10.48550/ARXIV.2303.08774. URL https://doi.org/10.48550/arXiv.2303.08774.
- Patterson (2004) David A. Patterson. Latency lags bandwith. Commun. ACM, 47(10):71–75, 2004. doi: 10.1145/1022594.1022596. URL https://doi.org/10.1145/1022594.1022596.
- Pope et al. (2023) Reiner Pope, Sholto Douglas, Aakanksha Chowdhery, Jacob Devlin, James Bradbury, Jonathan Heek, Kefan Xiao, Shivani Agrawal, and Jeff Dean. Efficiently scaling transformer inference. In Dawn Song, Michael Carbin, and Tianqi Chen (eds.), Proceedings of the Sixth Conference on Machine Learning and Systems, MLSys 2023, Miami, FL, USA, June 4-8, 2023. mlsys.org, 2023. URL https://proceedings.mlsys.org/paper_files/paper/2023/hash/c4be71ab8d24cdfb45e3d06dbfca2780-Abstract-mlsys2023.html.
- Rasmussen & Williams (2006) Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian processes for machine learning. Adaptive computation and machine learning. MIT Press, 2006. ISBN 026218253X. URL https://www.worldcat.org/oclc/61285753.
- Rozière et al. (2023) Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton-Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and Gabriel Synnaeve. Code llama: Open foundation models for code. CoRR, abs/2308.12950, 2023. doi: 10.48550/ARXIV.2308.12950. URL https://doi.org/10.48550/arXiv.2308.12950.
- Santilli et al. (2023) Andrea Santilli, Silvio Severino, Emilian Postolache, Valentino Maiorca, Michele Mancusi, Riccardo Marin, and Emanuele Rodolà. Accelerating transformer inference for translation via parallel decoding. In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pp. 12336–12355. Association for Computational Linguistics, 2023. doi: 10.18653/v1/2023.acl-long.689. URL https://doi.org/10.18653/v1/2023.acl-long.689.
- Shazeer (2019) Noam Shazeer. Fast transformer decoding: One write-head is all you need. CoRR, abs/1911.02150, 2019. URL http://arxiv.org/abs/1911.02150.
- Stern et al. (2018) Mitchell Stern, Noam Shazeer, and Jakob Uszkoreit. Blockwise parallel decoding for deep autoregressive models. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 10107–10116, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/c4127b9194fe8562c64dc0f5bf2c93bc-Abstract.html.
- Sun et al. (2023) Ziteng Sun, Ananda Theertha Suresh, Jae Hun Ro, Ahmad Beirami, Himanshu Jain, and Felix Yu. Spectr: Fast speculative decoding via optimal transport. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=SdYHLTCC5J.
- Taori et al. (2023) Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023.
- Touvron et al. (2021) Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In International Conference on Machine Learning, pp. 10347–10357. PMLR, 2021.
- Touvron et al. (2023a) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models. CoRR, abs/2302.13971, 2023a. doi: 10.48550/arXiv.2302.13971. URL https://doi.org/10.48550/arXiv.2302.13971.
- Touvron et al. (2023b) Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, et al. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288, 2023b. doi: 10.48550/ARXIV.2307.09288. URL https://doi.org/10.48550/arXiv.2307.09288.
- Xia et al. (2023) Heming Xia, Tao Ge, Peiyi Wang, Si-Qing Chen, Furu Wei, and Zhifang Sui. Speculative decoding: Exploiting speculative execution for accelerating seq2seq generation. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pp. 3909–3925. Association for Computational Linguistics, 2023. doi: 10.18653/V1/2023.FINDINGS-EMNLP.257. URL https://doi.org/10.18653/v1/2023.findings-emnlp.257.
- Xia et al. (2024) Heming Xia, Zhe Yang, Qingxiu Dong, Peiyi Wang, Yongqi Li, Tao Ge, Tianyu Liu, Wenjie Li, and Zhifang Sui. Unlocking efficiency in large language model inference: A comprehensive survey of speculative decoding. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Findings of the Association for Computational Linguistics ACL 2024, pp. 7655–7671, Bangkok, Thailand and virtual meeting, August 2024. Association for Computational Linguistics. URL https://aclanthology.org/2024.findings-acl.456.
- Yang et al. (2023) Seongjun Yang, Gibbeum Lee, Jaewoong Cho, Dimitris S. Papailiopoulos, and Kangwook Lee. Predictive pipelined decoding: A compute-latency trade-off for exact LLM decoding. CoRR, abs/2307.05908, 2023. doi: 10.48550/ARXIV.2307.05908. URL https://doi.org/10.48550/arXiv.2307.05908.
- Yi et al. (2024) Hanling Yi, Feng Lin, Hongbin Li, Ning Peiyang, Xiaotian Yu, and Rong Xiao. Generation meets verification: Accelerating large language model inference with smart parallel auto-correct decoding. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Findings of the Association for Computational Linguistics ACL 2024, pp. 5285–5299, Bangkok, Thailand and virtual meeting, August 2024. Association for Computational Linguistics. URL https://aclanthology.org/2024.findings-acl.313.
- Young et al. (2024) Alex Young, Bei Chen, Chao Li, Chengen Huang, Ge Zhang, Guanwei Zhang, Heng Li, Jiangcheng Zhu, Jianqun Chen, Jing Chang, Kaidong Yu, Peng Liu, Qiang Liu, Shawn Yue, Senbin Yang, Shiming Yang, Tao Yu, Wen Xie, Wenhao Huang, Xiaohui Hu, Xiaoyi Ren, Xinyao Niu, Pengcheng Nie, Yuchi Xu, Yudong Liu, Yue Wang, Yuxuan Cai, Zhenyu Gu, Zhiyuan Liu, and Zonghong Dai. Yi: Open foundation models by 01.ai. CoRR, abs/2403.04652, 2024. doi: 10.48550/ARXIV.2403.04652. URL https://doi.org/10.48550/arXiv.2403.04652.
- Zhang et al. (2024) Jun Zhang, Jue Wang, Huan Li, Lidan Shou, Ke Chen, Gang Chen, and Sharad Mehrotra. Draft& verify: Lossless large language model acceleration via self-speculative decoding. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 11263–11282, Bangkok, Thailand, August 2024. Association for Computational Linguistics. URL https://aclanthology.org/2024.acl-long.607.
- Zhou et al. (2024) Yongchao Zhou, Kaifeng Lyu, Ankit Singh Rawat, Aditya Krishna Menon, Afshin Rostamizadeh, Sanjiv Kumar, Jean-François Kagy, and Rishabh Agarwal. Distillspec: Improving speculative decoding via knowledge distillation. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=rsY6J3ZaTF.
- Zhu et al. (2024) Yunqi Zhu, Xuebing Yang, Yuanyuan Wu, and Wensheng Zhang. Hierarchical skip decoding for efficient autoregressive text generation. CoRR, abs/2403.14919, 2024. doi: 10.48550/ARXIV.2403.14919. URL https://doi.org/10.48550/arXiv.2403.14919.
Appendix
Appendix A Preliminary Details
We present the detailed configuration of Self-SD across four task domains in Figure 10, demonstrating that the optimal skipped layer configurations vary depending on the specific task.
(a) Summarization - CNN/DM
(b) Reasoning - GSM8K
(c) Storytelling - TinyStories
(d) Translation - WMT16
Figure 10: Visualization of skipped layer set configurations of LLaMA-2-13B optimized by Self-SD (Zhang et al., 2024) on different task domains. Gray squares indicate retained layers, red squares denote skipped attention layers, and blue squares signify skipped MLP layers.
Appendix B Experimental Setups
B.1 Models and Datasets
Our experiments mainly evaluate the effectiveness of \method on the LLaMA-2 (Touvron et al., 2023b) and CodeLLaMA (Rozière et al., 2023) series. We provide empirical validation on a diverse range of generation tasks. For summarization, mathematical reasoning, storytelling, and code generation tasks, we choose the CNN/Daily Mail (CNN/DM) (Nallapati et al., 2016), GSM8K (Cobbe et al., 2021), TinyStories (Eldan & Li, 2023), and HumanEval (Chen et al., 2021) datasets, respectively. We perform 1-shot evaluation for CNN/DM and TinyStories, and 5-shot evaluation for GSM8K. The maximum generation lengths on CNN/DM, GSM8K, and TinyStories are set to 64, 64, and 128, respectively. We compare pass@1 and pass@10 for HumanEval. In our further analysis, we include three more datasets to validate the capability of \method in handling dynamic input data streams. Specifically, we select Alpaca (Taori et al., 2023), WMT14 DE-EN, and Natural Questions (Kwiatkowski et al., 2019) for the instruction following, translation, and question answering tasks, respectively. The maximum generation lengths for HumanEval and all analyses are set to 512. We randomly sample 1000 instances from the test set of each dataset except HumanEval.
B.2 Inference Setup
In the optimization phase, we employ both random search and Bayesian optimization to suggest potential skipped-layer-set candidates, striking a balance between optimization performance and efficiency. The context window $\gamma$ is set to 32, and the maximum draft length $N_{D}$ is set to 25. For random sampling in code generation tasks, we apply a temperature of 0.6 and $top\_p=0.95$ . The maximum number of layer set optimization steps $S$ is set to 1000, with Bayesian optimization performed every $\beta=25$ steps. The optimization phase terminates early if the matchness score does not improve for 300 steps or exceeds 0.95. The layer skip ratio $r$ is fixed at 0.45 for the 13B model and 0.5 for the 34B and 70B models. All experiments were conducted using PyTorch 2.1.0 on 4 $×$ NVIDIA RTX A6000 GPUs (40GB) with CUDA 12.1 and an Intel(R) Xeon(R) Platinum 8370C CPU with 32 cores. Inference for our method and all baselines was performed using the Huggingface transformers package. Following prior work, we adopt speculative sampling (Leviathan et al., 2023) as our acceptance strategy, and the batch size is set to 1.
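For reference, these inference-time hyperparameters can be collected into a single configuration object; the sketch below uses illustrative field names rather than the exact identifiers in our released code.

```python
from dataclasses import dataclass

@dataclass
class SwiftInferenceConfig:
    """Inference hyperparameters from Appendix B.2 (field names are illustrative)."""
    context_window: int = 32           # gamma: tokens used to score a candidate layer set
    max_draft_length: int = 25         # N_D: maximum tokens drafted per step
    max_opt_steps: int = 1000          # S: layer-set optimization budget
    bayes_interval: int = 25           # beta: Bayesian optimization every beta steps
    early_stop_patience: int = 300     # stop optimizing if matchness does not improve
    matchness_threshold: float = 0.95  # or once the matchness score exceeds this value
    skip_ratio: float = 0.45           # r: 0.45 for 13B models, 0.5 for 34B/70B models
    temperature: float = 0.6           # random sampling for code generation tasks
    top_p: float = 0.95
    batch_size: int = 1
```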
B.3 Evaluation Metrics
This subsection provides a detailed illustration of our evaluation metrics: the mean generated length $M$ and the token acceptance rate $\alpha$ . Specifically, the mean generated length $M$ refers to the average number of output tokens produced per forward pass of the target LLM; the acceptance rate $\alpha$ is defined as the ratio of accepted tokens to the total number of draft steps. In other words, it represents the expected probability that the target LLM accepts a token proposed by a forward pass of the draft model. These two metrics are independent of the computational hardware and are therefore considered more objective. Given the mean generated length $M$ , the acceptance rate $\alpha$ , and the layer skip ratio $r$ , the expected wall-time speedup during the acceleration phase is derived as follows:
$$
\mathbb{E}(\text{Spd.})=\frac{M}{(M-1)\times\frac{c}{\alpha}+1}=\frac{M\alpha}{(M-1)c+\alpha},\quad c=1-r, \tag{6}
$$
where $c$ is the cost coefficient defined in Leviathan et al. (2023), calculated as the ratio between the single forward time of the draft model and that of the target LLM. In summary, the ideal speedup increases with larger $M$ and $\alpha$ and smaller $c$ .
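To make Eq. (6) concrete, the short helper below evaluates the expected speedup from $M$, $\alpha$, and $r$; the example reproduces the LLaMA-2-13B result on CNN/DM reported in Table 4.

```python
def expected_speedup(M: float, alpha: float, r: float) -> float:
    """Expected wall-time speedup in the acceleration phase, following Eq. (6)."""
    c = 1.0 - r  # cost coefficient: draft forward time relative to the target LLM
    return (M * alpha) / ((M - 1.0) * c + alpha)

# Example: LLaMA-2-13B on CNN/DM (Table 4) with M = 4.34, alpha = 0.99, r = 0.45
print(f"{expected_speedup(4.34, 0.99, 0.45):.2f}x")  # ~1.52x
```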
Appendix C Experimental Details
C.1 Details of Main Results
We present the detailed statistics of our main experimental results in Table 4. \method consistently achieves a token acceptance rate $\alpha$ exceeding $90\%$ across all evaluation settings, with the mean generated length M ranging from 2.99 to 8.21. These statistics indicate strong behavior alignment between the target LLM and its layer-skipping draft variant, as discussed in Section 5.2. Additionally, we report the expected speedup $\mathbb{E}(\text{Spd.})$ calculated using Eq (6), indicating that the current implementation of \method has significant potential for further optimization to boost its efficiency.
| Model | Method | CNN/DM M | CNN/DM $\alpha$ | CNN/DM $\mathbb{E}$(Spd.) | GSM8K M | GSM8K $\alpha$ | GSM8K $\mathbb{E}$(Spd.) | TinyStories M | TinyStories $\alpha$ | TinyStories $\mathbb{E}$(Spd.) | Avg. $\mathbb{E}$(Spd.) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LLaMA-2-13B | Vanilla | 1.00 | - | 1.00 $×$ | 1.00 | - | 1.00 $×$ | 1.00 | - | 1.00 $×$ | 1.00 $×$ |
| LLaMA-2-13B | \method | 4.34 | 0.99 | 1.52 $×$ | 3.13 | 0.98 | 1.43 $×$ | 8.21 | 1.00 | 1.65 $×$ | 1.53 $×$ |
| LLaMA-2-13B-Chat | Vanilla | 1.00 | - | 1.00 $×$ | 1.00 | - | 1.00 $×$ | 1.00 | - | 1.00 $×$ | 1.00 $×$ |
| LLaMA-2-13B-Chat | \method | 3.54 | 0.90 | 1.39 $×$ | 2.95 | 0.92 | 1.36 $×$ | 7.42 | 0.99 | 1.62 $×$ | 1.46 $×$ |
| LLaMA-2-70B | Vanilla | 1.00 | - | 1.00 $×$ | 1.00 | - | 1.00 $×$ | 1.00 | - | 1.00 $×$ | 1.00 $×$ |
| LLaMA-2-70B | \method | 3.85 | 0.99 | 1.58 $×$ | 2.99 | 0.98 | 1.48 $×$ | 6.17 | 0.99 | 1.71 $×$ | 1.59 $×$ |
Table 4: Detailed results of \method on text generation tasks using LLaMA-2 series. We report the mean generated length M, token acceptance rate $\alpha$ , and the expected speedup $\mathbb{E}(\text{Spd.})$ calculated by Eq (6) under the setting of greedy decoding with FP16 precision.
C.2 Additional Results on LLaMA-70B Models
In addition to the main results presented in Table 2, we provide further experimental evaluations of \method on LLaMA-70B models, including LLaMA-2-70B and LLaMA-3-70B, along with their instruction-tuned variants, under the same experimental settings (see Table 5). The results demonstrate that \method consistently achieves a $1.4×$ $\sim$ $1.5×$ wall-clock speedup across both the LLaMA-2 and LLaMA-3 series. Notably, \method achieves a token acceptance rate $\alpha$ exceeding $85\%$ across various evaluation settings, with the mean generated length M ranging from 3.43 to 7.80. Although differences in layer redundancy are observed between models (e.g., the skip ratio $r$ differs between LLaMA-2-70B and LLaMA-3-70B: during the optimization phase, $r$ for LLaMA-3-70B was automatically adjusted from 0.5 to 0.4 because the token acceptance rate $\alpha$ remained below the tolerance threshold of 0.7), \method demonstrates robust adaptability, maintaining consistent acceleration performance regardless of model version.
| Model | Method | CNN/DM M | CNN/DM $\alpha$ | CNN/DM Speedup | GSM8K M | GSM8K $\alpha$ | GSM8K Speedup | TinyStories M | TinyStories $\alpha$ | TinyStories Speedup | Avg. Speedup |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LLaMA-2-70B | Vanilla | 1.00 | - | 1.00 $×$ | 1.00 | - | 1.00 $×$ | 1.00 | - | 1.00 $×$ | 1.00 $×$ |
| LLaMA-2-70B | \method | 3.85 | 0.99 | 1.43 $×$ | 2.99 | 0.98 | 1.39 $×$ | 6.17 | 0.99 | 1.62 $×$ | 1.48 $×$ |
| LLaMA-2-70B-Chat | Vanilla | 1.00 | - | 1.00 $×$ | 1.00 | - | 1.00 $×$ | 1.00 | - | 1.00 $×$ | 1.00 $×$ |
| LLaMA-2-70B-Chat | \method | 3.43 | 0.85 | 1.31 $×$ | 3.12 | 0.89 | 1.32 $×$ | 5.45 | 0.95 | 1.53 $×$ | 1.37 $×$ |
| LLaMA-3-70B | Vanilla | 1.00 | - | 1.00 $×$ | 1.00 | - | 1.00 $×$ | 1.00 | - | 1.00 $×$ | 1.00 $×$ |
| LLaMA-3-70B | \method | 5.43 | 0.99 | 1.41 $×$ | 4.11 | 0.99 | 1.37 $×$ | 7.80 | 0.99 | 1.51 $×$ | 1.43 $×$ |
| LLaMA-3-70B-Instruct | Vanilla | 1.00 | - | 1.00 $×$ | 1.00 | - | 1.00 $×$ | 1.00 | - | 1.00 $×$ | 1.00 $×$ |
| LLaMA-3-70B-Instruct | \method | 3.76 | 0.95 | 1.33 $×$ | 3.92 | 0.93 | 1.31 $×$ | 5.87 | 0.97 | 1.43 $×$ | 1.36 $×$ |
Table 5: Experimental results of \method on text generation tasks using the LLaMA-70B series. We report the mean generated length M, token acceptance rate $\alpha$ , and speedup ratio under the setting of greedy decoding. The skip ratio $r$ is set to 0.5 for LLaMA-2 models and 0.4 for LLaMA-3 models.
C.3 Detailed Results of LLM Backbones
To further validate the effectiveness of \method, we conducted experiments using additional LLM backbones beyond the LLaMA series. Specifically, we select two recent, representative LLMs: Yi-34B for text generation and DeepSeek-Coder-33B for code generation tasks. The experimental results are presented in Tables 6 and 7, demonstrating the efficacy of \method across these LLM backbones. \method achieves a consistent $1.2×$ $\sim$ $1.3×$ wall-clock speedup on the Yi-34B series and a $1.3×$ $\sim$ $1.5×$ speedup on the DeepSeek-Coder-33B series. Notably, for the DeepSeek-Coder-33B series, \method attains a mean generated length M ranging from 3.16 to 4.17, alongside a token acceptance rate $\alpha$ exceeding $83\%$ . These findings substantiate the utility of \method as a general-purpose, plug-and-play SD method, offering promising inference acceleration across diverse LLM backbones.
| Model | Method | CNN/DM M | CNN/DM $\alpha$ | CNN/DM Speedup | GSM8K M | GSM8K $\alpha$ | GSM8K Speedup | TinyStories M | TinyStories $\alpha$ | TinyStories Speedup | Avg. Speedup |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Yi-34B | Vanilla | 1.00 | - | 1.00 $×$ | 1.00 | - | 1.00 $×$ | 1.00 | - | 1.00 $×$ | 1.00 $×$ |
| Yi-34B | \method | 2.74 | 0.94 | 1.30 $×$ | 2.65 | 0.97 | 1.28 $×$ | 3.25 | 0.98 | 1.34 $×$ | 1.31 $×$ |
| Yi-34B-Chat | Vanilla | 1.00 | - | 1.00 $×$ | 1.00 | - | 1.00 $×$ | 1.00 | - | 1.00 $×$ | 1.00 $×$ |
| Yi-34B-Chat | \method | 2.84 | 0.91 | 1.29 $×$ | 2.77 | 0.89 | 1.27 $×$ | 2.52 | 0.80 | 1.21 $×$ | 1.26 $×$ |
Table 6: Experimental results of \method on text generation tasks using Yi-34B series. We report the mean generated length M, token acceptance rate $\alpha$ and speedup ratio under the setting of greedy decoding with FP16 precision. The skip ratio $r$ is set to 0.45.
Appendix D Further Analysis and Discussion
D.1 Ablation Study
Table 8 presents the ablation study of \method using LLaMA-2-13B on CNN/DM. The experimental results demonstrate that each component of \method contributes to its overall speedup. Specifically, early-stopping drafting effectively reduces the number of ineffective draft steps, improving the token acceptance rate $\alpha$ by $55\%$ . Dynamic verification further enhances efficiency by selecting suitable draft candidates from the top- $k$ predictions based on their confidence scores; removing this component leads to a decrease in both the mean generated length (M) and the overall speedup ratio. Additionally, the optimization phase refines the set of skipped layers, improving the speedup by $34\%$ compared to the initial uniform layer-skipping strategy. In summary, these results confirm the effectiveness of each proposed innovation in \method.
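To make the early-stopping drafting component concrete, the sketch below halts drafting once the draft model's confidence drops below a threshold; the `draft_next_token` interface and the threshold value are illustrative assumptions, and dynamic verification is omitted for brevity.

```python
def draft_with_early_stopping(draft_next_token, prefix, max_draft_len=25,
                              confidence_threshold=0.5):
    """Sketch of early-stopping drafting (interface and threshold are illustrative).

    `draft_next_token(tokens)` is assumed to return the layer-skipped draft model's
    next token together with its confidence (e.g., the top-1 probability). Drafting
    halts once confidence falls below the threshold, avoiding draft tokens that the
    target LLM is unlikely to accept during verification.
    """
    drafted = []
    for _ in range(max_draft_len):
        token, confidence = draft_next_token(prefix + drafted)
        if confidence < confidence_threshold:
            break  # stop drafting early instead of wasting further draft steps
        drafted.append(token)
    return drafted
```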
| Tasks | Methods | DS-Coder-33B | | | DS-Coder-33B-Instruct | | |
| --- | --- | --- | --- | --- | --- | --- | --- |
| | | M | $\alpha$ | Speedup | M | $\alpha$ | Speedup |
| HumanEval (pass@1) | Vanilla | 1.00 | - | 1.00× | 1.00 | - | 1.00× |
| HumanEval (pass@1) | \method | 4.97 | 0.99 | 1.54× | 3.80 | 0.88 | 1.39× |
| HumanEval (pass@10) | Vanilla | 1.00 | - | 1.00× | 1.00 | - | 1.00× |
| HumanEval (pass@10) | \method | 3.16 | 0.91 | 1.36× | 3.74 | 0.83 | 1.31× |
Table 7: Experimental results of \method on code generation tasks using the DeepSeek-Coder-33B series. The skip ratio $r$ is set to 0.5. We use greedy decoding for pass@1 and random sampling with a temperature of 0.6 for pass@10. “DS” is an abbreviation for DeepSeek.
| Methods | M | $\alpha$ | Speedup |
| --- | --- | --- | --- |
| \method | 5.82 | 0.98 | 1.560× |
| w/o early-stopping | 11.16 | 0.43 | 0.896× |
| w/o dynamic ver. | 4.39 | 0.90 | 1.342× |
| w/o optimization | 2.15 | 0.90 | 1.224× |
Table 8: Ablation study of \method. “ver.” is an abbreviation for verification.
| $\gamma$ | M | $\alpha$ | Speedup | Opt. Latency |
| --- | --- | --- | --- | --- |
| 16 | 3.91 | 0.95 | 1.341× | 0.242ms |
| 32 | 5.82 | 0.98 | 1.560× | 0.244ms |
| 64 | 5.56 | 0.99 | 1.552× | 0.312ms |
| 128 | 5.61 | 0.98 | 1.550× | 0.425ms |
Table 9: Speedups of \method across different context window sizes $\gamma$. The latency of the optimization step is reported to illustrate the associated overhead.
D.2 Context Window
In Table 9, we present a detailed analysis of the context window size $\gamma$, which determines the number of LLM-generated tokens used in the layer-set optimization process. A smaller $\gamma$ introduces greater randomness into the matchness score calculation, resulting in suboptimal performance, whereas a larger $\gamma$ increases the computational overhead of the optimization step. The results indicate that $\gamma=32$ provides an optimal balance between optimization performance and computational overhead.
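For concreteness, the sketch below shows one way a matchness score over the last $\gamma$ tokens could be computed with a single teacher-forced forward pass of the layer-skipped model. It is a simplified stand-in under assumptions: `draft_forward` is a hypothetical callable returning per-position logits, and greedy token agreement is used as the scoring rule.

```python
import torch

def matchness_score(draft_forward, prefix_ids, target_tokens, gamma=32):
    """Score a candidate skipped-layer set by how often its greedy predictions
    reproduce the last `gamma` tokens already generated by the full target LLM.

    `draft_forward` (hypothetical) maps ids of shape (1, L) to logits (1, L, vocab).
    A single teacher-forced pass over the window suffices because the target
    tokens are already available.
    """
    window = torch.tensor(target_tokens[-gamma:]).view(1, -1)   # (1, gamma)
    ids = torch.cat([prefix_ids, window], dim=-1)               # (1, prefix + gamma)
    logits = draft_forward(ids)                                 # (1, prefix + gamma, vocab)
    # Position i predicts token i + 1, so the window tokens are predicted by
    # the gamma positions immediately preceding them.
    preds = logits[0, -window.size(1) - 1:-1].argmax(dim=-1)    # (gamma,)
    return (preds == window[0]).float().mean().item()
```

Under this view, a small $\gamma$ averages over few tokens and makes the score noisy, while a large $\gamma$ lengthens the teacher-forced pass, matching the trade-off observed in Table 9.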
D.3 Comparisons with Prior Layer-Skipping Methods
In this subsection, we compare \method with two representative layer-skipping speculative decoding (SD) methods: LayerSkip (Elhoushi et al., 2024) and Self-SD (Zhang et al., 2024). LayerSkip introduces an early-exit drafting scheme in which the LLM generates drafts using only its earlier layers; however, it requires a time-consuming pretraining or finetuning process that modifies the original output distribution of the target LLM, which may compromise the reliability of the generated outputs. Self-SD constructs the compact draft model by skipping intermediate layers, relying on an extensive Bayesian Optimization process before inference to determine the optimal skipped layers within the target LLM. As discussed in Section 3.1, while effective, Self-SD suffers from significant optimization latency (nearly 7.5 hours for LLaMA-2-13B and 20 hours for LLaMA-2-70B), which limits its practicality and generalizability across diverse models and tasks.
Tables 10 and 11 summarize the comparative results in terms of acceleration performance and training/optimization costs, respectively. Below, we detail the advantages of \method over these methods:
- Comparison with LayerSkip: LayerSkip achieves an aggressive skip ratio ( $r=0.8$ ), resulting in an average generated length of $2.42$ and a token acceptance rate of $0.64$ . However, its reliance on pretraining or finetuning alters the original distribution of the target LLM, potentially reducing reliability. In contrast, \method preserves the original distribution of the target LLM while delivering a comparable $1.56×$ speedup without requiring additional training.
- Comparison with Self-SD: Self-SD relies on a time-intensive Bayesian Optimization process, which incurs substantial latency before inference. \method eliminates this bottleneck through an on-the-fly optimization strategy, achieving an approximately $\mathbf{200×}$ reduction in optimization latency while maintaining the same $1.56×$ speedup. We further augmented Self-SD with our Confidence-aware Inference Acceleration strategy (Self-SD w/ dynamic ver.). Even compared to this augmented version, \method achieves competitive speedups.
These findings highlight the efficiency and practicality of \method over previous layer-skipping SD methods. We hope that \method, as the first plug-and-play layer-skipping SD approach, provides valuable insights and inspires further research in this area.
| Methods | Plug-and-play | Lossless | $r$ | M | $\alpha$ | Speedup |
| --- | --- | --- | --- | --- | --- | --- |
| LayerSkip | ✗ | ✗ | 0.80 | 2.42 | 0.64 | 1.64× |
| Self-SD | ✗ | ✓ | 0.43 | 4.02 | 0.85 | 1.29× |
| Self-SD w/ dynamic ver. | ✗ | ✓ | 0.43 | 5.69 | 0.98 | 1.52× |
| \method (Ours) | ✓ | ✓ | 0.45 | 5.82 | 0.98 | 1.56× |
Table 10: Comparison of \method and prior layer-skipping SD methods. We report the skip ratio $r$, mean generated length M, token acceptance rate $\alpha$, and speedup ratio under greedy decoding. The results are obtained with LLaMA-2-13B on CNN/DM. “ver.” is an abbreviation for verification.
| Methods | Training / Optimization Cost | Optimization Latency |
| --- | --- | --- |
| LayerSkip | $50\times 10^{3}$ training steps with 64 A100 (80GB) | - |
| Self-SD | 1000 Bayesian Optimization iterations before inference | $\sim$ 7.5 hours |
| \method (Ours) | N/A | $\sim$ 2 minutes |
Table 11: Comparison of \method and prior layer-skipping SD methods in terms of training cost and optimization latency for LLaMA-2-13B. Training costs are sourced from the original papers, while optimization latency is measured with our re-implementation on an A6000 GPU. \method demonstrates a $\sim$ $\mathbf{200×}$ reduction in optimization latency compared with previous methods without requiring additional training, establishing it as an efficient plug-and-play SD method.
D.4 Detailed Comparisons with Self-SD
[Figure 11 plot: Speedup (y-axis) versus Optimization Latency in seconds (x-axis); Self-SD is shown as a curve whose speedup rises gradually with latency, while \method appears as a single point at near-zero latency.]
Figure 11: Comparison of \method and Self-SD in terms of optimization latency and speedup. \method achieves a $1.56×$ speedup with an optimization latency of 116 seconds.
In this subsection, we provide a detailed comparison of \method and Self-SD (Zhang et al., 2024). Figure 11 presents the speedups of Self-SD across varying optimization latencies, reflecting the increase in Bayesian Optimization iterations. As shown, Self-SD achieves minimal speedup improvement (almost equivalent to uniform layer skipping) with fewer than 50 Bayesian iterations, corresponding to an optimization latency below 1474 seconds. At 100 Bayesian iterations, Self-SD achieves a $1.19×$ speedup; however, its optimization latency is nearly 25 times longer than that of \method (2898s vs. 116s).
Table 12 compares \method and Self-SD (first two rows) under similar optimization latencies. The results highlight \method’s superiority in both optimization efficiency (116s vs. 155s) and speedup ($1.56×$ vs. $0.97×$). Even when compared to the augmented version of Self-SD (w/ dynamic verification), \method achieves a substantial $30\%$ relative improvement in speedup. Below, we analyze the factors contributing to this advantage (elaborated in Section 3.1):
- Optimization Objective Granularity: Self-SD calculates its optimization objective at a multi-sample level, requiring sequential decoding of all selected training samples (e.g., 8 samples with 32 tokens each) for every iteration to optimize Equation 1. In contrast, \method adopts a step-level optimization objective, optimizing the layer set dynamically at each decoding step.
- Bayesian Optimization Complexity: The computational complexity of Bayesian optimization increases significantly with the number of iterations. \method mitigates this burden by combining random search with interval Bayesian optimization, accelerating convergence of the optimization process while reducing computational overhead (see the sketch after this list).
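The sketch below illustrates this search loop in a simplified form: an initial random-search phase followed by periodic local refinement of the best skipped-layer set found so far, standing in for the interval Bayesian optimization used by \method. The `score_fn` argument is assumed to return the matchness score of a candidate set on the current context window (e.g., along the lines of the function sketched in Appendix D.2); all step counts are illustrative.

```python
import random

def optimize_layer_set(score_fn, num_units, skip_ratio=0.45,
                       random_steps=20, refine_every=4, total_steps=100):
    """On-the-fly search over skipped-layer sets (simplified stand-in).

    `num_units` counts skippable sub-layers (attention and MLP blocks);
    `score_fn(skip_set)` returns the matchness score of a candidate set.
    """
    n_skip = int(num_units * skip_ratio)
    units = list(range(num_units))

    def random_set():
        return frozenset(random.sample(units, n_skip))

    best_set, best_score = random_set(), float("-inf")
    for step in range(total_steps):
        if step < random_steps or step % refine_every:
            cand = random_set()                       # exploration: fresh random candidate
        else:
            swapped = set(best_set)                   # exploitation: swap one skipped unit
            swapped.remove(random.choice(sorted(swapped)))
            swapped.add(random.choice([u for u in units if u not in swapped]))
            cand = frozenset(swapped)
        score = score_fn(cand)                        # step-level objective on recent tokens
        if score > best_score:
            best_set, best_score = cand, score
    return best_set, best_score
```

Because each candidate is scored on tokens the target LLM has already produced, the search amortizes over normal decoding rather than requiring a separate offline optimization phase.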
To further examine optimization trade-offs, we reduce Self-SD’s sequential optimization requirement to a single sample with 8 tokens, enabling more Bayesian Optimization iterations within a comparable latency. The corresponding results, denoted as Self-SD$_c$ (rows 3-4), are presented in Table 12. Even under these settings, \method demonstrates substantially superior speedup and efficiency, highlighting the effectiveness of our proposed strategies.
| Methods | Opt. Steps | Bayesian Iter. | Opt. Latency (s) | $r$ | M | $\alpha$ | Speedup |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Self-SD | - | 5 | 155 | 0.50 | 1.80 | 0.57 | 0.97× |
| Self-SD w/ dynamic ver. | - | 5 | 155 | 0.50 | 2.07 | 0.86 | 1.17× |
| Self-SD$_c$ | - | 30 | 199 | 0.45 | 2.08 | 0.70 | 1.04× |
| Self-SD$_c$ w/ dynamic ver. | - | 30 | 199 | 0.45 | 2.44 | 0.93 | 1.22× |
| \method (Ours) | 552 | 23 | 116 | 0.45 | 5.82 | 0.98 | 1.56× |
Table 12: Comparison of \method and Self-SD at similar optimization latencies. We report the skip ratio $r$, mean generated length M, token acceptance rate $\alpha$, and speedup under greedy decoding. The results are obtained with LLaMA-2-13B on CNN/DM, with “ver.” indicating verification.
D.5 The Necessity of Plug-and-Play SD Methods
There has been a surge of recent interest in Speculative Decoding (SD), leading to the development of numerous promising strategies in the field, which can be broadly categorized into two directions:
- Training-required SD. These methods require additional pretraining or fine-tuning to improve speculative accuracy, often involving the integration of extra parameters. For instance, Medusa (Cai et al., 2024) and Eagle (Li et al., 2024a; b) incorporate lightweight draft heads into target LLMs and fine-tune them, achieving $3×$ $\sim$ $4×$ speedups.
- Plug-and-play SD. These approaches offer immediate acceleration of LLM inference without relying on auxiliary models or additional training. Notable examples include Parallel Decoding (Santilli et al., 2023) and Lookahead (Fu et al., 2024), which leverage Jacobi-based drafting, achieving $1.2×$ $\sim$ $1.4×$ speedups across various LLMs.
While training-required SD methods generally deliver higher speedups, their reliance on additional training and parameters limits both their generalizability and practicality. This has sparked debate within the academic community regarding the value of plug-and-play SD methods. To address these concerns, we present a detailed analysis below to highlight the necessity of plug-and-play SD approaches and underscore the contributions of our proposed \method:
1) Training costs of training-required SD methods are often prohibitive.
Training-required methods such as Medusa (Cai et al., 2024) and Eagle (Li et al., 2024a; b), while achieving higher speedups, incur substantial training costs. Despite efforts to reduce training overhead, these methods still require extensive computational resources (e.g., GPU time and datasets) to deliver valid acceleration performance. For example, Eagle requires 1–2 days of training with 8 RTX 3090 GPUs for LLaMA-33B or up to 2 days on 4 A100 (40G) GPUs for LLaMA-2-Chat-70B, using a dataset of 70k dialogues from ShareGPT. Such computational burdens introduce challenges in several scenarios:
- Users must train new draft models for unsupported target LLMs. For example, if the user’s target LLM is not among the released checkpoints or if the base model is updated (e.g., LLaMA-3.x), users are forced to train a new draft model, which may exceed their available GPU resources (e.g., GPU time).
- Users with small-scale acceleration needs face inefficiencies. For instance, a researcher needing to evaluate a small set of samples (e.g., 10 hours of evaluation) would find the 1–2 day training requirement disproportionate, hindering overall research efficiency.
2) Plug-and-play SD fills critical gaps unaddressed by training-required methods.
Plug-and-play SD methods, including \method, are model-agnostic and training-free, providing immediate acceleration without requiring additional computational overhead. These attributes are particularly critical for large models (70B–340B) and for use cases requiring rapid integration. The growing adoption of plug-and-play SD methods, such as Lookahead (Fu et al., 2024), further underscores their importance. These methods cater to scenarios where ease of use and computational efficiency are paramount, validating their research significance.
3) \method pioneers plug-and-play SD with layer-skipping drafting.
\method represents the first plug-and-play SD method to incorporate layer-skipping drafting. It consistently achieves $1.3×$ $\sim$ $1.6×$ speedups over vanilla autoregressive decoding across diverse models and tasks, and it demonstrates $10\%$ $\sim$ $20\%$ higher efficiency than Lookahead (Fu et al., 2024). Beyond these gains, \method introduces a complementary research direction for existing plug-and-play SD: its approach is orthogonal to Lookahead Decoding, and combining the two could further amplify their collective efficiency. We believe this study provides valuable insights and paves the way for future SD advancements, particularly for practical and cost-effective LLM acceleration.
To sum up, while training-required SD methods achieve higher speedups, their high computational costs and limited flexibility reduce practicality. Plug-and-play SD methods, like \method, offer training-free, model-agnostic acceleration, making them ideal for diverse scenarios. We hope this clarification fosters greater awareness and recognition of the value of plug-and-play SD research.
D.6 Additional Discussions with Related Work
In this work, we leverage the inherent layer sparsity of LLMs through layer skipping, which selectively bypasses intermediate layers within the target LLM to construct the compact draft model. In addition to layer skipping, there has been another research direction in SD that focuses on early exiting, where inference halts at earlier layers to improve computational efficiency (Yang et al., 2023; Hooper et al., 2023; Bae et al., 2023; Elhoushi et al., 2024). Particularly, LayerSkip (Elhoushi et al., 2024) explores early-exit drafting by generating drafts using only the earlier layers of the target LLM, followed by verification with the full-parameter model. This approach requires training involving layer dropout and early exit losses. Similarly, PPD (Yang et al., 2023) employs early exiting but trains individual classifiers for each layer instead of relying on a single final-layer classifier. Although effective, these methods rely on extensive fine-tuning to enable early-exiting capabilities, incurring substantial computational costs. Moreover, the training process alters the target LLM’s original output distribution, potentially compromising the reliability of generated outputs. In contrast, our proposed \method does not require auxiliary models or additional training, preserving the original output distribution of the target LLM while delivering comparable acceleration benefits.
There has been a parallel line of training-required SD research focusing on non-autoregressive drafting strategies (Stern et al., 2018; Cai et al., 2024; Gloeckle et al., 2024; Kim et al., 2024). These methods integrate multiple draft heads into the target LLM, enabling the parallel generation of draft tokens at each decoding step. Notably, Kim et al. (2024) builds on the Blockwise Parallel Decoding paradigm introduced in Stern et al. (2018), accelerating inference by refining block drafts with task-independent n-grams and lightweight rescorers using smaller LMs. While these approaches achieve notable acceleration, they also necessitate extensive training of draft models. \method complements these efforts by pioneering plug-and-play SD that eliminates the need for auxiliary models or additional training, offering a more flexible and practical solution for diverse use cases.
D.7 Optimization Steps
We present the detailed skipped-layer configurations selected by \method at various optimization steps in Figure 12. As optimization proceeds, the skipped layer set is gradually refined toward the optimal configuration.
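As a reference for how such a configuration is applied at inference time, the sketch below wraps a decoder layer so that its attention and MLP sub-layers can be bypassed independently according to the skipped-layer sets. The sub-module names (`self_attn`, `mlp`, and the two layer norms) follow common LLaMA-style implementations and are assumptions rather than \method's exact code.

```python
import torch.nn as nn

class SkippableDecoderLayer(nn.Module):
    """Decoder layer whose attention and MLP sub-layers can be skipped independently."""

    def __init__(self, layer, layer_idx, skip_attn, skip_mlp):
        super().__init__()
        self.layer = layer            # wrapped LLaMA-style decoder layer (assumed interface)
        self.layer_idx = layer_idx
        self.skip_attn = skip_attn    # set of layer indices whose attention is skipped
        self.skip_mlp = skip_mlp      # set of layer indices whose MLP is skipped

    def forward(self, hidden_states, **attn_kwargs):
        if self.layer_idx not in self.skip_attn:
            residual = hidden_states
            h = self.layer.input_layernorm(hidden_states)
            attn_out, *_ = self.layer.self_attn(h, **attn_kwargs)  # assumes a (output, ...) tuple
            hidden_states = residual + attn_out
        if self.layer_idx not in self.skip_mlp:
            residual = hidden_states
            h = self.layer.post_attention_layernorm(hidden_states)
            hidden_states = residual + self.layer.mlp(h)
        return hidden_states
```

Drafting then simply runs the model with the current skip sets, while verification runs it with both sets empty, so no auxiliary parameters are introduced.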
(a) Optimization Step 0
(b) Optimization Step 64
(c) Optimization Step 128
(d) Optimization Step 512
Figure 12: Visualization of skipped layer set configurations of LLaMA-2-13B optimized by \method on different optimization steps. Gray squares indicate retained layers, red squares denote skipped attention layers, and blue squares signify skipped MLP layers.
D.8 Case Study
We present examples from CNN/DM and HumanEval in Tables 13 and 14, respectively, comparing token acceptance between \method and the prior plug-and-play approach Lookahead (Fu et al., 2024). The results show that \method accepts a larger number of draft tokens, distributed uniformly throughout the outputs, highlighting the effectiveness of our approach.
Table 13: CNN/DM example using LLaMA-2-13B; red tokens indicate accepted drafts.
| Methods | Output |
| --- | --- |
| Lookahead | A Canadian law student was greeted by a friendly note from a stranger when he returned to his car after leaving the head lights on all day . |
| \method | A Canadian law student was gre et ed by a friendly note from a stranger when he returned to his car after leaving the headlights on all day . |
Table 14: HumanEval example using CodeLLaMA-13B; red tokens indicate accepted drafts.
| Methods | Output |
| --- | --- |
| Lookahead | for i in range(len(numbers) - 1): if abs(numbers [i] - numbers [i + 1]) <= threshold: return True return False def has_close_elements_2(numbers: List[float], threshold: float) ->bool: ””” Check if in given list of numbers, are any two numbers closer to each other than given threshold. >>>has_close_elements_2 |
| \method | for i in range(len (numbers) - 1): if abs (numbers [i] - numbers[i + 1]) <= threshold: return True return False def has_close_elements_ 2(numbers: List[float], threshold: float) -> bool: ””” Check if in given list of numbers, are any two numbers closer to each other than given threshold. >>> has _close_elements_2 |