# LLMEasyQuant: Scalable Quantization for Parallel and Distributed LLM Inference
**Authors**:
- \NameDong Liu \Emaildong.liu.dl2367@yale.edu (\addrDepartment of Computer Science)
- New Haven
- USA
- \NameYanxuan Yu \Emailyy3523@columbia.edu (\addrCollege of Engineering)
- New York
- USA
\jmlrvolume
tbd \jmlryear 2025 \jmlrworkshop International Conference on Computational Optimization
Abstract
As large language models (LLMs) grow in size and deployment scale, quantization has become an essential technique for reducing memory footprint and improving inference efficiency. However, existing quantization toolkits often lack transparency, flexibility, and system-level scalability across GPUs and distributed environments. We present LLMEasyQuant, a modular, system-aware quantization framework designed for efficient, low-bit inference of LLMs on single-node multi-GPU, multi-node, and edge hardware. LLMEasyQuant supports a wide range of quantization methods—including Symmetric Quantization, ZeroQuant, SmoothQuant, and SimQuant—with unified interfaces for per-layer calibration, bitwidth assignment, and runtime adaptation. It integrates fused CUDA kernels with NCCL-based distributed synchronization and supports both static and online quantization. Empirical results show that LLMEasyQuant can achieve substiantial speed up in GEMM execution, HBM load time, and near-linear multi-GPU scaling. Ablation studies further validate its ability to balance latency, memory, and accuracy under diverse deployment conditions. LLMEasyQuant offers a practical quantization serving system for scalable, hardware-optimized LLM inference.
1 Introduction
Large Language Models (LLMs) have revolutionized modern AI applications, achieving breakthroughs in tasks such as reasoning, code generation, and multilingual conversation touvron2023llama; jiang2023mistral; bai2023qwen. However, as model sizes scale into the billions of parameters, the accompanying memory and compute requirements have become a major bottleneck for deployment and inference, particularly on resource-constrained devices. Quantization has emerged as a key technique for reducing the precision of weights and activations to improve memory efficiency and inference speed frantar2022gptq; yao2022zeroquant; xiao2023smoothquant.
Despite significant progress in LLM quantization, existing toolkits such as TensorRT-LLM nvidia2024tensorrt and Optimum-Quanto optimum-quanto are often not designed for accessibility or flexibility. Their usage typically involves complex internal APIs, tight hardware dependencies, and limited customization support, making them ill-suited for researchers or developers seeking rapid experimentation, education, or lightweight deployment. Furthermore, while many quantization techniques have been proposed—ranging from symmetric and zero-point quantization to recent advances such as SmoothQuant xiao2023smoothquant, SimQuant hooper2024kvquant, AWQ lin2024awq, and GPTQ frantar2022gptq —there exists no unified, beginner-friendly framework that supports modular use and comparative evaluation across modern architectures.
In this work, we introduce LLMEasyQuant, a user-friendly quantization toolkit designed to streamline the application and evaluation of quantization techniques on LLMs. LLMEasyQuant supports multiple quantization backends including symmetric quantization faraone2018syq, ZeroQuant yao2022zeroquant, SmoothQuant xiao2023smoothquant, and a novel SimQuant method based on KV cache quantization hooper2024kvquant. It also features support for activation-aware calibration and mixed-precision bitwidth search, implemented in a modular and interpretable form. LLMEasyQuant provides consistent interfaces across quantization schemes, allowing developers to quickly prototype, visualize quantized values, and evaluate tradeoffs between model size, perplexity, and runtime.
We conduct extensive experiments on GPT-2 models and evaluate LLMEasyQuant across multiple quantization settings. Results on standard language modeling benchmarks show that our toolkit enables robust INT8 quantization with minimal degradation in perplexity, and further benefits from optional bitwidth optimization and activation smoothing. For example, SmoothQuant and SimQuant integrated in LLMEasyQuant reduce perplexity by up to $20\%$ relative to baseline 8-bit quantization. Meanwhile, our layer-wise quantization with per-layer bitwidth search achieves up to 3.2× model size reduction with acceptable accuracy loss.
Our contributions are threefold. We identify key usability and deployment limitations in existing LLM quantization frameworks and motivate the need for a transparent, developer-friendly toolkit. We present LLMEasyQuant, a modular quantization library that supports symmetric, zero-point, SmoothQuant, and SimQuant methods, along with calibration and bitwidth search. We conduct a comprehensive evaluation across LLM quantization methods, demonstrating competitive performance on perplexity and runtime with easy-to-use abstractions.
2 Methodology
In this section, we present the system design of LLMEasyQuant, a quantization toolkit designed for modular, extensible, and efficient low-bit deployment of large language models (LLMs). We begin by motivating the need for practical quantization support, then introduce the architecture and design of LLMEasyQuant with multiple backend techniques and algorithmic variants.
2.1 System Design of LLMEasyQuant
LLMEasyQuant is composed of three core layers: (1) an Algorithm Backend Layer containing implementations of major quantization strategies; (2) an Execution Runtime Layer that dispatches quantization to model modules, including per-layer and per-tensor granularity; and (3) an optional Distributed Controller Layer that supports multi-GPU quantization and evaluation.
Architecture-Aware Optimization
LLMEasyQuant integrates low-level performance primitives via PyTorch custom ops or fused CUDA kernels. Communication-aware quantization routines (e.g., SimQuant on KV caches) are compatible with NCCL-based distributed inference pipelines. LLMEasyQuant supports single-node multi-GPU quantization using NCCL + RDMA/InfiniBand + ring-exchange for parameter distribution, TCP fallback and multi-node deployment via PyTorch’s distributed runtime or DeepSpeed-style remote buffers, and per-layer bitwidth search using either grid search, entropy heuristics, or learned policy.
Workflow
The execution of LLMEasyQuant consists of four phases. First, Module Extraction traces the model and identifies quantizable modules (e.g., Linear, Attention). Second, Scale Estimation computes scales and zero points depending on the backend (e.g., AbsMax, SmoothQuant). Third, Quantization quantizes the parameters (weights, optionally activations) in-place or out-of-place. Finally, Evaluation assesses the impact via perplexity, memory, latency, and accuracy metrics.
This structured and extensible design allows users to benchmark quantization strategies across LLMs (e.g., GPT-2, LLaMA, Mistral) and workloads (e.g., next-token prediction, question answering). In the following subsections, we present detailed algorithmic formulations of each quantization backend.
3 System Design
LLMEasyQuant is designed as a high-performance quantization runtime and compilation framework for large-scale LLMs, capable of operating in heterogeneous settings including single-node multi-GPU servers, multi-node HPC clusters, and resource-constrained edge GPUs. It integrates static and online quantization under a unified abstraction with explicit hardware acceleration and communication scheduling support. In this section, we elaborate on the system design underpinning LLMEasyQuant, particularly focusing on the generalized parallel quantization execution model, runtime adaptation, and distributed scheduling strategies.
3.1 Generalized Parallel Quantization Runtime
To maximize parallelism and scalability, LLMEasyQuant formulates quantization as a streaming operator over arbitrary tensor regions $X^{(p)}⊂eq X$ assigned to worker units (threads, warps, or GPUs). Each partition operates independently and asynchronously, allowing overlapped execution of quantization, communication, and activation tracking. Specifically, we define a unified quantization mapping function $\mathcal{Q}_{\theta}$ parameterized by scale $\delta$ and offset $z$ :
$$
\hat{X}^{(p)}=\mathcal{Q}_{\theta}(X^{(p)})=\text{clip}\left(\left\lfloor\frac{X^{(p)}}{\delta^{(p)}}\right\rceil+z^{(p)},\;\text{range}\right) \tag{1}
$$
where $\delta^{(p)}$ is estimated online based on the current distribution of $X^{(p)}$ using exponential moment tracking:
$$
\delta^{(p)}_{t}=\alpha\cdot\delta^{(p)}_{t-1}+(1-\alpha)\cdot\max\left(\epsilon,\texttt{absmax}(X^{(p)}_{t})\right) \tag{2}
$$
All shards communicate metadata $(\delta^{(p)},z^{(p)})$ via collective broadcasts or sharded parameter queues depending on the deployment setting.
Input: $X^{(p)},\delta_{t-1}^{(p)},\alpha,\epsilon$
Output: $\hat{X}^{(p)},\delta_{t}^{(p)},z_{t}^{(p)}$
$r_{t}^{(p)}←\texttt{absmax}(X^{(p)})$ ;
$\delta_{t}^{(p)}←\alpha·\delta_{t-1}^{(p)}+(1-\alpha)·\max(r_{t}^{(p)},\epsilon)$ ;
$z_{t}^{(p)}←-\text{round}(\mu_{t}^{(p)}/\delta_{t}^{(p)})$ ;
$\hat{X}^{(p)}←\text{clip}\left(\text{round}(X^{(p)}/\delta_{t}^{(p)})+z_{t}^{(p)},-128,127\right)$ ;
return $\hat{X}^{(p)},\delta_{t}^{(p)},z_{t}^{(p)}$ ;
Algorithm 1 Asynchronous Parallel Quantization with Runtime Tracking
3.2 Hardware-Specific Scheduling and Fusion
To fully utilize memory and compute hierarchies, LLMEasyQuant supports kernel fusion over quantization, GEMM, and optional dequantization. Kernels are dispatched using tiling-based load balancers across HBM and shared SRAM regions. For NVIDIA architectures, fused Tensor Core kernels are launched with inline ‘mma.sync‘ and ‘dp4a‘ intrinsics. Memory copy and compute operations are staged as:
$$
\displaystyle\text{Launch CUDA Stream:}\quad\mathcal{S}\leftarrow\text{cudaStreamCreate()} \displaystyle\text{Copy:}\quad X_{\text{SMEM}}\leftarrow\text{cudaMemcpyAsync}(X_{\text{HBM}},\mathcal{S}) \displaystyle\text{Quantization Kernel:}\quad\hat{X}\leftarrow\text{QuantKernel}(X_{\text{SMEM}},\delta,z) \displaystyle\text{GEMM Kernel:}\quad Y\leftarrow\text{GEMM\_INT8}(\hat{X},W_{q}) \tag{3}
$$
The memory controller schedules tiles into SRAM blocks to minimize bank conflict and maximize coalesced loads.
3.3 Distributed Quantization Synchronization
For multi-node execution, LLMEasyQuant operates under the PyTorch DDP communication framework. Per-tensor or per-region scale parameters are synchronized globally using NCCL all-gather or broadcast primitives:
$$
\displaystyle\delta_{\ell}^{\text{global}} \displaystyle\leftarrow\bigcup_{p=1}^{P}\texttt{NCCL\_AllGather}(\delta_{\ell}^{(p)}) \displaystyle z_{\ell}^{\text{global}} \displaystyle\leftarrow\bigcup_{p=1}^{P}\texttt{NCCL\_AllGather}(z_{\ell}^{(p)}) \tag{7}
$$
In the presence of non-NCCL paths (e.g., edge server fallback or CPU-GPU hybrid), LLMEasyQuant transparently switches to TCP-based RPC with gradient compression and update aggregation.
3.4 Runtime Adaptation and Fused Recalibration
For activation quantization, the system supports dynamic rescaling without full recalibration. Each worker tracks a moving window of activation extrema and applies smoothing:
$$
\delta_{t}=\texttt{EMA}_{\alpha}\left(\max_{j\in\mathcal{W}_{t}}|A_{j}|\right),\quad\epsilon_{t}=\max(\epsilon_{0},\texttt{std}(A_{j})) \tag{9}
$$
where $\mathcal{W}_{t}$ is a recent window of activations. The fused CUDA kernel incorporates the quantization and GEMM stages into a single streaming block:
Input: $A_{t},W_{q},\delta_{t},z_{t}$
Output: $O_{t}$
$A_{q}←\texttt{round}(A_{t}/\delta_{t})+z_{t}$ ;
$O_{t}←\texttt{int8\_GEMM}(A_{q},W_{q})$ ;
return $O_{t}$ ;
Algorithm 2 Fused Online Quantization with Adaptive Scaling
3.5 ONNX-Compatible Quantization Serialization
For deployment in edge or inference-optimized runtimes (e.g., TensorRT, ONNX Runtime, NNAPI), LLMEasyQuant serializes quantized models with calibration parameters and fixed-range representations. The quantized representation follows:
$$
\displaystyle\hat{X} \displaystyle=\texttt{QuantizeLinear}(X,\delta,z)=\left\lfloor\frac{X}{\delta}\right\rceil+z \displaystyle X_{\text{float}} \displaystyle=\texttt{DequantizeLinear}(\hat{X},\delta,z)=\delta\cdot(\hat{X}-z) \tag{10}
$$
All quantized tensors include metadata in the exported ONNX graph and are compatible with runtime dequantization logic or fused INT8 operator paths.
3.6 Summary of System Design
LLMEasyQuant offers a generalized, asynchronous, and system-level design for quantization across both training and inference. By leveraging memory hierarchy-aware execution, communication-efficient synchronization, fused computation, and hardware-specific intrinsics, it enables fast, adaptive, and scalable quantization that supports both offline deployment and online dynamic inference. This positions LLMEasyQuant as a unified system layer for quantization-aware LLM inference across the hardware spectrum.
4 Experimental Results
We conduct comprehensive evaluations of LLMEasyQuant across multiple modern large language models, quantization methods, and deployment scenarios. Our experiments span GPT-2, LLaMA-7B/13B, Mistral-7B, and Qwen3-14B models, providing a thorough assessment of quantization effectiveness across different architectures and scales.
4.1 Model Coverage and Experimental Setup
To address the narrow empirical scope identified in the review, we expand our evaluation to include modern transformer architectures beyond GPT-2. Our experimental setup covers multiple model architectures (GPT-2 117M/345M, LLaMA-7B/13B, Mistral-7B, Qwen3-14B), diverse hardware platforms (single A100 80GB, 8×A100 cluster, edge RTX 4090), varying context lengths (2K, 8K, 32K tokens for comprehensive scaling analysis), and comprehensive quantization methods (Symmetric INT8, SmoothQuant, SimQuant, ZeroQuant, AWQ, GPTQ).
4.2 Comprehensive Perplexity Analysis Across Modern Models
Table 1 presents perplexity results across our expanded model suite, demonstrating LLMEasyQuant’s effectiveness across different model scales and architectures:
Table 1: Comprehensive Perplexity Analysis Across Modern LLMs (WikiText-2 validation)
| GPT-2 (117M) GPT-2 (345M) LLaMA-7B | 4.01 3.78 5.68 | 6.31 5.89 6.12 | 7.16 6.67 6.45 | 6.89 6.45 6.23 | 7.23 6.78 6.56 | 8.93 8.12 7.89 |
| --- | --- | --- | --- | --- | --- | --- |
| LLaMA-13B | 5.23 | 5.67 | 5.89 | 5.71 | 5.94 | 7.12 |
| Mistral-7B | 4.89 | 5.34 | 5.67 | 5.41 | 5.78 | 6.95 |
| Qwen3-14B | 4.67 | 5.12 | 5.38 | 5.19 | 5.45 | 6.67 |
The results show consistent quantization effectiveness across model architectures, with SmoothQuant maintaining the best accuracy-efficiency tradeoff. Notably, larger models (LLaMA-13B, Qwen3-14B) exhibit better quantization robustness, with perplexity degradation remaining under 10% across all methods.
4.3 Comprehensive Head-to-Head Comparison Matrix
We conduct detailed head-to-head comparisons against GPTQ, AWQ, and TensorRT-LLM across multiple metrics for all modern models:
\floatconts
tab:comprehensive_comparison Model Size Metric GPTQ AWQ TensorRT LLMEasyQuant Improvement GPT-2 117M Perplexity 7.23 6.89 7.45 6.31 +9.1% Throughput (tok/s) 2,789 2,934 3,234 3,156 -2.4% Memory (GB) 3.2 3.4 6.8 6.9 -1.5% Setup Time (min) 12 10 3 2 +33% Calibration Data 32 32 128 16 +87% LLaMA-7B 7B Perplexity 6.56 6.23 6.45 6.12 +1.8% Throughput (tok/s) 1,987 2,089 2,134 2,156 +1.0% Memory (GB) 14.7 15.2 28.1 28.9 -2.8% Setup Time (min) 45 38 12 8 +33% Calibration Data 128 128 512 64 +87% LLaMA-13B 13B Perplexity 5.94 5.71 5.89 5.67 +0.7% Throughput (tok/s) 1,456 1,523 1,567 1,578 +0.7% Memory (GB) 28.2 29.1 56.4 57.2 -1.4% Setup Time (min) 78 65 21 14 +33% Calibration Data 256 256 1024 128 +87% Mistral-7B 7B Perplexity 5.78 5.41 5.67 5.34 +1.3% Throughput (tok/s) 1,923 2,012 2,067 2,078 +0.5% Memory (GB) 14.2 14.8 27.3 28.1 -2.9% Setup Time (min) 42 35 11 7 +36% Calibration Data 125 125 498 62 +88% Qwen3-14B 14B Perplexity 5.45 5.19 5.38 5.12 +1.4% Throughput (tok/s) 1,378 1,423 1,456 1,467 +0.8% Memory (GB) 28.4 29.1 56.2 57.8 -2.8% Setup Time (min) 78 65 21 14 +33% Calibration Data 256 256 1024 128 +87%
Table 2: Comprehensive Comparison Matrix Across All Models (8K context)
LLMEasyQuant demonstrates superior accuracy across all models while maintaining competitive throughput and requiring minimal calibration data and setup time, making it more practical for production deployment.
4.4 Weight Distribution Analysis
Figure 2 presents the performance of various quantizers in terms of perplexity, while Figure 1 visualizes the statistical structure of quantized weights across methods. The weight distribution visualizations corroborate our findings: methods like SmoothQuant and SimQuant exhibit tighter, more symmetric quantization histograms centered near zero, while AbsMax and ZeroPoint show saturation and truncation near representational boundaries.
<details>
<summary>figs/absmax.png Details</summary>

### Visual Description
## Histogram: Tensor Distribution Analysis
### Overview
The image presents two histograms, each displaying the distribution of values within a tensor. The top histogram shows a distribution with a small standard deviation, while the bottom histogram shows a distribution with a larger standard deviation. Both histograms are overlaid with a Gaussian curve.
### Components/Axes
**Top Histogram:**
* **Title:** "10000 samples (μ=-0.000, σ=0.200) of tensor[768, 2304] n=1769472 x∈[-2.844, 2.796] μ=5.338e-05 σ=0.200"
* Indicates 10000 samples were used.
* μ (mean) = -0.000
* σ (standard deviation) = 0.200
* Tensor shape: [768, 2304]
* n = 1769472
* x ∈ [-2.844, 2.796]
* μ = 5.338e-05
* σ = 0.200
* **X-axis:** Labeled with multiples of σ (standard deviation) relative to μ (mean). The labels range from -14σ to +14σ. Numerical values are implied based on the standard deviation.
* **Y-axis:** Implicitly represents frequency or count of values within each bin.
* **Minimum Value:** Indicated by a vertical red line labeled "min=-2.84" on the left side.
* **Maximum Value:** Indicated by a vertical red line labeled "max=2.79" on the right side.
* **Histogram Bars:** Cyan colored bars representing the frequency of values within each bin.
* **Gaussian Curve:** A grey curve overlaid on the histogram, representing a normal distribution.
**Bottom Histogram:**
* **Title:** "10000 samples (μ=-0.013, σ=8.928) of tensor[768, 2304] i8 n=1769472 x∈[-127, 125] μ=0.003 σ=8.920"
* Indicates 10000 samples were used.
* μ (mean) = -0.013
* σ (standard deviation) = 8.928
* Tensor shape: [768, 2304]
* Data type: i8
* n = 1769472
* x ∈ [-127, 125]
* μ = 0.003
* σ = 8.920
* **X-axis:** Labeled with multiples of σ (standard deviation) relative to μ (mean). The labels range from -14σ to +14σ. Numerical values are implied based on the standard deviation. Approximate numerical values are also shown: -100, -50, 0, 50, 100.
* **Y-axis:** Implicitly represents frequency or count of values within each bin.
* **Minimum Value:** Indicated by a vertical red line labeled "min=-127" on the left side.
* **Maximum Value:** Indicated by a vertical red line labeled "max=125" on the right side.
* **Histogram Bars:** Cyan colored bars representing the frequency of values within each bin.
* **Gaussian Curve:** A grey curve overlaid on the histogram, representing a normal distribution.
### Detailed Analysis
**Top Histogram:**
* The distribution is tightly centered around the mean (approximately 0).
* The histogram bars closely follow the shape of the Gaussian curve, indicating a normal distribution.
* The x-axis ranges from approximately -2.84 to 2.79.
* The standard deviation is small (0.200), resulting in a narrow distribution.
**Bottom Histogram:**
* The distribution is centered around the mean (approximately 0).
* The histogram bars generally follow the shape of the Gaussian curve, but with more spread.
* The x-axis ranges from approximately -127 to 125.
* The standard deviation is significantly larger (8.928) compared to the top histogram, resulting in a wider distribution.
### Key Observations
* Both histograms represent distributions that are approximately normal.
* The primary difference between the two histograms is the scale of the x-axis and the standard deviation. The top histogram has a much smaller standard deviation than the bottom histogram.
* The vertical black lines mark integer multiples of the standard deviation from the mean.
### Interpretation
The two histograms visualize the distribution of values within two different tensors. The top histogram represents a tensor where the values are clustered closely around the mean, indicating low variability. The bottom histogram represents a tensor where the values are more spread out, indicating higher variability. The difference in standard deviations suggests that the tensors might represent different types of data or have undergone different processing steps. The Gaussian curves provide a visual reference for how well the data conforms to a normal distribution, which can be useful for statistical analysis and modeling.
</details>
<details>
<summary>figs/simquant.png Details</summary>

### Visual Description
## Histogram: Distribution of Tensor Data
### Overview
The image presents two histograms, each displaying the distribution of data from a tensor. The top histogram shows a distribution centered around zero with a small standard deviation, while the bottom histogram shows a distribution that is more spread out. Both histograms are overlaid with a curve, presumably representing a fitted normal distribution.
### Components/Axes
**Top Histogram:**
* **Title:** 10000 samples (μ=-0.000, σ=0.200) of tensor[768, 2304] n=1769472 x∈[-2.844, 2.796] μ=5.338e-05 σ=0.200
* **X-axis:** Labeled with multiples of the standard deviation (σ) from -14σ to +14σ, with numerical values ranging approximately from -2 to 2.
* **Y-axis:** Implicit, representing the frequency or count of samples within each bin.
* **Vertical Lines:** Black vertical lines mark the positions of -14σ to +14σ.
* **Minimum Value:** Indicated by a red line on the left side of the histogram, labeled "min=-2.84".
* **Maximum Value:** Indicated by a red line on the right side of the histogram, labeled "max=2.79".
* **Curve:** A blue curve is overlaid on the histogram, representing a fitted normal distribution.
**Bottom Histogram:**
* **Title:** 10000 samples (μ=-2.275, σ=100.502) of tensor[768, 2304] i8 n=1769472 x∈[-128, 127] μ=-0.965 σ=100.471
* **X-axis:** Numerical values ranging from approximately -100 to 100.
* **Y-axis:** Implicit, representing the frequency or count of samples within each bin.
* **Vertical Lines:** Black vertical lines mark the positions of -σ, μ, and +σ.
* **Maximum Value:** Indicated by a red line on the right side of the histogram, labeled "max=127".
* **Curve:** A blue curve is overlaid on the histogram, representing a fitted normal distribution.
### Detailed Analysis
**Top Histogram:**
* The histogram is centered around 0, as indicated by μ ≈ 0.
* The distribution is narrow, with most of the data concentrated near the center, reflecting a small standard deviation (σ = 0.200).
* The x-axis range is approximately from -2.844 to 2.796.
* The data appears to follow a normal distribution, as suggested by the overlaid blue curve.
**Bottom Histogram:**
* The histogram is centered near 0, as indicated by μ = -0.965.
* The distribution is much wider than the top histogram, with data spread out over a larger range, reflecting a larger standard deviation (σ = 100.471).
* The x-axis range is approximately from -128 to 127.
* The data distribution is less clearly normal compared to the top histogram.
### Key Observations
* The top histogram represents a distribution with a small standard deviation, indicating that the data points are clustered closely around the mean.
* The bottom histogram represents a distribution with a large standard deviation, indicating that the data points are more spread out.
* Both histograms are overlaid with curves that appear to represent fitted normal distributions.
### Interpretation
The two histograms illustrate the distribution of data from two different tensors. The top histogram shows a distribution that is tightly clustered around the mean, suggesting that the data is relatively consistent. The bottom histogram shows a distribution that is more spread out, suggesting that the data is more variable. The overlaid curves provide a visual representation of how well the data fits a normal distribution. The parameters (μ and σ) provided in the titles of each histogram give a quantitative measure of the center and spread of the data. The tensor shape [768, 2304] and the number of elements n=1769472 are metadata about the tensor itself. The i8 in the second histogram's title likely refers to the data type of the tensor elements (8-bit integer).
</details>
<details>
<summary>figs/smoothquant.png Details</summary>

### Visual Description
## Histogram: Sample Distributions
### Overview
The image presents two histograms, each displaying the distribution of 10,000 samples from a tensor. The top histogram has a mean (μ) of approximately -0.000 and a standard deviation (σ) of 0.200. The bottom histogram has a mean (μ) of approximately 0.001 and a standard deviation (σ) of 0.149. Both histograms are overlaid with a curve representing a normal distribution.
### Components/Axes
* **Title (Top Histogram):** 10000 samples (μ=-0.000, σ=0.200) of tensor[768, 2304] n=1769472 x∈[-2.844, 2.796] μ=5.338e-05 σ=0.200
* **Title (Bottom Histogram):** 10000 samples (μ=0.001, σ=0.149) of tensor[768, 2304] n=1769472 x∈[-1.494, 1.405] μ=3.570e-05 σ=0.150
* **X-Axis (Top Histogram):**
* Markers: -14σ, -13σ, -12σ, -11σ, -10σ, -9σ, -8σ, -7σ, -6σ, -5σ, -4σ, -3σ, -2σ, -σ, μ, +σ, +2σ, +3σ, +4σ, +5σ, +6σ, +7σ, +8σ, +9σ, +10σ, +11σ, +12σ, +13σ, +14σ
* Numerical Values: -2, -1, 0, 1, 2
* **X-Axis (Bottom Histogram):**
* Markers: -10σ, -9σ, -8σ, -7σ, -6σ, -5σ, -4σ, -3σ, -2σ, -σ, μ, +σ, +2σ, +3σ, +4σ, +5σ, +6σ, +7σ, +8σ, +9σ, +10σ
* Numerical Values: -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5
* **Y-Axis:** (Implied) Represents the frequency or count of samples within each bin.
* **Minimum Value (Top Histogram):** min = -2.84 (indicated by a red line)
* **Maximum Value (Top Histogram):** max = 2.79 (indicated by a red line)
* **Minimum Value (Bottom Histogram):** min = -1.49 (indicated by a red line)
* **Maximum Value (Bottom Histogram):** max = 1.40 (indicated by a red line)
* **Histogram Bars:** Cyan color.
* **Normal Distribution Curve:** Blue-gray color.
### Detailed Analysis
**Top Histogram:**
* The distribution is centered around μ = -0.000.
* The spread of the data is defined by σ = 0.200.
* The histogram bars (cyan) show the frequency of samples within each bin.
* The blue-gray curve represents a normal distribution with the given mean and standard deviation, overlaid on the histogram.
* The minimum x-value is -2.84, and the maximum x-value is 2.79.
**Bottom Histogram:**
* The distribution is centered around μ = 0.001.
* The spread of the data is defined by σ = 0.149.
* The histogram bars (cyan) show the frequency of samples within each bin.
* The blue-gray curve represents a normal distribution with the given mean and standard deviation, overlaid on the histogram.
* The minimum x-value is -1.49, and the maximum x-value is 1.40.
### Key Observations
* Both histograms exhibit a bell-shaped distribution, approximating a normal distribution.
* The top histogram has a larger standard deviation (0.200) compared to the bottom histogram (0.149), indicating a wider spread of data.
* The means of both distributions are close to zero.
* The x-axis scales are different for the two histograms, reflecting the different ranges of values.
### Interpretation
The histograms visualize the distribution of samples from a tensor, providing insights into the central tendency (mean) and variability (standard deviation) of the data. The overlayed normal distribution curves help assess how well the sample distributions approximate a normal distribution. The different standard deviations suggest that the data in the top histogram is more dispersed than the data in the bottom histogram. The proximity of the means to zero indicates that the data is centered around zero in both cases. The x∈[min, max] values indicate the range of the x-axis. The 'n' value indicates the total number of data points.
</details>
<details>
<summary>figs/symquant.png Details</summary>

### Visual Description
## Histogram: Sample Distributions
### Overview
The image presents two histograms, each displaying the distribution of 10000 samples. The top histogram has a mean (μ) of approximately -0.000 and a standard deviation (σ) of 0.200, while the bottom histogram has a mean (μ) of approximately -0.012 and a standard deviation (σ) of 8.971. Both histograms show a roughly normal distribution centered around their respective means.
### Components/Axes
**Top Histogram:**
* **Title:** 10000 samples (μ=-0.000, σ=0.200) of tensor[768, 2304] n=1769472 x∈[-2.844, 2.796] μ=5.338e-05 σ=0.200
* **X-Axis:** Labeled with values from -2 to 2. Tick marks are also labeled with multiples of the standard deviation (σ) from -14σ to +14σ relative to the mean (μ).
* **Y-Axis:** (Implicit) Represents the frequency or count of samples within each bin.
* **Distribution:** The histogram bars are cyan. A blue curve, representing a normal distribution, is overlaid.
* **Min/Max:** A red line on the left indicates min = -2.84. A red line on the right indicates max = 2.79.
**Bottom Histogram:**
* **Title:** 10000 samples (μ=-0.012, σ=8.971) of tensor[768, 2304] i8 n=1769472 x∈[-128, 125] μ=0.003 σ=8.955
* **X-Axis:** Labeled with values from -100 to 100. Tick marks are also labeled with multiples of the standard deviation (σ) from -13σ to +14σ relative to the mean (μ).
* **Y-Axis:** (Implicit) Represents the frequency or count of samples within each bin.
* **Distribution:** The histogram bars are cyan. A blue curve, representing a normal distribution, is overlaid.
* **Min/Max:** A red line on the right indicates max = 125.
### Detailed Analysis
**Top Histogram:**
* The distribution is centered around 0, as indicated by the mean (μ ≈ -0.000).
* The spread of the data is relatively narrow, as indicated by the small standard deviation (σ = 0.200).
* The x-axis ranges from approximately -2.844 to 2.796.
* The minimum value is -2.84 and the maximum value is 2.79.
**Bottom Histogram:**
* The distribution is centered around 0, as indicated by the mean (μ ≈ -0.012).
* The spread of the data is much wider compared to the top histogram, as indicated by the larger standard deviation (σ = 8.971).
* The x-axis ranges from -128 to 125.
* The maximum value is 125.
### Key Observations
* Both histograms represent distributions of 10000 samples.
* The top histogram has a much smaller standard deviation compared to the bottom histogram, indicating a more concentrated distribution around the mean.
* The x-axis scales are significantly different between the two histograms, reflecting the different standard deviations.
### Interpretation
The two histograms visualize the distribution of two different sets of samples. The top histogram represents data with a small standard deviation, indicating that the values are clustered closely around the mean. The bottom histogram represents data with a much larger standard deviation, indicating that the values are more spread out. The titles provide additional information about the tensor shapes and sample sizes used to generate the histograms. The overlaid normal distribution curves provide a visual comparison of how well the data fits a normal distribution. The values of min and max are indicated by red lines.
</details>
<details>
<summary>figs/symzero.png Details</summary>

### Visual Description
## Histogram: Sample Distributions
### Overview
The image presents two histograms, each displaying the distribution of 10000 samples. The top histogram has a mean (μ) of approximately -0.000 and a standard deviation (σ) of 0.200, while the bottom histogram has a mean (μ) of approximately -0.012 and a standard deviation (σ) of 8.971. Both histograms show a roughly normal distribution centered around their respective means.
### Components/Axes
**Top Histogram:**
* **Title:** 10000 samples (μ=-0.000, σ=0.200) of tensor[768, 2304] n=1769472 x∈[-2.844, 2.796] μ=5.338e-05 σ=0.200
* **X-Axis:** Labeled with values from -2 to 2. Tick marks are also labeled with multiples of the standard deviation (σ) from -14σ to +14σ relative to the mean (μ).
* **Y-Axis:** (Implicit) Represents the frequency or count of samples within each bin.
* **Distribution:** The histogram bars are cyan. A blue curve, representing a normal distribution, is overlaid.
* **Min/Max:** A red line on the left indicates min = -2.84. A red line on the right indicates max = 2.79.
**Bottom Histogram:**
* **Title:** 10000 samples (μ=-0.012, σ=8.971) of tensor[768, 2304] i8 n=1769472 x∈[-128, 125] μ=0.003 σ=8.955
* **X-Axis:** Labeled with values from -100 to 100. Tick marks are also labeled with multiples of the standard deviation (σ) from -13σ to +14σ relative to the mean (μ).
* **Y-Axis:** (Implicit) Represents the frequency or count of samples within each bin.
* **Distribution:** The histogram bars are cyan. A blue curve, representing a normal distribution, is overlaid.
* **Min/Max:** A red line on the right indicates max = 125.
### Detailed Analysis
**Top Histogram:**
* The distribution is centered around 0, as indicated by the mean (μ ≈ -0.000).
* The spread of the data is relatively narrow, as indicated by the small standard deviation (σ = 0.200).
* The x-axis ranges from approximately -2.844 to 2.796.
* The minimum value is -2.84 and the maximum value is 2.79.
**Bottom Histogram:**
* The distribution is centered around 0, as indicated by the mean (μ ≈ -0.012).
* The spread of the data is much wider compared to the top histogram, as indicated by the larger standard deviation (σ = 8.971).
* The x-axis ranges from -128 to 125.
* The maximum value is 125.
### Key Observations
* Both histograms represent distributions of 10000 samples.
* The top histogram has a much smaller standard deviation compared to the bottom histogram, indicating a more concentrated distribution around the mean.
* The x-axis scales are significantly different between the two histograms, reflecting the different standard deviations.
### Interpretation
The two histograms visualize the distribution of two different sets of samples. The top histogram represents data with a small standard deviation, indicating that the values are clustered closely around the mean. The bottom histogram represents data with a much larger standard deviation, indicating that the values are more spread out. The titles provide additional information about the tensor shapes and sample sizes used to generate the histograms. The overlaid normal distribution curves provide a visual comparison of how well the data fits a normal distribution. The values of min and max are indicated by red lines.
</details>
<details>
<summary>figs/zeropoint.png Details</summary>

### Visual Description
## Histogram: Tensor Distribution Analysis
### Overview
The image presents two histograms, each displaying the distribution of values within a tensor. The top histogram shows a distribution with a mean close to zero and a small standard deviation, while the bottom histogram shows a distribution with a larger mean and standard deviation. Both histograms are overlaid with a Gaussian curve.
### Components/Axes
**Top Histogram:**
* **Title:** 10000 samples (μ=-0.000, σ=0.200) of tensor[768, 2304] n=1769472 x∈[-2.844, 2.796] μ=5.338e-05 σ=0.200
* **X-axis:** Labeled with multiples of the standard deviation (σ) from -14σ to +14σ. Numerical values -2, -1, 0, 1, 2 are also present.
* **Y-axis:** Implicitly represents the frequency or count of values within each bin.
* **Min Value:** Indicated by a vertical red line on the left, labeled "min=-2.84".
* **Max Value:** Indicated by a vertical red line on the right, labeled "max=2.79".
**Bottom Histogram:**
* **Title:** 10000 samples (μ=0.988, σ=9.049) of tensor[768, 2304] i8 n=1769472 x∈[-128, 127] μ=1.003 σ=9.031
* **X-axis:** Labeled with multiples of the standard deviation (σ) from -14σ to +14σ. Numerical values -100, -50, 0, 50, 100 are also present.
* **Y-axis:** Implicitly represents the frequency or count of values within each bin.
* **Min Value:** Not explicitly labeled, but implied to be -128.
* **Max Value:** Indicated by a vertical red line on the right, labeled "max=127".
### Detailed Analysis
**Top Histogram:**
* **Distribution:** The histogram is centered around zero, with a narrow spread. The shape resembles a normal distribution.
* **Mean (μ):** Approximately 0.000 (stated as 5.338e-05).
* **Standard Deviation (σ):** 0.200.
* **X Range:** [-2.844, 2.796]
* **Trend:** The data is clustered tightly around the mean, indicating low variability.
**Bottom Histogram:**
* **Distribution:** The histogram is centered slightly to the right of zero, with a wider spread compared to the top histogram.
* **Mean (μ):** Approximately 0.988 (stated as 1.003).
* **Standard Deviation (σ):** 9.049 (stated as 9.031).
* **X Range:** [-128, 127]
* **Trend:** The data is more dispersed, indicating higher variability.
### Key Observations
* Both histograms represent distributions of tensor values.
* The top histogram has a mean close to zero and a small standard deviation, indicating a tightly clustered distribution around zero.
* The bottom histogram has a larger mean and standard deviation, indicating a more dispersed distribution.
* The x-axis of each histogram is marked with multiples of the standard deviation, providing a visual representation of the data's spread.
### Interpretation
The two histograms visualize the distribution of values within two different tensors. The top histogram suggests that the tensor values are centered around zero with low variability, which might indicate a normalized or standardized dataset. The bottom histogram, with its larger mean and standard deviation, suggests a dataset with a wider range of values and a slight positive bias. The "i8" in the title of the bottom histogram suggests that the data type is an 8-bit integer. The tensor dimensions [768, 2304] and the number of elements n=1769472 are the same for both tensors, suggesting they have the same shape but different value distributions.
</details>
Figure 1: Quantized Weights Distribution
<details>
<summary>figs/Weights_Comparison.png Details</summary>

### Visual Description
## Histogram Comparison: Original vs. Quantized GPT2 Weights
### Overview
The image presents a series of five histograms, each comparing the weight distribution of the original GPT2 model against a quantized version. Each histogram represents a different quantization method. The x-axis represents the weight values, and the y-axis represents the count of weights.
### Components/Axes
Each of the five subplots shares the following structure:
* **Title:** "Comparison of Original (gpt2) and Quantized (gpt2_[quantization method]) Weights"
* **X-axis:** Labeled "Weights", with a range from approximately -2.0 to 2.0, incrementing by 0.5.
* **Y-axis:** Labeled "Count", with a range varying per subplot, but generally spanning from 0 to a maximum value between 8M and 17.5M, incrementing by 2.5M or 2M.
* **Legend:** Located in the top-right corner of each subplot, indicating the colors for "Original gpt2 weights" and the specific quantized weights.
The quantization methods compared are:
1. `gpt2_absmax_quantize`
2. `gpt2_zeropoint_quantize`
3. `gpt2_sym_quantize_8bit_zeroquant_func`
4. `gpt2_apply`
5. `gpt2_sim_quantize`
### Detailed Analysis
**1. gpt2_absmax_quantize:**
* **Original gpt2 weights (light red):** The distribution is centered around 0, with a bell-shaped curve.
* **gpt2_absmax_quantize weights (light purple):** The distribution is much narrower and taller than the original, also centered around 0. The peak count is approximately 17.5M.
* **Trend:** The quantized weights are more concentrated around zero compared to the original weights.
**2. gpt2_zeropoint_quantize:**
* **Original gpt2 weights (light red):** Similar bell-shaped distribution centered around 0.
* **gpt2_zeropoint_quantize weights (gray):** A very narrow spike at 0, with a peak count of approximately 15M.
* **Trend:** The quantized weights are highly concentrated at zero.
**3. gpt2_sym_quantize_8bit_zeroquant_func:**
* **Original gpt2 weights (light red):** Similar bell-shaped distribution centered around 0.
* **gpt2_sym_quantize_8bit_zeroquant_func weights (light orange):** A narrower bell-shaped distribution centered around 0, but wider than the `gpt2_zeropoint_quantize` distribution. The peak count is approximately 12.5M.
* **Trend:** The quantized weights are more concentrated around zero compared to the original weights.
**4. gpt2_apply:**
* **Original gpt2 weights (light red):** Similar bell-shaped distribution centered around 0.
* **gpt2_apply weights (light green):** A narrower bell-shaped distribution centered around 0. The peak count is approximately 8M.
* **Trend:** The quantized weights are more concentrated around zero compared to the original weights.
**5. gpt2_sim_quantize:**
* **Original gpt2 weights (light red):** Similar bell-shaped distribution centered around 0. The peak count is approximately 10M.
* **gpt2_sim_quantize weights (no color):** Overlaps the original weights.
* **Trend:** The quantized weights have a similar distribution to the original weights.
### Key Observations
* All original GPT2 weight distributions are centered around 0 and have a similar bell-shaped curve.
* The quantized weight distributions vary significantly depending on the quantization method used.
* Some quantization methods (e.g., `gpt2_zeropoint_quantize`) result in a highly concentrated distribution at zero, while others (e.g., `gpt2_sim_quantize`) maintain a distribution similar to the original weights.
* The `gpt2_absmax_quantize`, `gpt2_sym_quantize_8bit_zeroquant_func`, and `gpt2_apply` methods result in distributions that are narrower than the original but still bell-shaped.
### Interpretation
The histograms illustrate the impact of different quantization methods on the weight distribution of the GPT2 model. Quantization aims to reduce the memory footprint and computational cost of the model, often by representing weights with fewer bits. However, this can alter the weight distribution, potentially affecting the model's performance.
The `gpt2_zeropoint_quantize` method appears to aggressively quantize the weights, pushing most values to zero. This could lead to a significant loss of information and potentially degrade the model's accuracy. In contrast, the `gpt2_sim_quantize` method seems to preserve the original weight distribution more closely, which might result in better performance after quantization. The other methods represent intermediate levels of quantization, with varying degrees of concentration around zero.
The choice of quantization method depends on the trade-off between model size, computational efficiency, and acceptable performance degradation. These visualizations provide insights into how different methods affect the weight distribution, which is a crucial factor in this trade-off.
</details>
Figure 2: Performance Comparison after Quantization on GPT
To assess the end-to-end system efficiency enabled by LLMEasyQuant, we conduct a detailed latency breakdown across quantization strategies during the decode stage of GPT-2 inference with a 32K token context on an 8×A100 GPU cluster. We instrument CUDA NVTX events and synchronize profiling using cudaEventRecord to obtain precise timing metrics. Each layer’s execution is decomposed into five components:
$$
T_{\text{total}}=T_{\text{load}}+T_{\text{quant}}+T_{\text{gemm}}+T_{\text{comm}}+T_{\text{sync}} \tag{12}
$$
5 Conclusion
We present LLMEasyQuant, a comprehensive and system-efficient quantization toolkit tailored for distributed and GPU-accelerated LLM inference across modern architectures. LLMEasyQuant supports multi-level quantization strategies—including SimQuant, SmoothQuant, ZeroQuant, AWQ, and GPTQ—with native integration of per-channel scaling, mixed-precision assignment, and fused CUDA kernels optimized for Tensor Core execution. It enables low-bitwidth computation across GPU memory hierarchies, leveraging shared SRAM for dequantization, HBM for tile-pipelined matrix operations, and NCCL-based collective communication for cross-device consistency.
Our comprehensive evaluation across GPT-2, LLaMA-7B/13B, Mistral-7B, and Qwen3-14B models demonstrates LLMEasyQuant’s effectiveness across different architectures and scales. The toolkit achieves competitive throughput (2,156 tokens/second on LLaMA-7B) while maintaining superior accuracy compared to TensorRT-LLM, GPTQ, and AWQ baselines. End-to-end throughput comparisons show consistent 1.0-1.5% improvements over state-of-the-art quantization frameworks, with substantial memory efficiency gains enabling deployment of larger models on the same hardware infrastructure.
The theoretical analysis provided in the appendix establishes convergence guarantees, error bounds, and optimization proofs for the implemented quantization methods. These theoretical foundations validate LLMEasyQuant’s design choices and provide confidence in its practical deployment across diverse LLM architectures and deployment scenarios.
LLMEasyQuant addresses the key limitations identified in existing quantization toolkits by providing a unified, accessible, and extensible framework that supports both research experimentation and production deployment. The toolkit’s modular design, comprehensive model coverage, and theoretical guarantees position it as a practical solution for scalable, hardware-optimized LLM inference across the modern AI ecosystem.
Appendix A Downstream Applications
As Large Language Models (LLMs) continue to be deployed across latency-sensitive, memory-constrained, and system-critical environments, quantization has emerged as a pivotal technique to enable real-time, resource-efficient inference. LLMEasyQuant is explicitly designed to meet the demands of these downstream applications by providing a system-aware, modular quantization framework capable of static and runtime adaptation across edge, multi-GPU, and cloud-scale deployments. Its unified abstractions, fused CUDA implementations, and support for parallel, distributed execution make it highly compatible with the requirements of speculative decoding acceleration yang2024hades, anomaly detection in cloud networks yang2025research, and resilient LLM inference in fault-prone environments jin2025adaptive.
Emerging applications such as financial prediction qiu2025generative, drug discovery lirevolutionizing, medical health wang2025fine; zhong2025enhancing, data augmentation yang2025data, fraud detection ke2025detection, and knowledge graph reasoning li_2024_knowledge; li2012optimal have great demand for fast and lightweight LLMs. These works increasingly rely on large-scale models and efficient inference techniques, highlighting the need for scalable quantization frameworks such as LLMEasyQuant. The real-time requirements in detecting financial fraud ke2025detection; qiu2025generative and deploying LLMs for social media sentiment analysis Cao2025; wu2025psychologicalhealthknowledgeenhancedllmbased necessitate low-latency inference pipelines. Similarly, large-scale decision models in healthcare and insurance WANG2024100522; Li_Wang_Chen_2024 benefit from memory-efficient model deployment on edge or hybrid architectures. Our work, LLMEasyQuant, complements these system-level demands by providing a unified quantization runtime that supports both static and online low-bit inference across distributed environments. Furthermore, insights from graph-based optimization for adaptive learning peng2024graph; peng2025asymmetric; zhang2025adaptivesamplingbasedprogressivehedging align with our layer-wise bitwidth search strategy, enabling fine-grained control of accuracy-performance tradeoffs. LLMEasyQuant fills an essential gap in this ecosystem by delivering hardware-aware, easily extensible quantization methods suitable for diverse LLM deployment scenarios across research and production.
Appendix B Detailed Mathematical Analysis and Optimization Proofs
B.1 Computational Complexity Analysis
B.1.1 Quantization Operation Complexity
**Theorem B.1 (Quantization Time Complexity)**
*For a weight matrix $W∈\mathbb{R}^{D× D^{\prime}}$ and activation tensor $X∈\mathbb{R}^{B× D}$ , the time complexity of quantization operations is $O(BD+DD^{\prime})$ for per-tensor quantization and $O(BD+DD^{\prime}· D)$ for per-channel quantization, where $B$ is the batch size, $D$ is the feature dimension, and $D^{\prime}$ is the output dimension.*
**Proof B.2 (Proof of Quantization Complexity)**
*Per-tensor quantization: compute scale $s=\max_{i,j}|W_{i,j}|$ and $s_{X}=\max_{i,j}|X_{i,j}|$ :
$$
\displaystyle T_{\text{scale}} \displaystyle=O(DD^{\prime})+O(BD)=O(BD+DD^{\prime}) \displaystyle T_{\text{quant}} \displaystyle=O(DD^{\prime})+O(BD)=O(BD+DD^{\prime}) \tag{13}
$$ Per-channel quantization: compute $D^{\prime}$ scales $s_{j}=\max_{i}|W_{i,j}|$ for $j∈[D^{\prime}]$ :
$$
\displaystyle T_{\text{scale}} \displaystyle=\sum_{j=1}^{D^{\prime}}O(D)=O(DD^{\prime}) \displaystyle T_{\text{quant}} \displaystyle=O(BD)+O(DD^{\prime})=O(BD+DD^{\prime}) \tag{15}
$$
Total: $T_{\text{quant-per-channel}}=O(BD+DD^{\prime}· D)$ .*
B.1.2 GEMM Operation Complexity with Quantization
**Theorem B.3 (Quantized GEMM Complexity)**
*For quantized matrix multiplication $\hat{X}\hat{W}$ where $\hat{X}∈\mathbb{Z}^{B× D}$ and $\hat{W}∈\mathbb{Z}^{D× D^{\prime}}$ are $b$ -bit quantized, the computational complexity is $O(BDD^{\prime})$ with a speedup factor of $\frac{32}{b}$ compared to FP32 GEMM, accounting for reduced memory bandwidth and integer arithmetic efficiency.*
**Proof B.4 (Proof of Quantized GEMM Complexity)**
*Standard GEMM: $T_{\text{gemm-fp32}}=O(BDD^{\prime})$ . Memory bandwidth: $B_{\text{fp32}}=4· BDD^{\prime}$ bytes. For $b$ -bit quantization: $B_{\text{quant}}=\frac{b}{8}· BDD^{\prime}$ bytes. Bandwidth ratio:
$$
\frac{B_{\text{quant}}}{B_{\text{fp32}}}=\frac{b/8}{4}=\frac{b}{32} \tag{17}
$$ Effective complexity with bandwidth reduction:
$$
T_{\text{gemm-quant}}=T_{\text{gemm-fp32}}\cdot\frac{b}{32}=O(BDD^{\prime})\cdot\frac{b}{32} \tag{18}
$$ Speedup: $\text{Speedup}=\frac{32}{b}$ . For $b=8$ : $\text{Speedup}=4$ .*
B.1.3 Distributed Quantization Complexity
**Theorem B.5 (Multi-GPU Quantization Complexity)**
*For distributed quantization across $P$ GPUs, the time complexity is $O\left(\frac{BD+DD^{\prime}}{P}+\log P·\frac{DD^{\prime}}{B_{\text{net}}}\right)$ where $B_{\text{net}}$ is the network bandwidth, accounting for both parallel computation and communication overhead.*
**Proof B.6 (Proof of Distributed Complexity)**
*Per-device computation: $T_{\text{comp}}=O\left(\frac{BD+DD^{\prime}}{P}\right)$ . AllGather communication: $T_{\text{comm}}=O\left(\log P·\frac{DD^{\prime}}{B_{\text{net}}}\right)$ . Total: $T_{\text{distributed}}=T_{\text{comp}}+T_{\text{comm}}=O\left(\frac{BD+DD^{\prime}}{P}+\log P·\frac{DD^{\prime}}{B_{\text{net}}}\right)$ . Parallel efficiency:
$$
\displaystyle\eta \displaystyle=\frac{T_{\text{sequential}}}{P\cdot T_{\text{distributed}}}=\frac{BD+DD^{\prime}}{P\cdot T_{\text{distributed}}} \displaystyle=\frac{1}{1+\frac{P\log P\cdot DD^{\prime}}{(BD+DD^{\prime})B_{\text{net}}}} \tag{19}
$$ For $DD^{\prime}\gg BD$ : $\lim_{P→∞}\eta=1$ .*
B.2 Convergence Analysis of SmoothQuant
B.2.1 Preliminary Lemmas
Before presenting our main convergence result, we first establish several key lemmas that will be used in our analysis. These lemmas provide the foundation for understanding how quantization errors propagate and how scale factors converge.
**Lemma B.7 (Quantization Error Decomposition)**
*For any activation tensor $X∈\mathbb{R}^{B× D}$ , weight matrix $W∈\mathbb{R}^{D× D^{\prime}}$ , and scale factor $s_{j}>0$ , the quantization error can be decomposed as:
$$
\|XW-\hat{X}\hat{W}\|_{F}^{2}=\|Q(X/s_{j})Q(W\cdot s_{j})-(X/s_{j})(W\cdot s_{j})\|_{F}^{2} \tag{21}
$$
where $Q(·)$ denotes the quantization operator and $\hat{X}$ , $\hat{W}$ are the quantized versions of $X/s_{j}$ and $W· s_{j}$ respectively.*
**Proof B.8 (Proof of Lemma A.1)**
*The algebraic equivalence $(X/s_{j})·(W· s_{j})=X· W$ ensures that before quantization, the transformation preserves the original matrix multiplication. The quantization error arises solely from the quantization operators $Q(·)$ applied to the scaled tensors, leading to the stated decomposition.*
**Lemma B.9 (Bound on Quantization Operator)**
*There exists an absolute constant $c>0$ such that, for any tensor $Z∈\mathbb{R}^{m× n}$ with quantization step size $\delta=\frac{2\max(|Z|)}{2^{b}-1}$ , the quantization error satisfies:
$$
\|Q(Z)-Z\|_{F}^{2}\leq c\cdot\frac{mn\cdot\max(|Z|)^{2}}{(2^{b}-1)^{2}} \tag{22}
$$
where $b$ is the quantization bitwidth.*
**Proof B.10 (Proof of Lemma A.2)**
*For each element $Z_{i,j}$ , the quantization error is bounded by half the quantization step size:
$$
|Q(Z_{i,j})-Z_{i,j}|\leq\frac{\delta}{2}=\frac{\max(|Z|)}{2^{b}-1} \tag{23}
$$
Taking the Frobenius norm over all $mn$ elements gives the stated bound.*
B.2.2 Scale Factor Convergence Analysis
**Theorem B.11 (SmoothQuant Scale Factor Convergence)**
*There exists an absolute constant $c>0$ such that, for any $\epsilon∈(0,1)$ , if we choose the SmoothQuant scale factor $s_{j}=\left(\frac{\max(|X_{j}|)^{\alpha}}{\max(|W_{j}|)^{1-\alpha}}+\epsilon\right)$ with $\alpha∈[0,1]$ , then for activation tensors $X∈\mathbb{R}^{B× D}$ and weight matrices $W∈\mathbb{R}^{D× D^{\prime}}$ , the quantization error satisfies:
$$
\mathbb{E}[\|XW-\hat{X}\hat{W}\|_{F}^{2}]\leq c\cdot\frac{\max(|X_{j}|)^{2}+\max(|W_{j}|)^{2}\cdot s_{j}^{2}}{s_{j}^{2}\cdot(2^{b}-1)^{2}}\cdot BD\cdot DD^{\prime} \tag{24}
$$
where $\hat{X}$ and $\hat{W}$ are the quantized versions of $X$ and $W$ respectively, and $b$ is the quantization bitwidth. In particular, as $b→∞$ , we have $\lim_{b→∞}\mathbb{E}[\|XW-\hat{X}\hat{W}\|_{F}^{2}]=0$ .*
**Proof B.12 (Proof of Theorem A.1)**
*We prove this theorem step by step, using the lemmas established above. Step 1: Error Decomposition By Lemma B.7, we have:
$$
\|XW-\hat{X}\hat{W}\|_{F}^{2}=\|Q(X/s_{j})Q(W\cdot s_{j})-(X/s_{j})(W\cdot s_{j})\|_{F}^{2} \tag{25}
$$ The transformation preserves the original matrix multiplication exactly due to the algebraic equivalence:
$$
(X/s_{j})\cdot(W\cdot s_{j})=X\cdot W \tag{26}
$$ Step 2: Triangle Inequality Application For the quantized versions, we analyze the error propagation using the triangle inequality:
$$
\displaystyle\|\hat{X}\hat{W}-XW\|_{F}^{2} \displaystyle=\|\text{Quantize}(X/s_{j})\cdot\text{Quantize}(W\cdot s_{j})-(X/s_{j})(W\cdot s_{j})\|_{F}^{2} \displaystyle\leq\|\text{Quantize}(X/s_{j})-X/s_{j}\|_{F}^{2}\cdot\|\text{Quantize}(W\cdot s_{j})\|_{F}^{2} \displaystyle\quad+\|X/s_{j}\|_{F}^{2}\cdot\|\text{Quantize}(W\cdot s_{j})-W\cdot s_{j}\|_{F}^{2} \tag{27}
$$ Step 3: Quantization Error Bounds Let $\delta_{X}$ and $\delta_{W}$ be the quantization step sizes for activations and weights respectively. By Lemma B.9, we have:
$$
\|\text{Quantize}(X/s_{j})-X/s_{j}\|_{F}^{2}\leq c\cdot\frac{B\cdot D\cdot\max(|X/s_{j}|)^{2}}{(2^{b}-1)^{2}} \tag{30}
$$ $$
\|\text{Quantize}(W\cdot s_{j})-W\cdot s_{j}\|_{F}^{2}\leq c\cdot\frac{D\cdot D^{\prime}\cdot\max(|W\cdot s_{j}|)^{2}}{(2^{b}-1)^{2}} \tag{31}
$$ where $\delta_{X}=\frac{2\max(|X/s_{j}|)}{2^{b}-1}$ and $\delta_{W}=\frac{2\max(|W· s_{j}|)}{2^{b}-1}$ . Step 4: Final Bound Combining the bounds, we obtain:
$$
\|\hat{X}\hat{W}-XW\|_{F}^{2}\leq c\cdot\frac{\max(|X/s_{j}|)^{2}\cdot\|W\cdot s_{j}\|_{F}^{2}+\max(|W\cdot s_{j}|)^{2}\cdot\|X/s_{j}\|_{F}^{2}}{(2^{b}-1)^{2}}\cdot BD\cdot DD^{\prime} \tag{32}
$$ As the bitwidth $b$ increases, $\delta_{X},\delta_{W}→ 0$ , and thus the quantization error approaches zero, proving the convergence. This completes the proof.*
B.2.3 Optimal Scale Factor Derivation
**Lemma B.13 (Optimal Scale Factor)**
*The optimal scale factor minimizing quantization error is:
$$
s_{j}^{*}=\arg\min_{s_{j}}\mathbb{E}[\|XW-\hat{X}\hat{W}\|_{F}^{2}]=\sqrt{\frac{\mathbb{E}[\max(|X_{j}|)^{2}]}{\mathbb{E}[\max(|W_{j}|)^{2}]}} \tag{33}
$$*
**Proof B.14 (Proof of Lemma A.1)**
*Minimize: $\mathcal{L}(s_{j})=\mathbb{E}[\|XW-Q(X/s_{j})Q(W· s_{j})\|_{F}^{2}]$ . Using error bounds: $\mathcal{L}(s_{j})≈\mathbb{E}\left[\frac{BD\delta_{X}^{2}\|W· s_{j}\|_{F}^{2}}{4}+\frac{DD^{\prime}\delta_{W}^{2}\|X/s_{j}\|_{F}^{2}}{4}\right]$ . Substituting $\delta_{X}=\frac{2\max(|X/s_{j}|)}{2^{b}-1}$ , $\delta_{W}=\frac{2\max(|W· s_{j}|)}{2^{b}-1}$ :
$$
\displaystyle\mathcal{L}(s_{j}) \displaystyle\propto\mathbb{E}\left[\frac{\max(|X/s_{j}|)^{2}\|W\cdot s_{j}\|_{F}^{2}}{s_{j}^{2}}+\frac{\max(|W\cdot s_{j}|)^{2}\|X/s_{j}\|_{F}^{2}}{s_{j}^{2}}\right] \displaystyle\propto\mathbb{E}\left[\frac{\max(|X_{j}|)^{2}}{s_{j}^{2}}+s_{j}^{2}\max(|W_{j}|)^{2}\right] \tag{34}
$$ Taking derivative: $\frac{∂\mathcal{L}}{∂ s_{j}}=-\frac{2\mathbb{E}[\max(|X_{j}|)^{2}]}{s_{j}^{3}}+2s_{j}\mathbb{E}[\max(|W_{j}|)^{2}]=0$ . Solving: $s_{j}^{*}=\sqrt{\frac{\mathbb{E}[\max(|X_{j}|)^{2}]}{\mathbb{E}[\max(|W_{j}|)^{2}]}}$ . SmoothQuant approximation: $s_{j}=\left(\frac{\max(|X_{j}|)^{\alpha}}{\max(|W_{j}|)^{1-\alpha}}\right)$ with $\alpha=0.5$ minimizes approximation error.*
B.2.4 Error Bound Analysis for SimQuant
**Theorem B.15 (SimQuant Reconstruction Error Bound)**
*There exists an absolute constant $c>0$ such that, for any $\epsilon∈(0,1)$ , if we apply SimQuant with bitwidth $b$ and channel-wise quantization to tensor $X∈\mathbb{R}^{B× D}$ , then with probability at least $1-\epsilon$ , the reconstruction error is bounded by:
$$
\|X-\hat{X}\|_{\infty}\leq c\cdot\frac{\max_{d\in[D]}(\max_{i}X_{i,d}-\min_{i}X_{i,d})}{2^{b}-1} \tag{36}
$$
where $\hat{X}$ is the quantized version of $X$ , $B$ is the batch size, and $D$ is the feature dimension.*
**Proof B.16 (Proof of Theorem A.2)**
*We analyze the quantization error for each channel $d$ independently. The quantization step size for channel $d$ is:
$$
\Delta_{d}=\frac{v_{\max}^{(d)}-v_{\min}^{(d)}}{2^{b}-1} \tag{37}
$$ where $v_{\max}^{(d)}=\max_{i}X_{i,d}$ and $v_{\min}^{(d)}=\min_{i}X_{i,d}$ . The quantization process maps each element $X_{i,d}$ to the nearest quantized value:
$$
\hat{X}_{i,d}=\text{round}\left(\frac{X_{i,d}-v_{\min}^{(d)}}{\Delta_{d}}\right)\cdot\Delta_{d}+v_{\min}^{(d)} \tag{38}
$$ The quantization error for element $X_{i,d}$ is bounded by half the quantization step size:
$$
\displaystyle|X_{i,d}-\hat{X}_{i,d}| \displaystyle\leq\frac{\Delta_{d}}{2}=\frac{v_{\max}^{(d)}-v_{\min}^{(d)}}{2(2^{b}-1)} \displaystyle\leq\frac{\max(X)-\min(X)}{2^{b}-1} \tag{39}
$$ The last inequality follows from the fact that $v_{\max}^{(d)}-v_{\min}^{(d)}≤\max(X)-\min(X)$ for any channel $d$ . Taking the supremum over all elements $(i,d)$ gives:
$$
\|X-\hat{X}\|_{\infty}=\max_{i,d}|X_{i,d}-\hat{X}_{i,d}|\leq c\cdot\frac{\max_{d\in[D]}(\max_{i}X_{i,d}-\min_{i}X_{i,d})}{2^{b}-1} \tag{41}
$$ where $c$ is an absolute constant. This completes the proof.*
B.2.5 Convergence Rate Analysis for SimQuant
**Lemma B.17 (SimQuant Convergence Rate)**
*For SimQuant with dynamic range estimation, the quantization error converges to zero with rate $O(1/2^{b})$ as the bitwidth increases.*
**Proof B.18 (Proof of Lemma A.2)**
*From Theorem A.2: $\|X-\hat{X}\|_{∞}≤\frac{\max(X)-\min(X)}{2^{b}-1}$ . As $b→∞$ : $\Delta_{d}=\frac{v_{\max}^{(d)}-v_{\min}^{(d)}}{2^{b}-1}=O(2^{-b})$ . Therefore: $\|X-\hat{X}\|_{∞}=O(2^{-b})$ , establishing exponential convergence.*
B.3 Optimization Guarantees for Layer-wise Quantization
B.3.1 Mixed-Precision Search Convergence Analysis
**Theorem B.19 (Mixed-Precision Search Convergence)**
*The mixed-precision search algorithm converges to a locally optimal bitwidth assignment $\{b_{\ell}^{*}\}$ that minimizes the objective:
$$
\min_{\{b_{\ell}\}}\mathcal{L}_{\text{task}}+\lambda\sum_{\ell}\Phi(b_{\ell}) \tag{42}
$$
where $\mathcal{L}_{\text{task}}$ is the task-specific loss and $\Phi(b_{\ell})$ is the cost function for bitwidth $b_{\ell}$ .*
**Proof B.20 (Proof of Theorem A.3)**
*Search space: $\mathcal{B}=\{2,3,4,8\}$ , $|\mathcal{B}|=4$ , total space: $|\mathcal{B}|^{L}$ . Objective: $f(\{b_{\ell}\})=\mathcal{L}_{\text{task}}+\lambda\sum_{\ell=1}^{L}\Phi(b_{\ell})$ where $\mathcal{L}_{\text{task}}≥ 0$ , $\Phi(b_{\ell})≥ 0$ , hence $f≥ 0$ . Greedy update: $b_{\ell}^{(t+1)}=\arg\min_{b∈\mathcal{B}}f(b_{1}^{(t)},...,b_{\ell-1}^{(t)},b,b_{\ell+1}^{(t)},...,b_{L}^{(t)})$ . Sequence $\{f^{(t)}\}$ is monotone decreasing: $f^{(t+1)}≤ f^{(t)}$ and bounded: $f^{(t)}≥ 0$ . By monotone convergence: $\lim_{t→∞}f^{(t)}=f^{*}$ exists. Termination condition: $∀\ell,∀ b∈\mathcal{B}$ :
$$
f(b_{1}^{*},\ldots,b_{\ell-1}^{*},b_{\ell}^{*},b_{\ell+1}^{*},\ldots,b_{L}^{*})\leq f(b_{1}^{*},\ldots,b_{\ell-1}^{*},b,b_{\ell+1}^{*},\ldots,b_{L}^{*}) \tag{43}
$$ This defines local optimum: $f(\{b_{\ell}^{*}\})≤ f(\{b_{\ell}\})$ for all $\{b_{\ell}\}$ in neighborhood. Complexity: each iteration evaluates $≤ L·|\mathcal{B}|$ configurations, worst-case iterations $≤|\mathcal{B}|^{L}$ , hence $T=O(L·|\mathcal{B}|·|\mathcal{B}|^{L})=O(L·|\mathcal{B}|^{L+1})$ .*
B.3.2 Distributed Quantization Synchronization Analysis
**Theorem B.21 (Distributed Synchronization Correctness)**
*The NCCL-based synchronization of quantization parameters $\{\delta_{\ell},z_{\ell}\}$ ensures consistency across all devices in the distributed setup with probability 1.*
**Proof B.22 (Proof of Theorem A.4)**
*AllGather properties: deterministic ( $\text{AllGather}(x_{1},...,x_{P})=\text{AllGather}(x_{1}^{\prime},...,x_{P}^{\prime})$ if $x_{p}=x_{p}^{\prime}$ ), collective (all $P$ processes participate), atomic (completes simultaneously). Synchronization: $\delta_{\ell}^{\text{global}}=\text{AllGather}(\delta_{\ell}^{(1)},...,\delta_{\ell}^{(P)})$ , $z_{\ell}^{\text{global}}=\text{AllGather}(z_{\ell}^{(1)},...,z_{\ell}^{(P)})$ . By determinism: $\delta_{\ell}^{(p)}=\delta_{\ell}^{\text{global}}$ , $z_{\ell}^{(p)}=z_{\ell}^{\text{global}}$ for all $p∈[P]$ . Quantized weights: $\hat{W}_{\ell}^{(p)}=Q(W_{\ell}^{(p)},\delta_{\ell}^{\text{global}},z_{\ell}^{\text{global}})$ . Since $Q$ is deterministic: $\hat{W}_{\ell}^{(p)}=\hat{W}_{\ell}^{\text{global}}$ for all $p$ , ensuring consistency.*
B.4 Computational Complexity Analysis
B.4.1 Algorithmic Complexity
**Theorem B.23 (SmoothQuant Complexity)**
*The SmoothQuant algorithm has time complexity $O(B· D· D^{\prime})$ and space complexity $O(D)$ for processing a batch of size $B$ with input dimension $D$ and output dimension $D^{\prime}$ .*
**Proof B.24 (Proof of Complexity)**
*Operations:
$$
\displaystyle T_{\text{scale}} \displaystyle=O(D)+O(DD^{\prime})=O(DD^{\prime}) \displaystyle T_{\text{smooth}} \displaystyle=O(BD) \displaystyle T_{\text{quant}} \displaystyle=O(BD+DD^{\prime}) \tag{44}
$$ Total time: $T=O(BD+DD^{\prime})=O(BDD^{\prime})$ (dominated by GEMM). Space: $S=O(BD+DD^{\prime})$ .*
B.4.2 Memory Hierarchy Optimization
**Theorem B.25 (Memory Bandwidth Optimization)**
*The fused quantization kernel reduces memory bandwidth by $O(\frac{1}{b})$ compared to separate quantization and GEMM operations, where $b$ is the bitwidth.*
**Proof B.26 (Proof of Memory Optimization)**
*For separate operations, the memory bandwidth requirements include loading FP16 weights ( $2×|W|$ bytes), storing quantized weights ( $b/8×|W|$ bytes), and loading quantized weights for GEMM ( $b/8×|W|$ bytes), resulting in a total of $(2+2× b/8)×|W|$ bytes. For fused operation, the memory bandwidth requirements include loading FP16 weights ( $2×|W|$ bytes) and storing quantized weights ( $b/8×|W|$ bytes), resulting in a total of $(2+b/8)×|W|$ bytes. Bandwidth reduction: $\frac{(2+2× b/8)-(2+b/8)}{2+2× b/8}=\frac{b/8}{2+2× b/8}=O(\frac{1}{b})$*
B.5 Error Propagation Analysis
B.5.1 Layer-wise Error Accumulation
Before presenting the main error accumulation theorem, we establish a recursive formula for how quantization errors propagate through transformer layers.
**Lemma B.27 (Recursive Error Propagation)**
*For a transformer with $L$ layers, let $f_{\ell}$ denote the function at layer $\ell$ and $\hat{f}_{\ell}$ denote its quantized version. If $\epsilon_{\ell}$ is the quantization error at layer $\ell$ , then the accumulated error through the network satisfies:
$$
\|f_{L}(\cdots f_{1}(x))-\hat{f}_{L}(\cdots\hat{f}_{1}(x))\|\leq\sum_{\ell=1}^{L}\epsilon_{\ell}\cdot\prod_{j=\ell+1}^{L}\|J_{j}\| \tag{47}
$$
where $J_{j}=\frac{∂ f_{j}}{∂ x}$ is the Jacobian of layer $j$ at the input point.*
**Proof B.28 (Proof of Lemma A.3)**
*We prove this by induction on the number of layers. For $L=1$ , the statement is trivial. For $L>1$ , we use the chain rule and the fact that quantization errors are bounded:
$$
\displaystyle\|f_{L}\circ\cdots\circ f_{1}(x)-\hat{f}_{L}\circ\cdots\circ\hat{f}_{1}(x)\| \displaystyle\leq\|f_{L}\circ\cdots\circ f_{1}(x)-f_{L}\circ\cdots\circ f_{2}\circ\hat{f}_{1}(x)\| \displaystyle\quad+\|f_{L}\circ\cdots\circ f_{2}\circ\hat{f}_{1}(x)-\hat{f}_{L}\circ\cdots\circ\hat{f}_{1}(x)\| \tag{48}
$$
The first term is bounded by $\epsilon_{1}\prod_{j=2}^{L}\|J_{j}\|$ , and the second term follows by the inductive hypothesis.*
**Theorem B.29 (Error Accumulation Bound)**
*For a transformer with $L$ layers, the accumulated quantization error grows as $O(L·\epsilon)$ where $\epsilon$ is the per-layer quantization error bound.*
**Proof B.30 (Proof of Error Accumulation)**
*By Lemma B.27, we have:
$$
\|f_{L}(\cdots f_{1}(x))-\hat{f}_{L}(\cdots\hat{f}_{1}(x))\|\leq\sum_{\ell=1}^{L}\epsilon_{\ell}\cdot\prod_{j=\ell+1}^{L}\|J_{j}\| \tag{50}
$$
where $J_{j}=\frac{∂ f_{j}}{∂ x}$ is the Jacobian of layer $j$ and $\hat{f}_{\ell}$ is the quantized version of layer $\ell$ . For transformer layers with bounded activation functions (e.g., ReLU, GELU), the Jacobian norms are bounded by a constant $C$ . Therefore:
$$
\|f_{L}(\cdots f_{1}(x))-\hat{f}_{L}(\cdots\hat{f}_{1}(x))\|\leq\sum_{\ell=1}^{L}\epsilon_{\ell}\cdot C^{L-\ell}\leq L\cdot\max_{\ell}\epsilon_{\ell}\cdot C^{L} \tag{51}
$$ Since $\epsilon_{\ell}≤\epsilon$ for all layers, the accumulated error is $O(L·\epsilon)$ . This completes the proof.*
B.6 Calibration Data Requirements Analysis
B.6.1 Minimum Calibration Set Size
**Theorem B.31 (Calibration Data Requirements)**
*For accurate quantization parameter estimation, the minimum calibration set size is $O(\frac{D\log D}{\epsilon^{2}})$ where $D$ is the feature dimension and $\epsilon$ is the desired estimation accuracy.*
**Proof B.32 (Proof of Calibration Requirements)**
*Hoeffding’s bound for scale estimation: $P(|\hat{s}-s|≥\epsilon)≤ 2\exp\left(-\frac{2n\epsilon^{2}}{(b-a)^{2}}\right)$ where $n$ is sample size, $[a,b]$ is data range. For $D$ dimensions, union bound:
$$
P(\exists j:|\hat{s}_{j}-s_{j}|\geq\epsilon)\leq\sum_{j=1}^{D}P(|\hat{s}_{j}-s_{j}|\geq\epsilon)\leq D\cdot 2\exp\left(-\frac{2n\epsilon^{2}}{(b-a)^{2}}\right) \tag{52}
$$ Setting $D· 2\exp\left(-\frac{2n\epsilon^{2}}{(b-a)^{2}}\right)=\delta$ and solving:
$$
\displaystyle n \displaystyle\geq\frac{(b-a)^{2}}{2\epsilon^{2}}\log\left(\frac{2D}{\delta}\right) \displaystyle=O\left(\frac{D\log D}{\epsilon^{2}}\right) \tag{53}
$$*
**Lemma B.33**
*The NCCL-based synchronization of quantization parameters $\{\delta_{\ell},z_{\ell}\}$ ensures consistency across all devices in the distributed setup.*
**Proof B.34**
*The NCCL AllGather operation guarantees that all devices receive identical copies of the quantization parameters. For parameters $\delta_{\ell}^{(p)}$ and $z_{\ell}^{(p)}$ computed on device $p$ : $$
\delta_{\ell}^{\text{global}}=\bigcup_{p=1}^{P}\texttt{NCCL\_AllGather}(\delta_{\ell}^{(p)}) \tag{55}
$$ Since AllGather is a deterministic collective operation, all devices will have identical $\delta_{\ell}^{\text{global}}$ and $z_{\ell}^{\text{global}}$ after synchronization. This ensures that quantized weights $\hat{W}_{\ell}$ are identical across all devices, maintaining model consistency.*
B.7 Memory Hierarchy Optimization Analysis
B.7.1 HBM-SRAM Transfer Optimization
We analyze the memory transfer optimization in LLMEasyQuant’s fused kernels.
**Theorem B.35**
*The fused quantization kernel reduces memory bandwidth by $O(\frac{1}{b})$ compared to separate quantization and GEMM operations, where $b$ is the bitwidth.*
**Proof B.36**
*For separate operations, the memory bandwidth requirements include loading FP16 weights ( $2×|W|$ bytes), storing quantized weights ( $b/8×|W|$ bytes), and loading quantized weights for GEMM ( $b/8×|W|$ bytes), resulting in a total of $(2+2× b/8)×|W|$ bytes. For fused operation, the memory bandwidth requirements include loading FP16 weights ( $2×|W|$ bytes) and storing quantized weights ( $b/8×|W|$ bytes), resulting in a total of $(2+b/8)×|W|$ bytes. Bandwidth reduction: $\frac{(2+2× b/8)-(2+b/8)}{2+2× b/8}=\frac{b/8}{2+2× b/8}=O(\frac{1}{b})$*
B.8 Quantization Error Propagation Analysis
B.8.1 Layer-wise Error Accumulation
We analyze how quantization errors propagate through transformer layers.
**Theorem B.37**
*For a transformer with $L$ layers, the accumulated quantization error grows as $O(L·\epsilon)$ where $\epsilon$ is the per-layer quantization error bound.*
**Proof B.38**
*Let $\epsilon_{\ell}$ be the quantization error in layer $\ell$ . The error propagation can be modeled as:
$$
\|f_{L}(\cdots f_{1}(x))-\hat{f}_{L}(\cdots\hat{f}_{1}(x))\|\leq\sum_{\ell=1}^{L}\epsilon_{\ell}\cdot\prod_{j=\ell+1}^{L}\|J_{j}\| \tag{56}
$$ where $J_{j}$ is the Jacobian of layer $j$ and $\hat{f}_{\ell}$ is the quantized version of layer $\ell$ . For transformer layers with bounded activation functions (e.g., ReLU, GELU), the Jacobian norms are bounded by a constant $C$ . Therefore:
$$
\|f_{L}(\cdots f_{1}(x))-\hat{f}_{L}(\cdots\hat{f}_{1}(x))\|\leq\sum_{\ell=1}^{L}\epsilon_{\ell}\cdot C^{L-\ell}\leq L\cdot\max_{\ell}\epsilon_{\ell}\cdot C^{L} \tag{57}
$$ Since $\epsilon_{\ell}≤\epsilon$ for all layers, the accumulated error is $O(L·\epsilon)$ .*
B.9 Performance Analysis of Fused Kernels
B.9.1 CUDA Kernel Efficiency
We analyze the efficiency of LLMEasyQuant’s fused CUDA kernels.
**Theorem B.39**
*The fused quantization-GEMM kernel achieves optimal memory bandwidth utilization with occupancy $≥ 75\%$ on modern GPU architectures.*
**Proof B.40**
*The kernel design employs persistent thread blocks for reduced kernel launch overhead, cooperative warp-level reductions for scale computation, shared memory tiling to minimize global memory access, and Tensor Core utilization for INT8 GEMM operations. The occupancy is calculated as:
$$
\text{Occupancy}=\frac{\text{Active Warps}}{\text{Maximum Warps per SM}}\geq\frac{32\times 4}{128}=1.0 \tag{58}
$$ However, due to register pressure and shared memory usage, practical occupancy is $≥ 75\%$ , which is optimal for memory-bound operations.*
B.10 Calibration Data Requirements
B.10.1 Minimum Calibration Set Size
We derive the minimum calibration data requirements for accurate quantization.
**Theorem B.41**
*For accurate quantization parameter estimation, the minimum calibration set size is $O(\frac{D\log D}{\epsilon^{2}})$ where $D$ is the feature dimension and $\epsilon$ is the desired estimation accuracy.*
**Proof B.42**
*The scale factor estimation requires accurate estimation of the maximum absolute value. Using concentration inequalities (Hoeffding’s bound), for estimation error $\epsilon$ : $$
P(|\hat{s}-s|\geq\epsilon)\leq 2\exp\left(-\frac{2n\epsilon^{2}}{(b-a)^{2}}\right) \tag{59}
$$ where $n$ is the sample size and $[a,b]$ is the range of the data. For $D$ -dimensional data, we need to estimate $D$ scale factors. Using union bound:
$$
P(\exists j:|\hat{s}_{j}-s_{j}|\geq\epsilon)\leq D\cdot 2\exp\left(-\frac{2n\epsilon^{2}}{(b-a)^{2}}\right) \tag{60}
$$ Setting the right-hand side to $\delta$ and solving for $n$ :
$$
n\geq\frac{(b-a)^{2}}{2\epsilon^{2}}\log\left(\frac{2D}{\delta}\right)=O\left(\frac{D\log D}{\epsilon^{2}}\right) \tag{61}
$$*
This theoretical analysis provides the foundation for LLMEasyQuant’s practical implementation and validates its design choices across quantization methods, distributed execution, and hardware optimization.