# Fractal Patterns May Illuminate the Success of Next-Token Prediction
**Authors**:
- Ibrahim Alabdulmohsin (Google DeepMind, Zürich, Switzerland)
- Vinh Q. Tran (Google DeepMind, New York, USA)
- Mostafa Dehghani (Google DeepMind, Mountain View, USA)
> Corresponding author.
Abstract
We study the fractal structure of language, aiming to provide a precise formalism for quantifying properties that may have been previously suspected but not formally shown. We establish that language is: (1) self-similar, exhibiting complexities at all levels of granularity, with no particular characteristic context length, and (2) long-range dependent (LRD), with a Hurst parameter of approximately $\mathrm{H} = 0.70 \pm 0.09$. Based on these findings, we argue that short-term patterns/dependencies in language, such as in paragraphs, mirror the patterns/dependencies over larger scopes, like entire documents. This may shed some light on how next-token prediction can capture the structure of text across multiple levels of granularity, from words and clauses to broader contexts and intents. In addition, we carry out an extensive analysis across different domains and architectures, showing that fractal parameters are robust. Finally, we demonstrate that the tiny variations in fractal parameters seen across LLMs improve upon perplexity-based bits-per-byte (BPB) in predicting their downstream performance. We hope these findings offer a fresh perspective on language and the mechanisms underlying the success of LLMs.
1 Introduction
How does the training objective of predicting the next token in large language models (LLMs) yield remarkable capabilities? Consider, for instance, two models: Gemini [5] and GPT-4 [49]. These models have demonstrated capabilities that extend to quantitative reasoning, summarization, and even coding, which has led some researchers to ponder whether there is more to intelligence than “on-the-fly improvisation” [11]. While providing a satisfactory explanation is a difficult endeavor, a possible insight can be drawn from fractals and self-similarity. We elucidate the connection in this work.
Self-Similarity. Self-similar processes were introduced by Kolmogorov in 1940 [36]. The notion garnered considerable attention during the late 1960s, thanks to the extensive works of Mandelbrot and his peers [19]. Broadly speaking, an object is called “self-similar” if it is invariant across scales, meaning its statistical or geometric properties stay consistent irrespective of the magnification applied to it (see Figure 1). Nature and geometry furnish us with many such patterns, such as coastlines, snowflakes, the Cantor set, and the Koch curve. Self-similarity is often discussed in the context of “fractals,” another term popularized by Mandelbrot in his seminal book The Fractal Geometry of Nature [44]. However, the two concepts are different [26]; see Section 2.
In language in particular, several studies have argued for the presence of a self-similar structure. Nevertheless, due to computational constraints, it was not feasible to model the joint probability distribution of language holistically. As such, linguists often resorted to rudimentary approximations in their arguments, such as substituting a word with its frequency or length [9], or focusing on the recurrence of a specific, predetermined word [48, 3]. These studies fall short of fully capturing the structure of language because of the simplifying assumptions they make, as discussed in Section 4.
Highlighting the self-similar nature of a process can have profound implications. For instance, conventional Poisson models for Ethernet traffic were shown to fail because traffic was self-similar [16, 38, 50, 66]. In such cases, recognizing and quantifying self-similarity had practical applications, such as in the design of buffers [39]. Similarly, in language, we argue that self-similarity may offer a fresh perspective on the mechanisms underlying the success of LLMs. Consider the illustrative example shown in Figure 1, where the task is to predict the subsequent measurement in a time series, specifically predicting next tokens in a Wikipedia article (see Section 2 for details). The three plots in Figure 1 (left) represent different manifestations of the same process observed across three distinct time scales. Notably, we observe rich, self-similar details, such as burstiness, in all of them. A well-established approach for quantifying self-similarity is the Hölder exponent [64], which we denote by $\mathrm{S}$. In language, we find it to be $\mathrm{S} = 0.59 \pm 0.08$, confirming statistical self-similarity.
Figure 1: Manifestations of processes across different time scales. A region marked in red corresponds to the magnified plot beneath it. Left: The process exhibits self-similarity with rich details at all levels of granularity. It is an integral process $(X_t)_{t \in \mathbb{N}}$ calculated from Wikipedia (see Section 2). Right: Example of a process that is not self-similar, looking smoother at larger time scales.
Why is this important? We hypothesize that since LLMs are trained to predict the future of a self-similar process, they develop proficiency in capturing patterns across multiple levels of granularity for two interconnected reasons. First, self-similarity implies that the patterns at the level of a paragraph are reflective of the patterns seen at the level of a whole text, which is reminiscent of the recursive structure of language [52]. Thus, recognizing short-term patterns can aid in learning broader contexts. Second, because language displays intricate patterns at all levels of granularity, it would not be enough to rely only on the immediate context of a sentence to predict the next token. Instead, the model needs to identify patterns at higher levels of granularity; e.g., follow the direction of the argument and the broader intent. It must balance short- and long-term contexts. Willinger et al. [65] and Altmann et al. [3] argue for self-similarity in language due to this hierarchical nature.
Long-range dependence. However, self-similarity alone is not sufficient for a predictive model to exhibit anything resembling “intelligent” behavior. In fact, some self-similar processes, despite their intricate details, remain entirely unpredictable. A quintessential example is simple Brownian motion, a Wiener process with independent increments. Its discrete analog is $B_{n}=\sum_{i=1}^{n}\varepsilon_{i}$, where $\varepsilon_{i}\sim\mathcal{N}(0,\sigma^{2})$. Although $B_n$ possesses rich details at all granularities, a model trained to predict it cannot learn anything useful from data, since the process has independent increments.
Thus, for strong capabilities to emerge, the process must also have some degree of predictability or dependence. One classical metric for quantifying predictability in a stochastic process is the Hurst parameter [31], developed by the hydrologist H. E. Hurst in 1951 while studying the Nile River. It is generally considered a robust metric [65], unlike the wavelet estimator [1] and the periodogram method [24], which can be sensitive to errors [53]. As discussed in Section 2.3, we find the Hurst parameter in language to be $\mathrm{H} = 0.70 \pm 0.09$. For context, $\mathrm{H}$ only takes values in $[0,1]$. A value $\mathrm{H}>0.5$ implies predictability in the data, while $\mathrm{H}=0.5$ indicates random increments.
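Section 2.3 details how $\mathrm{H}$ is estimated for language. As a generic illustration only, the classical rescaled-range (R/S) estimator below (a textbook method, not necessarily the paper's exact procedure) recovers $\mathrm{H}$ close to $0.5$ for iid increments, matching the "random increments" case above:

```python
import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    """Estimate the Hurst parameter of an increment series x via
    rescaled-range (R/S) analysis: fit log E[R/S] ~ H * log n."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):  # non-overlapping windows
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())            # centered integral process
            r = z.max() - z.min()                  # range of the cumulative sum
            s = w.std()                            # scale (window std)
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    # Slope of the log-log fit is the Hurst estimate.
    h, _ = np.polyfit(log_n, log_rs, 1)
    return h

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)   # iid increments: estimate near H = 0.5
print(hurst_rs(noise))
```

Small-sample R/S estimates on white noise are known to be biased slightly above 0.5, which is one reason robustness considerations matter when estimating $\mathrm{H}$.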
While it is compelling that our estimate of $\mathrm{H}$ in language lies nearly midway between predictability ($\mathrm{H}=1$) and noise ($\mathrm{H}=0.5$), a Hurst parameter of about $0.75$ turns out to occur commonly in nature, including in river discharges, Ethernet traffic, temperatures, precipitation, and tree rings [16, 21, 8]. For agents that learn from data, such as LLMs, this value is also reminiscent of processing-based theories of curiosity, which suggest that a sweet spot of complexity exists (not too simple, nor too unpredictable) that facilitates or accelerates learning [34].
Importantly, predictability and self-similarity together imply long-range dependence (LRD). This follows from the definition of self-similarity, where the patterns at small scales mirror those at larger scales; the correlations established at micro levels, for example, remain pertinent at macro levels. LRD is arguably crucial for enhancing the functionality of predictive models because processes with only short-range dependence could be forecasted (somewhat trivially) with lookup tables that provide the likelihood of transitions over brief sequences. By contrast, this is not possible in LRD processes whose contexts extend indefinitely into the past.
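The lookup-table remark can be made concrete with a toy example (not from the paper): for a source with only first-order Markov dependence, an optimal next-symbol predictor reduces to a table of transition counts over single-symbol contexts, and any earlier context is irrelevant.

```python
from collections import Counter, defaultdict

def fit_bigram_table(seq):
    """Count symbol transitions a -> b over a sequence; for a first-order
    Markov source, this lookup table is all a predictor needs."""
    table = defaultdict(Counter)
    for a, b in zip(seq, seq[1:]):
        table[a][b] += 1
    return table

def predict_next(table, context):
    """Predict the most frequent successor of the LAST symbol only;
    under short-range dependence, the rest of the context is ignored."""
    counts = table.get(context[-1])
    return counts.most_common(1)[0][0] if counts else None

seq = "abababababcababab"
table = fit_bigram_table(seq)
print(predict_next(table, "xxa"))  # 'a' is always followed by 'b' in seq
```

An LRD process defeats this scheme precisely because no finite context length suffices to summarize its past.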
Information-Theoretic Complexity. To define fractal parameters, we follow recent works such as [28, 22, 40, 46, 25] in adopting an information-theoretic characterization of the complexity in language using minimal-length codes or surprise. This corresponds to an intrinsic, irreducible description of language and the minimum compute overhead to comprehend/decode it [22], which also correlates well with actual reading times [28, 40]. In this context, self-similarity means that the intrinsic complexity or surprise in language (measured in bits) cannot be smoothed out, even as we look into broader narratives. That is, surprising paragraphs will follow predictable paragraphs, in a manner that is statistically similar to how surprising sentences follow predictable sentences.
Analysis. How robust are these findings? To answer this question, we carry out an extensive empirical analysis across various model architectures and scales, ranging from 1B to over 500B parameters. We find that fractal parameters are quite robust to the choice of architecture.
However, tiny variations exist across LLMs. Interestingly, we demonstrate that, from a practical standpoint, these differences help predict downstream performance in LLMs better than perplexity-based metrics alone, such as bits-per-byte (BPB). Specifically, we introduce a new metric and show that using it to predict downstream performance increases the adjusted $R^{2}$ from approximately $0.65$, when using BPB alone, to over $0.86$. We release the code for calculating fractal parameters at: https://github.com/google-research/google-research/tree/master/fractals_language .
Statement of Contribution. In summary, we:
1. highlight how the fractal structure of language can offer a new perspective on the capabilities of LLMs, and provide a formalism to quantify properties, such as long-range dependence.
2. establish that language is self-similar and long-range dependent. We provide concrete estimates in language of three parameters: the self-similarity (Hölder) exponent, the Hurst parameter, and the fractal dimension. We also estimate the related Joseph exponent.
3. carry out a comparative study across different model architectures and scales, and different domains, such as ArXiv and GitHub, demonstrating that fractal parameters are robust.
4. connect fractal patterns with learning. Notably, we show that a “median” Hurst exponent improves upon perplexity-based bits-per-byte (BPB) in predicting downstream performance.
2 Fractal Structure of Language
2.1 Preliminaries
Suppose we have a discrete-time, stationary stochastic process $(x_t)_{t \in \mathbb{N}}$, with $\mathbb{E}[x_t]=0$ and $\mathbb{E}[x_t^2]=1$. We will refer to $(x_t)_{t \in \mathbb{N}}$ as the increment process to distinguish it from the integral process $(X_t)_{t \in \mathbb{N}}$ defined by $X_t=\sum_{k=0}^{t}x_k$. While $(x_t)_{t \in \mathbb{N}}$ and $(X_t)_{t \in \mathbb{N}}$ are merely different representations of the same data, it is useful to keep both representations in mind. For example, self-similarity is typically studied in the context of integral processes, whereas LRD is defined on increment processes.
In the literature, it is not uncommon to mistakenly equate parameters that are generally different. For example, the Hurst parameter $\mathrm{H}$ has had many inequivalent definitions in the past, and Mandelbrot himself cautioned against conflating them [43]. The reason is that different parameters coincide for the idealized fractional Brownian motion, leading some researchers to equate them in general [64]. We will keep the self-similarity exponent $\mathrm{S}$ and $\mathrm{H}$ separate in our discussion.
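As a minimal sketch of the two representations above, assuming the raw series is a list of per-token code lengths, the normalization to a zero-mean, unit-variance increment process and the corresponding integral process can be written as:

```python
import numpy as np

def to_increment_and_integral(z):
    """Normalize a raw series z to a zero-mean, unit-variance increment
    process (x_t), then form the integral process X_t = sum_{k<=t} x_k."""
    z = np.asarray(z, dtype=float)
    x = (z - z.mean()) / z.std()   # increment process: E[x_t]=0, E[x_t^2]=1
    X = np.cumsum(x)               # integral process
    return x, X

# Illustrative raw values (e.g., bits assigned to successive tokens).
z = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
x, X = to_increment_and_integral(z)
print(round(x.mean(), 6), round(x.std(), 6))  # prints 0.0 1.0
```

Self-similarity is then assessed on $(X_t)$, while LRD is assessed on $(x_t)$, mirroring the division of labor described above.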
(Figure 2 comprises eight log-log panels, one per domain: OpenWebText2, GitHub, FreeLaw, Pile-CC, Wikipedia (en), PubMed Central, DM Mathematics, and ArXiv. Each panel plots $p_{\epsilon}(\tau)$ (labeled $\rho$) against the granularity $\tau$, with the data points closely following a fitted power-law trend line.)
Figure 2: Peak probability $p_{\epsilon}(\tau)$ is plotted against the granularity level $\tau$ (see Section 2.2). We observe power laws $p_{\epsilon}(\tau)\sim\tau^{-\mathrm{S}}$ , indicating self-similarity, with a median exponent of $\mathrm{S}=0.59± 0.08$ .
Experimental Setup.
In order to establish self-similarity and LRD in language, we convert texts into sequences of bits using a large language model (LLM). Specifically, we use PaLM2-L (Unicorn) [6] to calculate the probability of the next token $w_{t}$ conditioned on its entire prefix $w_{[t-1]}=(w_{0},w_{1},...,w_{t-1})$ . As discussed in Section 1, this captures its intrinsic, irreducible description [22]. By the chain rule [15], the corresponding number of bits assigned to $w_{t}$ is $z_{t}=-\log p(w_{t}|w_{[t-1]})$ . Unlike prior works, which rely on simplifications such as substituting a word with its length [9] or focusing on the recurrence of a single word [48, 3], we use the LLM to approximate the full joint distribution of language, since LLMs are known to produce calibrated probability scores at the token level [33]. We carry out these calculations for prefixes of up to 2048 tokens ( $≈ 8$ pages of text). With a suitable normalization, such as bits-per-byte (BPB), one obtains a standardized description of text that is consistent across tokenizers. BPB is widely used as a tokenizer-agnostic metric for comparing language-modeling performance, e.g., on The Pile [23].
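The conversion from model scores to bits can be sketched as follows. This is a minimal illustrative sketch, not the paper's code; the function names and example log-probabilities are our own assumptions, and the model's per-token log-probabilities are taken as given:

```python
import numpy as np

def bits_per_token(token_logprobs):
    """Convert next-token log-probabilities (natural log) into bits:
    z_t = -log2 p(w_t | w_[t-1]), via the chain rule."""
    return -np.asarray(token_logprobs, dtype=float) / np.log(2.0)

def bits_per_byte(token_logprobs, num_utf8_bytes):
    """Total bits divided by the UTF-8 byte length of the text:
    a tokenizer-agnostic normalization (BPB)."""
    return float(bits_per_token(token_logprobs).sum()) / num_utf8_bytes
```

For instance, a token assigned probability $1/2$ costs exactly one bit, and eight such tokens spread over four bytes of text give a BPB of 2.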
*[Figure 3 panels: log-log plots of the rescaled range $R/S$ against $n$ for OpenWeb, Github, FreeLaw, PileCC, Wiki, PubMed, DMMath, and ArXiv; in each, blue data points align with a red power-law fit.]*
Figure 3: Rescaled range $R(n)/S(n)$ is plotted against the number of normalized bits $n$ . We observe a power law $R(n)/S(n)\sim n^{\mathrm{H}}$ in all domains. When aggregating all datasets, $\mathrm{H}=0.70± 0.09$ .
Besides PaLM2, we also experiment with and report on various model sizes of PaLM [12] and decoder-only T5 [54]. Namely, we report results for the following models: PaLM2 XXS (Gecko), XS (Otter), S (Bison), M, and L (Unicorn); PaLM 8B, 62B, and 540B; and decoder-only T5.1.1 at the Base (110M), Large (341M), XL (1.2B), and XXL (5B) sizes. For PaLM and PaLM2, we use the checkpoints pretrained by Chowdhery et al. [12] and Anil et al. [6]. All T5.1.1 decoder baselines, on the other hand, are trained with a causal language modeling objective on 262B tokens of C4 [54]. All experiments are executed on Tensor Processing Units (TPUs). More details on how we train the T5.1.1 baselines are provided in Appendix A.
Once $z_{t}$ is computed for a document, we follow standard definitions in constructing the increment process $(x_{t})_{t∈\mathbb{N}}$ by normalizing $z_{t}$ to have a zero-mean and unit variance. Intuitively, fractal parameters are intended to measure a fundamental property of the process (e.g. LRD) that should not be affected by scale, hence the normalization. The integral process $(X_{t})_{t∈\mathbb{N}}$ is calculated based on $(x_{t})_{t∈\mathbb{N}}$ , as described earlier and depicted in Figure 1 (top). Normalizing bits (to have zero mean and unit variance) models language as a random walk. It is a standard approach used extensively in the literature in various contexts, such as in DNA sequences [51, 56, 47, 35, 58].
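The construction of the increment and integral processes can be sketched as follows (an illustrative sketch; the function name is ours):

```python
import numpy as np

def increment_and_integral(z):
    """Given per-token bits z_t for a document, build the increment
    process x_t (z_t normalized to zero mean and unit variance) and
    the integral process X_t (the running sum of x_t)."""
    z = np.asarray(z, dtype=float)
    x = (z - z.mean()) / z.std()
    X = np.cumsum(x)
    return x, X
```

By construction the increments sum to zero, so the integral process always returns to the origin at the end of the document; it is the fluctuations of $X_t$ in between that carry the fractal structure.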
For analysis, we use The Pile validation split [23], consisting of 22 subdomains such as Wikipedia and GitHub. We restrict the analysis to sufficiently long documents ( $>4$ K tokens) and use only the first 2K tokens, to sidestep potential effects of finite document length and of the model's context size. To mitigate noise, we compare only domains with $>1$ K documents; we report results for them separately, along with their median. We use bootstrapping [17] to estimate the error margin.
Notation. We write $f(x)\sim x^{c}$ if $f(x)=x^{c}L(x)$ for some slowly varying function $L$ , i.e. one satisfying $L(tx)/L(x)→ 1$ as $x→∞$ for all $t>0$ . Examples of slowly varying functions are the constants $L(x)=c$ and $L(x)=\log x$ . When $f(x)\sim x^{c}$ , we abuse terminology slightly by referring to $f(x)$ as a power law.
Figure 4: left: Estimates of the self-similarity exponent $\mathrm{S}$ are generally robust to the choice of $\epsilon$ . right: The partial auto-correlation function calculated across domains. DM Mathematics has a much shorter dependence compared to the rest of the domains, in agreement with its Hurst parameter.
2.2 Self-similarity exponent
An integral process is said to be self-similar if it exhibits statistical self-similarity. More precisely, $(X_{t})_{t∈\mathbb{N}}$ is self-similar if $(X_{\tau t})_{t∈\mathbb{N}}$ is distributionally equivalent to $(\tau^{S}X_{t})_{t∈\mathbb{N}}$ for some exponent $\mathrm{S}$ . Thus, scaling of time is equivalent to an appropriate scaling of space. We will refer to $\tau$ as the granularity level and to the exponent $\mathrm{S}$ as the self-similarity or Hölder exponent [64]. Many time series in nature exhibit self-similar structures, such as human blood pressure and heart rate [27].
One approach for calculating $\mathrm{S}$ is as follows. Fix $\epsilon\ll 1$ and denote the $\tau$ -increments by $(X_{t+\tau}-X_{t})_{t∈\mathbb{N}}$ . These would correspond, for instance, to the number of bits used for clauses, sentences, paragraphs and longer texts as $\tau$ increases. In terms of the increment process $(x_{t})_{t∈\mathbb{N}}$ , this corresponds to aggregating increments into “bursts”. Let $p_{\epsilon}(\tau)$ be the probability mass of the event $\{|X_{t+\tau}-X_{t}|≤\epsilon\}_{t∈\mathbb{N}}$ . Then, $\mathrm{S}$ can be estimated by fitting a power law relation $p_{\epsilon}(\tau)\sim\tau^{-\mathrm{S}}$ [64]. Generally, $\mathrm{S}$ is robust to the choice of $\epsilon∈[10^{-3},10^{-2}]$ , as shown in Figure 4 (left), so we fix it to $\epsilon=5× 10^{-3}$ .
Figure 2 plots the probability $p_{\epsilon}(\tau)$ against $\tau$ using PaLM2-L. We indeed observe a power law relation; i.e. linear in a log-log scale, with a median self-similarity exponent of $\mathrm{S}=0.59± 0.08$ . Section 3 shows that the median $\mathrm{S}$ is robust to the choice of the LLM.
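The estimator for $\mathrm{S}$ can be sketched as follows. This is a hedged sketch, not the paper's implementation: the use of overlapping increments, `np.polyfit` for the log-log fit, and the parameter choices are our own illustrative assumptions. On a simple Brownian motion, the estimate should come out near $\mathrm{S}=1/2$ :

```python
import numpy as np

def self_similarity_exponent(X, taus, eps):
    """Estimate S from p_eps(tau) ~ tau^{-S}: for each granularity tau,
    compute the fraction of tau-increments with |X_{t+tau} - X_t| <= eps,
    then fit a straight line in log-log space."""
    p = [np.mean(np.abs(X[tau:] - X[:-tau]) <= eps) for tau in taus]
    slope, _ = np.polyfit(np.log(taus), np.log(p), 1)
    return -slope

# Sanity check on Brownian motion, where S = 1/2.
rng = np.random.default_rng(0)
X = np.cumsum(rng.standard_normal(200_000))
S = self_similarity_exponent(X, taus=[16, 32, 64, 128, 256], eps=0.5)
```

Note that `eps` here is chosen relative to the synthetic process's scale; for the normalized bit sequences in the paper, $\epsilon=5\times 10^{-3}$ is used instead.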
2.3 Hurst parameter
The Hurst parameter $\mathrm{H}∈[0,1]$ quantifies the degree of predictability or dependence over time [31]. It is calculated using rescaled-range (R/S) analysis. Let $(x_{t})_{t∈\mathbb{N}}$ be an increment process. For each $n∈\mathbb{N}$ , write $y_{t}=x_{t}-\frac{1}{t}\sum_{k=0}^{t}x_{k}$ and $Y_{t}=\sum_{k=0}^{t}y_{k}$ . The range and scale are defined, respectively, as $R(n)=\max_{t≤ n}Y_{t}-\min_{t≤ n}Y_{t}$ and $S(n)=\sigma\left(\{x_{k}\}_{k≤ n}\right)$ , where $\sigma$ denotes the standard deviation. The Hurst parameter $\mathrm{H}$ is then estimated by fitting a power law relation $R(n)/S(n)\sim n^{\mathrm{H}}$ . As stated earlier, for completely random processes, such as a simple Brownian motion, it can be shown that $\mathrm{H}=1/2$ , whereas $\mathrm{H}>1/2$ implies dependence over time [16, 65, 8].
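R/S analysis can be sketched as follows (a minimal sketch with illustrative parameter choices; averaging over non-overlapping windows and using `np.polyfit` for the log-log fit are our assumptions). For white noise, the fitted exponent should be near $\mathrm{H}=1/2$ :

```python
import numpy as np

def rescaled_range(x, n):
    """Average R(n)/S(n) over non-overlapping windows of length n."""
    ratios = []
    for start in range(0, len(x) - n + 1, n):
        w = x[start:start + n]
        Y = np.cumsum(w - w.mean())   # mean-adjusted partial sums
        R = Y.max() - Y.min()         # range
        S = w.std()                   # scale
        if S > 0:
            ratios.append(R / S)
    return float(np.mean(ratios))

def hurst_exponent(x, ns):
    """Estimate H by fitting R(n)/S(n) ~ n^H in log-log space."""
    rs = [rescaled_range(x, n) for n in ns]
    H, _ = np.polyfit(np.log(ns), np.log(rs), 1)
    return H

rng = np.random.default_rng(0)
H = hurst_exponent(rng.standard_normal(50_000), ns=[32, 64, 128, 256, 512])
```

The R/S estimator is known to be biased slightly upward at small window sizes, which is why fits are taken over a range of $n$ .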
Writing $\rho_{n}=\mathbb{E}[x_{t+n}x_{t}]$ for the autocovariance function of the increment process $(x_{t})_{t∈\mathbb{N}}$ , the Hurst parameter satisfies $\mathrm{H}=1-\beta/2$ when $\rho_{n}\sim n^{-\beta}$ as $n→∞$ [26, 16]. Hence, in self-similar processes, $\mathrm{H}>1/2$ implies long-range dependence (LRD), which is equivalent to the condition that the autocovariances are not summable. In terms of the integral process, it can be shown that [57]: $\lim_{n→∞}\frac{\mathrm{Var}(X_{n})}{n}=1+2\sum_{i=1}^{∞}\rho_{i}$ . Hence, if $\mathrm{H}<1/2$ , the autocovariances are summable and $\mathrm{Var}(X_{n})$ grows at most linearly in $n$ . If, on the other hand, the process has LRD, $\mathrm{Var}(X_{n})$ grows superlinearly in $n$ . In particular, using the Euler–Maclaurin summation formula [7, 2], one obtains $\mathrm{Var}(X_{n})\sim n^{2\mathrm{H}}$ when $\mathrm{H}>1/2$ . Figure 3 plots the rescaled range $R(n)/S(n)$ against $n$ . We observe a power law relation with a median Hurst parameter of $\mathrm{H}=0.70± 0.09$ .
Table 1: A comparison of the fractal parameters across 8 different domains with $>1000$ documents each in The Pile benchmark (see Section 2.1 for selection criteria). DM-Mathematics is markedly different because each document consists of questions, with no LRD.
| | OpenWebText2 | GitHub | FreeLaw | Pile-CC | Wikipedia (en) | PubMed Central | DM-Mathematics | ArXiv |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $\mathrm{S}$ | $0.53±.05$ | $0.60±.05$ | $0.61±.05$ | $0.56±.03$ | $0.62±.02$ | $0.60±.07$ | $0.42±.03$ | $0.70±.03$ |
| $\mathrm{H}$ | $0.68±.01$ | $0.79±.01$ | $0.68±.00$ | $0.70±.00$ | $0.74±.01$ | $0.65±.00$ | $0.50±.01$ | $0.72±.01$ |
| $\mathrm{J}$ | $0.46±.01$ | $0.49±.00$ | $0.49±.00$ | $0.50±.00$ | $0.52±.00$ | $0.44±.00$ | $0.28±.00$ | $0.49±.00$ |
2.4 Fractal dimension
Broadly speaking, the fractal dimension of an object describes its local complexity. For a geometric object $Z$ , such as the Koch curve, let $\tau$ be a chosen scale (e.g. a short ruler for measuring lengths or a small square for areas). Let $N(\tau)$ be the minimum number of objects of scale $\tau$ that cover $Z$ ; i.e. contain it entirely. Then, the fractal dimension of $Z$ , also called its Hausdorff dimension, is: $\mathrm{D}=-\lim_{\tau→ 0}\left\{\frac{\log N(\tau)}{\log\tau}\right\}$ [53]. For example, a line has a fractal dimension $1$ , in agreement with its topological dimension, because $N(\tau)=C/\tau$ for some constant $C>0$ .
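The covering definition can be checked numerically with a box-counting sketch (illustrative code, not the paper's; for a densely sampled unit interval the estimate should be close to its topological dimension of 1):

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate the fractal dimension of a set of 1-D points: count the
    boxes of size tau that the set occupies, N(tau), then fit
    log N(tau) against -log tau."""
    counts = [len(np.unique(np.floor(np.asarray(points) / tau)))
              for tau in scales]
    slope, _ = np.polyfit(-np.log(scales), np.log(counts), 1)
    return slope

# A line segment sampled densely: D should come out close to 1.
D = box_counting_dimension(np.linspace(0.0, 1.0, 10_001),
                           scales=[0.1, 0.05, 0.02, 0.01])
```

Here $N(\tau)\approx 1/\tau$ for the unit interval, so the fitted slope recovers $\mathrm{D}\approx 1$ , matching the limit definition above.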
By convention, an object is referred to as “fractal” if $\mathrm{D}$ is different from its topological dimension. For example, the fractal dimension of the Koch curve is about 1.26 when its topological dimension is 1. Fractals explain some puzzling observations, such as why estimates of the length of the coast of Britain varied significantly from one study to another, because lengths in fractals are scale-sensitive. Mandelbrot estimated the fractal dimension of the coast of Britain to be 1.25 [42].
The definition above for the fractal dimension $\mathrm{D}$ applies to geometric shapes, but an analogous definition exists for stochastic processes. Let $(x_{t})_{t∈\mathbb{R}}$ be a stationary process with autocovariance function $\rho_{n}$ . Its fractal dimension is determined by the local behavior of $\rho_{n}$ in the vicinity of $n=0$ : after normalizing $(x_{t})_{t∈\mathbb{R}}$ to have zero mean and unit variance, one models $\rho_{n}$ by $\rho_{n}\sim 1-n^{\alpha}$ as $n→ 0^{+}$ , for some $\alpha∈(0,2]$ . The fractal dimension $\mathrm{D}∈[1,\,2]$ of $(x_{t})_{t∈\mathbb{R}}$ is then defined by $\mathrm{D}=2-\alpha/2$ [26]. It can be shown that $\mathrm{D}=2-\mathrm{S}$ [26]. For language, this gives a median fractal dimension of $\mathrm{D}=1.41± 0.08$ .
2.5 Joseph effect
Finally, we examine another related parameter that is commonly studied in self-similar processes. The motivation behind it comes from the fact that in processes with LRD, one often observes burstiness as shown in Figure 1; i.e. clusters over time in which the process fully resides on one side of the mean, before switching to the other. This is quite unlike random noise, for instance, where measurements are evenly distributed on both sides of the mean. The effect is often referred to as the Joseph effect, named after the biblical story of the seven fat years and seven lean years [65, 45, 64].
A common way to quantify the Joseph effect for integral processes $(X_{t})_{t∈\mathbb{N}}$ is as follows [64]. First, let $\sigma_{\tau}$ be the standard deviation of the $\tau$-increments $X_{t+\tau}-X_{t}$. Then, fit a power-law relation $\sigma_{\tau}\sim\tau^{\mathrm{J}}$; the exponent $\mathrm{J}$ is called the Joseph exponent. In an idealized fractional Brownian motion, the Joseph exponent $\mathrm{J}$ and the self-similarity exponent $\mathrm{S}$ coincide. Figure 5 provides the detailed empirical results. Overall, we find that $\mathrm{J}=0.49± 0.08$.
*[Figure 5 panels (images unavailable): Joseph-exponent power-law fits for OpenWebText2, Github, FreeLaw, Pile-CC, Wikipedia-en, PubMed Central, DM-Mathematics, and ArXiv.]*
Figure 5: The standard deviation $\sigma$ of the $\tau$-increments $X_{t+\tau}-X_{t}$ is plotted against the scale $\tau$. We again observe a power-law relation $\sigma\sim\tau^{\mathrm{J}}$, with a Joseph exponent $\mathrm{J}=0.49± 0.08$.
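The fitting procedure behind Figure 5 is easy to reproduce on synthetic data. The sketch below is our illustration, not the paper's code: it estimates $\mathrm{J}$ for an ordinary random walk, whose independent increments give $\sigma_{\tau}=\sqrt{\tau}$ and hence the idealized Brownian value $\mathrm{J}≈ 0.5$.

```python
import numpy as np

def joseph_exponent(X, taus):
    """Fit sigma_tau ~ tau^J, where sigma_tau is the std of tau-increments."""
    sigmas = np.array([np.std(X[tau:] - X[:-tau]) for tau in taus])
    J, _ = np.polyfit(np.log(taus), np.log(sigmas), 1)
    return J

rng = np.random.default_rng(0)
# Integral process of i.i.d. increments: an ordinary random walk.
X = np.cumsum(rng.standard_normal(500_000))
taus = np.array([2 ** k for k in range(1, 10)])   # scales 2 .. 512
j = joseph_exponent(X, taus)
print(f"J = {j:.2f}")
```

A process with LRD ($\mathrm{H}>0.5$) would instead show $\sigma_{\tau}$ growing faster than $\sqrt{\tau}$, i.e. a fitted $\mathrm{J}>0.5$.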
3 Analysis
Comparative Analysis.
Table 1 compares fractal parameters across different domains, such as ArXiv, Github, and Wikipedia. In general, most domains share similar self-similarity and Hurst exponents, with a few exceptions. The first notable exception is DM-Mathematics, which has a Hurst parameter of about 0.5, indicating a lack of LRD. Upon closer inspection, however, a value of $\mathrm{H}=0.5$ is not surprising for DM-Mathematics because its documents consist of independent mathematical questions, as shown in Figure 6. In Figure 4 (right), we plot the partial autocorrelation function for each of the 8 domains against time lag (context length). Indeed, we see that DM-Mathematics shows markedly less dependence than the other domains. The second notable observation is the relatively large value of $\mathrm{H}=0.79$ in GitHub, indicating more structure in code. This agrees with earlier findings by Kokol and Podgorelec, [35], who estimated LRD in computer languages to be greater than in natural language. In Table 2, we compare the three fractal parameters $\mathrm{S}$, $\mathrm{H}$, and $\mathrm{J}$ across different LLM families and model sizes. Overall, we observe that the parameters are generally robust to the choice of architecture.
Table 2: A comparison of the estimated median fractal parameters by various LLMs over the entire Pile validation split. Estimates are generally robust to the choice of the LLM, but the tiny variations in median $\mathrm{H}$ reflect improvements in the model quality. See Section 3.
|  | 110M | 340M | 1B | 5B | 8B | 62B | 540B | XXS | XS | S | M | L |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Self-similarity exponent $\mathrm{S}$ | $.58^{±.06}$ | $.60^{±.06}$ | $.60^{±.05}$ | $.58^{±.08}$ | $.60^{±.07}$ | $.62^{±.08}$ | $.64^{±.08}$ | $.59^{±.06}$ | $.57^{±.08}$ | $.56^{±.05}$ | $.59^{±.07}$ | $.60^{±.08}$ |
| Hurst exponent $\mathrm{H}$ | $.64^{±.08}$ | $.64^{±.08}$ | $.64^{±.09}$ | $.64^{±.08}$ | $.66^{±.07}$ | $.68^{±.07}$ | $.68^{±.07}$ | $.66^{±.07}$ | $.66^{±.07}$ | $.67^{±.08}$ | $.68^{±.09}$ | $.69^{±.09}$ |
| Joseph exponent $\mathrm{J}$ | $.44^{±.06}$ | $.44^{±.06}$ | $.44^{±.06}$ | $.44^{±.06}$ | $.47^{±.06}$ | $.47^{±.06}$ | $.48^{±.06}$ | $.47^{±.06}$ | $.47^{±.06}$ | $.48^{±.07}$ | $.48^{±.07}$ | $.49^{±.08}$ |
Figure 6: Two examples of documents from the DM-Mathematics subset of The Pile benchmark [23]. Each document comprises multiple independent questions. The lack of LRD in this data is reflected in its Hurst parameter of $\mathrm{H}=0.50± 0.01$.
| Example documents |
| --- |
| Document I: What is the square root of 211269 to the nearest integer? 460. What is the square root of 645374 to the nearest integer? 803... |
| Document II: Suppose 5*l = r - 35, -2*r + 5*l - 15 = -70. Is r a multiple of 4? True. Suppose 2*l + 11 - 1 = 0. Does 15 divide (-2)/l - 118/(-5)? False... |
Downstream Performance.
By definition, fractal parameters are calculated on the sequence of negative log-probability scores after normalizing them to zero mean and unit variance. Hence, they may offer an assessment of downstream performance that improves upon using a perplexity-based metric like bits-per-byte (BPB) alone. To test this hypothesis, we evaluate the 12 models in Table 2 on challenging zero- and few-shot downstream benchmarks focusing on language understanding and reasoning. We include results for 0-shot (0S) and 3-shot (3S) evaluation on BIG-Bench Hard (BBH) tasks [62, 63], reporting both direct and chain-of-thought (CoT) prompting results following Chung et al., [13]. In addition, we report 0-shot and 5-shot (5S) MMLU [30], and 8-shot (8S) GSM8K [14] with CoT. Raw accuracy is reported for all tasks. BBH and MMLU scores are averaged across all 21 tasks and 57 subjects, respectively. All prompt templates for our evaluation are taken from Chung et al., [13] and Longpre et al., [41], which we refer the reader to for more details. We prompt all models using a 2048-token context length. See Table 8 of Appendix C for the full results.
The first (surprising) observation is that the median Hurst parameter is itself strongly correlated with the BPB scores, with an absolute Pearson correlation coefficient of 0.83, even though the Hurst exponent is calculated after normalizing all token losses to zero mean and unit variance! Informally, this implies that second-order statistics on the sequence of token losses of a particular model can predict its mean. The self-similarity exponent, by contrast, has an absolute correlation of only 0.23 with BPB.
Figure 7 displays downstream performance against both the median Hurst exponent and the median BPB score, where median values are calculated over the 8 domains in The Pile benchmark listed in Table 1. In general, both the BPB score and the median Hurst are good predictors of downstream performance. However, we observe that improvements in BPB alone, without impacting the median Hurst exponent, do not directly translate into improvements downstream. This is verified quantitatively in Table 3 (middle), which reports the adjusted $R^{2}$ values – the proportion of variance in each downstream metric that can be predicted using BPB, $\mathrm{H}$, or their combination $\mathrm{H}_{B}=1/\mathrm{BPB}+\mathrm{H}$, in which BPB is replaced by its reciprocal so that higher values are better. We observe that $\mathrm{H}_{B}$ is indeed a stronger predictor of downstream performance. Hence, while $\mathrm{H}$ and BPB are correlated, combining them yields a better predictor, so each conveys useful information not captured by the other. See Appendix C for a similar analysis using the exponents $\mathrm{S}$ and $\mathrm{J}$.
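The adjusted $R^{2}$ comparison can be sketched as follows. The numbers below are made-up toy values for hypothetical models, not the paper's measurements; only the combined-predictor definition $\mathrm{H}_{B}=1/\mathrm{BPB}+\mathrm{H}$ follows the text.

```python
import numpy as np

def adjusted_r2(x, y):
    """Adjusted R^2 of a one-variable linear fit y ~ a*x + b."""
    a, b = np.polyfit(x, y, 1)
    ss_res = np.sum((y - (a * x + b)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    n, p = len(y), 1                       # p = number of predictors
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# Toy, made-up numbers for six hypothetical models (NOT the paper's data).
bpb = np.array([1.20, 1.05, 0.95, 0.90, 0.85, 0.80])
hurst = np.array([0.64, 0.64, 0.66, 0.66, 0.68, 0.69])
score = np.array([10.0, 12.5, 14.0, 16.5, 20.0, 23.0])  # downstream metric

h_b = 1.0 / bpb + hurst                    # combined predictor, higher is better
for name, x in [("1/BPB", 1.0 / bpb), ("H", hurst), ("H_B", h_b)]:
    print(name, round(adjusted_r2(x, score), 3))
```

The adjustment penalizes the (here trivial) predictor count, so comparisons across single-variable fits reduce to comparing ordinary $R^{2}$; it matters more when combining predictors in a multivariate fit.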
Table 3: Middle three columns: the adjusted $R^{2}$, i.e. the proportion of variation in downstream performance (row) predictable by a linear function of the input (column). Median Hurst ($\mathrm{H}$) and (especially) the combined metric $\mathrm{H}_{B}$ predict downstream performance better than BPB alone; $\mathrm{S}$ and $\mathrm{J}$ do not give any improvement (see Appendix C). Right: the downstream performance for three decoder-only T5.1.1 models pretrained on 100B tokens with 2K, 4K, or 8K context lengths.
| Benchmark | BPB | $\mathrm{H}$ | $\mathrm{H}_{B}$ | 2K | 4K | 8K |
| --- | --- | --- | --- | --- | --- | --- |
| 0S BBH Direct | 0.785 | 0.841 | 0.883 | 1.81 | 1.68 | 1.76 |
| 0S MMLU | 0.653 | 0.831 | 0.825 | 25.73 | 26.04 | 25.81 |
| 0S BBH+MMLU | 0.685 | 0.849 | 0.852 | 13.39 | 13.49 | 13.42 |
| 3S BBH Direct | 0.767 | 0.895 | 0.926 | 21.35 | 24.76 | 23.14 |
| 3S BBH CoT | 0.881 | 0.892 | 0.979 | 16.87 | 12.21 | 7.14 |
| 5S MMLU | 0.660 | 0.853 | 0.832 | 26.57 | 26.69 | 27.07 |
| 8S GSM8K CoT | 0.654 | 0.867 | 0.851 | 1.06 | 1.21 | 1.74 |
| FS BBH+MMLU+GSM8K | 0.717 | 0.890 | 0.891 | 15.58 | 15.46 | 14.65 |
Figure 7: Downstream metric, indicated by bubble size where larger is better, is plotted vs. the median Hurst and the median BPB for all 12 language models.
Context Length at Training Time (Negative Result).
Finally, we present a negative result. Self-similarity and LRD point to an intriguing possibility: training the model with extensive contexts, in order to capture the fractal nature of language, may elevate the model’s capabilities regardless of the context length needed during inference. To test this hypothesis, we pretrain three decoder-only T5.1.1 models with 1B parameters on SlimPajama-627B [61] for up to 100B tokens using three context lengths: 2K, 4K, and 8K, all observing the same number of tokens per batch. We use SlimPajama-627B instead of C4 because most documents in C4 are short ($≈ 94\%$ of them are $<2K$ tokens in length). Refer to Appendix A for details. These models are then evaluated on the same downstream benchmarks listed in Figure 7. As shown in Table 3 (right), however, we do not observe any improvement in performance with context length in this particular setup.
4 Related Works and Directions for Future Research
The statistical attributes of human language have long piqued scholarly curiosity. One example is Zipf’s law, which Shannon leveraged to estimate the entropy of English to be around 1 bit per letter [59], although his calculation did not consider second-order statistics. More recently, Eftekhari, [18] proposed a refinement to Zipf’s law, suggesting its application to letters rather than words. Another related result is Heaps’ law, which states that the number of unique words is a power-law function of the document’s length [29]. However, both Zipf’s and Heaps’ laws are invariant to the semantic ordering of text, so they do not capture important aspects, such as long-range dependence (LRD) [48].
In terms of self-similarity in language, the Menzerath-Altmann law stipulates a self-similar behavior in the following sense: when the size of a language construct increases, the size of its constituents decreases, and this happens at all scales [48, 4]. Ausloos, [9] models texts as a time series by replacing each word with its length, and then studies the fractal behavior of language. However, as noted in [22], replacing a word with its length is invalid because it is not translation-independent (i.e. one could map every word to an arbitrary token, including tokens of equal length). In our work, we model language as a series of bits calculated from conditional entropies, reflecting the intrinsic structure of the language itself, inspired by findings in linguistics such as [28, 22, 40].
In Najafi and Darooneh, [48], the authors define a fractal dimension for each word. Informally, they examine the recurrence of a single, predetermined word as a binary series, similar to the approach used in Altmann et al., [3]. However, this only applies to individual words and cannot model higher-level clauses. For instance, it does not distinguish between “time” in the phrase “once upon a time” and “time” in “space and time.” Kokol and Podgorelec, [35] estimate LRD in natural language and find it close to that of pure noise, which they conjecture is due to their use of ASCII encoding. In computer languages, by contrast, they observe LRD, which they attribute to the formal nature of those languages.
Besides the above concerns, prior studies that examined the self-similar structure of language sometimes report extremely large values of the fractal dimension, occasionally exceeding 10 [4]! Such values are difficult to interpret because the fractal dimension $\mathrm{D}$ of a time series should fall in $\mathrm{D}∈[1,2]$. We do not observe such issues in our analysis; in our case, $\mathrm{D}=1.41± 0.08$.
Limitations and Future Research.
Our analysis is currently limited to the English language, so it may not apply to languages that differ significantly. For instance, some languages, such as Pirahã (spoken in the Amazon), do not have a recursive structure like most languages do [20]. Second, we do not model the semantic or lexical form of language: while our information-theoretic approach is well-founded and captures the intrinsic complexity of language, it does not account for the semantic nuances that contribute to meaning. Third, self-similarity may explain why parameter sharing, such as in ALBERT [37], can be successful, but exploiting self-similarity more directly in LLMs could lead to further optimizations. Exploring these aspects is a promising direction for future research.
5 Concluding Remarks
In this work, we highlight intriguing insights into the underlying fractal structure of language and how it may be interconnected with the remarkable capabilities of LLMs. Our formalism quantifies properties of language that may have been suspected, but not previously formally shown. In particular, the need in LLMs to balance between short- and long-term contexts is reflected in the self-similar structure of language, while long-range dependence is quantifiable using the Hurst parameter. For instance, the absence of LRD in DM-Mathematics is reflected in its Hurst parameter of $\mathrm{H}≈ 0.5$. Interestingly, the estimated median Hurst value of $\mathrm{H}=0.70± 0.09$ in language reflects an intriguing balance between predictability and noise that is similar to many other phenomena, and combining $\mathrm{H}$ with BPB yields a stronger predictor of downstream performance. We carry out an extensive comparative analysis across different domains and model architectures, revealing that fractal parameters are generally robust. We hope that future research can further probe these fractal properties, unearthing deeper understandings of the relation between intelligence and language.
Acknowledgement
The authors would like to thank Justin Gilmer and Olivier Bousquet for their feedback on earlier drafts of this manuscript, and both Google Deepmind and Google Research teams at large for the insightful discussions and providing a supportive research environment.
References
- Abry et al., [1995] Abry, P., Gonçalvés, P., and Flandrin, P. (1995). Wavelets, spectrum analysis and 1/f processes. Wavelets and statistics, pages 15–29.
- Alabdulmohsin, [2018] Alabdulmohsin, I. M. (2018). Summability calculus: A comprehensive theory of fractional finite sums. Springer.
- Altmann et al., [2012] Altmann, E. G., Cristadoro, G., and Esposti, M. D. (2012). On the origin of long-range correlations in texts. Proceedings of the National Academy of Sciences, 109(29):11582–11587.
- Andres, [2009] Andres, J. (2009). On de Saussure’s principle of linearity and visualization of language structures. Glottotheory, 2(2):1–14.
- [5] Anil, R., Borgeaud, S., Wu, Y., Alayrac, J.-B., Yu, J., Soricut, R., Schalkwyk, J., Dai, A. M., Hauth, A., Millican, K., Silver, D., Petrov, S., Johnson, M., Antonoglou, I., Schrittwieser, J., Glaese, A., Chen, J., Pitler, E., et al. (2023a). Gemini: A family of highly capable multimodal models. arXiv:2312.11805v1 [cs.CL].
- [6] Anil, R., Dai, A. M., Firat, O., Johnson, M., Lepikhin, D., Passos, A., Shakeri, S., Taropa, E., Bailey, P., Chen, Z., Chu, E., Clark, J. H., Shafey, L. E., Huang, Y., Meier-Hellstern, K., Mishra, G., Moreira, E., Omernick, M., Robinson, K., Ruder, S., Tay, Y., Xiao, K., Xu, Y., Zhang, Y., Abrego, G. H., Ahn, J., Austin, J., Barham, P., Botha, J., Bradbury, J., Brahma, S., Brooks, K., Catasta, M., Cheng, Y., Cherry, C., Choquette-Choo, C. A., Chowdhery, A., Crepy, C., Dave, S., Dehghani, M., Dev, S., Devlin, J., Díaz, M., Du, N., Dyer, E., Feinberg, V., Feng, F., Fienber, V., Freitag, M., Garcia, X., Gehrmann, S., Gonzalez, L., Gur-Ari, G., Hand, S., Hashemi, H., Hou, L., Howland, J., Hu, A., Hui, J., Hurwitz, J., Isard, M., Ittycheriah, A., Jagielski, M., Jia, W., Kenealy, K., Krikun, M., Kudugunta, S., Lan, C., Lee, K., Lee, B., Li, E., Li, M., Li, W., Li, Y., Li, J., Lim, H., Lin, H., Liu, Z., Liu, F., Maggioni, M., Mahendru, A., Maynez, J., Misra, V., Moussalem, M., Nado, Z., Nham, J., Ni, E., Nystrom, A., Parrish, A., Pellat, M., Polacek, M., Polozov, A., Pope, R., Qiao, S., Reif, E., Richter, B., Riley, P., Ros, A. C., Roy, A., Saeta, B., Samuel, R., Shelby, R., Slone, A., Smilkov, D., So, D. R., Sohn, D., Tokumine, S., Valter, D., Vasudevan, V., Vodrahalli, K., Wang, X., Wang, P., Wang, Z., Wang, T., Wieting, J., Wu, Y., Xu, K., Xu, Y., Xue, L., Yin, P., Yu, J., Zhang, Q., Zheng, S., Zheng, C., Zhou, W., Zhou, D., Petrov, S., and Wu, Y. (2023b). PaLM 2 technical report. arXiv:2305.10403v3 [cs.CL].
- Apostol, [1999] Apostol, T. M. (1999). An elementary view of Euler’s summation formula. The American Mathematical Monthly, 106(5):409–418.
- Aref, [1998] Aref, S. (1998). Hurst phenomenon and fractal dimensions in long-term yield data. In Conference on Applied Statistics in Agriculture.
- Ausloos, [2012] Ausloos, M. (2012). Generalized Hurst exponent and multifractal function of original and translated texts mapped into frequency and length time series. Physical Review E, 86(3):031108.
- Bradbury et al., [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M. J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., and Zhang, Q. (2018). JAX: composable transformations of Python+NumPy programs.
- Bubeck et al., [2023] Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., Nori, H., Palangi, H., Ribeiro, M. T., and Zhang, Y. (2023). Sparks of artificial general intelligence: Early experiments with GPT-4.
- Chowdhery et al., [2022] Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H. W., Sutton, C., Gehrmann, S., et al. (2022). PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
- Chung et al., [2022] Chung, H. W., Hou, L., Longpre, S., Zoph, B., Tay, Y., Fedus, W., et al. (2022). Scaling instruction-finetuned language models. arXiv:2210.11416v5 [cs.LG].
- Cobbe et al., [2021] Cobbe, K., Kosaraju, V., Bavarian, M., Hilton, J., Nakano, R., Hesse, C., and Schulman, J. (2021). Training verifiers to solve math word problems. arXiv:2110.14168v2 [cs.LG].
- Cover, [1999] Cover, T. M. (1999). Elements of information theory. John Wiley & Sons.
- Crovella and Bestavros, [1995] Crovella, M. E. and Bestavros, A. (1995). Explaining world wide web traffic self-similarity. Technical report, Boston University Computer Science Department.
- Efron and Tibshirani, [1994] Efron, B. and Tibshirani, R. J. (1994). An introduction to the bootstrap. CRC press.
- Eftekhari, [2006] Eftekhari, A. (2006). Fractal geometry of texts: An initial application to the works of Shakespeare. Journal of Quantitative Linguistics, 13(2-3):177–193.
- Embrechts and Maejima, [2000] Embrechts, P. and Maejima, M. (2000). An introduction to the theory of self-similar stochastic processes. International journal of modern physics B, 14(12n13):1399–1420.
- Everett, [2005] Everett, D. (2005). Cultural constraints on grammar and cognition in pirahã: Another look at the design features of human language. Current anthropology, 46(4):621–646.
- Feller, [1951] Feller, W. (1951). The Asymptotic Distribution of the Range of Sums of Independent Random Variables. The Annals of Mathematical Statistics, 22(3):427 – 432.
- Futrell and Hahn, [2022] Futrell, R. and Hahn, M. (2022). Information theory as a bridge between language function and language form. Frontiers in Communication, 7:657725.
- Gao et al., [2020] Gao, L., Biderman, S., Black, S., Golding, L., Hoppe, T., Foster, C., Phang, J., He, H., Thite, A., Nabeshima, N., Presser, S., and Leahy, C. (2020). The Pile: An 800GB dataset of diverse text for language modeling. arXiv:2101.00027v1 [cs.CL].
- Geweke and Porter-Hudak, [1983] Geweke, J. and Porter-Hudak, S. (1983). The estimation and application of long memory time series models. Journal of time series analysis, 4(4):221–238.
- Gibson et al., [2019] Gibson, E., Futrell, R., Piantadosi, S. P., Dautriche, I., Mahowald, K., Bergen, L., and Levy, R. (2019). How efficiency shapes human language. Trends in cognitive sciences, 23(5):389–407.
- Gneiting and Schlather, [2004] Gneiting, T. and Schlather, M. (2004). Stochastic models that separate fractal dimension and the Hurst effect. SIAM Review, 46(2):269–282.
- Goldberger et al., [2002] Goldberger, A. L., Amaral, L. A., Hausdorff, J. M., Ivanov, P. C., Peng, C.-K., and Stanley, H. E. (2002). Fractal dynamics in physiology: alterations with disease and aging. Proceedings of the national academy of sciences, 99(suppl_1):2466–2472.
- Hale, [2001] Hale, J. (2001). A probabilistic earley parser as a psycholinguistic model. In Second meeting of the north american chapter of the association for computational linguistics.
- Heaps, [1978] Heaps, H. S. (1978). Information retrieval, computational and theoretical aspects. Academic Press.
- Hendrycks et al., [2020] Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D., and Steinhardt, J. (2020). Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300.
- Hurst, [1951] Hurst, H. E. (1951). Long-term storage capacity of reservoirs. Transactions of the American society of civil engineers, 116(1):770–799.
- Jouppi et al., [2020] Jouppi, N. P., Yoon, D. H., Kurian, G., Li, S., Patil, N., Laudon, J., Young, C., and Patterson, D. (2020). A domain-specific supercomputer for training deep neural networks. Communications of the ACM, 63(7):67–78.
- Kadavath et al., [2022] Kadavath, S., Conerly, T., Askell, A., Henighan, T., Drain, D., Perez, E., Schiefer, N., Hatfield-Dodds, Z., DasSarma, N., Tran-Johnson, E., et al. (2022). Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221.
- Kidd and Hayden, [2015] Kidd, C. and Hayden, B. Y. (2015). The psychology and neuroscience of curiosity. Neuron, 88(3):449–460.
- Kokol and Podgorelec, [2000] Kokol, P. and Podgorelec, V. (2000). Complexity and human writings. Complexity, 7:1–6.
- Kolmogorov, [1940] Kolmogorov, A. N. (1940). Wienersche Spiralen und einige andere interessante Kurven im Hilbertschen Raum. C. R. (Doklady) Acad. Sci. URSS (N.S.), 26:115–118.
- Lan et al., [2019] Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., and Soricut, R. (2019). ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.
- Leland et al., [1994] Leland, W. E., Taqqu, M. S., Willinger, W., and Wilson, D. V. (1994). On the self-similar nature of Ethernet traffic. IEEE/ACM Transactions on networking, 2(1):1–15.
- Leland and Wilson, [1991] Leland, W. E. and Wilson, D. V. (1991). High time-resolution measurement and analysis of LAN traffic: Implications for LAN interconnection. In IEEE INFCOM.
- Levy, [2008] Levy, R. (2008). Expectation-based syntactic comprehension. Cognition, 106(3):1126–1177.
- Longpre et al., [2023] Longpre, S., Hou, L., Vu, T., Webson, A., Chung, H. W., Tay, Y., Zhou, D., Le, Q. V., Zoph, B., Wei, J., and Roberts, A. (2023). The flan collection: designing data and methods for effective instruction tuning. In Proceedings of the 40th International Conference on Machine Learning, ICML’23. JMLR.org.
- Mandelbrot, [1967] Mandelbrot, B. (1967). How long is the coast of Britain? Statistical self-similarity and fractional dimension. science, 156(3775):636–638.
- Mandelbrot, [2002] Mandelbrot, B. (2002). Gaussian self-affinity and fractals: globality, the earth, 1/f noise, and R/S. Springer Science and Business Media.
- Mandelbrot, [1982] Mandelbrot, B. B. (1982). The fractal geometry of nature. WH freeman New York.
- Mandelbrot and Wallis, [1968] Mandelbrot, B. B. and Wallis, J. R. (1968). Noah, Joseph, and operational hydrology. Water resources research, 4(5):909–918.
- Mollica et al., [2021] Mollica, F., Bacon, G., Zaslavsky, N., Xu, Y., Regier, T., and Kemp, C. (2021). The forms and meanings of grammatical markers support efficient communication. Proceedings of the National Academy of Sciences, 118(49):e2025993118.
- Montemurro and Pury, [2002] Montemurro, M. A. and Pury, P. A. (2002). Long-range fractal correlations in literary corpora. Fractals, 10(04):451–461.
- Najafi and Darooneh, [2015] Najafi, E. and Darooneh, A. H. (2015). The fractal patterns of words in a text: a method for automatic keyword extraction. PloS one, 10(6):e0130617.
- OpenAI, [2023] OpenAI (2023). GPT-4 technical report. arXiv:2303.08774v4 [cs.CL].
- Paxson and Floyd, [1995] Paxson, V. and Floyd, S. (1995). Wide area traffic: the failure of Poisson modeling. IEEE/ACM Transactions on networking, 3(3):226–244.
- Peng et al., [1992] Peng, C.-K., Buldyrev, S. V., Goldberger, A. L., Havlin, S., Sciortino, F., Simons, M., and Stanley, H. E. (1992). Long-range correlations in nucleotide sequences. Nature, 356(6365):168–170.
- Perfors et al., [2010] Perfors, A., Tenenbaum, J., Gibson, E., and Regier, T. (2010). How recursive is language? a bayesian exploration. Recursion and human language, pages 159–175.
- Pilgrim and Taylor, [2018] Pilgrim, I. and Taylor, R. P. (2018). Fractal analysis of time-series data sets: Methods and challenges. In Ouadfeul, S.-A., editor, Fractal Analysis, chapter 2. IntechOpen, Rijeka.
- Raffel et al., [2019] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. (2019). Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv:1910.10683v4 [cs.LG].
- Roberts et al., [2022] Roberts, A., Chung, H. W., Levskaya, A., Mishra, G., Bradbury, J., Andor, D., Narang, S., Lester, B., Gaffney, C., Mohiuddin, A., Hawthorne, C., Lewkowycz, A., Salcianu, A., van Zee, M., Austin, J., Goodman, S., Soares, L. B., Hu, H., Tsvyashchenko, S., Chowdhery, A., Bastings, J., Bulian, J., Garcia, X., Ni, J., Chen, A., Kenealy, K., Clark, J. H., Lee, S., Garrette, D., Lee-Thorp, J., Raffel, C., Shazeer, N., Ritter, M., Bosma, M., Passos, A., Maitin-Shepard, J., Fiedel, N., Omernick, M., Saeta, B., Sepassi, R., Spiridonov, A., Newlan, J., and Gesmundo, A. (2022). Scaling up models and data with t5x and seqio.
- Roche et al., [2003] Roche, S., Bicout, D., Maciá, E., and Kats, E. (2003). Long range correlations in DNA: scaling properties and charge transfer efficiency. Physical review letters, 91(22):228101.
- Samorodnitsky, [2006] Samorodnitsky, G. (2006). Long memory and self-similar processes. In Annales de la Faculté des sciences de Toulouse: Mathématiques, volume 15, pages 107–123.
- Schenkel et al., [1993] Schenkel, A., Zhang, J., and Zhang, Y.-C. (1993). Long range correlation in human writings. Fractals, 1(01):47–57.
- Shannon, [1951] Shannon, C. E. (1951). Prediction and entropy of printed English. Bell system technical journal, 30(1):50–64.
- Shazeer and Stern, [2018] Shazeer, N. and Stern, M. (2018). Adafactor: Adaptive learning rates with sublinear memory cost. In International Conference on Machine Learning, pages 4596–4604. PMLR.
- Soboleva et al., [2023] Soboleva, D., Al-Khateeb, F., Myers, R., Steeves, J. R., Hestness, J., and Dey, N. (2023). SlimPajama: A 627B token cleaned and deduplicated version of RedPajama.
- Srivastava et al., [2022] Srivastava, A., Rastogi, A., Rao, A., Shoeb, A. A. M., Abid, A., Fisch, A., Brown, A. R., Santoro, A., Gupta, A., Garriga-Alonso, A., et al. (2022). Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615.
- Suzgun et al., [2022] Suzgun, M., Scales, N., Scharli, N., Gehrmann, S., Tay, Y., Chung, H. W., Chowdhery, A., Le, Q. V., Chi, E. H., Zhou, D., and Wei, J. (2022). Challenging BIG-Bench tasks and whether chain-of-thought can solve them. arXiv:2210.09261v1 [cs.CL].
- Watkins, [2019] Watkins, N. (2019). Mandelbrot’s stochastic time series models. Earth and Space Science, 6(11):2044–2056.
- Willinger et al., [1995] Willinger, W., Taqqu, M. S., Leland, W. E., and Wilson, D. V. (1995). Self-similarity in high-speed packet traffic: analysis and modeling of Ethernet traffic measurements. Statistical science, pages 67–85.
- Willinger et al., [1997] Willinger, W., Taqqu, M. S., Sherman, R., and Wilson, D. V. (1997). Self-similarity through high-variability: statistical analysis of Ethernet LAN traffic at the source level. IEEE/ACM Transactions on networking, 5(1):71–86.
Appendix A Experiment Details
All of our experiments are conducted in JAX/Flax [10] using the open source T5X framework [55].
T5 baselines in Tables 2 and 3 are pretrained from scratch using the open-source T5.1.1 decoder-only architecture from the T5X library (https://github.com/google-research/t5x/tree/main/t5x/examples/decoder_only/models). We pretrain using a causal language modeling objective over the C4 corpus with the default T5 vocabulary, as per Raffel et al., [54]. Training is done for 500k steps with a sequence length of 1024 and a batch size of 512, resulting in a total of 262B tokens seen during pretraining. We optimize with the Adafactor [60] optimizer using an inverse square root learning rate schedule, 1k warmup steps, and an initial learning rate of 1e-2. Models are trained using 256 TPUv5e chips [32].
T5 context-length ablation experiments in Table 3 are trained with the same pretraining objective but over the SlimPajama-627B corpus [61], using a modified version of the T5 vocabulary that preserves whitespace and introduces byte fallback for out-of-vocabulary tokens. This is similar to Chowdhery et al., [12], but preserves the original T5 vocabulary. Models with sequence lengths 2048, 4096, and 8192 are trained with batch sizes of 512, 256, and 128, respectively, to preserve the number of tokens seen per batch and the overall number of training steps. We train all models for 100k steps using the same learning rate schedule described above. Hence, all models observe 100B tokens.
Appendix B Full Results
In this section, we provide the full list of parameters calculated for each combination of LLM and domain. We use bootstrapping [17] to estimate the error margin.
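For concreteness, the bootstrap error margin can be computed as in the minimal sketch below. This is our illustration under the standard resample-with-replacement scheme; the per-document Hurst values used here are synthetic placeholders, not the paper's estimates, and the exact resampling details in the paper may differ.

```python
import numpy as np

def bootstrap_median_err(values, n_boot=2000, seed=0):
    """Bootstrap standard error of the median of per-document estimates."""
    rng = np.random.default_rng(seed)
    n = len(values)
    medians = np.array([
        np.median(rng.choice(values, size=n, replace=True))  # resample docs
        for _ in range(n_boot)
    ])
    return np.median(values), medians.std()

rng = np.random.default_rng(1)
# Hypothetical per-document Hurst estimates centered near 0.70 (toy data).
h_estimates = 0.70 + 0.09 * rng.standard_normal(200)
med, err = bootstrap_median_err(h_estimates)
print(f"H = {med:.2f} +/- {err:.2f}")
```

The spread of the resampled medians approximates the sampling variability of the reported median without assuming a parametric distribution for the per-document estimates.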
Table 4: Log-perplexity (NLL) scores evaluated on the first 2048 tokens, after trimming the first 100 tokens, of documents belonging to each of the shown domains. Only documents with a minimum length of 4K tokens are used.
| Model | OpenWebText2 | Github | FreeLaw | Pile-CC | Wikipedia | PubMed | Mathematics | ArXiv |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| T5-Decoder-110M | 2.89 | 1.82 | 2.45 | 2.88 | 2.80 | 2.36 | 2.28 | 2.70 |
| T5-Decoder-340M | 2.60 | 1.56 | 2.14 | 2.62 | 2.52 | 2.08 | 2.10 | 2.42 |
| T5-Decoder-1B | 2.38 | 1.37 | 1.91 | 2.41 | 2.29 | 1.88 | 2.00 | 2.19 |
| T5-Decoder-5B | 2.19 | 1.22 | 1.73 | 2.25 | 2.11 | 1.73 | 1.91 | 2.01 |
| PaLM1-8B | 2.26 | 0.79 | 1.66 | 2.36 | 2.08 | 1.89 | 1.40 | 2.08 |
| PaLM1-62B | 2.02 | 0.62 | 1.44 | 2.14 | 1.80 | 1.68 | 1.30 | 1.83 |
| PaLM1-540B | 1.88 | 0.54 | 1.33 | 2.01 | 1.58 | 1.57 | 1.25 | 1.68 |
| PaLM2-XXS | 2.37 | 0.87 | 1.77 | 2.46 | 2.17 | 1.96 | 1.38 | 1.96 |
| PaLM2-XS | 2.12 | 0.73 | 1.53 | 2.22 | 1.92 | 1.72 | 1.27 | 1.72 |
| PaLM2-S | 1.95 | 0.60 | 1.37 | 2.06 | 1.71 | 1.57 | 1.19 | 1.55 |
| PaLM2-M | 1.88 | 0.56 | 1.31 | 1.99 | 1.59 | 1.51 | 1.12 | 1.48 |
| PaLM2-L | 1.75 | 0.46 | 1.23 | 1.88 | 1.22 | 1.43 | 1.08 | 1.36 |
Table 5: Self-similarity exponent $\mathrm{S}$ evaluated on the first 2048 tokens, after trimming the first 100 tokens, of documents belonging to each of the shown domains. Only documents with a minimum length of 4K tokens are used.
| Model | OpenWebText2 | Github | FreeLaw | Pile-CC | Wikipedia | PubMed | Mathematics | ArXiv |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| T5-Decoder-110M | $0.58± 0.04$ | $0.67± 0.03$ | $0.51± 0.02$ | $0.54± 0.07$ | $0.59± 0.04$ | $0.59± 0.03$ | $0.51± 0.04$ | $0.58± 0.05$ |
| T5-Decoder-340M | $0.52± 0.03$ | $0.59± 0.05$ | $0.63± 0.04$ | $0.58± 0.04$ | $0.61± 0.03$ | $0.61± 0.03$ | $0.48± 0.04$ | $0.61± 0.05$ |
| T5-Decoder-1B | $0.54± 0.01$ | $0.66± 0.11$ | $0.61± 0.06$ | $0.57± 0.06$ | $0.59± 0.05$ | $0.60± 0.02$ | $0.50± 0.03$ | $0.63± 0.02$ |
| T5-Decoder-5B | $0.51± 0.04$ | $0.70± 0.04$ | $0.60± 0.04$ | $0.58± 0.02$ | $0.58± 0.03$ | $0.57± 0.02$ | $0.45± 0.02$ | $0.67± 0.05$ |
| PaLM1-8B | $0.56± 0.03$ | $0.67± 0.05$ | $0.63± 0.05$ | $0.58± 0.01$ | $0.55± 0.04$ | $0.62± 0.03$ | $0.50± 0.03$ | $0.68± 0.07$ |
| PaLM1-62B | $0.49± 0.03$ | $0.65± 0.09$ | $0.63± 0.09$ | $0.57± 0.03$ | $0.63± 0.05$ | $0.61± 0.04$ | $0.48± 0.05$ | $0.68± 0.03$ |
| PaLM1-540B | $0.51± 0.04$ | $0.68± 0.09$ | $0.64± 0.05$ | $0.58± 0.04$ | $0.67± 0.03$ | $0.64± 0.08$ | $0.48± 0.03$ | $0.65± 0.04$ |
| PaLM2-XXS | $0.53± 0.02$ | $0.61± 0.05$ | $0.58± 0.04$ | $0.60± 0.04$ | $0.57± 0.05$ | $0.61± 0.03$ | $0.52± 0.02$ | $0.70± 0.04$ |
| PaLM2-XS | $0.54± 0.04$ | $0.57± 0.06$ | $0.58± 0.03$ | $0.56± 0.04$ | $0.60± 0.04$ | $0.57± 0.06$ | $0.45± 0.02$ | $0.73± 0.06$ |
| PaLM2-S | $0.55± 0.02$ | $0.55± 0.15$ | $0.59± 0.02$ | $0.54± 0.08$ | $0.65± 0.04$ | $0.58± 0.05$ | $0.49± 0.04$ | $0.61± 0.03$ |
| PaLM2-M | $0.58± 0.02$ | $0.62± 0.06$ | $0.59± 0.04$ | $0.60± 0.05$ | $0.70± 0.03$ | $0.56± 0.04$ | $0.46± 0.04$ | $0.62± 0.05$ |
| PaLM2-L | $0.53± 0.05$ | $0.60± 0.05$ | $0.61± 0.05$ | $0.56± 0.03$ | $0.62± 0.02$ | $0.60± 0.07$ | $0.42± 0.03$ | $0.70± 0.03$ |
Table 6: Hurst exponent $\mathrm{H}$ evaluated on the first 2048 tokens, after trimming the first 100 tokens, of documents belonging to each of the shown domains. Only documents with a minimum length of 4K tokens are used.
| Model | OpenWebText2 | Github | FreeLaw | Pile-CC | Wikipedia | PubMed | Mathematics | ArXiv |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| T5-Decoder-110M | $0.63± 0.00$ | $0.82± 0.01$ | $0.62± 0.01$ | $0.67± 0.01$ | $0.62± 0.01$ | $0.65± 0.00$ | $0.54± 0.01$ | $0.68± 0.01$ |
| T5-Decoder-340M | $0.63± 0.01$ | $0.82± 0.01$ | $0.62± 0.00$ | $0.67± 0.00$ | $0.62± 0.01$ | $0.64± 0.01$ | $0.54± 0.00$ | $0.67± 0.01$ |
| T5-Decoder-1B | $0.63± 0.01$ | $0.83± 0.01$ | $0.63± 0.01$ | $0.67± 0.00$ | $0.62± 0.01$ | $0.64± 0.00$ | $0.54± 0.00$ | $0.67± 0.00$ |
| T5-Decoder-5B | $0.63± 0.01$ | $0.82± 0.00$ | $0.62± 0.01$ | $0.67± 0.01$ | $0.62± 0.01$ | $0.64± 0.01$ | $0.54± 0.00$ | $0.67± 0.00$ |
| PaLM1-8B | $0.65± 0.01$ | $0.81± 0.01$ | $0.66± 0.00$ | $0.68± 0.01$ | $0.66± 0.00$ | $0.65± 0.01$ | $0.57± 0.00$ | $0.69± 0.01$ |
| PaLM1-62B | $0.66± 0.01$ | $0.80± 0.00$ | $0.67± 0.01$ | $0.69± 0.01$ | $0.68± 0.00$ | $0.65± 0.00$ | $0.57± 0.00$ | $0.70± 0.00$ |
| PaLM1-540B | $0.67± 0.00$ | $0.79± 0.01$ | $0.68± 0.00$ | $0.69± 0.01$ | $0.71± 0.01$ | $0.65± 0.01$ | $0.56± 0.00$ | $0.70± 0.01$ |
| PaLM2-XXS | $0.65± 0.01$ | $0.81± 0.01$ | $0.65± 0.01$ | $0.68± 0.01$ | $0.66± 0.01$ | $0.65± 0.01$ | $0.58± 0.00$ | $0.71± 0.01$ |
| PaLM2-XS | $0.65± 0.01$ | $0.81± 0.01$ | $0.66± 0.01$ | $0.68± 0.01$ | $0.67± 0.00$ | $0.65± 0.00$ | $0.56± 0.01$ | $0.71± 0.01$ |
| PaLM2-S | $0.67± 0.01$ | $0.80± 0.01$ | $0.66± 0.01$ | $0.69± 0.00$ | $0.68± 0.01$ | $0.65± 0.01$ | $0.54± 0.00$ | $0.71± 0.00$ |
| PaLM2-M | $0.67± 0.01$ | $0.80± 0.01$ | $0.67± 0.01$ | $0.70± 0.01$ | $0.70± 0.01$ | $0.65± 0.01$ | $0.52± 0.01$ | $0.72± 0.01$ |
| PaLM2-L | $0.68± 0.01$ | $0.79± 0.01$ | $0.68± 0.00$ | $0.70± 0.00$ | $0.74± 0.01$ | $0.65± 0.00$ | $0.50± 0.01$ | $0.72± 0.01$ |
Table 7: Joseph exponent $\mathrm{J}$ evaluated on the first 2048 tokens, after trimming the first 100 tokens, of documents belonging to each of the shown domains. Only documents with a minimum length of 4K tokens are used.
| Model | OpenWebText2 | Github | FreeLaw | Pile-CC | Wikipedia | PubMed | Mathematics | ArXiv |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| T5-Decoder-110M | $0.44± 0.01$ | $0.53± 0.00$ | $0.42± 0.00$ | $0.49± 0.01$ | $0.45± 0.00$ | $0.43± 0.00$ | $0.33± 0.00$ | $0.45± 0.00$ |
| T5-Decoder-340M | $0.44± 0.02$ | $0.53± 0.00$ | $0.43± 0.00$ | $0.49± 0.00$ | $0.45± 0.01$ | $0.43± 0.00$ | $0.33± 0.00$ | $0.45± 0.00$ |
| T5-Decoder-1B | $0.43± 0.01$ | $0.53± 0.00$ | $0.43± 0.01$ | $0.49± 0.01$ | $0.45± 0.01$ | $0.42± 0.00$ | $0.33± 0.00$ | $0.45± 0.01$ |
| T5-Decoder-5B | $0.43± 0.01$ | $0.53± 0.00$ | $0.44± 0.00$ | $0.49± 0.01$ | $0.45± 0.00$ | $0.42± 0.00$ | $0.34± 0.00$ | $0.45± 0.00$ |
| PaLM1-8B | $0.45± 0.00$ | $0.51± 0.00$ | $0.46± 0.00$ | $0.49± 0.01$ | $0.48± 0.01$ | $0.44± 0.01$ | $0.34± 0.00$ | $0.48± 0.01$ |
| PaLM1-62B | $0.45± 0.00$ | $0.50± 0.01$ | $0.47± 0.00$ | $0.49± 0.01$ | $0.49± 0.00$ | $0.44± 0.00$ | $0.33± 0.00$ | $0.48± 0.01$ |
| PaLM1-540B | $0.46± 0.01$ | $0.49± 0.01$ | $0.47± 0.00$ | $0.50± 0.01$ | $0.50± 0.00$ | $0.44± 0.00$ | $0.33± 0.01$ | $0.48± 0.00$ |
| PaLM2-XXS | $0.44± 0.01$ | $0.50± 0.00$ | $0.45± 0.00$ | $0.50± 0.01$ | $0.48± 0.00$ | $0.45± 0.00$ | $0.34± 0.00$ | $0.49± 0.00$ |
| PaLM2-XS | $0.45± 0.01$ | $0.50± 0.01$ | $0.46± 0.01$ | $0.49± 0.00$ | $0.48± 0.00$ | $0.44± 0.00$ | $0.33± 0.01$ | $0.49± 0.00$ |
| PaLM2-S | $0.45± 0.00$ | $0.49± 0.00$ | $0.47± 0.00$ | $0.50± 0.01$ | $0.50± 0.01$ | $0.44± 0.00$ | $0.31± 0.00$ | $0.49± 0.00$ |
| PaLM2-M | $0.45± 0.01$ | $0.49± 0.01$ | $0.48± 0.01$ | $0.50± 0.01$ | $0.50± 0.00$ | $0.44± 0.00$ | $0.29± 0.00$ | $0.49± 0.01$ |
| PaLM2-L | $0.46± 0.01$ | $0.49± 0.00$ | $0.49± 0.00$ | $0.50± 0.00$ | $0.52± 0.00$ | $0.44± 0.00$ | $0.28± 0.00$ | $0.49± 0.00$ |
Appendix C Predicting Downstream Performance
Table 8 presents detailed downstream performance results, along with corresponding upstream metrics.
In Table 9, we repeat the analysis of Section 3 using the adjusted $R^{2}$ coefficient, but with the self-similarity exponent $\mathrm{S}$ and the Joseph exponent $\mathrm{J}$ . Unlike with the median Hurst exponent, we do not observe any improvement when combining perplexity scores with either the self-similarity exponent $\mathrm{S}$ or the Joseph exponent $\mathrm{J}$ .
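For reference, the adjusted $R^{2}$ penalizes the ordinary $R^{2}$ for the number of predictors $p$: $\bar{R}^{2} = 1 - (1 - R^{2})\frac{n-1}{n-p-1}$, and it can be negative when a predictor explains less variance than its degrees of freedom would account for, as with the $\mathrm{S}$ column in Table 9. The following is a minimal sketch of this computation; the helper name `adjusted_r2` is our own, and the paper's regressor may differ in details.

```python
import numpy as np

def adjusted_r2(X, y):
    """Adjusted R^2 of an ordinary least-squares fit with intercept.

    X: (n, p) predictor matrix; y: (n,) targets.
    Returns 1 - (1 - R^2) * (n - 1) / (n - p - 1).
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, p = X.shape
    A = np.hstack([np.ones((n, 1)), X])          # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # OLS fit
    resid = y - A @ coef
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# Example: a perfectly linear relationship gives adjusted R^2 of 1.
X = np.arange(12.0).reshape(-1, 1)
y = 2.0 * X[:, 0] + 1.0
print(adjusted_r2(X, y))  # ~1.0
```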
Table 8: Full downstream few-shot evaluation results compared to upstream BPB. Here, BPB is computed over The Pile validation split using the first 2048 tokens of every document. All evaluation results are reported as raw (un-normalized) accuracy. Please note that our results are not directly comparable to all previously published results for the same models; please refer to (and cite) the original results in [12, 6]. Here, we aim only for a fair comparison between models: we use only pretrained models without instruction tuning, we do not optimize any prompts per model, and we evaluate all models using only a 2K sequence length.
| Model | BPB | 0S BBH Direct | 0S BBH CoT | 0S MMLU | 3S BBH Direct | 3S BBH CoT | 5S MMLU | 8S GSM8K CoT | 0S BBH+MMLU | FS BBH+MMLU+GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| T5-Decoder-110M | 1.11 | 0.83 | 0.11 | 25.65 | 21.36 | 5.69 | 25.62 | 0.91 | 13.06 | 13.35 |
| T5-Decoder-340M | 1.00 | 0.96 | 0.17 | 25.72 | 23.57 | 10.03 | 25.98 | 1.59 | 13.14 | 14.79 |
| T5-Decoder-1B | 0.92 | 1.29 | 0.14 | 25.99 | 24.26 | 13.19 | 24.82 | 1.14 | 13.35 | 14.90 |
| T5-Decoder-5B | 0.85 | 2.13 | 0.48 | 24.41 | 24.76 | 18.05 | 25.63 | 2.20 | 12.86 | 16.41 |
| PaLM1-8B | 0.78 | 6.46 | 1.21 | 23.53 | 32.18 | 27.60 | 24.56 | 5.16 | 13.68 | 19.87 |
| PaLM1-62B | 0.70 | 13.79 | 0.83 | 51.86 | 39.51 | 39.70 | 54.78 | 29.57 | 29.59 | 41.32 |
| PaLM1-540B | 0.66 | 23.26 | 4.72 | 67.78 | 52.44 | 56.02 | 70.50 | 56.79 | 40.89 | 60.51 |
| PaLM2-XXS | 0.81 | 8.99 | 0.13 | 25.26 | 30.71 | 26.08 | 24.72 | 2.96 | 14.91 | 18.69 |
| PaLM2-XS | 0.73 | 16.68 | 0.95 | 49.69 | 38.28 | 37.64 | 47.42 | 22.14 | 29.25 | 35.84 |
| PaLM2-S | 0.67 | 23.60 | 4.24 | 69.89 | 48.88 | 50.88 | 68.12 | 50.49 | 41.91 | 56.16 |
| PaLM2-M | 0.65 | 21.32 | 5.70 | 69.62 | 52.49 | 56.04 | 69.33 | 59.21 | 41.57 | 60.94 |
| PaLM2-L | 0.61 | 24.00 | 10.19 | 79.10 | 66.34 | 66.66 | 78.64 | 80.36 | 48.10 | 75.17 |
Table 9: Adjusted $R^{2}$ , which measures the proportion of variation in downstream performance (row) that is predictable from the given input(s) (column) using a trained linear regressor. Unlike with the median Hurst exponent, we do not observe any improvement when combining BPB scores with the self-similarity exponent $\mathrm{S}$ or the Joseph exponent $\mathrm{J}$ .
| Downstream metric | BPB | $\mathrm{S}$ | $\mathrm{J}$ | BPB+ $\mathrm{S}$ | BPB+ $\mathrm{J}$ |
| --- | --- | --- | --- | --- | --- |
| 0S BBH Direct | 0.785 | -0.060 | 0.673 | 0.761 | 0.794 |
| 0S MMLU | 0.653 | -0.067 | 0.426 | 0.614 | 0.614 |
| 0S BBH+MMLU | 0.685 | -0.065 | 0.472 | 0.650 | 0.651 |
| 3S BBH Direct | 0.767 | -0.030 | 0.599 | 0.744 | 0.754 |
| 3S BBH CoT | 0.881 | -0.026 | 0.678 | 0.870 | 0.879 |
| 5S MMLU | 0.660 | -0.044 | 0.421 | 0.624 | 0.622 |
| 8S GSM8K CoT | 0.654 | -0.037 | 0.427 | 0.619 | 0.616 |
| FS BBH + MMLU+GSM8K | 0.717 | -0.036 | 0.489 | 0.687 | 0.686 |