# Automatic Change-Point Detection in Time Series via Deep Learning
**Authors**: Jie Li (address for correspondence: Department of Statistics, London School of Economics and Political Science, London, WC2A 2AE; email: j.li196@lse.ac.uk), Paul Fearnhead, Piotr Fryzlewicz, Tengyao Wang
> Department of Statistics, London School of Economics and Political Science, London, UK
> Department of Mathematics and Statistics, Lancaster University, Lancaster, UK
Abstract
Detecting change-points in data is challenging because of the range of possible types of change and types of behaviour of data when there is no change. Statistically efficient methods for detecting a change will depend on both of these features, and it can be difficult for a practitioner to develop an appropriate detection method for their application of interest. We show how to automatically generate new offline detection methods based on training a neural network. Our approach is motivated by many existing tests for the presence of a change-point being representable by a simple neural network, and thus a neural network trained with sufficient data should have performance at least as good as these methods. We present theory that quantifies the error rate for such an approach, and how it depends on the amount of training data. Empirical results show that, even with limited training data, its performance is competitive with the standard CUSUM-based classifier for detecting a change in mean when the noise is independent and Gaussian, and can substantially outperform it in the presence of auto-correlated or heavy-tailed noise. Our method also shows strong results in detecting and localising changes in activity based on accelerometer data.
Keywords— Automatic statistician; Classification; Likelihood-free inference; Neural networks; Structural breaks; Supervised learning
[To be read before The Royal Statistical Society at the Society’s 2023 annual conference held in Harrogate on Wednesday, September 6th, 2023, the President, Dr Andrew Garrett, in the Chair.] [Accepted (with discussion), to appear]
1 Introduction
Detecting change-points in data sequences is of interest in many application areas such as bioinformatics (Picard et al., 2005), climatology (Reeves et al., 2007), signal processing (Haynes et al., 2017) and neuroscience (Oh et al., 2005). In this work, we are primarily concerned with the problem of offline change-point detection, where the entire data set is available to the analyst beforehand. Over the past few decades, various methodologies have been extensively studied in this area; see Killick et al. (2012); Jandhyala et al. (2013); Fryzlewicz (2014, 2023); Wang and Samworth (2018); Truong et al. (2020) and references therein. Most research on change-point detection has concentrated on detecting and localising different types of change, e.g. change in mean (Killick et al., 2012; Fryzlewicz, 2014), variance (Gao et al., 2019; Li et al., 2015), median (Fryzlewicz, 2021) or slope (Baranowski et al., 2019; Fearnhead et al., 2019), amongst many others. Many change-point detection methods are based upon modelling data when there is no change and when there is a single change, and then constructing an appropriate test statistic to detect the presence of a change (e.g. James et al., 1987; Fearnhead and Rigaill, 2020).

The form of a good test statistic will vary with our modelling assumptions and the type of change we wish to detect. This can lead to difficulties in practice. As we use new models, it is unlikely that there will be a change-point detection method specifically designed for our modelling assumptions. Furthermore, developing an appropriate method under a complex model may be challenging, while in some applications an appropriate model for the data may be unclear but we may have substantial historical data that shows what patterns of data to expect when there is, or is not, a change. In these scenarios, a practitioner currently would need to choose the existing change detection method that seems the most appropriate for the type of data they have and the type of change they wish to detect. To obtain reliable performance, they would then need to adapt its implementation, for example tuning the choice of threshold for detecting a change. Often, this would involve applying the method to simulated or historical data.

To address the challenge of automatically developing new change detection methods, this paper is motivated by the question: can we construct new test statistics for detecting a change based only on having labelled examples of change-points? We show that this is indeed possible by training a neural network to classify whether or not a data set has a change of interest. This turns change-point detection into a supervised learning problem. A key motivation for our approach is that many common test statistics for detecting changes, such as the CUSUM test for detecting a change in mean, can be represented by simple neural networks. This means that with sufficient training data, the classifier learnt by such a neural network will give performance at least as good as classifiers corresponding to these standard tests. In scenarios where a standard test, such as CUSUM, is being applied but its modelling assumptions do not hold, we can expect the classifier learnt by the neural network to outperform it.

There has been increasing recent interest in whether ideas from machine learning, and methods for classification, can be used for change-point detection.
Within computer science and engineering, these include a number of methods designed for, and showing promise on, specific applications (e.g. Ahmadzadeh, 2018; De Ryck et al., 2021; Gupta et al., 2022; Huang et al., 2023). Within statistics, Londschien et al. (2022) and Lee et al. (2023) consider training a classifier as a way to estimate the likelihood-ratio statistic for a change. However, these methods train the classifier in an unsupervised way on the data being analysed, using the idea that a classifier would more easily distinguish between two segments of data if they are separated by a change-point. Chang et al. (2019) use simulated data to help tune a kernel-based change detection method. Methods that use historical, labelled data have been used to train the tuning parameters of change-point algorithms (e.g. Hocking et al., 2015; Liehrmann et al., 2021). Also, neural networks have been employed to construct similarity scores of new observations to learned pre-change distributions for online change-point detection (Lee et al., 2023). However, we are unaware of any previous work using historical, labelled data to develop offline change-point methods. As such, and for simplicity, we focus on the most fundamental aspect, namely the problem of detecting a single change. Detecting and localising multiple changes is considered in Section 6 when analysing activity data.

We remark that by viewing the change-point detection problem as a classification instead of a testing problem, we aim to control the overall misclassification error rate instead of handling the Type I and Type II errors separately. In practice, asymmetric treatment of the two error types can be achieved by suitably re-weighting misclassifications in the two directions in the training loss function. The method we develop has parallels with likelihood-free inference methods (Gourieroux et al., 1993; Beaumont, 2019) in that one application of our work is to use the ability to simulate from a model so as to circumvent the need to analytically calculate likelihoods. However, the approach we take is very different from standard likelihood-free methods, which tend to use simulation to estimate the likelihood function itself. By comparison, we directly target learning a function of the data that can discriminate between instances that do or do not contain a change (though see Gutmann et al., 2018, for likelihood-free methods based on re-casting the likelihood as a classification problem). For an introduction to the statistical aspects of neural network-based classification, albeit not specifically in a change-point context, see Ripley (1994).

We now briefly introduce our notation. For any $n\in\mathbb{Z}^{+}$, we define $[n]\coloneqq\{1,...,n\}$. We take all vectors to be column vectors unless otherwise stated. Let $\boldsymbol{1}_{n}$ be the all-one vector of length $n$. Let $\mathbbm{1}\{\cdot\}$ represent the indicator function. The symbol $|\cdot|$ represents the absolute value or the cardinality of its argument, depending on the context. For a vector $\boldsymbol{x}=(x_{1},...,x_{n})^{\top}$, we define its $p$-norm as $\|\boldsymbol{x}\|_{p}\coloneqq\bigl(\sum_{i=1}^{n}|x_{i}|^{p}\bigr)^{1/p}$ for $p\geq 1$; when $p=\infty$, define $\|\boldsymbol{x}\|_{\infty}\coloneqq\max_{i}|x_{i}|$. All proofs, as well as additional simulations and real data analyses, appear in the supplement.
2 Neural networks
The initial focus of our work is on the binary classification problem for whether a change-point exists in a given time series. We will work with multilayer neural networks with Rectified Linear Unit (ReLU) activation functions and binary output. The multilayer neural network consists of an input layer, hidden layers and an output layer, and can be represented by a directed acyclic graph, see Figure 1.
Figure 1: A neural network with 2 hidden layers and width vector $\mathbf{m}=(4,4)$ .
Let $L\in\mathbb{Z}^{+}$ represent the number of hidden layers and $\boldsymbol{m}=(m_{1},...,m_{L})^{\top}$ the vector of hidden layer widths, i.e. $m_{i}$ is the number of nodes in the $i$th hidden layer. For a neural network with $L$ hidden layers we use the convention that $m_{0}=n$ and $m_{L+1}=1$. For any bias vector $\boldsymbol{b}=(b_{1},b_{2},...,b_{r})^{\top}\in\mathbb{R}^{r}$, define the shifted activation function $\sigma_{\boldsymbol{b}}:\mathbb{R}^{r}\to\mathbb{R}^{r}$ by
$$
\sigma_{\boldsymbol{b}}\bigl((y_{1},\ldots,y_{r})^{\top}\bigr)=\bigl(\sigma(y_{1}-b_{1}),\ldots,\sigma(y_{r}-b_{r})\bigr)^{\top},
$$
where $\sigma(x)=\max(x,0)$ is the ReLU activation function. The neural network can be mathematically represented by the composite function $h:\mathbb{R}^{n}→\{0,1\}$ as
$$
h(\boldsymbol{x})\coloneqq\sigma^{*}_{\lambda}W_{L}\sigma_{\boldsymbol{b}_{L}}W_{L-1}\sigma_{\boldsymbol{b}_{L-1}}\cdots W_{1}\sigma_{\boldsymbol{b}_{1}}W_{0}\boldsymbol{x}, \tag{1}
$$
where $\sigma^{*}_{\lambda}(x)=\mathbbm{1}\{x>\lambda\}$ with $\lambda>0$, and $W_{\ell}\in\mathbb{R}^{m_{\ell+1}\times m_{\ell}}$ for $\ell\in\{0,...,L\}$ are the weight matrices. We define the function class $\mathcal{H}_{L,\boldsymbol{m}}$ to be the class of functions $h(\boldsymbol{x})$ with $L$ hidden layers and width vector $\boldsymbol{m}$. The output layer in (1) employs the shifted heaviside function $\sigma^{*}_{\lambda}(x)$ as the final activation function for binary classification. This choice is guided by the fact that we use the 0-1 loss, which focuses on the percentage of samples assigned to the correct class, a natural performance criterion for binary classification. Besides its wide adoption in machine learning practice, another advantage of using the 0-1 loss is that it is possible to utilise the theory of the Vapnik–Chervonenkis (VC) dimension (see, e.g. Shalev-Shwartz and Ben-David, 2014, Definition 6.5) to bound the generalisation error of a binary classifier equipped with this loss; indeed, this is the approach we take in this work. The relevant results regarding the VC dimension of neural network classifiers can be found, for example, in Bartlett et al. (2019).

As in Schmidt-Hieber (2020), we work with the exact minimiser of the empirical risk. In both binary and multiclass classification, it is possible to work with other losses which make it computationally easier to minimise the corresponding risk; see e.g. Bos and Schmidt-Hieber (2022), who use a version of the cross-entropy loss. However, loss functions different from the 0-1 loss make it impossible to use VC-dimension arguments to control the generalisation error, and more involved arguments, such as those using covering numbers (Bos and Schmidt-Hieber, 2022), need to be used instead. We do not pursue these generalisations in the current work.
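To make the class $\mathcal{H}_{L,\boldsymbol{m}}$ concrete, the following is a minimal PyTorch sketch (an illustration, not the code used in our experiments) of a network with ReLU hidden layers of widths $m_{1},...,m_{L}$ and a linear output thresholded at a level $\lambda$, mirroring (1); the class name and arguments are ours.

```python
import torch
import torch.nn as nn

class ChangePointNet(nn.Module):
    """ReLU network with a thresholded scalar output, as in (1)."""

    def __init__(self, n, widths, lam=0.0):
        super().__init__()
        layers, prev = [], n
        for m in widths:                       # hidden layers of widths m_1, ..., m_L
            layers += [nn.Linear(prev, m), nn.ReLU()]
            prev = m
        layers.append(nn.Linear(prev, 1))      # scalar output before thresholding
        self.net = nn.Sequential(*layers)
        self.lam = lam

    def forward(self, x):                      # x has shape (batch, n)
        return self.net(x).squeeze(-1)         # real-valued score

    def classify(self, x):                     # hard 0-1 decision via the shifted heaviside
        return (self.forward(x) > self.lam).long()

# A single-hidden-layer network of width 2n - 2 for series of length n = 100.
net = ChangePointNet(n=100, widths=[198])
```

In practice the hard thresholding is only used at prediction time; training is carried out by minimising the cross-entropy loss, as described in Section 5.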
3 CUSUM-based classifier and its generalisations are neural networks
3.1 Change in mean
We initially consider the case of a single change-point with an unknown location $\tau∈[n-1]$ , $n≥ 2$ , in the model
$$
\boldsymbol{X}=\boldsymbol{\mu}+\boldsymbol{\xi},
$$
where $\boldsymbol{\mu}=\bigl(\mu_{\mathrm{L}}\mathbbm{1}\{i\leq\tau\}+\mu_{\mathrm{R}}\mathbbm{1}\{i>\tau\}\bigr)_{i\in[n]}$, with $\mu_{\mathrm{L}},\mu_{\mathrm{R}}$ the unknown signal values before and after the change-point, and $\boldsymbol{\xi}\sim N_{n}(0,I_{n})$. The CUSUM test is widely used to detect mean changes in univariate data. For an observation $\boldsymbol{x}$, the CUSUM transformation $\mathcal{C}:\mathbb{R}^{n}\to\mathbb{R}^{n-1}$ is defined as $\mathcal{C}(\boldsymbol{x})\coloneqq(\boldsymbol{v}_{1}^{\top}\boldsymbol{x},...,\boldsymbol{v}_{n-1}^{\top}\boldsymbol{x})^{\top}$, where $\boldsymbol{v}_{i}\coloneqq\bigl(\sqrt{\tfrac{n-i}{in}}\boldsymbol{1}_{i}^{\top},-\sqrt{\tfrac{i}{(n-i)n}}\boldsymbol{1}_{n-i}^{\top}\bigr)^{\top}$ for $i\in[n-1]$. Here, for each $i\in[n-1]$, $(\boldsymbol{v}_{i}^{\top}\boldsymbol{x})^{2}$ is the log likelihood-ratio statistic for testing a change at time $i$ against the null of no change (e.g. Baranowski et al., 2019). For a given threshold $\lambda>0$, the classical CUSUM test for a change in the mean of the data is defined as
$$
h^{\mathrm{CUSUM}}_{\lambda}(\boldsymbol{x})=\mathbbm{1}\{\|\mathcal{C}(\boldsymbol{x})\|_{\infty}>\lambda\}.
$$
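For concreteness, a short NumPy sketch of the CUSUM transformation $\mathcal{C}$ and of $h^{\mathrm{CUSUM}}_{\lambda}$ is given below; it simply transcribes the definitions above and is not an optimised implementation.

```python
import numpy as np

def cusum_transform(x):
    """Return (v_1^T x, ..., v_{n-1}^T x) for a series x of length n."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    i = np.arange(1, n)                      # candidate change locations 1, ..., n-1
    left = np.cumsum(x)[:-1]                 # sums of the first i observations
    right = x.sum() - left                   # sums of the remaining n - i observations
    return np.sqrt((n - i) / (i * n)) * left - np.sqrt(i / ((n - i) * n)) * right

def h_cusum(x, lam):
    """CUSUM-based classifier: 1 if a change in mean is flagged, 0 otherwise."""
    return int(np.max(np.abs(cusum_transform(x))) > lam)
```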
The following lemma shows that $h^{\mathrm{CUSUM}}_{\lambda}(\boldsymbol{x})$ can be represented as a neural network.
**Lemma 3.1**
*For any $\lambda>0$ , we have $h^{\mathrm{CUSUM}}_{\lambda}(\boldsymbol{x})∈\mathcal{H}_{1,2n-2}$ .*
The fact that the widely-used CUSUM statistic can be viewed as a simple neural network has far-reaching consequences: given enough training data, a neural network architecture that includes the CUSUM-based classifier as a special case cannot do worse than CUSUM in classifying change-point versus no-change-point signals. This serves as the main motivation for our work, and as a prelude to our next results.
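The idea behind Lemma 3.1 can also be checked numerically. The hedged sketch below (which mirrors the spirit of the proof rather than its exact construction) builds a one-hidden-layer ReLU network with $2n-2$ hidden units computing $\sigma(\pm\boldsymbol{v}_{i}^{\top}\boldsymbol{x}-\lambda)$, and verifies that thresholding the sum of the hidden units at zero reproduces the CUSUM-based classifier.

```python
import numpy as np

def cusum_directions(n):
    """Matrix whose rows are the CUSUM direction vectors v_1, ..., v_{n-1}."""
    V = np.zeros((n - 1, n))
    for i in range(1, n):
        V[i - 1, :i] = np.sqrt((n - i) / (i * n))
        V[i - 1, i:] = -np.sqrt(i / ((n - i) * n))
    return V

def h_cusum_as_network(x, lam):
    """One-hidden-layer ReLU representation of the CUSUM-based classifier."""
    V = cusum_directions(len(x))
    W0 = np.vstack([V, -V])                  # first-layer weights, shape (2n-2, n)
    hidden = np.maximum(W0 @ x - lam, 0.0)   # shifted ReLU layer
    return int(hidden.sum() > 0.0)           # some unit is active iff max_i |v_i^T x| > lam

rng = np.random.default_rng(0)
x = rng.normal(size=100)
lam = 3.0                                    # illustrative threshold level
assert h_cusum_as_network(x, lam) == int(np.max(np.abs(cusum_directions(100) @ x)) > lam)
```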
3.2 Beyond the mean change model
We can generalise the simple change in mean model to allow for different types of change or for non-independent noise. In this section, we consider change-point models that can be expressed as a change in regression problem, where the model for data given a change at $\tau$ is of the form
$$
\boldsymbol{X}=\boldsymbol{Z}\boldsymbol{\beta}+\boldsymbol{c}_{\tau}\phi+\boldsymbol{\Gamma}\boldsymbol{\xi}, \tag{2}
$$
where for some $p\geq 1$, $\boldsymbol{Z}$ is an $n\times p$ matrix of covariates for the model with no change, $\boldsymbol{c}_{\tau}$ is an $n\times 1$ vector of covariates specific to the change at $\tau$, and the parameters $\boldsymbol{\beta}$ and $\phi$ are, respectively, a $p\times 1$ vector and a scalar. The noise is defined in terms of an $n\times n$ matrix $\boldsymbol{\Gamma}$ and an $n\times 1$ vector of independent standard normal random variables, $\boldsymbol{\xi}$. For example, the change in mean problem has $p=1$, with $\boldsymbol{Z}$ a column vector of ones, and $\boldsymbol{c}_{\tau}$ a vector whose first $\tau$ entries are zeros and whose remaining entries are ones. In this formulation $\beta$ is the pre-change mean, and $\phi$ is the size of the change. The change in slope problem (Fearnhead et al., 2019) has $p=2$, with the columns of $\boldsymbol{Z}$ being a vector of ones and a vector whose $i$th entry is $i$; and $\boldsymbol{c}_{\tau}$ has $i$th entry $\max\{0,i-\tau\}$. In this formulation $\boldsymbol{\beta}$ defines the pre-change linear mean, and $\phi$ the size of the change in slope. Choosing $\boldsymbol{\Gamma}$ to be proportional to the identity matrix gives a model with independent, identically distributed noise; other choices would allow for auto-correlation. The following result, a generalisation of Lemma 3.1, shows that the likelihood-ratio test for (2), viewed as a classifier, can be represented by our neural network.
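To illustrate model (2), the hedged sketch below constructs the design matrices for the change-in-mean and change-in-slope examples just described and evaluates a generalised likelihood-ratio classifier in the case $\boldsymbol{\Gamma}=I_{n}$ (for a general invertible $\boldsymbol{\Gamma}$ one would first whiten the data by $\boldsymbol{\Gamma}^{-1}$); the threshold `lam` is left generic.

```python
import numpy as np

def rss(y, design):
    """Residual sum of squares from regressing y on the columns of design."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    return resid @ resid

def glr_classifier(y, Z, c_of_tau, lam):
    """Flag a change if the largest drop in RSS from adding c_tau exceeds lam."""
    n = len(y)
    rss0 = rss(y, Z)
    stat = max(rss0 - rss(y, np.column_stack([Z, c_of_tau(tau, n)]))
               for tau in range(1, n))
    return int(stat > lam)

n = 100
# Change in mean: Z is a column of ones; c_tau is a step starting after time tau.
Z_mean = np.ones((n, 1))
c_mean = lambda tau, n: (np.arange(1, n + 1) > tau).astype(float)
# Change in slope: Z holds an intercept and a linear trend; c_tau is a hinge at tau.
Z_slope = np.column_stack([np.ones(n), np.arange(1, n + 1)])
c_slope = lambda tau, n: np.maximum(0.0, np.arange(1, n + 1) - tau)
```

With unit-variance Gaussian noise, the drop in residual sum of squares equals twice the log likelihood-ratio for a change at $\tau$, so thresholding its maximum over $\tau$ corresponds to the likelihood-ratio test considered in Lemma 3.2.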
**Lemma 3.2**
*Consider the change-point model (2) with a possible change at $\tau∈[n-1]$ . Assume further that $\boldsymbol{\Gamma}$ is invertible. Then there is an $h^{*}∈\mathcal{H}_{1,2n-2}$ equivalent to the likelihood-ratio test for testing $\phi=0$ against $\phi≠ 0$ .*
Importantly, this result shows that for this much wider class of change-point models, we can replicate the likelihood-ratio-based classifier for change using a simple neural network. Other types of changes can be handled by suitably pre-transforming the data. For instance, squaring the input data would be helpful in detecting changes in the variance, and if the data followed an AR(1) structure, changes in autocorrelation could be handled by including transformations of the original input of the form $(x_{t}x_{t+1})_{t=1,...,n-1}$. On the other hand, even if such transformations are not supplied as the input, a neural network of suitable depth is able to approximate these transformations and consequently successfully detect the change (Schmidt-Hieber, 2020, Lemma A.2). This is illustrated in Figure 7 in the appendix, where we compare the performance of neural network-based classifiers of various depths constructed with and without using the transformed data as inputs.
4 Generalisation error of neural network change-point classifiers
In Section 3, we showed that CUSUM and generalised CUSUM could be represented by a neural network. Therefore, with a large enough amount of training data, a trained neural network classifier that included CUSUM, or generalised CUSUM, as a special case, would perform no worse than it on unseen data. In this section, we provide generalisation bounds for a neural network classifier for the change-in-mean problem, given a finite amount of training data. En route to this main result, stated in Theorem 4.3, we provide generalisation bounds for the CUSUM-based classifier, in which the threshold has been chosen on a finite training data set. We write $P(n,\tau,\mu_{\mathrm{L}},\mu_{\mathrm{R}})$ for the distribution of the multivariate normal random vector $\boldsymbol{X}\sim N_{n}(\boldsymbol{\mu},I_{n})$ where $\boldsymbol{\mu}\coloneqq(\mu_{\mathrm{L}}\mathbbm{1}\{i\leq\tau\}+\mu_{\mathrm{R}}\mathbbm{1}\{i>\tau\})_{i\in[n]}$. Define $\eta\coloneqq\tau/n$. Lemma 4.1 and Corollary 4.1 control the misclassification error of the CUSUM-based classifier.
**Lemma 4.1**
*Fix $\varepsilon\in(0,1)$. Suppose $\boldsymbol{X}\sim P(n,\tau,\mu_{\mathrm{L}},\mu_{\mathrm{R}})$ for some $\tau\in\mathbb{Z}^{+}$ and $\mu_{\mathrm{L}},\mu_{\mathrm{R}}\in\mathbb{R}$.
1. If $\mu_{\mathrm{L}}=\mu_{\mathrm{R}}$, then $\mathbb{P}\bigl\{\|\mathcal{C}(\boldsymbol{X})\|_{\infty}>\sqrt{2\log(n/\varepsilon)}\bigr\}\leq\varepsilon.$
2. If $|\mu_{\mathrm{L}}-\mu_{\mathrm{R}}|\sqrt{\eta(1-\eta)}>\sqrt{8\log(n/\varepsilon)/n}$, then $\mathbb{P}\bigl\{\|\mathcal{C}(\boldsymbol{X})\|_{\infty}\leq\sqrt{2\log(n/\varepsilon)}\bigr\}\leq\varepsilon.$*
For any $B>0$ , define
$$
\Theta(B)\coloneqq\left\{(\tau,\mu_{\mathrm{L}},\mu_{\mathrm{R}})\in[n-1]\times\mathbb{R}\times\mathbb{R}:|\mu_{\mathrm{L}}-\mu_{\mathrm{R}}|\sqrt{\tau(n-\tau)}/n\in\{0\}\cup\left(B,\infty\right)\right\}.
$$
Here, $|\mu_{\mathrm{L}}-\mu_{\mathrm{R}}|\sqrt{\tau(n-\tau)}/n=|\mu_{\mathrm{L}}-\mu_{\mathrm{R}}|\sqrt{\eta(1-\eta)}$ can be interpreted as the signal-to-noise ratio of the mean change problem. Thus, $\Theta(B)$ is the parameter space of data distributions where there is either no change, or a single change-point in mean whose signal-to-noise ratio is at least $B$. The following corollary controls the misclassification risk of the CUSUM statistic-based classifier:
**Corollary 4.1**
*Fix $B>0$ . Let $\pi_{0}$ be any prior distribution on $\Theta(B)$ , then draw $(\tau,\mu_{\mathrm{L}},\mu_{\mathrm{R}})\sim\pi_{0}$ and $\boldsymbol{X}\sim P(n,\tau,\mu_{\mathrm{L}},\mu_{\mathrm{R}})$ , and define $Y=\mathbbm{1}\{\mu_{\mathrm{L}}≠\mu_{\mathrm{R}}\}$ . For $\lambda=B\sqrt{n}/2$ , the classifier $h^{\mathrm{CUSUM}}_{\lambda}$ satisfies
$$
\mathbb{P}(h^{\mathrm{CUSUM}}_{\lambda}(\boldsymbol{X})\neq Y)\leq ne^{-nB^{2}%
/8}.
$$*
Theorem 4.2 below, which is based on Corollary 4.1, Bartlett et al. (2019, Theorem 7) and Mohri et al. (2012, Corollary 3.4), shows that the empirical risk minimiser in the neural network class $\mathcal{H}_{1,2n-2}$ has good generalisation properties over the class of change-point problems parameterised by $\Theta(B)$ . Given training data $(\boldsymbol{X}^{(1)},Y^{(1)}),...,(\boldsymbol{X}^{(N)},Y^{(N)})$ and any $h:\mathbb{R}^{n}→\{0,1\}$ , we define the empirical risk of $h$ as
$$
L_{N}(h)\coloneqq\frac{1}{N}\sum_{i=1}^{N}\mathbbm{1}\{Y^{(i)}\neq h(\boldsymbol{X}^{(i)})\}.
$$
**Theorem 4.2**
*Fix $B>0$ and let $\pi_{0}$ be any prior distribution on $\Theta(B)$. We draw $(\tau,\mu_{\mathrm{L}},\mu_{\mathrm{R}})\sim\pi_{0}$, $\boldsymbol{X}\sim P(n,\tau,\mu_{\mathrm{L}},\mu_{\mathrm{R}})$, and set $Y=\mathbbm{1}\{\mu_{\mathrm{L}}\neq\mu_{\mathrm{R}}\}$. Suppose that the training data $\mathcal{D}:=\bigl((\boldsymbol{X}^{(1)},Y^{(1)}),...,(\boldsymbol{X}^{(N)},Y^{(N)})\bigr)$ consist of independent copies of $(\boldsymbol{X},Y)$ and $h_{\mathrm{ERM}}\coloneqq\operatorname*{arg\,min}_{h\in\mathcal{H}_{1,2n-2}}L_{N}(h)$ is the empirical risk minimiser. There exists a universal constant $C>0$ such that for any $\delta\in(0,1)$, (3) holds with probability $1-\delta$:
$$
\mathbb{P}(h_{\mathrm{ERM}}(\boldsymbol{X})\neq Y\mid\mathcal{D})\leq ne^{-nB^{2}/8}+C\sqrt{\frac{n^{2}\log(n)\log(N)+\log(1/\delta)}{N}}. \tag{3}
$$*
The theoretical results derived for the neural network-based classifier, here and below, all rely on the fact that the training and test data are drawn from the same distribution. However, we observe that in practice, even when the training and test sets have different error distributions, neural network-based classifiers still provide accurate results on the test set; see our discussion of Figure 2 in Section 5 for more details. The misclassification error in (3) is bounded by the sum of two terms. The first term represents the misclassification error of the CUSUM-based classifier, see Corollary 4.1, and the second term depends on the complexity of the neural network class measured by its VC dimension. Theorem 4.2 suggests that for a training sample size $N\gg n^{2}\log n$, a well-trained single-hidden-layer neural network with $2n-2$ hidden nodes would have comparable performance to that of the CUSUM-based classifier. However, as we will see in Section 5, in practice a much smaller training sample size $N$ is needed for the neural network to be competitive in the change-point detection task. This is because the $2n-2$ hidden layer nodes in the neural network representation of $h^{\mathrm{CUSUM}}_{\lambda}$ encode the components of the CUSUM transformation $(\pm\boldsymbol{v}_{t}^{\top}\boldsymbol{x}:t\in[n-1])$, which are highly correlated. By suitably pruning the hidden layer nodes, we can show that a single-hidden-layer neural network with $O(\log n)$ hidden nodes is able to represent a modified version of the CUSUM-based classifier with essentially the same misclassification error. More precisely, let $Q:=\lfloor\log_{2}(n/2)\rfloor$ and write $T_{0}:=\{2^{q}:0\leq q\leq Q\}\cup\{n-2^{q}:0\leq q\leq Q\}$. We can then define
$$
h^{\mathrm{CUSUM}_{*}}_{\lambda^{*}}(\boldsymbol{X})=\mathbbm{1}\Bigl\{\max_{t\in T_{0}}|\boldsymbol{v}_{t}^{\top}\boldsymbol{X}|>\lambda^{*}\Bigr\}.
$$
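A minimal sketch of $h^{\mathrm{CUSUM}_{*}}_{\lambda^{*}}$, taking the maximum only over the dyadic grid $T_{0}$, is given below; it is an illustration of the definition, not production code.

```python
import numpy as np

def dyadic_grid(n):
    """The set T_0 = {2^q : 0 <= q <= Q} union {n - 2^q : 0 <= q <= Q}, Q = floor(log2(n/2))."""
    Q = int(np.floor(np.log2(n / 2)))
    return sorted({2 ** q for q in range(Q + 1)} | {n - 2 ** q for q in range(Q + 1)})

def h_cusum_star(x, lam_star):
    """Pruned CUSUM classifier using only O(log n) projections."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    stats = [abs(np.sqrt((n - t) / (t * n)) * x[:t].sum()
                 - np.sqrt(t / ((n - t) * n)) * x[t:].sum())
             for t in dyadic_grid(n)]
    return int(max(stats) > lam_star)
```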
By the same argument as in Lemma 3.1, we can show that $h^{\mathrm{CUSUM}_{*}}_{\lambda^{*}}\in\mathcal{H}_{1,4\lfloor\log_{2}(n)\rfloor}$ for any $\lambda^{*}>0$. The following theorem shows that high classification accuracy can be achieved under a weaker training sample size condition compared to Theorem 4.2.
**Theorem 4.3**
*Fix $B>0$ and let the training data $\mathcal{D}$ be generated as in Theorem 4.2. Let $h_{\mathrm{ERM}}\coloneqq\operatorname*{arg\,min}_{h\in\mathcal{H}_{L,\boldsymbol{m}}}L_{N}(h)$ be the empirical risk minimiser for a neural network with $L\geq 1$ hidden layers and width vector $\boldsymbol{m}=(m_{1},...,m_{L})^{\top}$. If $m_{1}\geq 4\lfloor\log_{2}(n)\rfloor$ and $m_{r}m_{r+1}=O(n\log n)$ for all $r\in[L-1]$, then there exists a universal constant $C>0$ such that for any $\delta\in(0,1)$, (4) holds with probability $1-\delta$:
$$
\mathbb{P}(h_{\mathrm{ERM}}(\boldsymbol{X})\neq Y\mid\mathcal{D})\leq 2\lfloor\log_{2}(n)\rfloor e^{-nB^{2}/24}+C\sqrt{\frac{L^{2}n\log^{2}(Ln)\log(N)+\log(1/\delta)}{N}}. \tag{4}
$$*
Theorem 4.3 generalises the single hidden layer neural network representation in Theorem 4.2 to multiple hidden layers. In practice, multiple hidden layers help keep the misclassification error rate low even when $N$ is small; see the numerical study in Section 5. Theorems 4.2 and 4.3 are examples of how to derive generalisation errors of a neural network-based classifier in the change-point detection task. The same workflow can be employed for other types of changes, provided that suitable representation results of likelihood-based tests in terms of neural networks (e.g. Lemma 3.2) can be obtained. In a general result of this type, the generalisation error of the neural network will again be bounded by the sum of the error of the likelihood-based classifier and a term originating from the VC-dimension bound on the complexity of the neural network architecture. We further remark that, for simplicity of discussion, we have focused our attention on data models where the noise vector $\boldsymbol{\xi}=\boldsymbol{X}-\mathbb{E}\boldsymbol{X}$ has independent and identically distributed normal components. However, since CUSUM-based tests are available for temporally correlated or sub-Weibull data, with suitably adjusted test threshold values, the above theoretical results readily generalise to such settings. See Theorems A.3 and A.5 in the appendix for more details.
5 Numerical study
We now investigate empirically our approach of learning a change-point detection method by training a neural network. Motivated by the results from the previous section, we will fit neural networks, starting with a single hidden layer, and consider how varying the number of hidden layers and the amount of training data affects performance. We will compare with a test based on the CUSUM statistic, both for scenarios where the noise is independent and Gaussian, and for scenarios where there is auto-correlation or heavy-tailed noise. The CUSUM test can be sensitive to the choice of threshold, particularly when we do not have independent Gaussian noise, so we tune its threshold based on training data.

When training the neural network, we first standardise the data onto $[0,1]$, i.e. $\tilde{\boldsymbol{x}}_{i}=\bigl((x_{ij}-x_{i}^{\mathrm{min}})/(x_{i}^{\mathrm{max}}-x_{i}^{\mathrm{min}})\bigr)_{j\in[n]}$, where $x_{i}^{\mathrm{max}}:=\max_{j}x_{ij}$ and $x_{i}^{\mathrm{min}}:=\min_{j}x_{ij}$. This makes the neural network procedure invariant to adding a constant to the data or scaling the data by a constant, which are natural properties to require. We train the neural network by minimising the cross-entropy loss on the training data. We run training for 200 epochs with a batch size of 32 and a learning rate of 0.001 using the Adam optimiser (Kingma and Ba, 2015). These hyperparameters are chosen based on a training dataset with cross-validation; more details can be found in Appendix B.

We generate our data as follows. Given a sequence of length $n$, we draw $\tau\sim\mathrm{Unif}\{2,...,n-2\}$, set $\mu_{\mathrm{L}}=0$ and draw $\mu_{\mathrm{R}}\mid\tau\sim\mathrm{Unif}([-1.5b,-0.5b]\cup[0.5b,1.5b])$, where $b:=\sqrt{\frac{8n\log(20n)}{\tau(n-\tau)}}$ is chosen in line with Lemma 4.1 to ensure a good range of signal-to-noise ratios. We then generate $\boldsymbol{x}_{1}=(\mu_{\mathrm{L}}\mathbbm{1}_{\{t\leq\tau\}}+\mu_{\mathrm{R}}\mathbbm{1}_{\{t>\tau\}}+\varepsilon_{t})_{t\in[n]}$, with the noise $(\varepsilon_{t})_{t\in[n]}$ following an $\mathrm{AR}(1)$ model with possibly time-varying autocorrelation: $\varepsilon_{1}=\xi_{1}$ and $\varepsilon_{t}=\rho_{t}\varepsilon_{t-1}+\xi_{t}$ for $t\geq 2$, where $(\xi_{t})_{t\in[n]}$ are independent, possibly heavy-tailed innovations. The autocorrelations $\rho_{t}$ and innovations $\xi_{t}$ come from one of the following scenarios:
1. (S1): $n=100$, $N\in\{100,200,...,700\}$, $\rho_{t}=0$ and $\xi_{t}\sim N(0,1)$.
2. (S1${}^{\prime}$): $n=100$, $N\in\{100,200,...,700\}$, $\rho_{t}=0.7$ and $\xi_{t}\sim N(0,1)$.
3. (S2): $n=100$, $N\in\{100,200,...,1000\}$, $\rho_{t}\sim\mathrm{Unif}([0,1])$ and $\xi_{t}\sim N(0,2)$.
4. (S3): $n=100$, $N\in\{100,200,...,1000\}$, $\rho_{t}=0$ and $\xi_{t}\sim\text{Cauchy}(0,0.3)$.
The above procedure is then repeated $N/2$ times to generate independent sequences $\boldsymbol{x}_{1},...,\boldsymbol{x}_{N/2}$ with a single change, and the associated labels are $(y_{1},...,y_{N/2})^{\top}=\mathbf{1}_{N/2}$. We then repeat the process another $N/2$ times with $\mu_{\mathrm{R}}=\mu_{\mathrm{L}}$ to generate sequences without changes, $\boldsymbol{x}_{N/2+1},...,\boldsymbol{x}_{N}$, with $(y_{N/2+1},...,y_{N})^{\top}=\mathbf{0}_{N/2}$. The data with and without change, $(\boldsymbol{x}_{i},y_{i})_{i\in[N]}$, are combined and randomly shuffled to form the training data. The test data are generated in a similar way, with a sample size $N_{\mathrm{test}}=30000$ and the slight modification that $\mu_{\mathrm{R}}\mid\tau\sim\mathrm{Unif}([-1.75b,-0.25b]\cup[0.25b,1.75b])$ when a change occurs. We note that the test data are drawn from the same distribution as the training set, though potentially having changes with signal-to-noise ratios outside the range covered by the training set. We have also conducted robustness studies to investigate the effect of training the neural networks on scenario S1 and testing on S1${}^{\prime}$, S2 or S3. Qualitatively similar results to Figure 2 were obtained in this misspecified setting (see Figure 6 in the appendix).
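To make the data-generating recipe concrete, the following hedged sketch (an illustration of the description above, not our exact code) generates a labelled training set under scenario S1${}^{\prime}$ ($\rho_{t}=0.7$, Gaussian innovations) and applies the min-max standardisation onto $[0,1]$.

```python
import numpy as np

rng = np.random.default_rng(1)

def generate_sequence(n=100, change=True, rho=0.7):
    """One series of length n: change-in-mean signal plus AR(1) noise, standardised to [0, 1]."""
    xi = rng.normal(size=n)                      # innovations xi_t ~ N(0, 1)
    eps = np.empty(n)                            # AR(1) noise: eps_1 = xi_1, eps_t = rho*eps_{t-1} + xi_t
    eps[0] = xi[0]
    for t in range(1, n):
        eps[t] = rho * eps[t - 1] + xi[t]
    mu = np.zeros(n)                             # mu_L = 0
    if change:
        tau = rng.integers(2, n - 1)             # tau ~ Unif{2, ..., n-2}
        b = np.sqrt(8 * n * np.log(20 * n) / (tau * (n - tau)))
        mu[tau:] = rng.uniform(0.5 * b, 1.5 * b) * rng.choice([-1, 1])
    x = mu + eps
    return (x - x.min()) / (x.max() - x.min())   # min-max standardisation onto [0, 1]

N = 200                                          # training sample size
X = np.stack([generate_sequence(change=(i < N // 2)) for i in range(N)])
y = np.concatenate([np.ones(N // 2), np.zeros(N // 2)])  # 1 = change, 0 = no change
```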
(a) Scenario S1 with $\rho_{t}=0$ (b) Scenario S1 ${}^{\prime}$ with $\rho_{t}=0.7$
(c) Scenario S2 with $\rho_{t}\sim\text{Unif}([0,1])$ (d) Scenario S3 with Cauchy noise
Figure 2: Plot of the test set MER, computed on a test set of size $N_{\mathrm{test}}=30000$ , against training sample size $N$ for detecting the existence of a change-point on data series of length $n=100$ . We compare the performance of the CUSUM test and neural networks from four function classes: $\mathcal{H}_{1,m^{(1)}}$ , $\mathcal{H}_{1,m^{(2)}}$ , $\mathcal{H}_{5,m^{(1)}\mathbf{1}_{5}}$ and $\mathcal{H}_{10,m^{(1)}\mathbf{1}_{10}}$ where $m^{(1)}=4\lfloor\log_{2}(n)\rfloor$ and $m^{(2)}=2n-2$ respectively under scenarios S1, S1 ${}^{\prime}$ , S2 and S3 described in Section 5.
We compare the performance of the CUSUM-based classifier, with its threshold cross-validated on the training data, against neural networks from four function classes: $\mathcal{H}_{1,m^{(1)}}$, $\mathcal{H}_{1,m^{(2)}}$, $\mathcal{H}_{5,m^{(1)}\mathbf{1}_{5}}$ and $\mathcal{H}_{10,m^{(1)}\mathbf{1}_{10}}$, where $m^{(1)}=4\lfloor\log_{2}(n)\rfloor$ and $m^{(2)}=2n-2$ respectively (cf. Theorem 4.3 and Lemma 3.1). Figure 2 shows the test misclassification error rate (MER) of these procedures in the four scenarios S1, S1${}^{\prime}$, S2 and S3. We observe that when data are generated with independent Gaussian noise (Figure 2(a)), the trained neural networks with $m^{(1)}$ and $m^{(2)}$ single-hidden-layer nodes attain very similar test MER to the CUSUM-based classifier. This is in line with our Theorem 4.3. More interestingly, when the noise has either autocorrelation (Figure 2(b, c)) or a heavy-tailed distribution (Figure 2(d)), the trained neural networks with $(L,\mathbf{m})$ equal to $(1,m^{(1)})$, $(1,m^{(2)})$, $(5,m^{(1)}\mathbf{1}_{5})$ and $(10,m^{(1)}\mathbf{1}_{10})$ outperform the CUSUM-based classifier, even after we have optimised the threshold choice of the latter. In addition, as shown in Figure 5 in the online supplement, when the first two layers of the network are set to carry out truncation, which can be seen as a composition of two ReLU operations, the resulting neural network outperforms the Wilcoxon statistic-based classifier (Dehling et al., 2015), which is a standard benchmark for change-point detection in the presence of heavy-tailed noise. Furthermore, from Figure 2, we see that increasing $L$ can significantly reduce the average MER when $N\leq 200$. Theoretically, as the number of layers $L$ increases, the neural network is better able to approximate the optimal decision boundary, but it becomes increasingly difficult to train the weights due to issues such as vanishing gradients (He et al., 2016). A combination of these considerations leads us to develop a deep neural network architecture with residual connections for detecting multiple changes and multiple change types in Section 6.
6 Detecting multiple changes and multiple change types – case study
From the previous section, we see that single and multiple hidden layer neural networks can represent CUSUM or generalised CUSUM tests and may perform better than likelihood-based test statistics when the model is misspecified. This prompted us to seek a general network architecture that can detect, and even classify, multiple types of change. Motivated by the similarities between signal processing and image recognition, we employed a deep convolutional neural network (CNN) (Yamashita et al., 2018) to learn the various features of multiple change-types. However, stacking more CNN layers cannot guarantee a better network because of vanishing gradients in training (He et al., 2016). Therefore, we adopted the residual block structure (He et al., 2016) for our neural network architecture. After experimenting with various architectures with different numbers of residual blocks and fully connected layers on synthetic data, we arrived at a network architecture with 21 residual blocks followed by a number of fully connected layers. Figure 9 shows an overview of the architecture of the final general-purpose deep neural network for change-point detection. The precise architecture and training methodology of this network $\widehat{NN}$ can be found in Appendix C. Neural Architecture Search (NAS) approaches (see Paaß and Giesselbach, 2023, Section 2.4.3) offer principled ways of selecting neural architectures, and some of these approaches could be made applicable in our setting.

We demonstrate the power of our general-purpose change-point detection network in a numerical study. We train the network on $N=10000$ instances of data sequences generated from a mixture of no change-point in mean or variance, change in mean only, change in variance only, no change in a non-zero slope and change in slope only, and compare its classification performance on a test set of size $2500$ against that of oracle likelihood-based classifiers (where we pre-specify whether we are testing for a change in mean, variance or slope) and adaptive likelihood-based classifiers (where we combine likelihood-based tests using the Bayesian Information Criterion). Details of the data-generating mechanism and classifiers can be found in Appendix B. The classification accuracy of the three approaches in weak and strong signal-to-noise ratio settings is reported in Table 1. We see that the neural network-based approach achieves similar classification accuracy to the adaptive likelihood-based method for weak SNR and higher classification accuracy than the adaptive likelihood-based method for strong SNR. We would not expect the neural network to outperform the oracle likelihood-based classifiers, as it has no knowledge of the exact change-type of each time series.
Table 1: Test classification accuracy of oracle likelihood-ratio based method (LR ${}^{\mathrm{oracle}}$ ), adaptive likelihood ratio method (LR ${}^{\mathrm{adapt}}$ ) and our residual neural network (NN) classifier for setups with weak and strong signal-to-noise ratios (SNR). Data are generated as a mixture of no change-point in mean or variance (Class 1), change in mean only (Class 2), change in variance only (Class 3), no-change in a non-zero slope (Class 4), change in slope only (Class 5). We report the true positive rate of each class and the accuracy in the last row.
| | LR${}^{\mathrm{oracle}}$ (weak SNR) | LR${}^{\mathrm{adapt}}$ (weak SNR) | NN (weak SNR) | LR${}^{\mathrm{oracle}}$ (strong SNR) | LR${}^{\mathrm{adapt}}$ (strong SNR) | NN (strong SNR) |
| --- | --- | --- | --- | --- | --- | --- |
| Class 1 | 0.9787 | 0.9457 | 0.8062 | 0.9787 | 0.9341 | 0.9651 |
| Class 2 | 0.8443 | 0.8164 | 0.8882 | 1.0000 | 0.7784 | 0.9860 |
| Class 3 | 0.8350 | 0.8291 | 0.8585 | 0.9902 | 0.9902 | 0.9705 |
| Class 4 | 0.9960 | 0.9453 | 0.8826 | 0.9980 | 0.9372 | 0.9312 |
| Class 5 | 0.8729 | 0.8604 | 0.8353 | 0.9958 | 0.9917 | 0.9147 |
| Accuracy | 0.9056 | 0.8796 | 0.8660 | 0.9924 | 0.9260 | 0.9672 |
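For readers unfamiliar with residual architectures, the hedged sketch below shows a one-dimensional convolutional residual block of the general kind used as the building block of our network; the exact architecture (21 residual blocks followed by fully connected layers) and all hyperparameters are specified in Appendix C, and the kernel size and channel counts here are purely illustrative.

```python
import torch
import torch.nn as nn

class ResidualBlock1d(nn.Module):
    """Two 1D convolutions with batch normalisation and a skip connection."""

    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.bn2 = nn.BatchNorm1d(channels)

    def forward(self, x):                        # x: (batch, channels, length)
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)               # residual (skip) connection
```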
We now consider an application to detecting different types of change. The HASC (Human Activity Sensing Consortium) project data contain motion sensor measurements during a sequence of human activities, including “stay”, “walk”, “jog”, “skip”, “stair up” and “stair down”. Complex changes in sensor signals occur during transitions from one activity to the next (see Figure 3). There are 28 labels in the HASC data; see Figure 10 in the appendix. To agree with the dimension of the output, we drop the two dense layers “Dense(10)” and “Dense(20)” in Figure 9. The resulting network can be effectively applied for change-point detection in sensory signals of human activities, and can achieve high accuracy in change-point classification tasks (Figure 12 in the appendix).

Finally, we remark that our neural network-based change-point detector can be utilised to detect multiple change-points. Algorithm 1 outlines a general scheme for turning a change-point classifier into a location estimator, where we employ an idea similar to that of MOSUM (Eichinger and Kirch, 2018) and repeatedly apply a classifier $\psi$ to data from a sliding window of size $n$. Here, we require $\psi$ applied to each data segment $\boldsymbol{X}^{*}_{[i,i+n)}$ to output both the class label $L_{i}=0$ or $1$, according to whether no change or a change is predicted, and the corresponding probability $p_{i}$ of having a change. In our particular example, for each data segment $\boldsymbol{X}^{*}_{[i,i+n)}$ of length $n=700$, we define $\psi(\boldsymbol{X}^{*}_{[i,i+n)})=0$ if $\widehat{NN}(\boldsymbol{X}^{*}_{[i,i+n)})$ predicts a class label in $\{0,4,8,12,16,22\}$ (see Figure 10 in the appendix) and 1 otherwise. The thresholding parameter $\gamma>0$ is chosen to be $1/2$.
Input: new data $\boldsymbol{x}_{1}^{*},...,\boldsymbol{x}_{n^{*}}^{*}∈\mathbb{R}^{d}$ , a trained classifier $\psi:\mathbb{R}^{d× n}→\{0,1\}$ , $\gamma>0$ .
1 Form $\boldsymbol{X}_{[i,i+n)}^{*}:=(\boldsymbol{x}_{i}^{*},...,\boldsymbol{x}_{i+n-1}^{*})$ and compute $L_{i}\leftarrow\psi(\boldsymbol{X}^{*}_{[i,i+n)})$ for all $i=1,...,n^{*}-n+1$;
2 Compute $\bar{L}_{i}← n^{-1}\sum_{j=i-n+1}^{i}L_{j}$ for $i=n,...,n^{*}-n+1$ ;
3 Let $\{[s_{1},e_{1}],...,[s_{\hat{\nu}},e_{\hat{\nu}}]\}$ be the set of all maximal segments such that $\bar{L}_{i}≥\gamma$ for all $i∈[s_{r},e_{r}]$ , $r∈[\hat{\nu}]$ ;
4 Compute $\hat{\tau}_{r}←\operatorname*{arg\,max}_{i∈[s_{r},e_{r}]}\bar{L}_{i}$ for all $r∈[\hat{\nu}]$ ;
Output: Estimated change-points $\hat{\tau}_{1},...,\hat{\tau}_{\hat{\nu}}$
Algorithm 1 Algorithm for change-point localisation
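A minimal NumPy sketch of Algorithm 1 is given below, assuming `psi` is any trained classifier mapping a $d\times n$ window of data to $\{0,1\}$; it illustrates the scheme rather than reproducing the implementation used in our experiments.

```python
import numpy as np

def localise_changepoints(X, psi, n, gamma):
    """X: (d, n_star) array of new data; returns estimated change-point locations."""
    d, n_star = X.shape
    # Step 1: window-wise labels L_i for i = 1, ..., n_star - n + 1
    L = np.array([psi(X[:, i:i + n]) for i in range(n_star - n + 1)])
    # Step 2: running averages of the previous n labels, for i = n, ..., n_star - n + 1
    Lbar = np.array([L[i - n:i].mean() for i in range(n, len(L) + 1)])
    # Steps 3-4: maximal segments with Lbar >= gamma, and the argmax within each
    above = Lbar >= gamma
    changepoints, r = [], 0
    while r < len(above):
        if above[r]:
            s = r
            while r < len(above) and above[r]:
                r += 1
            changepoints.append(n + s + int(np.argmax(Lbar[s:r])))  # back to the original time index
        else:
            r += 1
    return changepoints
```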
Figure 4 illustrates the result of multiple change-point detection in the HASC data, which provides evidence that the trained neural network can detect both multiple change-types and multiple change-points.
Figure 3: The sequence of accelerometer data in the $x,y$ and $z$ axes. From left to right, there are 4 activities: “stair down”, “stay”, “stair up” and “walk”; the change-points between them, at 990, 1691 and 2733, are marked by black solid lines. The grey rectangles represent the “no-change” group with labels “stair down”, “stair up” and “walk”; the red rectangles represent the “one-change” group with labels “stair down $→$ stay”, “stay $→$ stair up” and “stair up $→$ walk”.
Figure 4: Change-point detection in HASC data. The red vertical lines represent the underlying change-points, and the blue vertical lines represent the estimated change-points. More details on multiple change-point detection can be found in Appendix C.
7 Discussion
Reliable testing for change-points and estimating their locations, especially in the presence of multiple change-points, other heterogeneities or untidy data, is typically a difficult problem for the applied statistician: they need to understand what type of change is sought, be able to characterise it mathematically, find a satisfactory stochastic model for the data, formulate the appropriate statistic, and fine-tune its parameters. This makes for a long workflow, with scope for errors at every stage. In this paper, we showed how a carefully constructed statistical learning framework could automatically take over some of those tasks, and perform many of them ‘in one go’ when provided with examples of labelled data. This turned the change-point detection problem into a supervised learning problem, and meant that the task of learning the appropriate test statistic and fine-tuning its parameters was left to the ‘machine’ rather than the human user.

The crucial question was that of choosing an appropriate statistical learning framework. The key factor behind our choice of neural networks was the discovery that the traditionally-used likelihood-ratio-based change-point detection statistics could be viewed as simple neural networks, which (together with bounds on generalisation errors beyond the training set) enabled us to formulate and prove the corresponding learning theory. However, there is a plethora of other excellent predictive frameworks, such as XGBoost, LightGBM or Random Forests (Chen and Guestrin, 2016; Ke et al., 2017; Breiman, 2001), and it would be of interest to establish whether and why they could or could not provide a viable alternative to neural nets here. Furthermore, if we view the neural network as emulating the likelihood-ratio test statistic, in that it will create test statistics for each possible location of a change and then amalgamate these into a single classifier, then we know that test statistics for nearby changes will often be similar. This suggests that imposing some smoothness on the weights of the neural network may be beneficial.

A further challenge is to develop methods that can adapt easily to input data of different sizes, without having to train a different neural network for each input size. For changes in the structure of the mean of the data, it may be possible to use ideas from functional data analysis so that we pre-process the data, with some form of smoothing or imputation, to produce input data of the correct length.

If historical labelled examples of change-points, perhaps provided by subject-matter experts (who are not necessarily statisticians), are not available, one question of interest is whether simulation can be used to obtain such labelled examples artificially, based on (say) a single dataset of interest. Such simulated examples would need to come in two flavours: one batch ‘likely containing no change-points’ and the other containing some artificially induced ones. How to simulate reliably in this way is an important problem, which this paper does not solve. Indeed, we can envisage situations in which simulating in this way may be easier than solving the original unsupervised change-point problem involving the single dataset at hand, with the bulk of the difficulty left to the ‘machine’ at the learning stage when provided with the simulated data. For situations where there is no historical data, but there are statistical models, one can obtain training data by simulation from the model.
In this case, training a neural network to detect a change has similarities with likelihood-free inference methods in that it replaces analytic calculations associated with a model by the ability to simulate from the model. It is of interest whether ideas from that area of statistics can be used here. The main focus of our work was on testing for a single offline change-point, and we treated location estimation and extensions to multiple-change scenarios only superficially, via the heuristics of testing-based estimation in Section 6. Similar extensions can be made to the online setting once the neural network is trained, by retaining the final $n$ observations in an online stream in memory and applying our change-point classifier sequentially. One question of interest is whether and how these heuristics can be made more rigorous: equipped with an offline classifier only, how can we translate the theoretical guarantee of this offline classifier to that of the corresponding location estimator or online detection procedure? In addition to this approach, how else can a neural network, however complex, be trained to estimate locations or detect change-points sequentially? In our view, these questions merit further work.
Availability of data and computer code
The data underlying this article are available at http://hasc.jp/hc2011/index-en.html. The computer code and algorithms are available in the Python package AutoCPD.
Acknowledgement
This work was supported by the High End Computing Cluster at Lancaster University, and EPSRC grants EP/V053590/1, EP/V053639/1 and EP/T02772X/1. We highly appreciate Yudong Chen’s help in debugging our Python scripts and improving their readability.
Conflicts of Interest
We have no conflicts of interest to disclose.
References
- Ahmadzadeh (2018) Ahmadzadeh, F. (2018). Change point detection with multivariate control charts by artificial neural network. J. Adv. Manuf. Technol. 97 (9), 3179–3190.
- Aminikhanghahi and Cook (2017) Aminikhanghahi, S. and D. J. Cook (2017). Using change point detection to automate daily activity segmentation. In 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), pp. 262–267.
- Baranowski et al. (2019) Baranowski, R., Y. Chen, and P. Fryzlewicz (2019). Narrowest-over-threshold detection of multiple change points and change-point-like features. J. Roy. Stat. Soc., Ser. B 81 (3), 649–672.
- Bartlett et al. (2019) Bartlett, P. L., N. Harvey, C. Liaw, and A. Mehrabian (2019). Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks. J. Mach. Learn. Res. 20 (63), 1–17.
- Beaumont (2019) Beaumont, M. A. (2019). Approximate Bayesian computation. Annu. Rev. Stat. Appl. 6, 379–403.
- Bengio et al. (1994) Bengio, Y., P. Simard, and P. Frasconi (1994). Learning long-term dependencies with gradient descent is difficult. IEEE T. Neural Networ. 5 (2), 157–166.
- Bos and Schmidt-Hieber (2022) Bos, T. and J. Schmidt-Hieber (2022). Convergence rates of deep ReLU networks for multiclass classification. Electron. J. Stat. 16 (1), 2724–2773.
- Breiman (2001) Breiman, L. (2001). Random forests. Mach. Learn. 45 (1), 5–32.
- Chang et al. (2019) Chang, W.-C., C.-L. Li, Y. Yang, and B. Póczos (2019). Kernel change-point detection with auxiliary deep generative models. In International Conference on Learning Representations.
- Chen and Gupta (2012) Chen, J. and A. K. Gupta (2012). Parametric Statistical Change Point Analysis: With Applications to Genetics, Medicine, and Finance (2nd ed.). New York: Birkhäuser.
- Chen and Guestrin (2016) Chen, T. and C. Guestrin (2016). XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794.
- De Ryck et al. (2021) De Ryck, T., M. De Vos, and A. Bertrand (2021). Change point detection in time series data using autoencoders with a time-invariant representation. IEEE T. Signal Proces. 69, 3513–3524.
- Dehling et al. (2015) Dehling, H., R. Fried, I. Garcia, and M. Wendler (2015). Change-point detection under dependence based on two-sample U-statistics. In D. Dawson, R. Kulik, M. Ould Haye, B. Szyszkowicz, and Y. Zhao (Eds.), Asymptotic Laws and Methods in Stochastics: A Volume in Honour of Miklós Csörgő, pp. 195–220. New York, NY: Springer New York.
- Dürre et al. (2016) Dürre, A., R. Fried, T. Liboschik, and J. Rathjens (2016). robts: Robust Time Series Analysis. R package version 0.3.0/r251.
- Eichinger and Kirch (2018) Eichinger, B. and C. Kirch (2018). A MOSUM procedure for the estimation of multiple random change points. Bernoulli 24 (1), 526–564.
- Fearnhead et al. (2019) Fearnhead, P., R. Maidstone, and A. Letchford (2019). Detecting changes in slope with an $l_{0}$ penalty. J. Comput. Graph. Stat. 28 (2), 265–275.
- Fearnhead and Rigaill (2020) Fearnhead, P. and G. Rigaill (2020). Relating and comparing methods for detecting changes in mean. Stat 9 (1), 1–11.
- Fryzlewicz (2014) Fryzlewicz, P. (2014). Wild binary segmentation for multiple change-point detection. Ann. Stat. 42 (6), 2243–2281.
- Fryzlewicz (2021) Fryzlewicz, P. (2021). Robust narrowest significance pursuit: Inference for multiple change-points in the median. arXiv preprint, arxiv:2109.02487.
- Fryzlewicz (2023) Fryzlewicz, P. (2023). Narrowest significance pursuit: Inference for multiple change-points in linear models. J. Am. Stat. Assoc., to appear.
- Gao et al. (2019) Gao, Z., Z. Shang, P. Du, and J. L. Robertson (2019). Variance change point detection under a smoothly-changing mean trend with application to liver procurement. J. Am. Stat. Assoc. 114 (526), 773–781.
- Glorot and Bengio (2010) Glorot, X. and Y. Bengio (2010). Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256. JMLR Workshop and Conference Proceedings.
- Gourieroux et al. (1993) Gourieroux, C., A. Monfort, and E. Renault (1993). Indirect inference. J. Appl. Econom. 8 (S1), S85–S118.
- Gupta et al. (2022) Gupta, M., R. Wadhvani, and A. Rasool (2022). Real-time change-point detection: A deep neural network-based adaptive approach for detecting changes in multivariate time series data. Expert Syst. Appl. 209, 1–16.
- Gutmann et al. (2018) Gutmann, M. U., R. Dutta, S. Kaski, and J. Corander (2018). Likelihood-free inference via classification. Stat. Comput. 28 (2), 411–425.
- Haynes et al. (2017) Haynes, K., I. A. Eckley, and P. Fearnhead (2017). Computationally efficient changepoint detection for a range of penalties. J. Comput. Graph. Stat. 26 (1), 134–143.
- He and Sun (2015) He, K. and J. Sun (2015). Convolutional neural networks at constrained time cost. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5353–5360.
- He et al. (2016) He, K., X. Zhang, S. Ren, and J. Sun (2016, June). Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778.
- Hocking et al. (2015) Hocking, T., G. Rigaill, and G. Bourque (2015). PeakSeg: constrained optimal segmentation and supervised penalty learning for peak detection in count data. In International Conference on Machine Learning, pp. 324–332. PMLR.
- Huang et al. (2023) Huang, T.-J., Q.-L. Zhou, H.-J. Ye, and D.-C. Zhan (2023). Change point detection via synthetic signals. In 8th Workshop on Advanced Analytics and Learning on Temporal Data.
- Ioffe and Szegedy (2015) Ioffe, S. and C. Szegedy (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML’15, pp. 448–456. JMLR.org.
- James et al. (1987) James, B., K. L. James, and D. Siegmund (1987). Tests for a change-point. Biometrika 74 (1), 71–83.
- Jandhyala et al. (2013) Jandhyala, V., S. Fotopoulos, I. MacNeill, and P. Liu (2013). Inference for single and multiple change-points in time series. J. Time Ser. Anal. 34 (4), 423–446.
- Ke et al. (2017) Ke, G., Q. Meng, T. Finley, T. Wang, W. Chen, W. Ma, Q. Ye, and T.-Y. Liu (2017). LightGBM: A highly efficient gradient boosting decision tree. Adv. Neur. In. 30, 3146–3154.
- Killick et al. (2012) Killick, R., P. Fearnhead, and I. A. Eckley (2012). Optimal detection of changepoints with a linear computational cost. J. Am. Stat. Assoc. 107 (500), 1590–1598.
- Kingma and Ba (2015) Kingma, D. P. and J. Ba (2015). Adam: A method for stochastic optimization. In Y. Bengio and Y. LeCun (Eds.), ICLR (Poster).
- Kuchibhotla and Chakrabortty (2022) Kuchibhotla, A. K. and A. Chakrabortty (2022). Moving beyond sub-Gaussianity in high-dimensional statistics: Applications in covariance estimation and linear regression. Inf. Inference: A Journal of the IMA 11 (4), 1389–1456.
- Lee et al. (2023) Lee, J., Y. Xie, and X. Cheng (2023). Training neural networks for sequential change-point detection. In IEEE ICASSP 2023, pp. 1–5. IEEE.
- Li et al. (2015) Li, F., Z. Tian, Y. Xiao, and Z. Chen (2015). Variance change-point detection in panel data models. Econ. Lett. 126, 140–143.
- Li et al. (2023) Li, J., P. Fearnhead, P. Fryzlewicz, and T. Wang (2023). Automatic change-point detection in time series via deep learning. submitted, arxiv:2211.03860.
- Li et al. (2023) Li, M., Y. Chen, T. Wang, and Y. Yu (2023). Robust mean change point testing in high-dimensional data with heavy tails. arXiv preprint, arxiv:2305.18987.
- Liehrmann et al. (2021) Liehrmann, A., G. Rigaill, and T. D. Hocking (2021). Increased peak detection accuracy in over-dispersed ChIP-seq data with supervised segmentation models. BMC Bioinform. 22 (1), 1–18.
- Londschien et al. (2022) Londschien, M., P. Bühlmann, and S. Kovács (2022). Random forests for change point detection. arXiv preprint, arxiv:2205.04997.
- Mohri et al. (2012) Mohri, M., A. Rostamizadeh, and A. Talwalkar (2012). Foundations of Machine Learning. Adaptive Computation and Machine Learning Series. Cambridge, MA: MIT Press.
- Ng (2004) Ng, A. Y. (2004). Feature selection, $l_{1}$ vs. $l_{2}$ regularization, and rotational invariance. In Proceedings of the Twenty-First International Conference on Machine Learning, ICML ’04, New York, NY, USA, pp. 78. Association for Computing Machinery.
- Oh et al. (2005) Oh, K. J., M. S. Moon, and T. Y. Kim (2005). Variance change point detection via artificial neural networks for data separation. Neurocomputing 68, 239–250.
- Paaß and Giesselbach (2023) Paaß, G. and S. Giesselbach (2023). Foundation Models for Natural Language Processing: Pre-trained Language Models Integrating Media. Artificial Intelligence: Foundations, Theory, and Algorithms. Springer International Publishing.
- Picard et al. (2005) Picard, F., S. Robin, M. Lavielle, C. Vaisse, and J.-J. Daudin (2005). A statistical approach for array CGH data analysis. BMC Bioinform. 6 (1).
- Reeves et al. (2007) Reeves, J., J. Chen, X. L. Wang, R. Lund, and Q. Q. Lu (2007). A review and comparison of changepoint detection techniques for climate data. J. Appl. Meteorol. Clim. 46 (6), 900–915.
- Ripley (1994) Ripley, B. D. (1994). Neural networks and related methods for classification. J. Roy. Stat. Soc., Ser. B 56 (3), 409–456.
- Schmidt-Hieber (2020) Schmidt-Hieber, J. (2020). Nonparametric regression using deep neural networks with ReLU activation function. Ann. Stat. 48 (4), 1875–1897.
- Shalev-Shwartz and Ben-David (2014) Shalev-Shwartz, S. and S. Ben-David (2014). Understanding Machine Learning: From Theory to Algorithms. New York, NY, USA: Cambridge University Press.
- Truong et al. (2020) Truong, C., L. Oudre, and N. Vayatis (2020). Selective review of offline change point detection methods. Signal Process. 167, 107299.
- Verzelen et al. (2020) Verzelen, N., M. Fromont, M. Lerasle, and P. Reynaud-Bouret (2020). Optimal change-point detection and localization. arXiv preprint, arxiv:2010.11470.
- Wang and Samworth (2018) Wang, T. and R. J. Samworth (2018). High dimensional change point estimation via sparse projection. J. Roy. Stat. Soc., Ser. B 80 (1), 57–83.
- Yamashita et al. (2018) Yamashita, R., M. Nishio, R. K. G. Do, and K. Togashi (2018). Convolutional neural networks: an overview and application in radiology. Insights into Imaging 9 (4), 611–629.
This is the appendix for the main paper Li, Fearnhead, Fryzlewicz, and Wang (2023), hereafter referred to as the main text. We present proofs of our main lemmas and theorems. Various technical details, results of numerical study and real data analysis are also listed here.
Appendix A Proofs
A.1 The proof of Lemma 3.1
Define $W_{0}\coloneqq(\boldsymbol{v}_{1},...,\boldsymbol{v}_{n-1},-\boldsymbol{v}_{1},...,-\boldsymbol{v}_{n-1})^{\top}$ and $W_{1}\coloneqq\boldsymbol{1}_{2n-2}$ , $\boldsymbol{b}_{1}\coloneqq\lambda\boldsymbol{1}_{2n-2}$ and $b_{2}\coloneqq 0$ . Then $h(\boldsymbol{x})\coloneqq\sigma^{*}_{b_{2}}W_{1}\sigma_{\boldsymbol{b}_{1}}W_{0}\boldsymbol{x}∈\mathcal{H}_{1,2n-2}$ can be rewritten as
$$
h(\boldsymbol{x})=\mathbbm{1}\biggl\{\sum_{i=1}^{n-1}\bigl\{(\boldsymbol{v}_{i}^{\top}\boldsymbol{x}-\lambda)_{+}+(-\boldsymbol{v}_{i}^{\top}\boldsymbol{x}-\lambda)_{+}\bigr\}>b_{2}\biggr\}=\mathbbm{1}\{\|\mathcal{C}(\boldsymbol{x})\|_{\infty}>\lambda\}=h_{\lambda}^{\mathrm{CUSUM}}(\boldsymbol{x}),
$$
as desired.
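As a sanity check, the representation above can be verified numerically. The following sketch is an illustration only (the CUSUM contrast vectors are written out explicitly under their standard unit-norm definition); it confirms that the two-layer network with hidden units $(\boldsymbol{v}_{i}^{\top}\boldsymbol{x}-\lambda)_{+}$ and $(-\boldsymbol{v}_{i}^{\top}\boldsymbol{x}-\lambda)_{+}$ and an all-ones second layer reproduces $h_{\lambda}^{\mathrm{CUSUM}}$ on every input.

```python
import numpy as np

def cusum_vectors(n):
    """Unit-norm CUSUM contrast vectors v_1, ..., v_{n-1} (standard definition)."""
    V = np.zeros((n - 1, n))
    for t in range(1, n):
        V[t - 1, :t] = np.sqrt((n - t) / (n * t))
        V[t - 1, t:] = -np.sqrt(t / (n * (n - t)))
    return V

def h_cusum(x, lam):
    """Direct CUSUM test: 1{ ||C(x)||_inf > lambda }."""
    V = cusum_vectors(len(x))
    return int(np.max(np.abs(V @ x)) > lam)

def h_network(x, lam):
    """Two-layer ReLU network of Lemma 3.1: hidden units (v_i^T x - lam)_+ and
    (-v_i^T x - lam)_+, summed with all-ones weights and thresholded at 0."""
    V = cusum_vectors(len(x))
    W0 = np.vstack([V, -V])                    # first-layer weights, shape (2n-2, n)
    hidden = np.maximum(W0 @ x - lam, 0.0)     # ReLU layer with bias -lambda
    return int(np.sum(hidden) > 0.0)           # second layer and indicator output

rng = np.random.default_rng(0)
n, lam = 100, np.sqrt(2 * np.log(100 / 0.05))
for _ in range(200):
    # half of the draws contain a mean shift in the second half of the series
    shift = rng.normal() * (rng.random() < 0.5)
    x = rng.normal(size=n) + shift * (np.arange(n) >= n // 2)
    assert h_cusum(x, lam) == h_network(x, lam)   # the two tests always agree
```

The assertion holds because the sum of the ReLU outputs is strictly positive exactly when $\|\mathcal{C}(\boldsymbol{x})\|_{\infty}>\lambda$.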
A.2 The Proof of Lemma 3.2
As $\boldsymbol{\Gamma}$ is invertible, (2) in main text is equivalent to
$$
\boldsymbol{\Gamma}^{-1}\boldsymbol{X}=\boldsymbol{\Gamma}^{-1}\boldsymbol{Z}\boldsymbol{\beta}+\boldsymbol{\Gamma}^{-1}\boldsymbol{c}_{\tau}\phi+\boldsymbol{\xi}.
$$
Write $\tilde{\boldsymbol{X}}=\boldsymbol{\Gamma}^{-1}\boldsymbol{X}$ , $\tilde{\boldsymbol{Z}}=\boldsymbol{\Gamma}^{-1}\boldsymbol{Z}$ and $\tilde{\boldsymbol{c}}_{\tau}=\boldsymbol{\Gamma}^{-1}\boldsymbol{c}_{\tau}$ . If $\tilde{\boldsymbol{c}}_{\tau}$ lies in the column span of $\tilde{\boldsymbol{Z}}$ , then the model with a change at $\tau$ is equivalent to the model with no change, and the likelihood-ratio test statistic will be 0. Otherwise we can assume, without loss of generality that $\tilde{\boldsymbol{c}}_{\tau}$ is orthogonal to each column of $\tilde{\boldsymbol{Z}}$ : if this is not the case we can construct an equivalent model where we replace $\tilde{\boldsymbol{c}}_{\tau}$ with its projection to the space that is orthogonal to the column span of $\tilde{\boldsymbol{Z}}$ . As $\boldsymbol{\xi}$ is a vector of independent standard normal random variables, the likelihood-ratio statistic for a change at $\tau$ against no change is a monotone function of the reduction in the residual sum of squares of the model with a change at $\tau$ . The residual sum of squares of the no change model is
$$
\tilde{\boldsymbol{X}}^{\top}\tilde{\boldsymbol{X}}-\tilde{\boldsymbol{X}}^{\top}\tilde{\boldsymbol{Z}}(\tilde{\boldsymbol{Z}}^{\top}\tilde{\boldsymbol{Z}})^{-1}\tilde{\boldsymbol{Z}}^{\top}\tilde{\boldsymbol{X}}.
$$
The residual sum of squares for the model with a change at $\tau$ is
$$
\tilde{\boldsymbol{X}}^{\top}\tilde{\boldsymbol{X}}-\tilde{\boldsymbol{X}}^{\top}[\tilde{\boldsymbol{Z}},\tilde{\boldsymbol{c}}_{\tau}]([\tilde{\boldsymbol{Z}},\tilde{\boldsymbol{c}}_{\tau}]^{\top}[\tilde{\boldsymbol{Z}},\tilde{\boldsymbol{c}}_{\tau}])^{-1}[\tilde{\boldsymbol{Z}},\tilde{\boldsymbol{c}}_{\tau}]^{\top}\tilde{\boldsymbol{X}}=\tilde{\boldsymbol{X}}^{\top}\tilde{\boldsymbol{X}}-\tilde{\boldsymbol{X}}^{\top}\tilde{\boldsymbol{Z}}(\tilde{\boldsymbol{Z}}^{\top}\tilde{\boldsymbol{Z}})^{-1}\tilde{\boldsymbol{Z}}^{\top}\tilde{\boldsymbol{X}}-\tilde{\boldsymbol{X}}^{\top}\tilde{\boldsymbol{c}}_{\tau}(\tilde{\boldsymbol{c}}_{\tau}^{\top}\tilde{\boldsymbol{c}}_{\tau})^{-1}\tilde{\boldsymbol{c}}_{\tau}^{\top}\tilde{\boldsymbol{X}}.
$$
Thus, the reduction in the residual sum of squares of the model with a change at $\tau$ over the no-change model is
$$
\tilde{\boldsymbol{X}}^{\top}\tilde{\boldsymbol{c}}_{\tau}(\tilde{\boldsymbol{c}}_{\tau}^{\top}\tilde{\boldsymbol{c}}_{\tau})^{-1}\tilde{\boldsymbol{c}}_{\tau}^{\top}\tilde{\boldsymbol{X}}=\left(\frac{1}{\sqrt{\tilde{\boldsymbol{c}}_{\tau}^{\top}\tilde{\boldsymbol{c}}_{\tau}}}\tilde{\boldsymbol{c}}_{\tau}^{\top}\tilde{\boldsymbol{X}}\right)^{2}.
$$
Thus if we define
$$
\boldsymbol{v}_{\tau}=\frac{1}{\sqrt{\tilde{\boldsymbol{c}}_{\tau}^{\top}\tilde{\boldsymbol{c}}_{\tau}}}\tilde{\boldsymbol{c}}_{\tau}^{\top}\boldsymbol{\Gamma}^{-1},
$$
then the likelihood-ratio test statistic is a monotone function of $|\boldsymbol{v}_{\tau}\boldsymbol{X}|$ . This is true for all $\tau$ so the likelihood-ratio test is equivalent to
$$
\max_{\tau\in[n-1]}|\boldsymbol{v}_{\tau}\boldsymbol{X}|>\lambda,
$$
for some $\lambda$ . This is of a similar form to the standard CUSUM test, except that the form of $\boldsymbol{v}_{\tau}$ is different. Thus, by the same argument as for Lemma 3.1 in main text, we can replicate this test with $h(\boldsymbol{x})∈\mathcal{H}_{1,2n-2}$ , but with different weights to represent the different form for $\boldsymbol{v}_{\tau}$ .
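For concreteness, the construction of $\boldsymbol{v}_{\tau}$ described above can be written out as follows. This is a hedged sketch (the function name and interface are ours, not from the main text), in which $\tilde{\boldsymbol{c}}_{\tau}$ is replaced by its projection onto the orthogonal complement of the column span of $\tilde{\boldsymbol{Z}}$, exactly as in the reduction used in the proof.

```python
import numpy as np

def lr_projection_vectors(Gamma, Z, c_list):
    """Sketch of the construction above: the likelihood-ratio test is
    max_tau |v_tau @ X| > lambda with v_tau built as follows.

    Gamma  : (n, n) invertible matrix with noise = Gamma @ xi
    Z      : (n, p) matrix of nuisance regressors
    c_list : candidate change vectors c_tau, one per candidate location tau
    """
    Gamma_inv = np.linalg.inv(Gamma)
    Z_tilde = Gamma_inv @ Z
    # Projector onto the orthogonal complement of the column span of Z_tilde.
    P = np.eye(Z.shape[0]) - Z_tilde @ np.linalg.pinv(Z_tilde)
    vs = []
    for c_tau in c_list:
        c_tilde = P @ (Gamma_inv @ c_tau)      # projected \tilde c_tau
        norm2 = c_tilde @ c_tilde
        if norm2 < 1e-12:                      # \tilde c_tau lies in span(\tilde Z):
            vs.append(np.zeros(Z.shape[0]))    # the likelihood-ratio statistic is 0
        else:
            vs.append((c_tilde @ Gamma_inv) / np.sqrt(norm2))
    return np.array(vs)

# Example: change in mean with an intercept as nuisance and i.i.d. noise.
n = 50
Gamma, Z = np.eye(n), np.ones((n, 1))
c_list = [np.concatenate([np.zeros(t), np.ones(n - t)]) for t in range(1, n)]
V = lr_projection_vectors(Gamma, Z, c_list)
```

In this example the rows of the returned matrix reduce, up to sign, to the usual CUSUM contrasts, consistent with the remark above that only the form of $\boldsymbol{v}_{\tau}$ changes.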
A.3 The Proof of Lemma 4.1
* Proof*
(a) For each $i∈[n-1]$ , since ${\|\boldsymbol{v}_{i}\|_{2}}=1$ , we have $\boldsymbol{v}_{i}^{\top}\boldsymbol{X}\sim N(0,1)$ . Hence, by the Gaussian tail bound and a union bound,
$$
\mathbb{P}\Bigl\{\|\mathcal{C}(\boldsymbol{X})\|_{\infty}>t\Bigr\}\leq\sum_{i=1}^{n-1}\mathbb{P}\left(\left|\boldsymbol{v}_{i}^{\top}\boldsymbol{X}\right|>t\right)\leq n\exp(-t^{2}/2).
$$
The result follows by taking $t=\sqrt{2\log(n/\varepsilon)}$ . (b) We write $\boldsymbol{X}=\boldsymbol{\mu}+\boldsymbol{Z}$ , where $\boldsymbol{Z}\sim N_{n}(0,I_{n})$ . Since the CUSUM transformation is linear, we have $\mathcal{C}(\boldsymbol{X})=\mathcal{C}(\boldsymbol{\mu})+\mathcal{C}(\boldsymbol{Z})$ . By part (a), there is an event $\Omega$ with probability at least $1-\varepsilon$ on which $\|\mathcal{C}(\boldsymbol{Z})\|_{\infty}≤\sqrt{2\log(n/\varepsilon)}$ . Moreover, we have $\|\mathcal{C}(\boldsymbol{\mu})\|_{\infty}=|\boldsymbol{v}_{\tau}^{\top}\boldsymbol{\mu}|=|\mu_{\mathrm{L}}-\mu_{\mathrm{R}}|\sqrt{n\eta(1-\eta)}$ . Hence on $\Omega$ , we have by the triangle inequality that
$$
\|\mathcal{C}(\boldsymbol{X})\|_{\infty}\geq\|\mathcal{C}(\boldsymbol{\mu})\|_{\infty}-\|\mathcal{C}(\boldsymbol{Z})\|_{\infty}\geq|\mu_{\mathrm{L}}-\mu_{\mathrm{R}}|\sqrt{n\eta(1-\eta)}-\sqrt{2\log(n/\varepsilon)}>\sqrt{2\log(n/\varepsilon)},
$$
as desired. ∎
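A quick Monte Carlo illustration of part (a) is given below; it is only a numerical sanity check under the standard CUSUM transformation, comparing the empirical frequency with which $\|\mathcal{C}(\boldsymbol{X})\|_{\infty}$ exceeds $\sqrt{2\log(n/\varepsilon)}$ under no change with the bound $\varepsilon$.

```python
import numpy as np

# Monte Carlo check of part (a): under no change, ||C(X)||_inf exceeds
# sqrt(2 log(n / eps)) with probability at most eps.
rng = np.random.default_rng(1)
n, eps, reps = 200, 0.05, 2000
t = np.arange(1, n)
threshold = np.sqrt(2 * np.log(n / eps))
exceed = 0
for _ in range(reps):
    x = rng.normal(size=n)                              # mu_L = mu_R: no change
    s = np.cumsum(x)
    cusum = np.abs(np.sqrt((n - t) / (n * t)) * s[:-1]
                   - np.sqrt(t / (n * (n - t))) * (s[-1] - s[:-1]))
    exceed += np.max(cusum) > threshold
print(exceed / reps, "<=", eps)                         # empirical rate vs the bound
```

The printed frequency should fall below $\varepsilon$; the gap reflects the slack in the union bound.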
A.4 The Proof of Corollary 4.1
* Proof*
From Lemma 4.1 in main text with $\varepsilon=ne^{-nB^{2}/8}$ , we have
$$
\mathbb{P}(h_{\lambda}^{\mathrm{CUSUM}}(\boldsymbol{X})\neq Y\mid\tau,\mu_{\mathrm{L}},\mu_{\mathrm{R}})\leq ne^{-nB^{2}/8},
$$
and the desired result follows by integrating over $\pi_{0}$ . ∎
A.5 Auxiliary Lemma
**Lemma A.1**
*Define $T^{\prime}\coloneqq\{t_{0}∈\mathbb{Z}^{+}:{\left\lvert t_{0}-\tau\right\rvert}≤\min(\tau,n-\tau)/2\}$ . Then we have
$$
\min_{t_{0}\in T^{\prime}}|\boldsymbol{v}_{t_{0}}^{\top}\boldsymbol{\mu}|\geq\frac{\sqrt{3}}{3}|\mu_{\mathrm{L}}-\mu_{\mathrm{R}}|\sqrt{n\eta(1-\eta)}.
$$*
* Proof*
For simplicity, let $\Delta\coloneqq|\mu_{\mathrm{L}}-\mu_{\mathrm{R}}|$ . We can compute the CUSUM test statistics $a_{i}=|\boldsymbol{v}_{i}^{\top}\boldsymbol{\mu}|$ as
$$
a_{i}=\begin{cases}\Delta\left(1-\eta\right)\sqrt{\frac{ni}{n-i}}&1\leq i\leq\tau,\\
\Delta\eta\sqrt{\frac{n\left(n-i\right)}{i}}&\tau<i\leq n-1.\end{cases}
$$
It is easy to verify that $a_{\tau}=\max_{i}(a_{i})=\Delta\sqrt{n\eta(1-\eta)}$ , attained at $i=\tau$ . Next, we only discuss the case $1≤\tau≤\lfloor n/2\rfloor$ , as the case $\lceil n/2\rceil≤\tau≤ n$ follows by a symmetric argument. When $1≤\tau≤\lfloor n/2\rfloor$ , ${\left\lvert t_{0}-\tau\right\rvert}≤\min(\tau,n-\tau)/2$ implies that $t_{l}≤ t_{0}≤ t_{u}$ , where $t_{l}\coloneqq\lceil\tau/2\rceil$ and $t_{u}\coloneqq\lfloor 3\tau/2\rfloor$ . Because $a_{i}$ is increasing in $i$ on $[1,\tau]$ and decreasing in $i$ on $[\tau+1,n-1]$ , the minimum of $a_{t_{0}}$ over $t_{l}≤ t_{0}≤ t_{u}$ is attained at either $t_{l}$ or $t_{u}$ . Hence, we have
$$
a_{t_{l}}\geq a_{\tau/2}=a_{\tau}\sqrt{\frac{n-\tau}{2n-\tau}}\qquad\text{and}\qquad a_{t_{u}}\geq a_{3\tau/2}=a_{\tau}\sqrt{\frac{2n-3\tau}{3(n-\tau)}}.
$$
Define $f(x)\coloneqq\sqrt{\frac{n-x}{2n-x}}$ and $g(x)\coloneqq\sqrt{\frac{2n-3x}{3(n-x)}}$ . Since $f$ and $g$ are both decreasing and $\tau≤ n/2$ , we have $f(\tau)≥ f(n/2)=\sqrt{3}/3$ and $g(\tau)≥ g(n/2)=\sqrt{3}/3$ , as desired. ∎
A.6 The Proof of Theorem 4.2
* Proof*
Given any $L≥ 1$ and $\boldsymbol{m}=(m_{1},...,m_{L})^{\top}$ , let $m_{0}:=n$ and $m_{L+1}:=1$ , and set $W^{*}=\sum_{r=1}^{L+1}m_{r-1}m_{r}$ . Let $d\coloneqq\mathrm{VCdim}(\mathcal{H}_{L,\boldsymbol{m}})$ ; then by Bartlett et al. (2019, Theorem 7), we have $d=O(LW^{*}\log(W^{*}))$ . Thus, by Mohri et al. (2012, Corollary 3.4), for some universal constant $C>0$ , we have with probability at least $1-\delta$ that
$$
\mathbb{P}(h_{\mathrm{ERM}}(\boldsymbol{X})\neq Y\mid\mathcal{D})\leq\min_{h\in\mathcal{H}_{L,\boldsymbol{m}}}\mathbb{P}(h(\boldsymbol{X})\neq Y)+\sqrt{\frac{8d\log(2eN/d)+8\log(4/\delta)}{N}}. \tag{5}
$$
Here, we have $L=1$ , $m=2n-2$ , $W^{*}=O(n^{2})$ , so $d=O(n^{2}\log(n))$ . In addition, since $h^{\mathrm{CUSUM}}_{\lambda}∈\mathcal{H}_{1,2n-2}$ , we have $\min_{h∈\mathcal{H}_{L,\boldsymbol{m}}}\mathbb{P}(h(\boldsymbol{X})≠ Y)≤\mathbb{P}(h^{\mathrm{CUSUM}}_{\lambda}(\boldsymbol{X})≠ Y)≤ ne^{-nB^{2}/8}$ . Substituting these bounds into (5), we arrive at the desired result. ∎
A.7 The Proof of Theorem 4.3
The following lemma gives the misclassification error for the generalised CUSUM test, where we only test for changes on a grid $T_{0}$ of $O(\log n)$ values.
**Lemma A.2**
*Fix $\varepsilon∈(0,1)$ and suppose that $\boldsymbol{X}\sim P(n,\tau,\mu_{\mathrm{L}},\mu_{\mathrm{R}})$ for some $\tau∈[n-1]$ and $\mu_{\mathrm{L}},\mu_{\mathrm{R}}∈\mathbb{R}$ .
1. If $\mu_{\mathrm{L}}=\mu_{\mathrm{R}}$ , then
$$
\mathbb{P}\Bigl\{\max_{t\in T_{0}}|\boldsymbol{v}_{t}^{\top}\boldsymbol{X}|>\sqrt{2\log(|T_{0}|/\varepsilon)}\Bigr\}\leq\varepsilon.
$$
2. If $|\mu_{\mathrm{L}}-\mu_{\mathrm{R}}|\sqrt{\eta(1-\eta)}>\sqrt{24\log(|T_{0}|/\varepsilon)/n}$ , then we have
$$
\mathbb{P}\Bigl\{\max_{t\in T_{0}}|\boldsymbol{v}_{t}^{\top}\boldsymbol{X}|\leq\sqrt{2\log(|T_{0}|/\varepsilon)}\Bigr\}\leq\varepsilon.
$$*
* Proof*
(a) For each $t∈[n-1]$ , since ${\|\boldsymbol{v}_{t}\|_{2}}=1$ , we have $\boldsymbol{v}_{t}^{\top}\boldsymbol{X}\sim N(0,1)$ . Hence, by the Gaussian tail bound and a union bound,
$$
\mathbb{P}\Bigl\{\max_{t\in T_{0}}|\boldsymbol{v}_{t}^{\top}\boldsymbol{X}|>y\Bigr\}\leq\sum_{t\in T_{0}}\mathbb{P}\left(\left|\boldsymbol{v}_{t}^{\top}\boldsymbol{X}\right|>y\right)\leq|T_{0}|\exp(-y^{2}/2).
$$
The result follows by taking $y=\sqrt{2\log(|T_{0}|/\varepsilon)}$ . (b) There exists some $t_{0}∈ T_{0}$ such that $|t_{0}-\tau|≤\min\{\tau,n-\tau\}/2$ . By Lemma A.1, we have
$$
|\boldsymbol{v}_{t_{0}}^{\top}\mathbb{E}\boldsymbol{X}|\geq\frac{\sqrt{3}}{3}\|\mathcal{C}(\mathbb{E}\boldsymbol{X})\|_{\infty}\geq\frac{\sqrt{3}}{3}|\mu_{\mathrm{L}}-\mu_{\mathrm{R}}|\sqrt{n\eta(1-\eta)}\geq 2\sqrt{2\log(|T_{0}|/\varepsilon)}.
$$
Consequently, by the triangle inequality and result from part (a), we have with probability at least $1-\varepsilon$ that
$$
\max_{t\in T_{0}}|\boldsymbol{v}_{t}^{\top}\boldsymbol{X}|\geq|\boldsymbol{v}_{t_{0}}^{\top}\boldsymbol{X}|\geq|\boldsymbol{v}_{t_{0}}^{\top}\mathbb{E}\boldsymbol{X}|-|\boldsymbol{v}_{t_{0}}^{\top}(\boldsymbol{X}-\mathbb{E}\boldsymbol{X})|\geq\sqrt{2\log(|T_{0}|/\varepsilon)},
$$
as desired. ∎
Using the above lemma we have the following result.
**Corollary A.1**
*Fix $B>0$ . Let $\pi_{0}$ be any prior distribution on $\Theta(B)$ , then draw $(\tau,\mu_{\mathrm{L}},\mu_{\mathrm{R}})\sim\pi_{0}$ , $\boldsymbol{X}\sim P(n,\tau,\mu_{\mathrm{L}},\mu_{\mathrm{R}})$ , and define $Y=\mathbbm{1}\{\mu_{\mathrm{L}}≠\mu_{\mathrm{R}}\}$ . Then for $\lambda^{*}=B\sqrt{3n}/6$ , the test $h^{\mathrm{CUSUM}_{*}}_{\lambda^{*}}$ satisfies
$$
\mathbb{P}(h^{\mathrm{CUSUM}_{*}}_{\lambda^{*}}(\boldsymbol{X})\neq Y)\leq 2\lfloor\log_{2}(n)\rfloor e^{-nB^{2}/24}.
$$*
* Proof*
Setting $\varepsilon=|T_{0}|e^{-nB^{2}/24}$ in Lemma A.2, we have for any $(\tau,\mu_{\mathrm{L}},\mu_{\mathrm{R}})∈\Theta(B)$ that
$$
\mathbb{P}(h^{\mathrm{CUSUM}_{*}}_{\lambda^{*}}(\boldsymbol{X})\neq\mathbbm{1}\{\mu_{\mathrm{L}}\neq\mu_{\mathrm{R}}\})\leq|T_{0}|e^{-nB^{2}/24}.
$$
The result then follows by integrating over $\pi_{0}$ and the fact that $|T_{0}|=2\lfloor\log_{2}(n)\rfloor$ . ∎
* Proof of Theorem 4.3*
We follow the proof of Theorem 4.2 up to (5). From the conditions of the theorem, we have $W^{*}=O(Ln\log n)$ . Moreover, we have $h^{\mathrm{CUSUM}_{*}}_{\lambda^{*}}∈\mathcal{H}_{1,4\lfloor\log_{2}(n)\rfloor}\subseteq\mathcal{H}_{L,\boldsymbol{m}}$ . Thus,
$$
\mathbb{P}(h_{\mathrm{ERM}}(\boldsymbol{X})\neq Y\mid\mathcal{D})\leq\mathbb{P}(h^{\mathrm{CUSUM}_{*}}_{\lambda^{*}}(\boldsymbol{X})\neq Y)+C\sqrt{\frac{L^{2}n\log n\log(Ln)\log(N)+\log(1/\delta)}{N}},
$$
as desired. ∎
A.8 Generalisation to time-dependent or heavy-tailed observations
So far, for simplicity of exposition, we have primarily focused on change-point models with independent and identically distributed Gaussian observations. However, neural network-based procedures can also be applied to time-dependent or heavy-tailed observations. We first consider the case where the noise series $\xi_{1},...,\xi_{n}$ is a centred stationary Gaussian process with short-range temporal dependence. Specifically, writing $K(u):=\mathrm{cov}(\xi_{t},\xi_{t+u})$ , we assume that
$$
\sum_{u=0}^{n-1}K(u)\leq D. \tag{6}
$$
**Theorem A.3**
*Fix $B>0$ , $n>0$ and let $\pi_{0}$ be any prior distribution on $\Theta(B)$ . We draw $(\tau,\mu_{\mathrm{L}},\mu_{\mathrm{R}})\sim\pi_{0}$ , set $Y:=\mathbbm{1}\{\mu_{\mathrm{L}}≠\mu_{\mathrm{R}}\}$ and generate $\boldsymbol{X}:=\boldsymbol{\mu}+\boldsymbol{\xi}$ such that $\boldsymbol{\mu}:=(\mu_{\mathrm{L}}\mathbbm{1}\{i≤\tau\}+\mu_{\mathrm{R}}\mathbbm{1}\{i>\tau\})_{i∈[n]}$ and $\boldsymbol{\xi}$ is a centred stationary Gaussian process satisfying (6). Suppose that the training data $\mathcal{D}:=\bigl((\boldsymbol{X}^{(1)},Y^{(1)}),...,(\boldsymbol{X}^{(N)},Y^{(N)})\bigr)$ consist of independent copies of $(\boldsymbol{X},Y)$ and let $h_{\mathrm{ERM}}:=\operatorname*{arg\,min}_{h∈\mathcal{H}_{L,\boldsymbol{m}}}L_{N}(h)$ be the empirical risk minimiser for a neural network with $L≥ 1$ layers and $\boldsymbol{m}=(m_{1},...,m_{L})^{\top}$ hidden layer widths. If $m_{1}≥ 4\lfloor\log_{2}(n)\rfloor$ and $m_{r}m_{r+1}=O(n\log n)$ for all $r∈[L-1]$ , then for any $\delta∈(0,1)$ , we have with probability at least $1-\delta$ that
$$
\mathbb{P}(h_{\mathrm{ERM}}(\boldsymbol{X})\neq Y\mid\mathcal{D})\leq 2\lfloor\log_{2}(n)\rfloor e^{-nB^{2}/(48D)}+C\sqrt{\frac{L^{2}n\log^{2}(Ln)\log(N)+\log(1/\delta)}{N}}.
$$*
* Proof*
By the proof of Wang and Samworth (2018, supplementary Lemma 10),
$$
\mathbb{P}\bigl\{\max_{t\in T_{0}}|\boldsymbol{v}_{t}^{\top}\boldsymbol{\xi}|>B\sqrt{3n}/6\bigr\}\leq|T_{0}|e^{-nB^{2}/(48D)}.
$$
On the other hand, for $t_{0}$ defined in the proof of Lemma A.1, we have $|\mu_{\mathrm{L}}-\mu_{\mathrm{R}}|\sqrt{\tau(n-\tau)}/n>B$ , and hence $|\boldsymbol{v}_{t_{0}}^{\top}\mathbb{E}\boldsymbol{X}|≥ B\sqrt{3n}/3$ . Therefore, for $\lambda^{*}=B\sqrt{3n}/6$ , the test $h_{\lambda^{*}}^{\mathrm{CUSUM}_{*}}$ satisfies
$$
\mathbb{P}(h_{\lambda^{*}}^{\mathrm{CUSUM}_{*}}(\boldsymbol{X})\neq Y)\leq|T_{0}|e^{-nB^{2}/(48D)}.
$$
We can then complete the proof using the same arguments as in the proof of Theorem 4.3. ∎
We now turn to non-Gaussian distributions and recall that the Orlicz $\psi_{\alpha}$ -norm of a random variable $Y$ is defined as
$$
\|Y\|_{\psi_{\alpha}}:=\inf\{\eta:\mathbb{E}\exp(|Y/\eta|^{\alpha})\leq 2\}.
$$
For $\alpha∈(0,2)$ , the random variable $Y$ has heavier tails than a sub-Gaussian random variable. The following lemma is a direct consequence of Kuchibhotla and Chakrabortty (2022, Theorem 3.1); we state the version used in Li et al. (2023, Proposition 14).
**Lemma A.4**
*Fix $\alpha∈(0,2)$ . Suppose $\boldsymbol{\xi}=(\xi_{1},...,\xi_{n})^{\top}$ has independent components satisfying $\mathbb{E}\xi_{t}=0$ , $\mathrm{Var}(\xi_{t})=1$ and $\|\xi_{t}\|_{\psi_{\alpha}}≤ K$ for all $t∈[n]$ . There exists $c_{\alpha}>0$ , depending only on $\alpha$ , such that for any $1≤ t≤ n/2$ , we have
$$
\mathbb{P}\bigl(|\boldsymbol{v}_{t}^{\top}\boldsymbol{\xi}|\geq y\bigr)\leq\exp\biggl\{1-c_{\alpha}\min\biggl\{\biggl(\frac{y}{K}\biggr)^{2},\,\biggl(\frac{y}{K\|\boldsymbol{v}_{t}\|_{\beta(\alpha)}}\biggr)^{\alpha}\biggr\}\biggr\},
$$
where $\beta(\alpha)=∞$ for $\alpha≤ 1$ and $\beta(\alpha)=\alpha/(\alpha-1)$ when $\alpha>1$ .*
**Theorem A.5**
*Fix $\alpha∈(0,2)$ , $B>0$ , $n>0$ and let $\pi_{0}$ be any prior distribution on $\Theta(B)$ . We draw $(\tau,\mu_{\mathrm{L}},\mu_{\mathrm{R}})\sim\pi_{0}$ , set $Y:=\mathbbm{1}\{\mu_{\mathrm{L}}≠\mu_{\mathrm{R}}\}$ and generate $\boldsymbol{X}:=\boldsymbol{\mu}+\boldsymbol{\xi}$ such that $\boldsymbol{\mu}:=(\mu_{\mathrm{L}}\mathbbm{1}\{i≤\tau\}+\mu_{\mathrm{R}}\mathbbm{1}\{i>\tau\})_{i∈[n]}$ and $\boldsymbol{\xi}=(\xi_{1},...,\xi_{n})^{\top}$ satisfies $\mathbb{E}\xi_{i}=0$ , $\mathrm{Var}(\xi_{i})=1$ and $\|\xi_{i}\|_{\psi_{\alpha}}≤ K$ for all $i∈[n]$ . Suppose that the training data $\mathcal{D}:=\bigl((\boldsymbol{X}^{(1)},Y^{(1)}),...,(\boldsymbol{X}^{(N)},Y^{(N)})\bigr)$ consist of independent copies of $(\boldsymbol{X},Y)$ and let $h_{\mathrm{ERM}}:=\operatorname*{arg\,min}_{h∈\mathcal{H}_{L,\boldsymbol{m}}}L_{N}(h)$ be the empirical risk minimiser for a neural network with $L≥ 1$ layers and $\boldsymbol{m}=(m_{1},...,m_{L})^{\top}$ hidden layer widths. If $m_{1}≥ 4\lfloor\log_{2}(n)\rfloor$ and $m_{r}m_{r+1}=O(n\log n)$ for all $r∈[L-1]$ , then there exists a constant $c_{\alpha}>0$ , depending only on $\alpha$ , such that for any $\delta∈(0,1)$ , we have with probability at least $1-\delta$ that
$$
\mathbb{P}(h_{\mathrm{ERM}}(\boldsymbol{X})\neq Y\mid\mathcal{D})\leq 2\lfloor\log_{2}(n)\rfloor e^{1-c_{\alpha}(\sqrt{n}B/K)^{\alpha}}+C\sqrt{\frac{L^{2}n\log^{2}(Ln)\log(N)+\log(1/\delta)}{N}}.
$$*
* Proof*
For $\alpha∈(0,2)$ , we have $\beta(\alpha)>2$ , so $\|\boldsymbol{v}_{t}\|_{\beta(\alpha)}≤\|\boldsymbol{v}_{t}\|_{2}=1$ . Thus, from Lemma A.4, we have $\mathbb{P}(|\boldsymbol{v}_{t}^{\top}\boldsymbol{\xi}|≥ y)≤ e^{1-c_{\alpha}(y/K)^{\alpha}}$ . Then, following the proof of Corollary A.1, we obtain $\mathbb{P}(h_{\lambda^{*}}^{\mathrm{CUSUM}_{*}}(\boldsymbol{X})≠ Y)≤ 2\lfloor\log_{2}(n)\rfloor e^{1-c_{\alpha}(\sqrt{n}B/K)^{\alpha}}$ . Finally, the desired conclusion follows from the same argument as in the proof of Theorem 4.3. ∎
A.9 Multiple change-point estimation
Algorithm 1 is a general scheme for turning a change-point classifier into a location estimator. While it is challenging to derive theoretical guarantees for the neural network-based change-point location estimation error, we motivate this methodological proposal here by showing that Algorithm 1, applied in conjunction with a CUSUM-based classifier, attains the optimal rate of convergence for the change-point localisation task. We consider the model $x_{i}=\mu_{i}+\xi_{i}$ , where $\xi_{i}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}N(0,1)$ for $i∈[n^{*}]$ . Moreover, for a sequence of change-points $0=\tau_{0}<\tau_{1}<\cdots<\tau_{\nu}<n^{*}=\tau_{\nu+1}$ satisfying $\tau_{r}-\tau_{r-1}≥ 2n$ for all $r∈[\nu+1]$ , we have $\mu_{i}=\mu^{(r-1)}$ for all $i∈(\tau_{r-1},\tau_{r}]$ , $r∈[\nu+1]$ .
**Theorem A.6**
*Suppose data $x_{1},...,x_{n^{*}}$ are generated as above satisfying $|\mu^{(r)}-\mu^{(r-1)}|>2\sqrt{2}B$ for all $r∈[\nu]$ . Let $h_{\lambda^{*}}^{\mathrm{CUSUM}_{*}}$ be defined as in Corollary A.1. Let $\hat{\tau}_{1},...,\hat{\tau}_{\hat{\nu}}$ be the output of Algorithm 1 with input $x_{1},...,x_{n^{*}}$ , $\psi=h_{\lambda^{*}}^{\mathrm{CUSUM}_{*}}$ and $\gamma=\lfloor n/2\rfloor/n$ . Then we have
$$
\mathbb{P}\biggl\{\hat{\nu}=\nu\text{ and }|\tau_{r}-\hat{\tau}_{r}|\leq\frac{2B^{2}}{|\mu^{(r)}-\mu^{(r-1)}|^{2}}\text{ for all }r\in[\nu]\biggr\}\geq 1-2n^{*}\lfloor\log_{2}(n)\rfloor e^{-nB^{2}/24}.
$$*
* Proof*
For simplicity of presentation, we focus on the case where $n$ is a multiple of 4, so $\gamma=1/2$ . Define
$$
I_{0}:=\{i:\mu_{i+n-1}=\mu_{i}\},
$$
By Lemma A.2 and a union bound, the event
$$
\Omega=\bigl\{h_{\lambda^{*}}^{\mathrm{CUSUM}_{*}}(\boldsymbol{X}^{*}_{[i,i+n)})=k\text{ for all }i\in I_{k},\ k=0,1\bigr\}
$$
has probability at least $1-2n^{*}\lfloor\log_{2}(n)\rfloor e^{-nB^{2}/24}$ . We work on the event $\Omega$ henceforth. Denote $\Delta_{r}:=2B^{2}/|\mu^{(r)}-\mu^{(r-1)}|^{2}$ . Since $|\mu^{(r)}-\mu^{(r-1)}|>2\sqrt{2}B$ , we have $\Delta_{r}<n/4$ . Note that for each $r∈[\nu]$ , we have $\{i:\tau_{r-1}<i≤\tau_{r}-n\text{ or }\tau_{r}<i≤\tau_{r+1}-n\}\subseteq I_{0}$ and $\{i:\tau_{r}-n+\Delta_{r}<i≤\tau_{r}-\Delta_{r}\}\subseteq I_{1}$ . Consequently, $\bar{L}_{i}$ defined in Algorithm 1 is below the threshold $\gamma=1/2$ for all $i∈(\tau_{r-1}+n/2,\tau_{r}-n/2]\cup(\tau_{r}+n/2,\tau_{r+1}-n/2]$ , monotonically increases for $i∈(\tau_{r}-n/2,\tau_{r}-\Delta_{r}]$ , monotonically decreases for $i∈(\tau_{r}+\Delta_{r},\tau_{r}+n/2]$ and is above the threshold $\gamma$ for $i∈(\tau_{r}-\Delta_{r},\tau_{r}+\Delta_{r}]$ . Thus, exactly one change-point, say $\hat{\tau}_{r}$ , will be identified on $(\tau_{r-1}+n/2,\tau_{r+1}-n/2]$ and $\hat{\tau}_{r}=\operatorname*{arg\,max}_{i∈(\tau_{r-1}+n/2,\tau_{r+1}-n/2]}\bar{L}_{i}∈(\tau_{r}-\Delta_{r},\tau_{r}+\Delta_{r}]$ as desired. Since the above holds for all $r∈[\nu]$ , the proof is complete. ∎
Assuming that $\log(n^{*})\asymp\log(n)$ and choosing $B$ to be of order $\sqrt{\log n}$ , the above theorem shows that using the CUSUM-based change-point classifier $\psi=h_{\lambda^{*}}^{\mathrm{CUSUM}_{*}}$ in conjunction with Algorithm 1 allows for consistent estimation of both the number and the locations of multiple change-points in the data stream. In fact, the rate of estimating each change-point, $2B^{2}/|\mu^{(r)}-\mu^{(r-1)}|^{2}$ , is minimax optimal up to logarithmic factors (see, e.g. Verzelen et al., 2020, Proposition 6). An inspection of the proof of Theorem A.6 reveals that the same result would hold for any $\psi$ for which the event $\Omega$ holds with high probability. In view of the representability of $h_{\lambda^{*}}^{\mathrm{CUSUM}_{*}}$ in the class of neural networks, one would intuitively expect that a similar theoretical guarantee as in Theorem A.6 would be available to the empirical risk minimiser in the corresponding neural network function class. However, the particular way in which we handle the generalisation error in the proof of Theorem 4.3 makes it difficult to proceed in this way, due to the fact that the data segments obtained via sliding windows have complex dependence and no longer follow the common prior distribution $\pi_{0}$ used in Theorem 4.2.
Appendix B Simulation and Result
B.1 Simulation for Multiple Change-types
In this section, we present a numerical study with at most one change-point but multiple change-types: change in mean, change in slope and change in variance. The data set with change/no-change in mean is generated from $P(n,\tau,\mu_{\mathrm{L}},\mu_{\mathrm{R}})$ . We employ the model of change in slope from Fearnhead et al. (2019), namely
$$
x_{t}=f_{t}+\xi_{t}=\begin{cases}\phi_{0}+\phi_{1}t+\xi_{t}&\quad\text{if }1\leq t\leq\tau,\\
\phi_{0}+(\phi_{1}-\phi_{2})\tau+\phi_{2}t+\xi_{t}&\quad\text{if }\tau+1\leq t\leq n,\end{cases}
$$
where $\phi_{0},\phi_{1}$ and $\phi_{2}$ are parameters chosen so that the two linear pieces are continuous at time $t=\tau$ . We use the following model to generate the data set with a change in variance:
$$
y_{t}=\begin{cases}\mu+\varepsilon_{t},\quad\varepsilon_{t}\sim N(0,\sigma_{1}^{2}),&\text{ if }t\leq\tau,\\
\mu+\varepsilon_{t},\quad\varepsilon_{t}\sim N(0,\sigma_{2}^{2}),&\text{ otherwise,}\end{cases}
$$
where $\sigma_{1}^{2},\sigma_{2}^{2}$ are the variances of the two Gaussian distributions and $\tau$ is the change-point in variance. When $\sigma_{1}^{2}=\sigma_{2}^{2}$ , there is no change in the model. The labels of no change-point, change in mean only, change in variance only, no-change in variance and change in slope only are 0, 1, 2, 3, 4 respectively. For each label, we randomly generate $N_{sub}$ time series. In each replication of $N_{sub}$ , we update the parameters $\tau,\mu_{\mathrm{L}},\mu_{\mathrm{R}},\sigma_{1},\sigma_{2},\alpha_{1},\phi_{1},\phi_{2}$ . To avoid the boundary effect, we randomly choose $\tau$ from the discrete uniform distribution $U(n^{\prime}+1,n-n^{\prime})$ in each replication, where $1≤ n^{\prime}<\lfloor n/2\rfloor,n^{\prime}∈\mathbb{N}$ . The other parameters are generated as follows:
- $\mu_{\mathrm{L}},\mu_{\mathrm{R}}\sim U(\mu_{l},\mu_{u})$ and $\mu_{dl}≤\left|\mu_{\mathrm{L}}-\mu_{\mathrm{R}}\right|≤\mu_{du}$ , where $\mu_{l},\mu_{u}$ are the lower and upper bounds of $\mu_{\mathrm{L}},\mu_{\mathrm{R}}$ . $\mu_{dl},\mu_{du}$ are the lower and upper bounds of $\left|\mu_{\mathrm{L}}-\mu_{\mathrm{R}}\right|$ .
- $\sigma_{1},\sigma_{2}\sim U(\sigma_{l},\sigma_{u})$ and $\sigma_{dl}≤\left|\sigma_{1}-\sigma_{2}\right|≤\sigma_{du}$ , where $\sigma_{l},\sigma_{u}$ are the lower and upper bounds of $\sigma_{1},\sigma_{2}$ . $\sigma_{dl},\sigma_{du}$ are the lower and upper bounds of $\left|\sigma_{1}-\sigma_{2}\right|$ .
- $\phi_{1},\phi_{2}\sim U(\phi_{l},\phi_{u})$ and $\phi_{dl}≤\left|\phi_{1}-\phi_{2}\right|≤\phi_{du}$ , where $\phi_{l},\phi_{u}$ are the lower and upper bounds of $\phi_{1},\phi_{2}$ . $\phi_{dl},\phi_{du}$ are the lower and upper bounds of $\left|\phi_{1}-\phi_{2}\right|$ .
Besides, we let $\mu=0$ , $\phi_{0}=0$ and the noise follow a normal distribution with mean 0. For flexibility, we let the noise variances for the change in mean and the change in slope be $0.49$ and $0.25$ respectively. Both Scenarios 1 and 2 defined below use the neural network architecture displayed in Figure 9.

**Benchmark.** Aminikhanghahi and Cook (2017) reviewed methodologies for detecting change-points of different types. For simplicity, we employ the Narrowest-Over-Threshold (NOT) (Baranowski et al., 2019) and single variance change-point detection (Chen and Gupta, 2012) algorithms to detect the change in mean, slope and variance respectively. These two algorithms are available in the R packages not and changepoint. The oracle likelihood-based test $\text{LR}^{\mathrm{oracle}}$ means that we pre-specify whether we are testing for a change in mean, variance or slope. For the construction of the adaptive likelihood-ratio based test $\text{LR}^{\mathrm{adapt}}$ , we first separately apply the 3 detection algorithms for change in mean, variance and slope to each time series; we then compute 3 values of the Bayesian information criterion (BIC), one for each change-type, based on the results of change-point detection. Lastly, the label corresponding to the minimum of the BIC values is treated as the predicted label.

**Scenario 1: Weak SNR.** Let $n=400$ , $N_{sub}=2000$ and $n^{\prime}=40$ . The data are generated using the parameter settings in Table 2. We use the model architecture in Figure 9 to train the classifier. The learning rate is 0.001, the batch size is 64, the filter size in the convolution layer is 16, the kernel size is $(3,30)$ and the epoch size is 500. The transformations are ( $x,x^{2}$ ). We also use the inverse time decay technique to dynamically reduce the learning rate. The result, displayed in Table 1 of the main text, shows that the test accuracies of $\text{LR}^{\mathrm{oracle}}$ , $\text{LR}^{\mathrm{adapt}}$ and NN based on 2500 test data sets are 0.9056, 0.8796 and 0.8660 respectively.
Table 2: The parameters for weak and strong signal-to-noise ratio (SNR).
| **Change in mean** | $\mu_{l}$ | $\mu_{u}$ | $\mu_{dl}$ | $\mu_{du}$ |
| --- | --- | --- | --- | --- |
| Weak SNR | -5 | 5 | 0.25 | 0.5 |
| Strong SNR | -5 | 5 | 0.6 | 1.2 |
| **Change in variance** | $\sigma_{l}$ | $\sigma_{u}$ | $\sigma_{dl}$ | $\sigma_{du}$ |
| Weak SNR | 0.3 | 0.7 | 0.12 | 0.24 |
| Strong SNR | 0.3 | 0.7 | 0.2 | 0.4 |
| **Change in slope** | $\phi_{l}$ | $\phi_{u}$ | $\phi_{dl}$ | $\phi_{du}$ |
| Weak SNR | -0.025 | 0.025 | 0.006 | 0.012 |
| Strong SNR | -0.025 | 0.025 | 0.015 | 0.03 |
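To make the data-generating mechanisms above concrete, the following sketch draws one series per change type with a fixed, purely illustrative parameter choice from the weak-SNR ranges in Table 2; in the actual simulations the parameters are redrawn uniformly from those ranges for every replication, as described above.

```python
import numpy as np

rng = np.random.default_rng(7)
n, n_prime = 400, 40
tau = rng.integers(n_prime + 1, n - n_prime + 1)     # avoid the boundary
t = np.arange(1, n + 1)

# Change in mean (weak-SNR ranges of Table 2; noise variance 0.49, i.e. s.d. 0.7)
mu_L, mu_R = -1.0, -0.7                               # |mu_L - mu_R| in [0.25, 0.5]
x_mean = np.where(t <= tau, mu_L, mu_R) + 0.7 * rng.normal(size=n)

# Change in slope (continuous at tau; noise variance 0.25, i.e. s.d. 0.5)
phi0, phi1, phi2 = 0.0, 0.01, 0.002                   # |phi1 - phi2| in [0.006, 0.012]
f = np.where(t <= tau, phi0 + phi1 * t, phi0 + (phi1 - phi2) * tau + phi2 * t)
x_slope = f + 0.5 * rng.normal(size=n)

# Change in variance (mu = 0)
sigma1, sigma2 = 0.4, 0.6                             # |sigma1 - sigma2| in [0.12, 0.24]
x_var = np.where(t <= tau, sigma1, sigma2) * rng.normal(size=n)
```

The labelling of the series and the repetition over $N_{sub}$ replications then proceed exactly as described above.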
**Scenario 2: Strong SNR.** The parameters for generating strong-signal data are listed in Table 2. The other hyperparameters are the same as in Scenario 1. The test accuracies of $\text{LR}^{\mathrm{oracle}}$ , $\text{LR}^{\mathrm{adapt}}$ and NN based on 2500 test data sets are 0.9924, 0.9260 and 0.9672 respectively. We can see that the neural network-based approach achieves higher classification accuracy than the adaptive likelihood-based method.
B.2 Some Additional Simulations
B.2.1 Simulation for simultaneous changes
In this simulation, we compare the classification accuracies of a likelihood-based classifier and an NN-based classifier in the presence of simultaneous changes. For simplicity, we only focus on two classes: no change-point (Class 1) and a change in mean and variance at the same change-point (Class 2). The change-point location $\tau$ is randomly drawn from $\mathrm{Unif}\{40,...,n-41\}$ , where $n=400$ is the length of the time series. Given $\tau$ , to generate the data of Class 2, we use the parameter settings of change in mean and change in variance in Table 2 to randomly draw $\mu_{\mathrm{L}},\mu_{\mathrm{R}}$ and $\sigma_{1},\sigma_{2}$ respectively. The data before and after the change-point $\tau$ are generated from $N(\mu_{\mathrm{L}},\sigma_{1}^{2})$ and $N(\mu_{\mathrm{R}},\sigma_{2}^{2})$ respectively. To generate the data of Class 1, we simply draw the data from $N(\mu_{\mathrm{L}},\sigma_{1}^{2})$ . We repeat the data generation for each of Class 1 and Class 2 $2500$ times to form the training dataset. The test dataset is generated by the same procedure as the training dataset, but with a testing size of 15000. We use two classifiers, the likelihood-ratio (LR) based classifier (Chen and Gupta, 2012, p.59) and the 21-residual-block neural network (NN) based classifier displayed in Figure 9, to evaluate the classification accuracy of simultaneous change versus no change. The results are displayed in Table 3. We can see that under weak SNR the NN performs better than the LR-based method, while it performs as well as the LR-based method under strong SNR.
Table 3: Test classification accuracy of the likelihood-ratio (LR) based classifier (Chen and Gupta, 2012, p.59) and our residual neural network (NN) based classifier with 21 residual blocks for setups with weak and strong signal-to-noise ratios (SNR). Data are generated as a mixture of no change-point (Class 1) and change in mean and variance at the same change-point (Class 2). We report the true positive rate of each class and the overall accuracy in the last row. The optimal threshold value of LR is chosen by grid search on the training dataset.
| | Weak SNR | | Strong SNR | |
| --- | --- | --- | --- | --- |
| | LR | NN | LR | NN |
| Class 1 | 0.9823 | 0.9668 | 1.0000 | 0.9991 |
| Class 2 | 0.8759 | 0.9621 | 0.9995 | 0.9992 |
| Accuracy | 0.9291 | 0.9645 | 0.9997 | 0.9991 |
B.2.2 Simulation for heavy-tailed noise
In this simulation, we compare the performance of the Wilcoxon change-point test (Dehling et al., 2015), the CUSUM test, the simple neural network $\mathcal{H}_{L,\boldsymbol{m}}$ and the truncated $\mathcal{H}_{L,\boldsymbol{m}}$ under heavy-tailed noise. Consider the model $X_{i}=\mu_{i}+\xi_{i},\ i≥ 1,$ where $(\mu_{i})_{i≥ 1}$ is the signal and $(\xi_{i})_{i≥ 1}$ is a noise process. Consider testing the null hypothesis
$$
\mathbb{H}:\mu_{1}=\mu_{2}=\cdots=\mu_{n}
$$
against the alternative
$$
\mathbb{A}:~\text{There exists }1\leq k\leq n-1~\text{such that }\mu_{1}=\cdots=\mu_{k}\neq\mu_{k+1}=\cdots=\mu_{n}.
$$
Dehling et al. (2015) proposed the Wilcoxon-type cumulative sum statistic
$$
T_{n}\coloneqq\max_{1\leq k<n}{\left\lvert\frac{2\sqrt{k(n-k)}}{n}\frac{1}{n^{3/2}}\sum_{i=1}^{k}\sum_{j=k+1}^{n}\left(\mathbf{1}_{\{X_{i}<X_{j}\}}-1/2\right)\right\rvert} \tag{7}
$$
to detect a change-point in time series with outliers or heavy tails. Under the null hypothesis $\mathbb{H}$ , the limit distribution of $T_{n}$ can be approximated by the supremum of a standard Brownian bridge process $(W^{(0)}(\lambda))_{0≤\lambda≤ 1}$ , up to a scaling factor (Dehling et al., 2015, Theorem 3.1). (The definition of $T_{n}$ in Dehling et al. (2015, Theorem 3.1) does not include the factor $2\sqrt{k(n-k)}/n$ ; however, the R package robts (Dürre et al., 2016) normalises the Wilcoxon test by this factor, see the function wilcoxsuk for details. In this simulation, we adopt the definition in (7).) In our simulation, we choose the optimal threshold value on the training dataset by grid search. The truncated simple neural network means that we truncate the data by the $z$-score in the data preprocessing step: given a vector $\boldsymbol{x}=(x_{1},x_{2},...,x_{n})^{\top}$ , we set $x_{i}=\bar{x}+\mathrm{sgn}(x_{i}-\bar{x})Z\sigma_{x}$ whenever ${\left\lvert x_{i}-\bar{x}\right\rvert}>Z\sigma_{x}$ , where $\bar{x}$ and $\sigma_{x}$ are the mean and standard deviation of $\boldsymbol{x}$ . The training dataset is generated using the same parameter settings as in Figure 2 (d) of the main text. The misclassification error rate (MER) of each method is reported in Figure 5. We can see that the truncated simple neural network has the best performance. As expected, the Wilcoxon-based test performs better than the simple neural network-based tests. However, we would like to stress that the main focus of Figure 2 of the main text is to demonstrate that simple neural networks can replicate the performance of CUSUM tests. Even when prior information about heavy-tailed noise is available, we still encourage the practitioner to use the simple neural network, adding the $z$-score truncation in the data preprocessing step.
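For reference, a minimal sketch of the statistic $T_{n}$ in (7) and of the $z$-score truncation used as preprocessing is given below; the quadratic-time double sum is written naively for clarity, and the function names are ours rather than those of any package.

```python
import numpy as np

def wilcoxon_cusum(x):
    """Wilcoxon-type CUSUM statistic T_n of (7), computed naively in O(n^2)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ind = (x[:, None] < x[None, :]).astype(float) - 0.5   # 1{X_i < X_j} - 1/2
    stats = []
    for k in range(1, n):
        s = ind[:k, k:].sum()                  # double sum over i <= k < j
        stats.append(abs(2.0 * np.sqrt(k * (n - k)) / n * s / n ** 1.5))
    return max(stats)

def truncate_z(x, Z=3.0):
    """z-score truncation used as preprocessing for the simple neural network."""
    x = np.asarray(x, dtype=float)
    xbar, sd = x.mean(), x.std()
    out = x.copy()
    mask = np.abs(x - xbar) > Z * sd
    out[mask] = xbar + np.sign(x[mask] - xbar) * Z * sd
    return out
```

In the simulation, $T_{n}$ is thresholded at a value chosen by grid search on the training data, while the truncation with $Z=3$ is applied before feeding the series to the simple neural network.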
Figure 5: Scenario S3 with Cauchy noise, adding the Wilcoxon-type change-point detection method (Dehling et al., 2015) and the simple neural network with truncation in the data preprocessing step. The average misclassification error rate (MER), computed on a test set of size $N_{\mathrm{test}}=15000$, is plotted against the training sample size $N$ for detecting the existence of a change-point in data series of length $n=100$. We compare the performance of the CUSUM test, the Wilcoxon test, $\mathcal{H}_{1,m^{(2)}}$ and $\mathcal{H}_{1,m^{(2)}}$ with $Z=3$, where $m^{(2)}=2n-2$ and $Z=3$ denotes the $z$-score truncation, i.e. given a vector $\boldsymbol{x}=(x_{1},x_{2},\ldots,x_{n})^{\top}$, whenever ${\left\lvert x_{i}-\bar{x}\right\rvert}>Z\sigma_{x}$ we set $x_{i}=\bar{x}+\mathrm{sgn}(x_{i}-\bar{x})Z\sigma_{x}$, where $\bar{x}$ and $\sigma_{x}$ are the mean and standard deviation of $\boldsymbol{x}$.
B.2.3 Robustness Study
This simulation is an extension of the numerical study in Section 5 of the main text. We trained our neural network on training data generated under scenario S1 with $\rho_{t}=0$ (i.e. corresponding to Figure 2 (a) of the main text), but generated the test data under the settings corresponding to Figure 2 (a, b, c, d). In other words, apart from the top-left panel, in the remaining panels of Figure 6 the trained network is misspecified for the test data. We see that the neural networks continue to work well in all panels, and in fact have performance similar to that in Figure 2 (b, c, d) of the main text. This indicates that the trained neural network has likely learned features related to the change-point rather than any distribution-specific artefacts.
(a) Trained S1 ( $\rho_{t}=0$ ) $→$ S1 ( $\rho_{t}=0$ ) (b) Trained S1 ( $\rho_{t}=0$ ) $→$ S1 ${}^{\prime}$ ( $\rho_{t}=0.7$ )
(c) Trained S1 ( $\rho_{t}=0$ ) $→$ S2 (d) Trained S1 ( $\rho_{t}=0$ ) $→$ S3
Figure 6: Plot of the test set MER, computed on a test set of size $N_{\mathrm{test}}=30000$, against training sample size $N$ for detecting the existence of a change-point in data series of length $n=100$. We compare the performance of the CUSUM test and neural networks from four function classes: $\mathcal{H}_{1,m^{(1)}}$, $\mathcal{H}_{1,m^{(2)}}$, $\mathcal{H}_{5,m^{(1)}\mathbf{1}_{5}}$ and $\mathcal{H}_{10,m^{(1)}\mathbf{1}_{10}}$, where $m^{(1)}=4\lfloor\log_{2}(n)\rfloor$ and $m^{(2)}=2n-2$, under scenarios S1, S1${}^{\prime}$, S2 and S3 described in Section 5. The subcaption “A $→$ B” means that we apply the classifier trained under setting “A” to the test dataset generated under setting “B”.
B.2.4 Simulation for change in autocorrelation
In this simulation, we discuss how we can use neural networks to recreate test statistics for various types of changes. For instance, if the data follows an AR(1) structure, then changes in autocorrelation can be handled by including transformations of the original input of the form $(x_{t}x_{t+1})_{t=1,...,n-1}$ . On the other hand, even if such transformations are not supplied as the input, a deep neural network of suitable depth is able to approximate these transformations and consequently successfully detect the change (Schmidt-Hieber, 2020, Lemma A.2). This is illustrated in Figure 7, where we compare the performance of neural network based classifiers of various depths constructed with and without using the transformed data as inputs.
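To make this concrete, the following minimal Python sketch (using NumPy; all function names are illustrative) generates an AR(1) series with a change in autocorrelation, following the setting of Figure 7, and forms the augmented input consisting of the raw series together with the lag-one cross-products $(x_{t}x_{t+1})_{t=1,\ldots,n-1}$.

```python
import numpy as np

def simulate_ar1_change(n=100, tau=None, rng=None):
    """AR(1) series x_t = alpha_t x_{t-1} + eps_t with alpha_t switching from 0.2 to 0.8 at tau."""
    rng = np.random.default_rng() if rng is None else rng
    tau = int(rng.integers(10, 90)) if tau is None else tau   # tau ~ Unif{10,...,89}
    eps = rng.normal(scale=0.25, size=n)
    x = np.zeros(n)
    for t in range(1, n):
        alpha = 0.2 if t < tau else 0.8
        x[t] = alpha * x[t - 1] + eps[t]
    return x, tau

def augment_with_lag_products(x):
    """Append the transformations (x_t x_{t+1})_{t=1,...,n-1} to the raw input."""
    return np.concatenate([x, x[:-1] * x[1:]])   # input vector of length 2n - 1

x, tau = simulate_ar1_change()
features = augment_with_lag_products(x)          # fed to the neural network classifier
```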
(a) Original Input (b) Original and $x_{t}x_{t+1}$ Input
Figure 7: Plot of the test set MER, computed on a test set of size $N_{\mathrm{test}}=30000$, against training sample size $N$ for detecting the existence of a change-point in data series of length $n=100$. We compare the performance of neural networks from the function classes $\mathcal{H}_{1,m^{(1)}}$, $\mathcal{H}_{1,m^{(2)}}$ and $\mathcal{H}_{5,m^{(1)}\mathbf{1}_{5}}$, as well as a neural network with 21 residual blocks, where $m^{(1)}=4\lfloor\log_{2}(n)\rfloor$ and $m^{(2)}=2n-2$. The change-points are randomly chosen from $\mathrm{Unif}\{10,\ldots,89\}$. Given a change-point $\tau$, data are generated from the autoregressive model $x_{t}=\alpha_{t}x_{t-1}+\epsilon_{t}$ with $\epsilon_{t}\stackrel{\mathrm{iid}}{\sim}N(0,0.25^{2})$ and $\alpha_{t}=0.2\mathbf{1}_{\{t<\tau\}}+0.8\mathbf{1}_{\{t\geq\tau\}}$.
B.2.5 Simulation on change-point location estimation
Here, we describe simulation results on the performance of the change-point location estimator constructed by combining a simple neural network-based classifier with Algorithm 1 from the main text. Given a sequence of length $n^{\prime}=2000$, we draw $\tau\sim\text{Unif}\{750,\ldots,1250\}$. We set $\mu_{\mathrm{L}}=0$ and draw $\mu_{\mathrm{R}}|\tau$ from two different uniform distributions: $\text{Unif}([-1.5b,-0.5b]\cup[0.5b,1.5b])$ (weak) and $\text{Unif}([-3b,-b]\cup[b,3b])$ (strong), where $b\coloneqq\sqrt{\frac{8n^{\prime}\log(20n^{\prime})}{\tau(n^{\prime}-\tau)}}$ is chosen in line with Lemma 4.1 to ensure a good range of signal-to-noise ratios. We then generate $\boldsymbol{x}=(\mu_{\mathrm{L}}\mathbbm{1}_{\{t\leq\tau\}}+\mu_{\mathrm{R}}\mathbbm{1}_{\{t>\tau\}}+\varepsilon_{t})_{t\in[n^{\prime}]}$, with the noise $\boldsymbol{\varepsilon}=(\varepsilon_{t})_{t\in[n^{\prime}]}\sim N_{n^{\prime}}(0,I_{n^{\prime}})$. We draw independent copies $\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{N^{\prime}}$ of $\boldsymbol{x}$. For each $\boldsymbol{x}_{k}$, we randomly choose 60 segments with lengths $n\in\{300,400,500,600\}$; the segments that include $\tau_{k}$ are labelled ‘1’ and the others are labelled ‘0’. The training dataset size is therefore $N=60N^{\prime}$ with $N^{\prime}=500$. We then draw another $N_{\text{test}}=3000$ independent copies of $\boldsymbol{x}$ as our test data for change-point location estimation. We study the performance of the change-point location estimator produced by Algorithm 1 together with a single-layer neural network, and compare it with CUSUM-, MOSUM- and Wilcoxon-statistic-based estimators. As we can see from Figure 8, under Gaussian models where CUSUM is known to work well, our simple neural network-based procedure is competitive. On the other hand, when the noise is heavy-tailed, our simple neural network-based estimator greatly outperforms the CUSUM-based estimator.
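For completeness, the data-generating mechanism and the segment-labelling step can be sketched as follows in Python (using NumPy; the helper names are ours and purely illustrative).

```python
import numpy as np

def make_sequence(n_prime=2000, strong=False, rng=None):
    """One sequence of length n' with a single mean change at tau ~ Unif{750,...,1250}."""
    rng = np.random.default_rng() if rng is None else rng
    tau = int(rng.integers(750, 1251))
    b = np.sqrt(8 * n_prime * np.log(20 * n_prime) / (tau * (n_prime - tau)))
    lo, hi = (1.0, 3.0) if strong else (0.5, 1.5)             # strong vs weak SNR
    mu_R = rng.choice([-1, 1]) * rng.uniform(lo * b, hi * b)  # Unif([-hi*b,-lo*b] U [lo*b, hi*b])
    t = np.arange(1, n_prime + 1)
    return np.where(t <= tau, 0.0, mu_R) + rng.standard_normal(n_prime), tau

def labelled_segments(x, tau, n_segments=60, lengths=(300, 400, 500, 600), rng=None):
    """Draw random sub-segments; label '1' if the segment contains the change-point tau."""
    rng = np.random.default_rng() if rng is None else rng
    segments, labels = [], []
    for _ in range(n_segments):
        n = int(rng.choice(lengths))
        start = int(rng.integers(0, len(x) - n + 1))
        segments.append(x[start:start + n])
        labels.append(int(start < tau < start + n))   # change lies strictly inside the segment
    return segments, labels

x, tau = make_sequence(strong=False)
segments, labels = labelled_segments(x, tau)          # training pairs for the classifier
```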
(a) S1 with $\rho_{t}=0$ , weak SNR (b) S1 with $\rho_{t}=0$ , strong SNR
(c) S3, weak SNR (d) S3, strong SNR
Figure 8: Plot of the root mean square error (RMSE) of change-point location estimation (S1 with $\rho_{t}=0$ and S3), computed on a test set of size $N_{\text{test}}=3000$, against the bandwidth (segment length) $n$ for data series of length $n^{\prime}=2000$. We compare the performance of change-point estimators based on CUSUM, MOSUM, Algorithm 1 and the Wilcoxon statistic (the latter for S3 only). The RMSE is defined as $\sqrt{N_{\text{test}}^{-1}\sum_{i=1}^{N_{\text{test}}}(\hat{\tau}_{i}-\tau_{i})^{2}}$, where $\hat{\tau}_{i}$ is the change-point estimate for the $i$-th test sequence and $\tau_{i}$ is the true change-point. The weak and strong signal-to-noise ratios (SNR) correspond to $\mu_{R}|\tau\sim\text{Unif}([-1.5b,-0.5b]\cup[0.5b,1.5b])$ and $\mu_{R}|\tau\sim\text{Unif}([-3b,-b]\cup[b,3b])$ respectively.
Appendix C Real Data Analysis
The HASC (Human Activity Sensing Consortium) project aims at understanding human activities based on sensor data. The data include 6 human activities: “stay”, “walk”, “jog”, “skip”, “stair up” and “stair down”. Each activity lasts at least 10 seconds, and the sampling frequency is 100 Hz.
C.1 Data Cleaning
The HASC project offers sequential data with multiple change types and multiple change-points, see Figure 3 in the main text. Hence, we cannot directly feed these data into our deep convolutional residual neural network. The training data fed into our neural network must have a fixed length $n$ and contain either exactly one change-point or no change-point in each time series. Next, we describe how to obtain this kind of training data from the HASC sequential data. In general, let $\boldsymbol{x}={(x_{1},x_{2},\ldots,x_{d})}^{\top}$, $d\geq 1$, be a $d$-channel vector. Define $\boldsymbol{X}\coloneqq(\boldsymbol{x}_{t_{1}},\boldsymbol{x}_{t_{2}},\ldots,\boldsymbol{x}_{t_{n^{*}}})$ as a realisation of a $d$-variate time series, where $\boldsymbol{x}_{t_{j}}$, $j=1,2,\ldots,n^{*}$, are the observations of $\boldsymbol{x}$ at $n^{*}$ consecutive time stamps $t_{1},t_{2},\ldots,t_{n^{*}}$. Let $\boldsymbol{X}_{i}$, $i=1,2,\ldots,N^{*}$, represent the observation from the $i$-th subject, and let $\boldsymbol{\tau}_{i}\coloneqq(\tau_{i,1},\tau_{i,2},\ldots,\tau_{i,K})^{\top}$, $K\in\mathbb{Z}^{+}$, $\tau_{i,k}\in[2,n^{*}-1]$, $1\leq k\leq K$, with the convention $\tau_{i,0}=0$ and $\tau_{i,K+1}=n^{*}$, represent the change-points of the $i$-th observation, which are well labelled in the sequential data sets. Furthermore, define $n\coloneqq\min_{i\in[N^{*}]}\min_{k\in[K+1]}(\tau_{i,k}-\tau_{i,k-1})$. In practice, we require $n$ not to be too small; this can be achieved by controlling the sampling frequency in the experiment, as in the HASC data. We randomly choose $q$ sub-segments of length $n$ from $\boldsymbol{X}_{i}$, like the grey dashed rectangles in Figure 3 of the main text. By the definition of $n$, there is at most one change-point in each sub-segment. We assign a label to each sub-segment according to the type and existence of a change-point. After that, we stack all the sub-segments to form a tensor $\mathcal{X}$ with dimensions $(N^{*}q,d,n)$, and denote the corresponding label vector of length $N^{*}q$ by $\mathcal{Y}$. To guarantee that there is at most one change-point in each segment, we set the segment length to $n=700$. We take $q=15$: as the change-points are well labelled, it is easy to draw 15 segments without any change-point, i.e. segments with labels “stay”, “walk”, “jog”, “skip”, “stair up” and “stair down”, and we then randomly draw 15 segments (the red rectangles in Figure 3 of the main text) for each transition point.
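The extraction step can be summarised by the following minimal Python sketch (using NumPy; the helper name and the label convention shown are ours for illustration), which draws $q$ random sub-segments of length $n$ from one labelled sequence and records whether, and between which activities, a change occurs.

```python
import numpy as np

def extract_subsegments(X, change_points, activities, n=700, q=15, rng=None):
    """X: (d, n_star) array for one subject; change_points: sorted list of change locations;
    activities: activity of each inter-change segment (length K + 1).
    Returns a (q, d, n) sub-tensor of X and the corresponding labels."""
    rng = np.random.default_rng() if rng is None else rng
    d, n_star = X.shape
    segments = np.empty((q, d, n))
    labels = []
    for s in range(q):
        start = int(rng.integers(0, n_star - n + 1))
        segments[s] = X[:, start:start + n]
        inside = [cp for cp in change_points if start < cp < start + n]
        if inside:                                  # at most one change-point, by the choice of n
            k = change_points.index(inside[0])
            labels.append(f"{activities[k]}→{activities[k + 1]}")   # e.g. 'walk→jog'
        else:
            k = sum(cp <= start for cp in change_points)
            labels.append(activities[k])                            # e.g. 'stay'
    return segments, labels
```

Stacking the outputs over all $N^{*}$ subjects then yields the tensor $\mathcal{X}$ of dimensions $(N^{*}q,d,n)$ and the label vector $\mathcal{Y}$.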
C.2 Transformation
Section 3 of the main text suggests that changes in the mean/signal may be captured by feeding in the raw data directly. For other types of change, we recommend applying appropriate transformations before training the model, depending on the change type of interest. For instance, if we are interested in changes in the second-order structure, we suggest using the square transformation; for a change in autocorrelation of order $p$, we could input the cross-products of the data up to lag $p$. When there are multiple change types, several transformations may be applied to the data in the pre-processing step, and the combination of raw and transformed data is treated as the training data. We employ the square transformation here. All segments are mapped onto the scale $[-1,1]$ after the transformation. The frequencies of the training labels are listed in Figure 11. Finally, the shapes of the training and test data sets are $(4875,6,700)$ and $(1035,6,700)$ respectively.
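As an illustration, the preprocessing can be sketched in Python as follows (using NumPy); treating the six channels as the three raw accelerometer axes plus their squares, and rescaling each channel to $[-1,1]$ per segment, is our reading of the description above rather than the authors' exact implementation.

```python
import numpy as np

def preprocess_segment(seg):
    """seg: (3, 700) raw accelerometer segment (x, y, z channels).
    Returns a (6, 700) array: raw channels plus squared channels, each mapped onto [-1, 1]."""
    aug = np.vstack([seg, seg ** 2])            # raw data + square transformation
    lo = aug.min(axis=1, keepdims=True)
    hi = aug.max(axis=1, keepdims=True)
    return 2 * (aug - lo) / (hi - lo) - 1       # channel-wise rescaling onto [-1, 1]

seg = np.random.randn(3, 700)                   # toy stand-in for a HASC sub-segment
print(preprocess_segment(seg).shape)            # (6, 700), matching the training tensor shape
```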
C.3 Network Architecture
We propose a general deep convolutional residual neural network architecture to identify multiple change types, based on the residual block technique (He et al., 2016) (see Figure 9). There are two reasons why we choose the residual block as the backbone of the architecture.
- The problem of vanishing gradients (Bengio et al., 1994; Glorot and Bengio, 2010). As the network of convolution layers becomes very deep, the gradients of some layer weights might vanish in back-propagation, which hinders convergence. Residual blocks alleviate this issue via the so-called “shortcut connection”, see the flow chart in Figure 9.
- Degradation. He et al. (2016) pointed out that when the number of convolution layers increases significantly, the accuracy might saturate and then degrade quickly. This phenomenon is reported and verified in He and Sun (2015) and He et al. (2016).
Figure 9: Architecture of our general-purpose change-point detection neural network. The left column shows the standard input layers of the neural network with input size $(d,n)$, where $d$ may represent the number of transformations or channels; the middle column consists of 21 residual blocks followed by one global average pooling layer; the right column includes 5 dense layers, with the number of nodes given in brackets, and the output layer. More details of the neural network architecture appear in the supplement.
There are 21 residual blocks in our deep neural network, and each residual block contains 2 convolutional layers. Following the suggestions in Ioffe and Szegedy (2015) and He et al. (2016), each convolution layer is followed by one Batch Normalisation (BN) layer and one ReLU layer. In addition, there are 5 fully-connected (dense) layers right after the residual blocks, see the third column of Figure 9. For example, Dense(50) means that the dense layer has 50 nodes and is connected to a dropout layer with dropout rate 0.3. To further guard against overfitting, we also apply $L_{2}$ regularisation in each fully-connected layer (Ng, 2004). As the number of labels in HASC is 28, see Figure 10, we drop the dense layers “Dense(20)” and “Dense(10)” in Figure 9, and the output layer has size $(28,1)$. We remark on two practical issues here. (a) For other problems, the number of residual blocks, the number of dense layers and the hyperparameters may vary depending on the complexity of the problem. In Section 6 of the main text, the neural network architecture for both the synthetic and the real data has 21 residual blocks, reflecting the trade-off between time complexity and model complexity. Following the suggestion in He et al. (2016), one can also add more residual blocks to the architecture to improve the classification accuracy. (b) In practice, we may not have enough training data, but there are potential ways to overcome this, either by using data augmentation or by increasing $q$. In some extreme cases where we mainly have data with no change, we can artificially add changes to such data in line with the type of change we want to detect.
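To make the description concrete, here is a minimal PyTorch sketch of one residual block and the dense head with dropout (our own illustrative code rather than the authors' implementation: the number of filters, the kernel size and the exact shortcut handling are assumptions, and the $L_{2}$ regularisation of the dense layers would be imposed through the optimiser's weight decay).

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv1d -> BN -> ReLU -> Conv1d -> BN, plus a shortcut connection, then ReLU."""
    def __init__(self, channels, kernel_size=25):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.bn2 = nn.BatchNorm1d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)               # shortcut connection

class ChangePointNet(nn.Module):
    """Stem, 21 residual blocks, global average pooling, then dense layers with dropout 0.3."""
    def __init__(self, in_channels=6, channels=64, n_blocks=21, n_classes=28):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv1d(in_channels, channels, 25, padding=12),
                                  nn.BatchNorm1d(channels), nn.ReLU(), nn.MaxPool1d(2))
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(n_blocks)])
        self.head = nn.Sequential(nn.Linear(channels, 50), nn.ReLU(), nn.Dropout(0.3),
                                  nn.Linear(50, 40), nn.ReLU(), nn.Dropout(0.3),
                                  nn.Linear(40, 30), nn.ReLU(), nn.Dropout(0.3),
                                  nn.Linear(30, n_classes))

    def forward(self, x):                        # x: (batch, 6, 700)
        h = self.blocks(self.stem(x))
        h = h.mean(dim=-1)                       # global average pooling over time
        return self.head(h)

# L2 regularisation via weight decay, e.g. torch.optim.Adam(model.parameters(), weight_decay=1e-4)
model = ChangePointNet()
print(model(torch.randn(2, 6, 700)).shape)       # torch.Size([2, 28])
```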
C.4 Training and Detection
Figure 10: Label Dictionary. The six base activities and the transitions between them (e.g. “walk→jog”) are mapped to integer class labels 0–27, giving 28 labels in total.
Figure 11: Label Frequency. The number of training segments carrying each of the 28 labels (base activities and transitions).
Figure 12: The Accuracy Curves. Training and validation accuracy plotted against the number of training epochs.
Figure 13: Confusion Matrix of Real Test Dataset
Figure 14: Change-point Detection of Real Dataset for Person 7 (2nd sequence). The red line at 4476 is the true change-point; the blue line to its right is the estimate. The difference between them is caused by the similarity of “Walk” and “StairUp”.
[Image: line graph of the three accelerometer signals (x: blue, y: orange, z: green) over time 0–10000, divided by vertical red and blue lines into labelled activity segments (walk, skip, stay, jog, stUp, stDown); signal values range from -2 to 2.]
Figure 15: Change-point Detection of Real Dataset for Person 7 (3rd sequence). The red vertical lines represent the underlying change-points and the blue vertical lines the estimated change-points.
There are observations from 7 people in this dataset. The sequential data from the first 6 people are treated as the training dataset, and we use the last person’s data to validate the trained classifier. Each person performs each of 6 activities, “stay”, “walk”, “jog”, “skip”, “stair up” and “stair down”, for at least 10 seconds. The transition point between two consecutive activities can be treated as a change-point, so there are 30 possible types of change-point. The total number of labels is therefore 36 (6 activities and 30 possible transitions); however, we found only 28 different label types in this real dataset, see Figure 10. The initial learning rate is 0.001, the number of epochs is 400, the batch size is 16, the dropout rate is 0.3, the filter size is 16 and the kernel size is $(3,25)$. Furthermore, we use 20% of the training dataset to validate the classifier during training. Figure 12 shows the accuracy curves for training and validation; after 150 epochs, both the solid and dashed curves approach 1. The test accuracy is 0.9623, see the confusion matrix in Figure 13. These results show that our neural network classifier performs well on both the training and test datasets. Next, we apply the trained classifier to the 3 repeated sequential datasets of Person 7 to detect the change-points. The first sequential dataset has shape $(3,10743)$. First, we extract the $n$-length sliding windows with stride 1 as the input dataset, so that the input size becomes $(9883,6,700)$. Second, we use Algorithm 1 to detect the change-points, relabelling each activity label as “no-change” and each transition label as “one-change”. Figures 14 and 15 show the results of multiple change-point detection for the other 2 sequential datasets from the 7th person.
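To make the preprocessing concrete, the snippet below is a minimal sketch (not the authors’ code) of the sliding-window extraction and binary relabelling described above. The window length `n = 700`, the label strings and the use of NumPy are illustrative assumptions, and the paper’s additional channel transformation, which produces the $(9883,6,700)$ input from the $(3,10743)$ sequence, is not reproduced here.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def sliding_windows(signal, n):
    """Extract length-n windows with stride 1 from a (channels, T) array.

    Returns an array of shape (T - n + 1, channels, n)."""
    # sliding_window_view gives shape (channels, T - n + 1, n);
    # move the window index to the front.
    return np.moveaxis(sliding_window_view(signal, n, axis=1), 1, 0)

def relabel(window_labels, activity_labels):
    """Map each window's label to 0 ('no-change', a pure activity)
    or 1 ('one-change', a transition between activities)."""
    activity_set = set(activity_labels)
    return np.array([0 if lab in activity_set else 1 for lab in window_labels])

# Illustration on simulated data with the sequence shape quoted in the text.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 10743))        # Person 7, 1st sequence: shape (3, 10743)
windows = sliding_windows(x, n=700)    # hypothetical window length; shape (10044, 3, 700)

labels = ["walk", "walk-skip", "skip"] # hypothetical labels for three windows
print(relabel(labels, {"stay", "walk", "jog", "skip", "stUp", "stDown"}))
# -> [0 1 0]
```

In this setup, each window would be passed to the trained classifier, and the resulting per-window “no-change”/“one-change” predictions fed to Algorithm 1 to locate the change-points.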