# Bridging Neural ODE and ResNet: A Formal Error Bound for Safety Verification
**Authors**: Abdelrahman Sayed Sayed, Pierre-Jean Meyer, Mohamed Ghazel
Univ Gustave Eiffel, COSYS-ESTAS, F-59657 Villeneuve d'Ascq, France. Email: {abdelrahman.ibrahim, pierre-jean.meyer, mohamed.ghazel}@univ-eiffel.fr
Abstract
A neural ordinary differential equation (neural ODE) is a machine learning model that is commonly described as a continuous-depth generalization of a residual network (ResNet) with a single residual block; conversely, the ResNet can be seen as the Euler discretization of the neural ODE. The two models are therefore strongly related, in the sense that the behaviors of either model approximate those of the other. In this work, we establish a more formal relationship between these two models by bounding the approximation error between two such related models. The obtained error bound then allows us to use one of the models as a verification proxy for the other, without running the verification tools twice: if the reachable output set expanded by the error bound satisfies a safety property on one of the models, this safety property is then guaranteed to be satisfied on the other model as well. This feature is fully reversible, and the initial safety verification can be run indifferently on either of the two models. This novel approach is illustrated on a numerical example of a fixed-point attractor system modeled as a neural ODE.
1 Introduction
Neural ordinary differential equations (neural ODE) are gaining prominence in continuous-time modeling, offering distinct advantages over traditional neural networks, such as memory efficiency and adaptive computation that balances speed and accuracy [5, 14, 23]. This surge in interest stems from recent advancements in differential programming, which have enhanced the ability to model complex dynamics with greater flexibility and precision [24].
Neural ODE can be viewed as a continuous-depth generalization of residual networks (ResNet) [10], and conversely a ResNet represents an Euler discretization of the continuous transformations modeled by a neural ODE [9, 18]. Unlike ResNet, neural ODE enable smooth and robust representations through continuous dynamics, leading to improved modeling of time-evolving systems [5, 9]. By interpreting ResNet as discretized neural ODE, we can leverage advanced ODE solvers to enhance computational efficiency and reduce the number of required parameters [5]. Furthermore, the continuous formulation of neural ODE supports flexible handling of varying input resolutions and scales, making them adaptable to diverse data modalities. This perspective also facilitates theoretical analysis using tools from differential equations, providing insights into network stability and convergence [14].
Despite the growing interest in neural ODE for continuous-time modeling, formal analysis techniques for these models remain underdeveloped [17]. Current verification methods for neural ODE are still maturing, with existing reachability approaches primarily focusing on stochastic methods [7, 8]. Other works include the NNVODE tool [17] which is an extension of the Neural Network Verification (NNV) framework [28, 16] that investigates reachability for a general class of neural ODE. Additionally, another line of verification based on topological properties was introduced in [15] through a set-boundary method for safety verification of neural ODE and invertible residual networks (i-ResNet) [3].
The similarity between the neural ODE and ResNet models enables bidirectional safety verification, where the properties verified for one model can be used to deduce safety guarantees for the other one. This motivates our work, which investigates how verification results from one model can serve as a proxy for the other, addressing practical scenarios where only one model or compatible verification tools are available. The main contributions of this work are as follows:
- We derive a rigorous bound on the approximation error between the neural ODE and ResNet models for a given input set.
- We use the derived error bound in conjunction with the reachable set of one model as a proxy to verify safety properties of the other model, without applying any verification tools to the other model as illustrated in Figure 1.
Figure 1: Illustration of the proposed framework to verify Model $1$ based on the outcome of the verification of Model $2$ and a bound $\varepsilon$ on the maximal error between the models.
Related work.
Although the similarity between the ResNet and neural ODE models is well established [5, 14], to the best of our knowledge, very few works have tried connecting these models through more formal relationships. These works cover various theoretical perspectives. For instance, [26] quantifies the deviation between the hidden state trajectory of a ResNet and its corresponding neural ODE, focusing on the approximation error, while [20] derives generalization bounds for neural ODE and ResNet using a Lipschitz-based argument, emphasizing the impact of successive weight matrix differences on generalization capability. On the other hand, [21] investigates implicit regularization effects in deep ResNet and their impact on training outcomes. While these studies focus on theoretical analyses of approximation error, generalization, and regularization to understand model behavior and performance, our work leverages this relationship for formal safety verification. We propose a verification proxy approach that uses the reachable set of one model to verify the safety properties of the other, incorporating an error bound to ensure conservative over-approximations, which enables practical verification of nonlinear systems.
Abstraction-based verification (i.e., verifying properties of one model by working on an abstraction of its behaviors into a simpler model) has been a popular topic in the past decades outside of the AI field [27]. Within the field of AI verification, its primary application has been on abstracting specific model components rather than the whole model itself, as in approaches based on convex relaxation of nonlinear ReLU activation functions [13, 11]. On the other hand, full-model abstraction has been mostly unexplored for AI verification, except on the topic of neural network model reduction, where the verification of a neural network is achieved at a lower computational cost on a reduced network with fewer neurons, see e.g. [4] for unidirectional relationships, or [29] for bidirectional ones through the use of approximate bisimulation relations. Although the overall principle of the proposed approach in our paper is similar (abstracting a model by one that over-approximates the set of all its behaviors), the main difference with the above works (which relate two discrete neural networks) is that our paper considers the formal relationship between a continuous neural ODE model and a discrete ResNet one.
Organization of the paper.
The remainder of the paper is structured as follows. First, we formulate the safety verification problem of interest and provide some preliminaries in Section 2. In Section 3, we describe our proposed approach to bound the approximation error between the ResNet and neural ODE models, and use this error bound to verify the safety of one model based on the reachability analysis of the other. Following this, we provide numerical illustrations of our error bounding and verification proxy results (in both directions: from ResNet to neural ODE, and from neural ODE to ResNet) on an academic example in Section 4. Finally, we summarize the main findings of the paper and discuss potential future work in Section 5.
2 Preliminaries
2.1 Neural ODE and ResNet models
We consider the following neural ODE:
$$
\dot{x}(t)=\frac{dx(t)}{dt}=f(x(t)), \tag{1}
$$
with state $x∈\mathbb{R}^{n}$ , initial state $x(0)=u$ , and vector field $f:\mathbb{R}^{n}→\mathbb{R}^{n}$ defined as a finite sequence of classical neural network layers (such as fully connected layers, convolutional layers, activation functions, batch normalization). The state trajectories of (1) are defined based on the solution $\Phi:\mathbb{R}×\mathbb{R}^{n}→\mathbb{R}^{n}$ of the corresponding initial value problem:
$$
x(t)=\Phi(t,x(0))=\Phi(t,u).
$$
In [5], such a neural ODE is described as a continuous-depth generalization of a residual neural network constituted of a single residual block. Conversely, this ResNet can be seen as the Euler discretization of the neural ODE (1):
$$
y=u+f(u), \tag{2}
$$
where $u∈\mathbb{R}^{n}$ is the input, $y∈\mathbb{R}^{n}$ is the output, and the residual function $f:\mathbb{R}^{n}→\mathbb{R}^{n}$ is identical to the vector field of the neural ODE (1).
Since the approach proposed in this paper relies on the Taylor expansion of the trajectories of (1) up to the second order, we assume here for simplicity that the neural network described by the vector field $f$ is continuously differentiable.
**Remark 1**
*The case where $f$ contains piecewise-affine activation functions such as ReLU can theoretically be handled as well, since our approach only really requires their derivatives to be bounded (but not necessarily continuous). But for the sake of clarity of presentation (to avoid the case decompositions of each ReLU activation), this case is kept out of the scope of the present paper.*
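To make the relationship between (1) and (2) concrete, the following sketch contrasts the two outputs for a hypothetical smooth vector field $f$ (here `tanh`, standing in for the residual block); the neural ODE output is approximated numerically with a classic RK4 integrator, which is an assumption of this illustration, not part of the paper's method.

```python
import numpy as np

# Hypothetical smooth vector field f standing in for the residual block;
# the paper only assumes f is continuously differentiable.
def f(x):
    return np.tanh(x)

def neural_ode_output(u, steps=1000):
    """Approximate x(1) = Phi(1, u) of x' = f(x), x(0) = u, with classic RK4."""
    x, h = np.asarray(u, dtype=float), 1.0 / steps
    for _ in range(steps):
        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

def resnet_output(u):
    """Single residual block: one explicit Euler step of the neural ODE."""
    u = np.asarray(u, dtype=float)
    return u + f(u)

u = np.array([0.5, -0.3])
print(neural_ode_output(u))  # Phi(1, u)
print(resnet_output(u))      # u + f(u): close to Phi(1, u), but not equal
```

The gap between the two printed vectors is exactly the approximation error $\varepsilon(u)$ studied in Section 3.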
2.2 Problem definition
As mentioned above and in [5], both the neural ODE and ResNet models describe a very similar behavior, and either model could be seen as an approximation of the other. Our goal in this paper is to provide a formal comparison of these models in the context of safety verification, by evaluating the approximation error between them. For such a comparison to be meaningful, we consider the outputs $y$ of the ResNet (2) on one side, and the outputs $\Phi(1,u)$ of the neural ODE (1) at continuous depth $t=1$ on the other side, since other values $t≠ 1$ of this continuous depth have no counterpart in the discrete architecture of the ResNet.
Given an initial set $\mathcal{X}_{in}\subseteq\mathbb{R}^{n}$ for the neural ODE (or equivalently referred to as the input set for the ResNet), we first define the sets of reachable outputs for either model:
$$
\mathcal{R}_{\text{neural ODE}}(\mathcal{X}_{in})=\{y\in\mathbb{R}^{n}\mid y=\Phi(1,u),\ u\in\mathcal{X}_{in}\},
$$
$$
\mathcal{R}_{\text{ResNet}}(\mathcal{X}_{in})=\{y\in\mathbb{R}^{n}\mid y=u+f(u),\ u\in\mathcal{X}_{in}\}.
$$
Since we usually cannot compute these output reachable sets exactly, we will often rely on computing an over-approximation denoted as $\Omega(\mathcal{X}_{in})$ such that $\mathcal{R}(\mathcal{X}_{in})\subseteq\Omega(\mathcal{X}_{in})$ .
Our first objective is to bound the approximation error between the two models, as formalized below.
**Problem 1 (Error Bounding)**
*Given an input set $\mathcal{X}_{in}\subseteq\mathbb{R}^{n}$ , we want to over-approximate the set $\mathcal{R}_{\varepsilon}(\mathcal{X}_{in})$ of errors between the ResNet (2) and neural ODE (1) models, defined as:
$$
\mathcal{R}_{\varepsilon}(\mathcal{X}_{in})=\left\{\Phi(1,u)-(u+f(u))~|~u\in\mathcal{X}_{in}\right\}.
$$*
Our second problem of interest is to use one of our models as a verification proxy for the other. In other words, we want to combine this error bound with the reachable set of one model to verify the satisfaction of a safety property on the other model, without having to compute the reachable output set of this second model.
**Problem 2 (Verification Proxy)**
*Given an input-output safety property defined by an input set $\mathcal{X}_{in}\subseteq\mathbb{R}^{n}$ and a safe output set $\mathcal{X}_{s}\subseteq\mathbb{R}^{n}$ , the verification problem consists in checking whether the reachable output set of a model is fully contained in the targeted safe set: $\mathcal{R}(\mathcal{X}_{in})\subseteq\mathcal{X}_{s}$ . In this paper, we want to verify this safety property on one model by relying only on the error set $\mathcal{R}_{\varepsilon}(\mathcal{X}_{in})$ from Problem 1 and the reachability analysis of the other model.*
3 Proposed approach
As mentioned in Section 2.2, the ResNet model in (2) can be seen as the Euler discretization of the neural ODE (1) evaluated at continuous depth $t=1$ :
$$
x(1)=\Phi(1,u)\approx u+f(u)=y. \tag{3}
$$
Our initial goal, related to Problem 1, is to evaluate this approximation error for a given set of inputs $u∈\mathcal{X}_{in}$ . This is done below through the use of a Taylor expansion and its Lagrange-remainder form, combined later with some tools dedicated for reachability analysis.
3.1 Lagrange remainder
The Taylor expansion of the state trajectory $x(t)$ of the neural ODE (1) at $t=0$ is given by the infinite sum:
$$
x(t)=x(0)+t\frac{dx(0)}{dt}+\frac{t^{2}}{2!}\frac{d^{2}x(0)}{dt^{2}}+\frac{t^{3}}{3!}\frac{d^{3}x(0)}{dt^{3}}+\dots \tag{4}
$$
The Lagrange remainder theorem makes it possible to truncate (4) without approximation error, hence preserving the above equality. We only state the result below for a truncation at Taylor order $2$, corresponding to the case of interest in our work.
**Proposition 1 (Lagrange remainder[25])**
*There exists $t^{*}∈[0,t]$ such that
$$
x(t)=x(0)+t\frac{dx(0)}{dt}+\frac{t^{2}}{2!}\frac{d^{2}x(t^{*})}{dt^{2}} \tag{5}
$$*
Notice that in (5), the second order derivative $\frac{d^{2}x}{dt^{2}}$ is evaluated at $t^{*}∈[0,t]$ instead of $t$ as in the Taylor series (4). Although the truncation in Proposition 1 provides a much more manageable expression than the infinite sum in (4), the main difficulty is that this result only states the existence of a $t^{*}∈[0,t]$ satisfying the equality in (5), but its actual value is unknown.
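As a quick numerical sanity check of Proposition 1 (not part of the paper's method), one can scan $t^{*}\in[0,1]$ on a grid for a scalar ODE and verify that some $t^{*}$ makes the order-2 truncation (5) exact at $t=1$; the vector field `tanh` below is a hypothetical stand-in, and the trajectory is approximated with RK4.

```python
import numpy as np

def f(x):  return np.tanh(x)            # hypothetical scalar vector field
def df(x): return 1.0 - np.tanh(x)**2   # its derivative f'(x)

def phi(t, u, steps=400):
    """RK4 approximation of the solution Phi(t, u) of x' = f(x)."""
    x, h = float(u), t / steps
    for _ in range(steps):
        k1 = f(x); k2 = f(x + 0.5*h*k1)
        k3 = f(x + 0.5*h*k2); k4 = f(x + h*k3)
        x += (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return x

u = 0.4
# remainder required to turn the order-2 truncation (5) into an equality at t = 1
required = phi(1.0, u) - u - f(u)
# candidate remainders (1^2/2!) x''(t*) = 0.5 f'(x(t*)) f(x(t*)) on a grid of t*
candidates = [0.5 * df(phi(ts, u)) * f(phi(ts, u)) for ts in np.linspace(0.0, 1.0, 201)]
gap = min(abs(c - required) for c in candidates)
print(gap)  # near zero: some t* in [0, 1] matches the required remainder
```

The difficulty the paper addresses is precisely that this matching $t^{*}$ is unknown a priori, so the error must be bounded over the whole interval.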
3.2 Error function
To compare the continuous state $x(t)$ with the discrete output of the ResNet, the state of the neural ODE (1) should be evaluated at depth $t=1$ .
The first term of the right-hand side in (5) is the known initial condition of the neural ODE (1): $x(0)=u$ .
The second term is provided by the definition of the vector field of the neural ODE (1), and thus reduces to:
$$
t\frac{dx(0)}{dt}=1\cdot f(x(0))=f(u).
$$
The second derivative appearing in the third term of (5) can be computed using the chain rule as follows:
$$
\frac{d^{2}x(t)}{dt^{2}}=\frac{df(x(t))}{dt}=\frac{\partial f(x(t))}{\partial x}\frac{dx(t)}{dt}+\frac{\partial f(x(t))}{\partial t}=f^{\prime}(x(t))f(x(t))+\frac{\partial f(x(t))}{\partial t}.
$$
In our context of Section 2, the function $f$ is assumed not to be explicitly dependent on the depth $t$ due to its definition as a single residual block with classical layers. Therefore, the partial derivative $\frac{∂ f(x(t))}{∂ t}$ is equal to $0$ , and the third term of (5) thus reduces to:
$$
\frac{t^{2}}{2!}\frac{d^{2}x(t^{*})}{dt^{2}}=\frac{1}{2}f^{\prime}(x(t^{*}))f(x(t^{*})).
$$
We can thus re-write (5) as an equation defining the output of the neural ODE based on the output of the ResNet (for the same initial state/input $u$ ) and an error term:
$$
\Phi(1,u)=(u+f(u))+\varepsilon(u), \tag{6}
$$
where the approximation error between our models for this particular input $u$ is expressed by the Lagrange remainder of Taylor order 2:
$$
\varepsilon(u)=\frac{1}{2}f^{\prime}(x(t^{*}))f(x(t^{*})), \tag{7}
$$
with $x(t^{*})=\Phi(t^{*},u)$ for a fixed but unknown $t^{*}∈[0,1]$ .
Equation (6) can also be modified to rather express the outputs of the ResNet based on those of the neural ODE:
$$
u+f(u)=\Phi(1,u)-\varepsilon(u). \tag{8}
$$
The error function $\varepsilon:\mathbb{R}^{n}→\mathbb{R}^{n}$ appearing positively in (6) and negatively in (8) is defined in (7) only for a specific input $u$ . However, in the context of our Problem 1, we are interested in analyzing the approximation error between both models over an input set $\mathcal{X}_{in}\subseteq\mathbb{R}^{n}$ . In addition, since the specific value of $t^{*}$ is unknown, we need to bound (7) for any possible value of $t^{*}∈[0,1]$ . Therefore in the next sections, we focus on converting the equalities (6)-(8) to set inclusions over all $u∈\mathcal{X}_{in}$ and $t^{*}∈[0,1]$ .
3.3 Bounding the error set
The reachable error set $\mathcal{R}_{\varepsilon}(\mathcal{X}_{in})$ introduced in Problem 1, can be redefined based on the error function (7) as follows:
$$
\mathcal{R}_{\varepsilon}(\mathcal{X}_{in})=\left\{\Phi(1,u)-(u+f(u))~|~u\in\mathcal{X}_{in}\right\}=\left\{\left.\frac{1}{2}f^{\prime}(\Phi(t^{*},u))f(\Phi(t^{*},u))~\right|~t^{*}\in[0,1],~u\in\mathcal{X}_{in}\right\}. \tag{9}
$$
To solve Problem 1, our objective is thus to compute an over-approximation $\Omega_{\varepsilon}(\mathcal{X}_{in})$ bounding the error set: $\mathcal{R}_{\varepsilon}(\mathcal{X}_{in})\subseteq\Omega_{\varepsilon}(\mathcal{X}_{in})$ .
The first step (corresponding to line 1 in Algorithm 1) is to compute the reachable tube of all possible states that can be reached by the neural ODE (1) over the whole range $t∈[0,1]$ and for any initial state $x(0)=u∈\mathcal{X}_{in}$ . This reachable tube can be defined similarly to $\mathcal{R}_{\text{neural ODE}}(\mathcal{X}_{in})$ in Section 2.2 but for all possible depth $t∈[0,1]$ instead of only the final one:
$$
\mathcal{R}^{\text{tube}}_{\text{neural ODE}}(\mathcal{X}_{in})=\{\Phi(t,u)\in\mathbb{R}^{n}\mid t\in[0,1],~u\in\mathcal{X}_{in}\}.
$$
Since in most cases this set cannot be computed exactly, we instead use off-the-shelf reachability analysis toolboxes to compute an over-approximating set $\Omega^{\text{tube}}_{\text{neural ODE}}(\mathcal{X}_{in})$ such that $\mathcal{R}^{\text{tube}}_{\text{neural ODE}}(\mathcal{X}_{in})\subseteq\Omega^{\text{tube}}_{\text{neural ODE}}(\mathcal{X}_{in})$ .
The error set can then be re-written based on the above reachable tube definition, by replacing $\Phi(t^{*},u)$ (with $t^{*}∈[0,1]$ and $u∈\mathcal{X}_{in}$ ) in (9) by $x∈\mathcal{R}^{\text{tube}}_{\text{neural ODE}}(\mathcal{X}_{in})$ .
$$
\mathcal{R}_{\varepsilon}(\mathcal{X}_{in})=\left\{\left.\frac{1}{2}f^{\prime}(x)f(x)~\right|~x\in\mathcal{R}^{\text{tube}}_{\text{neural ODE}}(\mathcal{X}_{in})\right\}\subseteq\left\{\left.\frac{1}{2}f^{\prime}(x)f(x)~\right|~x\in\Omega^{\text{tube}}_{\text{neural ODE}}(\mathcal{X}_{in})\right\}. \tag{10}
$$
The next step, in line 2 of Algorithm 1, is to over-approximate this error set $\mathcal{R}_{\varepsilon}(\mathcal{X}_{in})$ . One possible approach to achieve this is to define the static function $\varepsilon=\frac{1}{2}f^{\prime}(x)f(x)$ and apply to it some set-propagation techniques (such as interval arithmetic [12], Taylor models [19], or affine arithmetic [6]) to bound the set of output errors $\varepsilon$ corresponding to any state $x∈\Omega^{\text{tube}}_{\text{neural ODE}}(\mathcal{X}_{in})$ in the reachable tube over-approximation. An alternative approach, which provided a tighter error bounding set in the particular case of the numerical example presented in Section 4, is to define the discrete-time nonlinear system $x^{+}=\frac{1}{2}f^{\prime}(x)f(x)$ , and then use existing reachability analysis toolboxes to over-approximate the reachable set of this system after one time step, which corresponds to bounding the image of the error function. Note that in this case, it is important that this final reachable set is computed as a single step, and not decomposed into a sequence of smaller intermediate steps whose iterative updates of the internal state would have no mathematical meaning for the static (stateless) error function.
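As a minimal illustration of the first bounding approach above (set propagation by interval arithmetic through the static error function), consider again a hypothetical scalar neural ODE $\dot{x}=\tanh(x)$; the tube enclosure $[0.3,1.2]$ used below is an assumed value standing in for the output of a reachability tool, and the check against sampled true errors uses an RK4 approximation of $\Phi(1,u)$.

```python
import numpy as np

def imul(a, b):
    """Interval product [a] * [b]."""
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))

def bound_error(tube):
    """Interval enclosure of eps = 0.5 f'(x) f(x) for f = tanh, tube within [0, inf)."""
    lo, hi = tube
    f_rng  = (np.tanh(lo), np.tanh(hi))                # tanh is increasing
    df_rng = (1 - np.tanh(hi)**2, 1 - np.tanh(lo)**2)  # 1 - tanh^2 decreasing on [0, inf)
    return imul((0.5 * df_rng[0], 0.5 * df_rng[1]), f_rng)

# assumed enclosure of the reachable tube for inputs u in [0.3, 0.5]
omega_eps = bound_error((0.3, 1.2))

def phi(u, steps=400):
    """RK4 approximation of Phi(1, u) for x' = tanh(x)."""
    x, h = float(u), 1.0 / steps
    for _ in range(steps):
        k1 = np.tanh(x); k2 = np.tanh(x + 0.5*h*k1)
        k3 = np.tanh(x + 0.5*h*k2); k4 = np.tanh(x + h*k3)
        x += (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return x

# sanity check: sampled true errors Phi(1, u) - (u + f(u)) fall inside the bound
errors = [phi(u) - (u + np.tanh(u)) for u in np.linspace(0.3, 0.5, 11)]
print(omega_eps, min(errors), max(errors))
```

The monotonicity arguments in the comments are specific to this scalar `tanh` example; a general tool would use interval arithmetic, Taylor models, or affine arithmetic as cited above.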
As a consequence of the equalities and set inclusions in (9)-(10) and the fact that the reachability methods to be used in the first two steps of Algorithm 1 described above guarantee that the obtained sets are over-approximations of the output or reachable sets of interest, we have thus reached a solution to Problem 1.
**Theorem 3.1**
*The set $\Omega_{\varepsilon}(\mathcal{X}_{in})$ obtained after the second step described above solves Problem 1:
$$
\mathcal{R}_{\varepsilon}(\mathcal{X}_{in})=\left\{\Phi(1,u)-(u+f(u))~|~u\in\mathcal{X}_{in}\right\}\subseteq\Omega_{\varepsilon}(\mathcal{X}_{in}).
$$*
Note that the error bound in Theorem 3.1 is defined as a set in the state space of the neural ODE. This differs from the approach in [26], where the error bound is defined as a positive scalar.
A second and more important difference with this work is the tightness of the obtained error bounds. Indeed, if we adapt the results from [26] to the context of our framework described in Section 2, their error bound is expressed as:
$$
\varepsilon\leq\frac{e^{L}-1}{L}\left\|\frac{1}{2}f^{\prime}(x)f(x)\right\|_{\infty},~\forall x\in\mathcal{R}^{\text{tube}}_{\text{neural ODE}}(\mathcal{X}_{in}),
$$
where $L$ is a Lipschitz constant of the neural ODE vector field. The term $\left\|\frac{1}{2}f^{\prime}(x)f(x)\right\|_{∞}$ can be obtained by first over-approximating the error set by $\Omega_{\varepsilon}(\mathcal{X}_{in})$ in the same way we did, but the infinity norm forces us to expand this set to make it symmetrical around $0$, and then to keep only the maximum value among its components (thus corresponding to a second expansion of this set into a hypercube whose width along all dimensions is the largest width of the previous set). In addition, for any system with a non-zero Lipschitz constant, the factor $\frac{e^{L}-1}{L}$ is always greater than $1$, which increases this error bound even more.
In summary, this scalar error bound is conservative in two distinct ways compared with our proposed set-based error bound. The comparison of both approaches is illustrated in the numerical example of Section 4.
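For intuition on the inflation factor in the scalar bound above, note that $(e^{L}-1)/L>1$ for every $L>0$, approaching $1$ only as $L\to 0$. A quick numerical check, with arbitrarily chosen Lipschitz constants:

```python
import math

# The factor (e^L - 1)/L from the scalar bound of [26], evaluated for a few
# illustrative (arbitrarily chosen) Lipschitz constants L > 0.
for L in (0.1, 1.0, 2.0):
    print(L, (math.exp(L) - 1.0) / L)
```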
3.4 Verification proxy
To address Problem 2, we leverage the similar behavior between the neural ODE and ResNet models to verify safety properties on one model using the reachable set of the other, combined with the error bound from Theorem 3.1. Specifically, we want to verify whether the reachable output set of a model is contained in the safe set $\mathcal{X}_{s}$ , i.e., $\mathcal{R}(\mathcal{X}_{in})\subseteq\mathcal{X}_{s}$ .
We first focus on the case of Algorithm 1 to verify the safety property on the neural ODE, based on the reachability analysis of the ResNet. This first verification proxy relies on the set-based version of (6) using the Minkowski sum:
$$
\mathcal{R}_{\text{neural ODE}}(\mathcal{X}_{in})\subseteq\Omega_{\text{ResNet}}(\mathcal{X}_{in})+\Omega_{\varepsilon}(\mathcal{X}_{in}), \tag{11}
$$
stating that the reachable output set of the neural ODE is contained in the output set over-approximation of the ResNet $\Omega_{\text{ResNet}}(\mathcal{X}_{in})$ , expanded by the bounding set of the error $\Omega_{\varepsilon}(\mathcal{X}_{in})$ obtained after applying the first two lines of Algorithm 1 as described in Section 3.3.
Therefore, this verification procedure is achieved as in Algorithm 1, by first using existing set-propagation or reachability analysis tools to compute an over-approximation $\Omega_{\text{ResNet}}(\mathcal{X}_{in})$ of the ResNet output set (line 3). Then in line 4, an over-approximation of the neural ODE output set can be deduced from (11) by taking the Minkowski sum of $\Omega_{\text{ResNet}}(\mathcal{X}_{in})$ and our error bound $\Omega_{\varepsilon}(\mathcal{X}_{in})$. If $\Omega_{\text{neural ODE}}(\mathcal{X}_{in})$ is contained in the safe set $\mathcal{X}_{s}$, then the neural ODE satisfies the safety property; otherwise the result is inconclusive (lines 5-9).
Algorithm 1 Safety Verification Framework for neural ODE based on ResNet
Input: a neural ODE, an input set $\mathcal{X}_{in}$ and a safe set $\mathcal{X}_{s}$ . Output: Safe or Unknown.
1: compute an over-approximation of the reachable tube of the neural ODE $\Omega^{\text{tube}}_{\text{neural ODE}}(\mathcal{X}_{in})$ ;
2: compute the over-approximation of the error set $\Omega_{\varepsilon}(\mathcal{X}_{in})$ , $∀ x∈\Omega^{\text{tube}}_{\text{neural ODE}}(\mathcal{X}_{in})$ ;
3: compute the over-approximation of the ResNet output $\Omega_{\text{ResNet}}(\mathcal{X}_{in})$ ;
4: deduce an over-approximation of the neural ODE output $\Omega_{\text{neural ODE}}(\mathcal{X}_{in})=\Omega_{\text{ResNet}}(\mathcal{X}_{in})+\Omega_{\varepsilon}(\mathcal{X}_{in})$ ;
5: if $\Omega_{\text{neural ODE}}(\mathcal{X}_{in})\subseteq\mathcal{X}_{s}$ then
6: return Safe
7: else
8: return Unknown
9: end if
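The steps of Algorithm 1 can be sketched with simple interval sets for a hypothetical scalar example $f(x)=\tanh(x)$; the error enclosure and the safe set below are assumed values for illustration, not outputs of a verification tool.

```python
import numpy as np

def minkowski(a, b):
    """Minkowski sum of two intervals."""
    return (a[0] + b[0], a[1] + b[1])

x_in = (0.3, 0.5)   # input set (assumed)
x_s  = (0.3, 1.4)   # safe set (assumed)

# line 3: ResNet output enclosure; u + tanh(u) is increasing in u, so the
# images of the interval endpoints already enclose the output set
omega_resnet = (x_in[0] + np.tanh(x_in[0]), x_in[1] + np.tanh(x_in[1]))

# lines 1-2: error-set enclosure (assumed precomputed as in Section 3.3)
omega_eps = (0.04, 0.39)

# line 4: neural ODE output enclosure via the Minkowski sum (11)
omega_ode = minkowski(omega_resnet, omega_eps)

# lines 5-9: containment check against the safe set
verdict = "Safe" if x_s[0] <= omega_ode[0] and omega_ode[1] <= x_s[1] else "Unknown"
print(omega_ode, verdict)
```

With these assumed sets the check succeeds; shrinking the safe set would instead return Unknown, never Unsafe, since the enclosure is conservative.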
Reversing the roles, the case of verifying the ResNet based on the reachability analysis of the neural ODE is described in Algorithm 2. This case is very similar to the previous one, so we focus here on the main differences with Algorithm 1. The first difference is that in (8), the term representing the approximation error between the models appears with a negative sign. Therefore, when converting this equation into a set inclusion similarly to (11), we need to be careful to add the negation of the error set (and not to take a set difference, which is not the correct set operation in our case). We thus introduce the negative error set
$$
\Omega_{-\varepsilon}(\mathcal{X}_{in})=\{-\varepsilon\mid\varepsilon\in\Omega_{\varepsilon}(\mathcal{X}_{in})\},
$$
in order to convert (8) into its set-based notation as follows:
$$
\mathcal{R}_{\text{ResNet}}(\mathcal{X}_{in})\subseteq\Omega_{\text{neural ODE}}(\mathcal{X}_{in})+\Omega_{-\varepsilon}(\mathcal{X}_{in}). \tag{12}
$$
The second difference is that in line 3 of Algorithm 2, we compute an over-approximation of the reachable set of the neural ODE, using any classical tools for reachability analysis of continuous-time nonlinear systems, and add it to the negative error set to obtain an over-approximation of the ResNet output set. This final set can then similarly be used to verify the satisfaction of the safety property on the ResNet model.
Algorithm 2 Safety Verification Framework for ResNet based on neural ODE
Input: a ResNet, an input set $\mathcal{X}_{in}$ and a safe set $\mathcal{X}_{s}$ . Output: Safe or Unknown.
1: compute an over-approximation of the reachable tube of the neural ODE $\Omega^{\text{tube}}_{\text{neural ODE}}(\mathcal{X}_{in})$ ;
2: compute the over-approximation of the negative error set $\Omega_{-\varepsilon}(\mathcal{X}_{in})$ , $∀ x∈\Omega^{\text{tube}}_{\text{neural ODE}}(\mathcal{X}_{in})$ ;
3: compute the over-approximation of the neural ODE output $\Omega_{\text{neural ODE}}(\mathcal{X}_{in})$ ;
4: deduce an over-approximation of the ResNet output $\Omega_{\text{ResNet}}(\mathcal{X}_{in})=\Omega_{\text{neural ODE}}(\mathcal{X}_{in})+\Omega_{-\varepsilon}(\mathcal{X}_{in})$ ;
5: if $\Omega_{\text{ResNet}}(\mathcal{X}_{in})\subseteq\mathcal{X}_{s}$ then
6: return Safe
7: else
8: return Unknown
9: end if
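One detail of Algorithm 2 worth making explicit: negating an interval error set flips the sign of both endpoints and swaps them; it is not a set difference. A one-line sketch with an assumed error enclosure:

```python
def negate(iv):
    """Negation of an interval set: {-e | e in iv}."""
    return (-iv[1], -iv[0])

omega_eps = (0.04, 0.39)   # assumed error enclosure
omega_neg = negate(omega_eps)
print(omega_neg)  # (-0.39, -0.04)
```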
**Theorem 3.2 (Soundness)**
*If either Algorithm 1 or Algorithm 2 returns Safe, then the safety property in the sense of Problem 2 holds true [15].*
The soundness of the verification framework is guaranteed because both algorithms rely on over-approximations of the true reachable sets. Specifically, (11) ensures that $\mathcal{R}_{\text{neural ODE}}(\mathcal{X}_{in})\subseteq\Omega_{\text{neural ODE}}(\mathcal{X}_{in})$ , and (12) ensures $\mathcal{R}_{\text{ResNet}}(\mathcal{X}_{in})\subseteq\Omega_{\text{ResNet}}(\mathcal{X}_{in})$ . These inclusions hold due to the conservative nature of the considered reachability analysis and error bound computations in Section 3.3 (Theorem 3.1).
4 Numerical illustration
In this section, a commonly used academic neural ODE example [16, 17], the Fixed-Point Attractor (FPA) [22], which consists of a single nonlinear neural ODE, is used to demonstrate the verification proxy between the two models.
Experiment Setting: All the experiments herein are run on MATLAB 2024b with the Continuous Reachability Analyzer (CORA) version 2024.4.0, on an Intel (R) Core (TM) i5-1145G7 CPU@2.60 GHz with 32 GB of RAM. The code is available in the following repository: https://github.com/ab-sayed/Formal-Error-Bound-for-Safety-Verification-of-neural-ODE
4.1 System description
The FPA system is a nonlinear dynamical system with dynamics that converge to a fixed point (an equilibrium state) under certain conditions [2], and the fixed-point aspect makes it a useful model for studying convergence and stability, which are important in safety-critical applications where the system must not diverge or enter unsafe states. As in the proposed benchmark in [22], we consider here the following $5$ -dimensional neural ODE approximating the FPA dynamics:
$$
\dot{x}=f(x)=\tau x+W\text{tanh}(x),
$$
where $x∈\mathbb{R}^{5}$ is the state vector, $\tau=-10^{-6}$ is a time constant for the neurons, $W∈\mathbb{R}^{5× 5}$ is a composite weight matrix defined as $W=\begin{pmatrix}0_{2× 2}&A\\
0_{3× 2}&BA\end{pmatrix}$ with $A=\begin{pmatrix}-1.20327&-0.07202&-0.93635\\
1.18810&-1.50015&0.93519\end{pmatrix}$ and $B=\begin{pmatrix}1.21464&-0.10502\\
0.12023&0.19387\\
-1.36695&0.12201\end{pmatrix}$ , and $\text{tanh}(x)$ is the hyperbolic tangent activation function applied element-wise to the state vector $x$ .
We choose our safety property defined by the input set $\mathcal{X}_{in}≈[0.45,0.55]×[0.72,0.88]×[0.47,0.58]×[0.19,0.24]×[-0.64,-0.53]$ (its exact numerical values are provided in the linked code repository) and the safe set $\mathcal{X}_{s}=[0.2,0.6]×[0.3,0.85]⊂\mathbb{R}^{2}$ , which only focuses on the projection of the state onto its first two dimensions, i.e., using an output function $h(x)=(x_{1},x_{2})$ . In the case of the neural ODE, we thus want to verify that for every initial state $x(0)∈\mathcal{X}_{in}$ , we have $h(x(1))∈\mathcal{X}_{s}$ .
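The FPA dynamics and the two related views of the model (the ResNet as one explicit Euler step of size 1, and the neural ODE as the flow over $t∈[0,1]$) can be sketched as follows. This is a Python stand-in for the paper's MATLAB/CORA setup, and the sample point `x0` is one hypothetical element of $\mathcal{X}_{in}$:

```python
import numpy as np

# The 5-dimensional FPA neural ODE of Section 4.1:
#   x_dot = f(x) = tau*x + W*tanh(x),  W = [[0, A], [0, B@A]].
tau = -1e-6
A = np.array([[-1.20327, -0.07202, -0.93635],
              [ 1.18810, -1.50015,  0.93519]])
B = np.array([[ 1.21464, -0.10502],
              [ 0.12023,  0.19387],
              [-1.36695,  0.12201]])
W = np.block([[np.zeros((2, 2)), A],
              [np.zeros((3, 2)), B @ A]])

def f(x):
    """Vector field of the FPA neural ODE (tanh applied element-wise)."""
    return tau * x + W @ np.tanh(x)

def resnet_step(x):
    """ResNet view: a single residual block, i.e. one Euler step of size 1."""
    return x + f(x)

def ode_flow(x, steps=1000):
    """Neural ODE view: integrate x_dot = f(x) over t in [0, 1] with RK4."""
    h = 1.0 / steps
    for _ in range(steps):
        k1 = f(x); k2 = f(x + h / 2 * k1)
        k3 = f(x + h / 2 * k2); k4 = f(x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x0 = np.array([0.5, 0.8, 0.52, 0.21, -0.58])  # a hypothetical point of X_in
gap = np.abs(resnet_step(x0) - ode_flow(x0))  # per-dimension ResNet/ODE gap
print(gap)
```

The vector `gap` is the pointwise approximation error between the two models at one initial state; the error bound of Theorem 3.1 over-approximates this quantity over the whole input set.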
4.2 Computing the error bound
Using CORA [1], we compute the error bound $\Omega_{\varepsilon}(\mathcal{X}_{in})$ from Theorem 3.1 as follows. First, we over-approximate the reachable tube of the neural ODE $\mathcal{R}^{\text{tube}}_{\text{neural ODE}}$ over the time interval $[0,1]$ as a sequence of zonotopes, where each zonotope corresponds to an intermediate time range. For each zonotope in the reachable tube, we bound the image of the error function (7) by applying a discrete-time reachability analysis method at $t=1$. This results in a new zonotope that over-approximates the error set starting from that particular reachable tube zonotope. The total error set is thus guaranteed to be contained in the union of these error zonotopes across all time steps. To simplify its use in the safety verification experiments in Section 4.3, we compute the interval hull of this union, yielding a hyperrectangle that over-approximates ${\color[rgb]{1,0,0}\Omega_{\varepsilon}(\mathcal{X}_{in})}$. This hyperrectangle is illustrated in red in Figure 2, along with the 20 error zonotopes in different colors, each corresponding to the error bound of one intermediate time range used in the reachable tube.
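The interval-hull step can be sketched as follows (a Python stand-in for the CORA computation, with hypothetical 2D error zonotopes in place of the 20 five-dimensional ones). A zonotope $Z=(c,G)$ covers $\{c+Ga:\|a\|_{\infty}\le 1\}$, so its interval hull is $c\pm\sum|G|$ row-wise, and the hull of the union is the coordinate-wise min/max over all zonotope hulls:

```python
import numpy as np

def zonotope_interval_hull(c, G):
    """Interval hull of a zonotope (center c, generator matrix G)."""
    r = np.sum(np.abs(G), axis=1)  # radius per dimension
    return c - r, c + r

def union_interval_hull(zonotopes):
    """Coordinate-wise bounding box of a union of zonotopes."""
    lows, highs = zip(*(zonotope_interval_hull(c, G) for c, G in zonotopes))
    return np.min(lows, axis=0), np.max(highs, axis=0)

# Hypothetical 2D error zonotopes standing in for those computed with CORA.
zonos = [(np.array([0.00, 0.03]), np.array([[0.01, 0.005], [0.0, 0.01]])),
         (np.array([0.01, 0.02]), np.array([[0.02, 0.0], [0.01, 0.005]]))]
lo, hi = union_interval_hull(zonos)
print(lo, hi)
```

Since each zonotope is contained in its own interval hull, the resulting hyperrectangle is guaranteed to contain the union, at the cost of some added conservatism.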
Figure 2: Illustration of the error over-approximation
To contextualize our proposed error bound, we compare it with the error bound proposed in [26]. For that, we first compute the infinity norm of our error set $\|{\color[rgb]{1,0,0}\Omega_{\varepsilon}(\mathcal{X}_{in})}\|_{∞}=0.064$ , which corresponds to a positive scalar bound on the error, thus implying that its set representation in the state space (represented in yellow in Figure 3) is necessarily symmetrical around $0$ and of identical width on all dimensions (since the infinity norm takes the largest width across all dimensions). The set-based error bound ( ${\color[rgb]{1,0,0}\Omega_{\varepsilon}(\mathcal{X}_{in})}$ represented in red) obtained from our method is thus always contained in this infinity-norm box.
Next, we compute the Lipschitz constant for the vector field of the neural ODE $L=\|\tau+W\|_{∞}=3.62$ , and then we obtain the error bound in [26] as $\frac{(e^{L}-1)}{L}\|\Omega_{\varepsilon}(\mathcal{X}_{in})\|_{∞}=0.64$ . This final error bound, represented in magenta in Figure 3, is 10 times wider (on each dimension) than the infinity norm of our error set in yellow, and about $16$ million times larger (in volume over the $5$ -dimensional state space) than our error set ${\color[rgb]{1,0,0}\Omega_{\varepsilon}(\mathcal{X}_{in})}$ in red. The improved tightness of our proposed approach is therefore very significant.
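Both scalar quantities of this comparison can be reproduced numerically from the matrices of Section 4.1 (a Python sketch; the value $\|\Omega_{\varepsilon}(\mathcal{X}_{in})\|_{\infty}=0.064$ is taken from the text):

```python
import numpy as np

# Reproduces the two scalars of the comparison: the Lipschitz constant
# L = ||tau*I + W||_inf of the FPA vector field (tanh is 1-Lipschitz),
# and the error bound of [26] computed as ((e^L - 1)/L) * ||Omega_eps||_inf,
# using the value ||Omega_eps||_inf = 0.064 reported in the text.
tau = -1e-6
A = np.array([[-1.20327, -0.07202, -0.93635],
              [ 1.18810, -1.50015,  0.93519]])
B = np.array([[ 1.21464, -0.10502],
              [ 0.12023,  0.19387],
              [-1.36695,  0.12201]])
W = np.block([[np.zeros((2, 2)), A],
              [np.zeros((3, 2)), B @ A]])

L = np.linalg.norm(tau * np.eye(5) + W, ord=np.inf)  # max absolute row sum
eps_inf = 0.064
sander_bound = (np.exp(L) - 1) / L * eps_inf
print(round(L, 2), round(sander_bound, 2))  # → 3.62 0.64
```

The factor $(e^{L}-1)/L\approx 10$ is exactly the per-dimension widening reported above.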
Figure 3: Comparison of the error bounds obtained from our approach in red and the one from [26] in magenta
4.3 Experiments on safety verification
Using the error bound computed in Section 4.2, we can verify safety properties for the neural ODE output set based on the ResNet output set and the error bound set (i.e., ${\color[rgb]{1,0,0}\Omega_{\text{ResNet}}(\mathcal{X}_{in})+\Omega_{\varepsilon}(\mathcal{X}_{in})}$ ), or vice versa for the ResNet output set based on the neural ODE output set and the negative error bound set (i.e., ${\color[rgb]{1,0,0}\Omega_{\text{neural ODE}}(\mathcal{X}_{in})+\Omega_{-\varepsilon}(\mathcal{X}_{in})}$ ).
In Figure 4, we compute the over-approximation of the ResNet output set ${\color[rgb]{0,0,1}\Omega_{\text{ResNet}}}$ using simple bound propagation through the ResNet function with CORA. By adding the error bound ${\color[rgb]{1,0,0}\Omega_{\varepsilon}}$ , we obtain a zonotope (shown in red) that is guaranteed to contain $\mathcal{R}_{\text{neural ODE}}(\mathcal{X}_{in})$ . The figure also includes black points representing neural ODE outputs for random initial conditions in $\mathcal{X}_{in}$ , with their convex hull (black set) approximating the true reachable set $\mathcal{R}_{\text{neural ODE}}(\mathcal{X}_{in})$ . Since the safe set ${\color[rgb]{0,0,0}\mathcal{X}_{s}}$ contains the over-approximation ${\color[rgb]{1,0,0}\Omega_{\text{ResNet}}(\mathcal{X}_{in})+\Omega_{\varepsilon}(\mathcal{X}_{in})}$ , we guarantee that the neural ODE true reachable set is safe, as:
$$
{\color[rgb]{0,0,0}\mathcal{X}_{s}}\supseteq{\color[rgb]{1,0,0}\Omega_{\text{ResNet}}(\mathcal{X}_{in})+\Omega_{\varepsilon}(\mathcal{X}_{in})}\supseteq\mathcal{R}_{\text{neural ODE}}.
$$
From Figure 4, we can see that the ResNet and neural ODE reachable sets are very similar due to the ResNet's role as a discretization of the neural ODE, but they are not identical. Indeed, some neural ODE outputs (black points) lie outside ${\color[rgb]{0,0,1}\Omega_{\text{ResNet}}}$ , highlighting the necessity of the error bound ${\color[rgb]{1,0,0}\Omega_{\varepsilon}(\mathcal{X}_{in})}$ to ensure that the over-approximation captures all possible neural ODE outputs.
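The sampling check behind the black points can be sketched as follows (a Python stand-in for the MATLAB experiments; the $\mathcal{X}_{in}$ bounds below are the approximate values quoted in Section 4.1, with the exact ones in the linked repository):

```python
import numpy as np

# Integrate the FPA neural ODE from random initial states in (an
# approximation of) X_in and test whether the projection h(x(1)) = (x1, x2)
# lands in the safe set X_s = [0.2, 0.6] x [0.3, 0.85].
tau = -1e-6
A = np.array([[-1.20327, -0.07202, -0.93635],
              [ 1.18810, -1.50015,  0.93519]])
B = np.array([[ 1.21464, -0.10502],
              [ 0.12023,  0.19387],
              [-1.36695,  0.12201]])
W = np.block([[np.zeros((2, 2)), A],
              [np.zeros((3, 2)), B @ A]])

def f(x):
    return tau * x + W @ np.tanh(x)

def flow(x, steps=500):
    """RK4 integration of the neural ODE over t in [0, 1]."""
    h = 1.0 / steps
    for _ in range(steps):
        k1 = f(x); k2 = f(x + h / 2 * k1)
        k3 = f(x + h / 2 * k2); k4 = f(x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

rng = np.random.default_rng(0)
x_in_lo = np.array([0.45, 0.72, 0.47, 0.19, -0.64])
x_in_hi = np.array([0.55, 0.88, 0.58, 0.24, -0.53])
xs_lo, xs_hi = np.array([0.2, 0.3]), np.array([0.6, 0.85])

samples = rng.uniform(x_in_lo, x_in_hi, size=(100, 5))
outputs = np.array([flow(x)[:2] for x in samples])  # h(x(1)) = (x1, x2)
all_safe = bool(np.all((xs_lo <= outputs) & (outputs <= xs_hi)))
print(all_safe)
```

Such sampling only falsifies or illustrates safety; the guarantee for the whole input set comes from the set-based containment above, not from finitely many samples.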
Figure 4: Verification of neural ODE based on ResNet
Conversely, in Figure 5, we compute the over-approximation of the neural ODE reachable set ${\color[rgb]{0,0,0}\Omega_{\text{neural ODE}}(\mathcal{X}_{in})}$ . By adding the negative error bound ${\color[rgb]{1,0,0}\Omega_{-\varepsilon}}$ , we obtain a zonotope (shown in red) that encapsulates ${\color[rgb]{0,0,1}\mathcal{R}_{\text{ResNet}}(\mathcal{X}_{in})}$ . Similarly, the figure includes blue points representing ResNet outputs for random inputs in $\mathcal{X}_{in}$ , with their convex hull (blue set) approximating the true reachable set ${\color[rgb]{0,0,1}\mathcal{R}_{\text{ResNet}}(\mathcal{X}_{in})}$ . Since the safe set ${\color[rgb]{0,0,0}\mathcal{X}_{s}}$ contains the over-approximation ${\color[rgb]{1,0,0}\Omega_{\text{neural ODE}}(\mathcal{X}_{in})+\Omega_{-\varepsilon}(\mathcal{X}_{in})}$ , we guarantee that the ResNet true reachable set is safe, as:
$$
{\color[rgb]{0,0,0}\mathcal{X}_{s}}\supseteq{\color[rgb]{1,0,0}\Omega_{\text{neural ODE}}(\mathcal{X}_{in})+\Omega_{-\varepsilon}(\mathcal{X}_{in})}\supseteq{\color[rgb]{0,0,1}\mathcal{R}_{\text{ResNet}}}.
$$
Figure 5: Verification of ResNet based on neural ODE
We can also remark that the magenta sets, obtained by adding the error bound proposed in [26] to the ResNet and neural ODE reachable sets in Figures 4 and 5, extend significantly beyond the green safe set, preventing us from successfully guaranteeing the safety of the models.
5 Conclusion
In this paper, we propose a set-based method to bound the error between a neural ODE model and its ResNet approximation. This approach is based on reachability analysis tools applied to the Lagrange remainder in the Taylor expansion of the neural ODE trajectories, and is shown both theoretically and numerically to provide significantly tighter over-approximation of this approximation error than previous results in [26]. As the second contribution of this paper, the obtained bounding set of the approximation error between the two models is used to verify a safety property on either of the two models by applying reachability or verification tools only on the other model. This approach is fully reversible and either model can be used as the verification proxy for the other. These contributions and their improvement with respect to [26] have been illustrated on a numerical example of a fixed-point attractor system modeled as a neural ODE.
In future works, we plan to explore additional sources of complexity for these approaches, such as handling non-smooth activation functions (e.g. ReLU), and the case where the neural ODE vector field is explicitly dependent on the depth variable $t$ , thus corresponding to ResNet with multiple residual blocks. Additionally, we aim to study the versatility of this verification proxy approach by applying it to other complex nonlinear dynamical systems or neural network architectures.
Acknowledgement
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie COFUND grant agreement no. 101034248.
References
- [1] Althoff, M.: An introduction to CORA 2015. In: Proc. of the Workshop on Applied Verification for Continuous and Hybrid Systems. pp. 120–151 (2015)
- [2] Beer, R.D.: On the dynamics of small continuous-time recurrent neural networks. Adaptive Behavior 3 (4), 469–509 (1995)
- [3] Behrmann, J., Grathwohl, W., Chen, R.T., Duvenaud, D., Jacobsen, J.H.: Invertible residual networks. In: International conference on machine learning. pp. 573–582. PMLR (2019)
- [4] Boudardara, F., Boussif, A., Meyer, P.J., Ghazel, M.: Innabstract: An inn-based abstraction method for large-scale neural network verification. IEEE Transactions on Neural Networks and Learning Systems (2023)
- [5] Chen, R.T., Rubanova, Y., Bettencourt, J., Duvenaud, D.K.: Neural ordinary differential equations. Advances in neural information processing systems 31 (2018)
- [6] De Figueiredo, L.H., Stolfi, J.: Affine arithmetic: concepts and applications. Numerical algorithms 37, 147–158 (2004)
- [7] Gruenbacher, S., Hasani, R., Lechner, M., Cyranka, J., Smolka, S.A., Grosu, R.: On the verification of neural odes with stochastic guarantees. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 35, pp. 11525–11535 (2021)
- [8] Gruenbacher, S.A., Lechner, M., Hasani, R., Rus, D., Henzinger, T.A., Smolka, S.A., Grosu, R.: Gotube: Scalable statistical verification of continuous-depth models. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 36, pp. 6755–6764 (2022)
- [9] Haber, E., Ruthotto, L.: Stable architectures for deep neural networks. Inverse problems 34 (1), 014004 (2017)
- [10] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770–778 (2016)
- [11] Huang, X., Kwiatkowska, M., Wang, S., Wu, M.: Safety verification of deep neural networks. In: Computer Aided Verification (CAV). Springer (2017)
- [12] Jaulin, L., Kieffer, M., Didrit, O., Walter, É.: Interval analysis. Springer (2001)
- [13] Katz, G., Barrett, C., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: An efficient smt solver for verifying deep neural networks. In: Computer Aided Verification (CAV). Springer (2017)
- [14] Kidger, P.: On neural differential equations. Ph.D. thesis, University of Oxford (2021)
- [15] Liang, Z., Ren, D., Liu, W., Wang, J., Yang, W., Xue, B.: Safety verification for neural networks based on set-boundary analysis. In: International Symposium on Theoretical Aspects of Software Engineering. pp. 248–267. Springer (2023)
- [16] Lopez, D.M., Choi, S.W., Tran, H.D., Johnson, T.T.: Nnv 2.0: the neural network verification tool. In: International Conference on Computer Aided Verification. pp. 397–412. Springer (2023), https://doi.org/10.1007/978-3-031-37703-7_19
- [17] Lopez, D.M., Musau, P., Hamilton, N., Johnson, T.T.: Reachability analysis of a general class of neural ordinary differential equations (2022), https://doi.org/10.1007/978-3-031-15839-1_15
- [18] Lu, Y., Zhong, A., Li, Q., Dong, B.: Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations. In: International conference on machine learning. pp. 3276–3285. PMLR (2018)
- [19] Makino, K., Berz, M.: Taylor models and other validated functional inclusion methods. International Journal of Pure and Applied Mathematics 6, 239–316 (2003)
- [20] Marion, P.: Generalization bounds for neural ordinary differential equations and deep residual networks. Advances in Neural Information Processing Systems 36, 48918–48938 (2023)
- [21] Marion, P., Wu, Y.H., Sander, M.E., Biau, G.: Implicit regularization of deep residual networks towards neural odes (2024), https://arxiv.org/abs/2309.01213
- [22] Musau, P., Johnson, T.: Continuous-time recurrent neural networks (ctrnns)(benchmark proposal). In: 5th Applied Verification for Continuous and Hybrid Systems Workshop (ARCH), Oxford, UK (2018), https://doi.org/10.29007/6czp
- [23] Oh, Y., Kam, S., Lee, J., Lim, D.Y., Kim, S., Bui, A.: Comprehensive review of neural differential equations for time series analysis (2025), https://arxiv.org/abs/2502.09885
- [24] Rackauckas, C., Ma, Y., Martensen, J., Warner, C., Zubov, K., Supekar, R., Skinner, D., Ramadhan, A., Edelman, A.: Universal differential equations for scientific machine learning (2021), https://arxiv.org/abs/2001.04385
- [25] Rudin, W.: Principles of Mathematical Analysis. McGraw-Hill, New York, 3rd edn. (1976)
- [26] Sander, M., Ablin, P., Peyré, G.: Do residual neural networks discretize neural ordinary differential equations? Advances in Neural Information Processing Systems 35, 36520–36532 (2022)
- [27] Tabuada, P.: Verification and control of hybrid systems: a symbolic approach. Springer Science & Business Media (2009)
- [28] Tran, H.D., Yang, X., Manzanas Lopez, D., Musau, P., Nguyen, L.V., Xiang, W., Bak, S., Johnson, T.T.: Nnv: the neural network verification tool for deep neural networks and learning-enabled cyber-physical systems. In: International conference on computer aided verification. pp. 3–17. Springer (2020)
- [29] Xiang, W., Shao, Z.: Approximate bisimulation relations for neural networks and application to assured neural network compression. In: 2022 American Control Conference (ACC). pp. 3248–3253. IEEE (2022)