# Chapter 0 Statistical Mechanics and Artificial Neural Networks: Principles, Models, and Applications
Abstract
The field of neuroscience and the development of artificial neural networks (ANNs) have mutually influenced each other, drawing from and contributing to many concepts initially developed in statistical mechanics. Notably, Hopfield networks and Boltzmann machines are versions of the Ising model, a model extensively studied in statistical mechanics for over a century. In the first part of this chapter, we provide an overview of the principles, models, and applications of ANNs, highlighting their connections to statistical mechanics and statistical learning theory.
Artificial neural networks can be seen as high-dimensional mathematical functions, and understanding the geometric properties of their loss landscapes (i.e., the high-dimensional space on which one wishes to find extrema or saddles) can provide valuable insights into their optimization behavior, generalization abilities, and overall performance. Visualizing these functions can help us design better optimization methods and improve their generalization abilities. Thus, the second part of this chapter focuses on quantifying geometric properties and visualizing loss functions associated with deep ANNs.
Contents
1 Introduction
 1.1 The McCulloch–Pitts calculus and Hebbian learning
 1.2 The Perceptron
 1.3 Objective Bayes, entropy, and information theory
 1.4 Connections to the Ising model
2 Hopfield network
3 Boltzmann machine learning
 3.1 Boltzmann machines
 3.2 Restricted Boltzmann machines
 3.3 Derivation of the learning algorithm
4 Loss landscapes of artificial neural networks
 4.1 Principal curvature in random projections
 4.2 Extracting curvature information
 4.3 Hessian directions
5 Conclusions
1 Introduction
There has been a long tradition of studying learning systems within statistical mechanics. [1, 2, 3] Some versions of artificial neural networks (ANNs) were, in fact, inspired by the Ising model, which was proposed in 1920 by Wilhelm Lenz as a model for ferromagnetism and solved in one dimension by his doctoral student, Ernst Ising, in 1924. [4, 5, 6] In the language of machine learning, the Ising model can be considered a non-learning recurrent neural network (RNN).
Contributions to ANN research extend beyond statistical mechanics, however, requiring a broad range of scientific disciplines. Historically, these efforts have been marked by eras of interdisciplinary collaboration referred to as cybernetics [7, 8], connectionism [9, 10], artificial intelligence [11, 12], and machine learning [13, 14]. To emphasize the importance of multidisciplinary contributions in ANN research, we will often highlight the research disciplines of individuals who have played pivotal roles in the advancement of ANNs.
1.1 The McCulloch–Pitts calculus and Hebbian learning
An influential paper by the neuroscientist Warren S. McCulloch and logician Walter Pitts published in 1943 is often regarded as inaugurating neural network research, [15] but in fact there was an active research community at that time working on the mathematics of neural activity. [16] McCulloch and Pitts's pivotal contribution was to join propositional logic and Alan Turing's mathematical notion of computation [17] to propose a calculus for creating compositional representations of neural behavior. Their work was later taken up by the mathematician Stephen C. Kleene [18], who made some of it more accessible. Kleene comments on the work by McCulloch and Pitts in his 1956 article [18]: "The present memorandum is partly an exposition of the McCulloch–Pitts results; but we found the part of their paper which treats of arbitrary nets obscure; so we have proceeded independently here."
Contemporaneously, the psychologist Donald O. Hebb formulated in his 1949 book "The Organization of Behavior" [19] a neurophysiological postulate, which is known today as "Hebb's rule" or "Hebbian learning":

"Let us assume then that the persistence or repetition of a reverberatory activity (or 'trace') tends to induce lasting cellular changes that add to its stability. … When an axon of cell $A$ is near enough to excite a cell $B$ and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that $A$'s efficiency, as one of the cells firing $B$, is increased." (Donald O. Hebb in "The Organization of Behavior", 1949)

In the modern literature, Hebbian learning is often summarized by the slogan "neurons wire together if they fire together". [20]
1.2 The Perceptron
In 1958, the psychologist Frank Rosenblatt developed the perceptron [21, 22], which is essentially a binary classifier that is based on a single artificial neuron and a Hebbian-type learning rule. The artificial neuron of this type is sometimes referred to as a "McCulloch–Pitts neuron", owing to its connection to the foundational work of McCulloch and Pitts [15]. Some authors distinguish McCulloch–Pitts neurons from perceptrons by noting that the former handle binary signals, while the latter can process real-valued inputs. In parallel to Rosenblatt's work, the electrical engineers Bernard Widrow and Ted Hoff developed a similar ANN named ADALINE (Adaptive Linear Neuron). [23]
We show a schematic of a perceptron in Fig. 1. It receives inputs $x_{1},x_{2},...,x_{n}$ that are weighted with $w_{1},w_{2},...,w_{n}$. Within the summer (indicated by a $\Sigma$ symbol), the artificial neuron computes $\sum_{i}w_{i}x_{i}+b$, where $b$ is a bias term. This quantity is then used as input of a step activation function (indicated by a step symbol). The output of this function is $y$. Such simple artificial neurons can be used to implement logical functions including AND, OR, and NOT. For example, consider a perceptron with Heaviside activation, two binary inputs $x_{1},x_{2}\in\{0,1\}$, weights $w_{1}=w_{2}=1$, and bias $b=-1.5$. We can readily verify that $y=x_{1}\wedge x_{2}$. An example of a logical function that cannot be represented by a perceptron is XOR. This was shown by the computer scientists Marvin L. Minsky and Seymour A. Papert in their 1969 book "Perceptrons: An Introduction to Computational Geometry" [11]. Based on their analysis of perceptrons, they considered the extension of research on this topic to be "sterile".
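As a quick illustration of the AND example above, the following minimal sketch (our own, assuming NumPy; the helper name `perceptron` is ours) evaluates a perceptron with Heaviside step activation on all four binary inputs.

```python
import numpy as np

def perceptron(x, w, b):
    """McCulloch-Pitts-style neuron: weighted sum followed by a Heaviside step."""
    return 1 if np.dot(w, x) + b >= 0 else 0

# AND gate: weights w1 = w2 = 1 and bias b = -1.5, as in the text.
w, b = np.array([1.0, 1.0]), -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", perceptron(np.array(x), w, b))
# prints 0, 0, 0, 1, i.e., y = x1 AND x2
```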
Problems like the one associated with representing XOR using a single perceptron can be overcome by employing multilayer perceptrons (MLPs) or, in a broader sense, multilayer feedforward networks equipped with various types of activation functions. [24, 25, 26] One effective way to train such multilayered structures is reverse mode automatic differentiation (i.e., backpropagation). [27, 28, 29, 30, 31]
Notice that perceptrons and other ANNs predominantly operate on real-valued data. Nevertheless, there is an expanding body of research focused on ANNs that are designed to handle complex-valued and, in some cases, quaternion-based information. [32, 33, 34, 35, 36, 37, 38]
Figure 1: A schematic of a perceptron with output $y$ , inputs $x_{1},x_{2},...,x_{n}$ , corresponding weights $w_{1},w_{2},...,w_{n}$ , and a bias term $b$ . Traditionally, the inputs and outputs of perceptrons were considered to be binary (e.g., $\{-1,1\}$ or $\{0,1\}$ ), while weights and biases are generally represented as real numbers. The symbol $\Sigma$ signifies the summation process wherein the artificial neuron computes $\sum_{i}w_{i}x_{i}+b$ . This computed value then becomes the input for a step activation function, denoted by a step symbol. We denote the final output as $y$ . While the original perceptron employed a step activation function, contemporary applications frequently utilize a variety of functions including sigmoid, $\tanh$ , and rectified linear unit (ReLU).
1.3 Objective Bayes, entropy, and information theory
Over the same period of time, seminal work in probability and statistics appeared that would later prove crucial to the development of ANNs. In the early 20th century, a renewed interest in inverse "Bayesian" inference [39, 40] met strong resistance from the emerging frequentist school of statistics led by Jerzy Neyman, Karl Pearson, and especially Ronald Fisher, who advocated his own fiducial inference as a better alternative to Bayesian inference [41, 42]. In response to criticism that Bayesian methods were irredeemably subjective, Harold Jeffreys presented a methodology for "objectively" assigning prior probabilities based on symmetry principles and invariance properties. [43]
Edwin Jaynes, building on Jeffreys's ideas for objective Bayesian priors and Shannon's quantification of uncertainty [44], proposed the Maximum Entropy (MaxEnt) principle as a rationality criterion for assigning prior probabilities that are at once consistent with known information while remaining maximally uncommitted about what remains uncertain [45, 46, 47]. Jaynes envisioned the MaxEnt principle as the basis for a "logic of science" [48]. For instance, he proposed a reinterpretation of the principles of thermodynamics in information-theoretic terms where macro-states of a physical system are understood through the partial information they provide about the possible underlying microstates, showing how the original Boltzmann–Gibbs interpretation of entropy could be replaced by a more general single framework based on Shannon entropy. Objective Bayesianism does not remove subjective judgment [49], MaxEnt is far from a universal principle for encoding prior information [50], and the terms "objective" and "subjective" are outdated. [51, 52]
For example, the MaxEnt principle can be used to recover many common exponential family distributions as the least biased distributions consistent with constraints that match the expectation values of their sufficient statistics. That is, given constraints on the expected values of functions of data parameterized by $\theta$, the probability distribution that maximizes entropy under those constraints often takes the form of an exponential family distribution
$$
p(x\mid\theta)=h(x)\,\exp\!\left(\theta^{\top}T(x)-A(\theta)\right)\,, \tag{1}
$$
where $h(x)$ is the underlying base measure, $\theta$ is the natural parameter of the distribution, $A(\theta)$ is the log normalizer, and $T(x)$ is the sufficient statistic determined by MaxEnt under the given constraints. However, MaxEnt-derived distributions depend on contingencies of the data and on the availability of the right constraints to ensure the derivation goes through; these contingencies undermine viewing MaxEnt as a fundamental principle of rational inference.
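To make the route to Eq. (1) concrete, here is a short worked example of our own (not taken from the cited references): a single mean constraint on a non-negative variable yields the exponential distribution, written in exponential-family form.

```latex
% Maximize S[p] = -\int_0^\infty p(x)\ln p(x)\,dx subject to
% \int_0^\infty p(x)\,dx = 1 and \int_0^\infty x\,p(x)\,dx = \mu.
% Stationarity with Lagrange multipliers \lambda_0, \lambda_1 gives
\begin{equation*}
  \frac{\delta}{\delta p(x)}\left[S[p]-\lambda_{0}\!\int_{0}^{\infty}\!p\,\mathrm{d}x
  -\lambda_{1}\!\int_{0}^{\infty}\!x\,p\,\mathrm{d}x\right]
  = -\ln p(x)-1-\lambda_{0}-\lambda_{1}x = 0
  \;\Rightarrow\; p(x)=e^{-1-\lambda_{0}}\,e^{-\lambda_{1}x}\,.
\end{equation*}
% Normalization and the mean constraint fix \lambda_1 = 1/\mu, so p(x|\mu) = \mu^{-1} e^{-x/\mu}.
% In the notation of Eq. (1): h(x) = 1, T(x) = x, \theta = -1/\mu, and A(\theta) = -\ln(-\theta).
```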
Nevertheless, Jaynes's notion of treating probability as an extension of logic [48, 53] and probabilistic inference as governed by coherence conditions on information states [54, 55] has proved useful in optimization (e.g., cross-entropy loss [56]), entropy-based regularization of ANNs [57], and Bayesian neural networks. [58]
1.4 Connections to the Ising model
Returning to the Ising model, a modified version of it was endowed with a learning algorithm by the mathematical neuroscientist Shun'ichi Amari in 1972 and was employed for learning patterns and sequences of patterns. [59] Amari's self-organizing ANN is an early model of associative memory that was later popularized by the physicist John Hopfield. [60] Despite its limited representational capacity and limited use in practical applications, the study of Hopfield-type learning systems is still an active research area. [61, 36, 62, 63]
The Hopfield network shares its Hamiltonian with the Ising model. Its update dynamics are deterministic and aim at minimizing the "energy" of the system. One problem with this deterministic update rule is that it cannot escape local minima. A solution to this problem is to employ a stochastic update rule akin to the Monte Carlo methods that are used to generate Ising configurations. [64] In 1985, David H. Ackley, Geoffrey E. Hinton, and Terrence J. Sejnowski proposed such a learning algorithm for an Ising-type ANN, which they called a "Boltzmann machine" (BM). [65, 66, 67, 68] In 1986, Smolensky introduced the concept of the "Harmonium" [69], which later evolved into restricted Boltzmann machines (RBMs). In contrast to Boltzmann machines, RBMs benefit from more efficient training algorithms that were developed in the early 2000s. [70, 71] Since then, RBMs have been applied in various contexts, such as reducing the dimensionality of datasets [72], studying phase transitions [73, 74, 3, 75], and representing wave functions [76, 77].
Our historical overview of ANNs aims at emphasizing the strong interplay between statistical mechanics and related fields in the investigation of learning systems. Given the extensive span of over eight decades since the pioneering work of McCulloch and Pitts, summarizing the history of ANNs within a few pages cannot do justice to its depth and significance. We therefore refer the reader to the excellent review by Jürgen Schmidhuber (see Ref. 78) for a more detailed overview of the history of deep learning in ANNs.
As computing power continues to increase, deep (i.e., multilayer) ANNs have, over the past decade, become pervasive across various scientific domains. [79] Different established and newly developed ANN architectures have not only been employed to tackle scientific problems but have also found extensive applications in tasks such as image classification and natural language processing. [80, 81, 82, 83]
Finally, we would also like to emphasize an area of research that has significantly advanced due to the availability of automatic differentiation [84] and the utilization of ANNs as universal function approximators. [85, 86, 87] This area resides at the intersection of non-linear dynamics [88, 89], control theory [90, 91, 92, 93, 94, 95, 96, 97, 98], and dynamics-informed learning, where researchers often aim to integrate ANNs and their associated learning algorithms with domain-specific knowledge. [99, 100, 101, 102, 103, 104] This integration introduces an inductive bias that facilitates effective learning.
The outline of this chapter is as follows. To better illustrate the connections between the Ising model and ANNs, we devote Secs. 2 and 3 to the Hopfield model and Boltzmann machines, respectively. In Sec. 4, we discuss how mathematical tools from high-dimensional probability and differential geometry can help us study the loss landscape of deep ANNs. [105] We conclude this chapter in Sec. 5.
2 Hopfield network
A Hopfield network is a complete (i.e., fully connected) undirected graph in which each node is an artificial neuron of McCulloch–Pitts type (see Fig. 1). We show an example of a Hopfield network with $n=5$ neurons in Fig. 2. We use $x_{i}\in\{-1,1\}$ to denote the state of neuron $i$ ($i\in\{1,...,n\}$). Because the underlying graph is fully connected, neuron $i$ receives $n-1$ inputs $x_{j}$ ($j\neq i$). The inputs $x_{j}$ associated with neuron $i$ are assigned weights $w_{ij}\in\mathbb{R}$. In a Hopfield network, weights are assumed to be symmetric (i.e., $w_{ij}=w_{ji}$), and self-weights are considered to be absent (i.e., $w_{ii}=0$).
Figure 2: An example of a Hopfield network with $n=5$ neurons. Each neuron is connected to all other neurons through black edges, representing both inputs and outputs. (Adapted from 106.)
With these definitions in place, the activation of neuron $i$ is given by
$$
a_{i}=\sum_{j}w_{ij}x_{j}+b_{i}\,, \tag{2}
$$
where $b_{i}$ is the bias term of neuron $i$ .
Regarding connections between the Ising model and Hopfield networks, the states of artificial neurons correspond to the binary values employed in the Ising model to describe the orientations of elementary magnets, often referred to as classical "spins". The weights in a Hopfield network represent the potentially diverse couplings between different elementary magnets. Lastly, the bias terms play the role of external magnetic fields that act on these elementary magnets. Given these structural similarities between the Ising model and Hopfield networks, it is natural to assign them the Ising-type energy function
$$
E=-\frac{1}{2}\sum_{i,j}w_{ij}x_{i}x_{j}-\sum_{i}b_{i}x_{i}\,. \tag{3}
$$
Hopfield networks are not just collections of interconnected artificial neurons; they are dynamical systems whose states $x_{i}$ evolve in discrete time according to
$$
x_{i}\leftarrow\begin{cases}1\,,&\text{if }a_{i}\geq 0\\
-1\,,&\text{otherwise}\,,\end{cases} \tag{4}
$$
where $a_{i}$ is the activation of neuron $i$ [see Eq. (2)].
For asynchronous updates in which the state of one neuron is updated at a time, the update rule (4) has an interesting property: Under this update rule, the energy $E$ [see Eq. (3)] of a Hopfield network never increases.
The proof of this statement is straightforward. We are interested in the cases where an update of $x_{i}$ causes it to change sign; otherwise, the energy stays constant. Consider the two cases (i) $a_{i}=\sum_{j}w_{ij}x_{j}+b_{i}<0$ with $x_{i}=1$, and (ii) $a_{i}\geq 0$ with $x_{i}=-1$. In the first case, the energy difference is
$$
\Delta E_{i}=E(x_{i}=-1)-E(x_{i}=1)=2\Bigl{(}b_{i}+\sum_{j}w_{ij}x_{j}\Bigr{)}=2a_{i}\,. \tag{5}
$$
Notice that $\Delta E_{i}<0$ because $a_{i}=\sum_{j}w_{ij}x_{j}+b_{i}<0$ .
In the second case, we have
$$
\Delta E_{i}=E(x_{i}=1)-E(x_{i}=-1)=-2\Bigl{(}b_{i}+\sum_{j}w_{ij}x_{j}\Bigr{)}=-2a_{i}\,. \tag{6}
$$
The energy difference satisfies $\Delta E_{i}\leq 0$ because $a_{i}\geq 0$. In summary, we have shown that $\Delta E_{i}\leq 0$ for all changes of the state variable $x_{i}$ according to update rule (4).
Equations (5) and (6) also show that the energy difference associated with a change of sign in $x_{i}$ is $2a_{i}$ and $-2a_{i}$ for $a_{i}<0$ and $a_{i}\geq 0$, respectively. Absorbing the bias term $b_{i}$ into the weights $w_{ij}$ by associating an extra, always-active unit with every node in the network yields for the corresponding energy differences
$$
\Delta E_{i}=\pm 2\sum_{j}w_{ij}x_{j}\,. \tag{7}
$$
One potential application of Hopfield networks is to store and retrieve information in local minima by adjusting their weights. For instance, consider the task of storing $N$ binary (black and white) patterns in a Hopfield network. Let these $N$ patterns be represented by the sets $\{p_{i}^{\nu}=\pm 1\,|\,1\leq i\leq n\}$ with $\nu\in\{1,...,N\}$.
To learn the weights that represent our binary patterns, we can apply the Hebbian-type rule
$$
w_{ij}=w_{ji}=\frac{1}{N}\sum_{\nu=1}^{N}p_{i}^{\nu}p_{j}^{\nu}\,. \tag{8}
$$
In Hopfield networks with a large number of neurons, the capacity for storing and retrieving patterns is limited to approximately 14% of the total number of neurons. [107] However, it is possible to enhance this capacity through the use of modified versions of the learning rule (8). [108]
After applying the Hebbian learning rule to adjust the weights of a given Hopfield network, we may be interested in studying the evolution of different initial configurations $\{x_{i}\}$ under update rule (4). A Hopfield network accurately represents a pattern $\{p_{i}^{\nu}=\pm 1\,|\,1\leq i\leq n\}$ if $x_{i}$ remains equal to $p_{i}^{\nu}$ both before and after an update for all $i$ (meaning that $\{p_{i}^{\nu}\}$ is a fixed point of the system). Depending on the number of stored patterns and the chosen initial configuration, the Hopfield network may converge to a local minimum that does not align with the desired pattern. To mitigate this behavior, one can employ a stochastic update rule instead of the deterministic rule (4). This is the topic of the following section.
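The following minimal sketch (our own, assuming NumPy; pattern count, network size, and noise level are arbitrary choices) stores a few random patterns with the Hebbian rule (8) and retrieves one of them from a corrupted initial configuration using the asynchronous update rule (4).

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_weights(patterns):
    """Hebbian rule of Eq. (8): w_ij = (1/N) sum_nu p_i^nu p_j^nu, with w_ii = 0."""
    N, n = patterns.shape  # N patterns, n neurons
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0.0)
    return W

def update(x, W, b, steps=2000):
    """Asynchronous updates according to Eq. (4)."""
    x = x.copy()
    for _ in range(steps):
        i = rng.integers(len(x))
        a = W[i] @ x + b[i]          # activation, Eq. (2)
        x[i] = 1 if a >= 0 else -1
    return x

n = 100
patterns = rng.choice([-1, 1], size=(3, n))   # three random +/-1 patterns
W, b = hebbian_weights(patterns), np.zeros(n)

# Flip 10% of the spins of the first pattern and try to retrieve it.
noisy = patterns[0].copy()
flip = rng.choice(n, size=10, replace=False)
noisy[flip] *= -1
recovered = update(noisy, W, b)
print("overlap with stored pattern:", np.mean(recovered == patterns[0]))
```

If the number of stored patterns stays well below the roughly $0.14\,n$ capacity limit mentioned above, the printed overlap is typically close to 1.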
3 Boltzmann machine learning
In Hopfield networks, if we start with an initial configuration that is close enough to a desired local energy minimum, we can recover the corresponding pattern $\{p_{i}^{\nu}\}$. However, for certain applications, relying solely on the deterministic update rule (4) may not be enough, as it never accepts transitions that are associated with an increase in energy. In constraint satisfaction tasks, we often require learning algorithms to have the capability to occasionally accept transitions to configurations of higher energy and move the system under consideration away from local minima towards a global minimum. A method that is commonly used in this context is the M(RT)$^2$ algorithm introduced by Nicholas Metropolis, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller, and Edward Teller in their seminal 1953 paper "Equation of State Calculations by Fast Computing Machines". [109, 110] This algorithm became the basis of many optimization methods, such as simulated annealing. [111]
In the 1980s, the M(RT)$^2$ algorithm was adapted to equip Hopfield-type systems with a stochastic update rule, in which neuron $i$ is activated (set to $1$), regardless of its current state, with probability
$$
\sigma_{i}\equiv\sigma(\Delta E_{i}/T)=\sigma(2a_{i}/T)=\frac{1}{1+\exp(-\Delta E_{i}/T)}\,. \tag{9}
$$
Otherwise, it is set to $-1$. The corresponding ANNs were dubbed "Boltzmann machines". [65, 66, 67, 68]
Figure 3: The sigmoid function $\sigma_{i}\equiv\sigma(\Delta E_{i}/T)$ [see Eq. (9)] as function of $\Delta E_{i}$ for $T=0.5,1,2$ .
In Eq. (9), the quantity $\Delta E_{i}=2a_{i}$ is the energy difference between an inactive neuron $i$ and an active one [see Eq. (5)]. We use the convention employed in Refs. 67, 68. The authors of Ref. 66 use the convention that $\Delta E_{i}=-2a_{i}$ and $\sigma_{i}\equiv\sigma(\Delta E_{i}/T)=1/(1+\exp(\Delta E_{i}/T))$. The function $\sigma(x)=1/\left(1+\exp(-x)\right)$ is the sigmoid function, and the parameter $T$ serves as an equivalent of temperature. In the limit $T\to 0$, we recover the deterministic update rule (4). In Fig. 3, we show $\sigma(\Delta E_{i}/T)$ as a function of $\Delta E_{i}$ for $T=0.5,1,2$.
Examining Eqs. (3) and (9), we notice that we are simulating a system akin to the Ising model with Glauber (heat bath) dynamics. [112, 64] Because Glauber dynamics satisfy the detailed balance condition, Boltzmann machines will eventually reach thermal equilibrium. The corresponding probabilities $p_{\mathrm{eq}}(X)$ and $p_{\mathrm{eq}}(Y)$ for the ANN to be in states $X$ and $Y$, respectively, satisfy (we set the Boltzmann constant $k_{B}$ to 1)
$$
\frac{p_{\mathrm{eq}}(Y)}{p_{\mathrm{eq}}(X)}=\exp\left(-\frac{E(Y)-E(X)}{T}\right)\,. \tag{10}
$$
In other words, the Boltzmann distribution provides the relative probability $p_{\mathrm{eq}}(Y)/p_{\mathrm{eq}}(X)$ associated with the states $X$ and $Y$ of a "thermalized" Boltzmann machine. Regardless of the initial configuration, at a given temperature $T$, the stochastic update rule in which neurons are activated with probability $\sigma_{i}$ always drives the system to thermal equilibrium, where the probability of a configuration is determined solely by its energy.
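As a minimal sketch of the stochastic update rule (9) (our own illustration, assuming NumPy; the all-to-all ferromagnetic couplings are an arbitrary toy choice), the following code performs asynchronous heat-bath updates and prints the resulting magnetization at low temperature.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stochastic_update(x, W, b, T, steps=10_000):
    """Heat-bath dynamics: set x_i = 1 with probability sigma(2 a_i / T), else -1 [Eq. (9)]."""
    x = x.copy()
    for _ in range(steps):
        i = rng.integers(len(x))          # pick a neuron at random (asynchronous update)
        a = W[i] @ x + b[i]               # activation, Eq. (2)
        x[i] = 1 if rng.random() < sigmoid(2.0 * a / T) else -1
    return x

# toy usage: ferromagnetic couplings; at low T most spins typically end up aligned
n = 50
W = np.ones((n, n)) / n
np.fill_diagonal(W, 0.0)
x = stochastic_update(rng.choice([-1, 1], size=n), W, np.zeros(n), T=0.2)
print("magnetization:", x.mean())
```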
3.1 Boltzmann machines
Unlike Hopfield networks, Boltzmann machines have two types of nodes: visible units and hidden units. We denote the corresponding sets of visible and hidden units by $V$ and $H$ , respectively. Notice that the set $H$ may be empty. In Fig. 4, we show an example of a Boltzmann machine with five visible units and three hidden units.
During the training of a Boltzmann machine, the visible units are "clamped" to the environment, which means they are set to binary vectors drawn from an empirical distribution. Hidden units may be used to account for constraints involving more than two visible units.
Figure 4: Boltzmann machines consist of visible units (blue) and hidden units (red). In the shown example, there are five visible units $\left\{v_{i}\right\}$ ($i\in\{1,...,5\}$) and three hidden units $\left\{h_{j}\right\}$ ($j\in\{1,2,3\}$). Similar to Hopfield networks, the network architecture in a Boltzmann machine is complete.
We denote the probability distribution of all configurations of visible units $\{\nu\}$ in a freely running network as $P^{\prime}\left(\{\nu\}\right)$. Here, "freely running" means that no external inputs are fixed (or "clamped") to the visible units. We derive the distribution $P^{\prime}\left(\{\nu\}\right)$ by summing (i.e., marginalizing) over the corresponding joint probability distribution. That is,
$$
P^{\prime}\left(\{\nu\}\right)=\sum_{\{h\}}P^{\prime}\left(\{\nu\},\{h\}\right)\,, \tag{11}
$$
where the summation is performed over all possible configurations of hidden units $\{h\}$ .
Our objective is now to devise a method such that $P^{\prime}\left(\{\nu\}\right)$ converges to the unknown environment (i.e., data) distribution $P\left(\{\nu\}\right)$. To do so, we quantify the disparity between $P^{\prime}\left(\{\nu\}\right)$ and $P\left(\{\nu\}\right)$ using the Kullback–Leibler (KL) divergence (or relative entropy)
$$
G(P,P^{\prime})=\sum_{\{\nu\}}P\left(\{\nu\}\right)\ln\left[\frac{P\left(\{\nu\}\right)}{P^{\prime}\left(\{\nu\}\right)}\right]\,. \tag{12}
$$
To minimize the KL divergence $G(P,P^{\prime})$ , we perform a gradient descent according to
$$
\frac{\partial G}{\partial w_{ij}}=-\frac{1}{T}\left(p_{ij}-p^{\prime}_{ij}\right)\,, \tag{13}
$$
where $p_{ij}$ represents the probability of both units $i$ and $j$ being active when the environment dictates the states of the visible units, and $p^{\prime}_{ij}$ is the corresponding probability in a freely running network without any connection to the environment. [66, 67, 68] We derive Eq. (13) in Sec. 3.3.
Both probabilities $p_{ij}$ and $p_{ij}^{\prime}$ are measured once the Boltzmann machine has reached thermal equilibrium. Subsequently, the weights $w_{ij}$ of the network are updated according to
$$
\Delta w_{ij}=\epsilon\left(p_{ij}-p^{\prime}_{ij}\right)\,, \tag{14}
$$
where $\epsilon$ is the learning rate. To reach thermal equilibrium, the states of the visible and hidden units are updated using the update probability (9).
In summary, the steps relevant for training a Boltzmann machine are as follows (a minimal code sketch of this procedure is given below).
1. Clamp the input data (environment distribution) to the visible units.
2. Update the state of all hidden units according to Eq. (9) until the system reaches thermal equilibrium and then compute $p_{ij}$.
3. Unclamp the input data from the visible units.
4. Update the state of all neurons according to Eq. (9) until the system reaches thermal equilibrium and then compute $p_{ij}^{\prime}$.
5. Update all weights according to Eq. (14) and return to step 2 or stop if the weight updates are sufficiently small.

After training a Boltzmann machine, we can unclamp the visible units from the environment and generate samples to evaluate their quality. To do this, we use various initial configurations and activate neurons according to Eq. (9) until thermal equilibrium is reached. If the Boltzmann machine was trained successfully, the distribution of states for the unclamped visible units should align with the environment distribution.
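The following toy sketch (our own, assuming NumPy) follows the five steps above for a very small Boltzmann machine. For brevity, biases are omitted, the short Monte Carlo runs are not guaranteed to reach thermal equilibrium, and the coactivation probabilities $p_{ij}$ and $p^{\prime}_{ij}$ are estimated from a modest number of samples; all function names and hyperparameters are ours.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def heat_bath_sweep(x, W, T, clamped=()):
    """One sweep of stochastic updates [Eq. (9)]; indices listed in `clamped` stay fixed."""
    for i in range(len(x)):
        if i in clamped:
            continue
        a = W[i] @ x                      # activation, Eq. (2); biases omitted for brevity
        x[i] = 1 if rng.random() < sigmoid(2.0 * a / T) else -1
    return x

def coactivation(samples):
    """Estimate p_ij as the fraction of samples in which units i and j are both active (+1)."""
    act = (np.asarray(samples) + 1) / 2
    return act.T @ act / len(samples)

def train_bm(data, n_hidden, T=1.0, eps=0.05, epochs=50, n_sweeps=20, n_samples=50):
    """Toy Boltzmann machine training loop following steps 1-5 above (short, non-equilibrium runs)."""
    n_visible = data.shape[1]
    n = n_visible + n_hidden
    W = 0.01 * rng.standard_normal((n, n))
    W = (W + W.T) / 2
    np.fill_diagonal(W, 0.0)
    visible = tuple(range(n_visible))
    for _ in range(epochs):
        clamped_samples, free_samples = [], []
        for _ in range(n_samples):
            # clamped phase: visible units fixed to a data vector, hidden units relax
            x = np.concatenate([data[rng.integers(len(data))],
                                rng.choice([-1.0, 1.0], size=n_hidden)])
            for _ in range(n_sweeps):
                x = heat_bath_sweep(x, W, T, clamped=visible)
            clamped_samples.append(x.copy())
            # free phase: all units relax, starting from the clamped sample
            for _ in range(n_sweeps):
                x = heat_bath_sweep(x, W, T)
            free_samples.append(x.copy())
        p_ij, p_ij_free = coactivation(clamped_samples), coactivation(free_samples)
        dW = eps * (p_ij - p_ij_free)     # Eq. (14)
        np.fill_diagonal(dW, 0.0)
        W += dW
    return W

# toy usage: four-dimensional +/-1 data vectors, two hidden units
W = train_bm(rng.choice([-1.0, 1.0], size=(20, 4)), n_hidden=2)
```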
3.2 Restricted Boltzmann machines
Boltzmann machines are not widely used in general learning tasks. Their impracticality arises from the significant computational burden associated with reaching thermal equilibrium, especially for large system sizes.
Figure 5: Restricted Boltzmann machines consist of a visible layer (blue) and a hidden layer (red). In the shown example, the respective layers comprise six visible units $\left\{v_{i}\right\}$ ($i\in\{1,...,6\}$) and four hidden units $\left\{h_{j}\right\}$ ($j\in\{1,...,4\}$). The network structure of an RBM is bipartite and undirected.
Restricted Boltzmann machines provide an ANN structure that can be trained more efficiently by omitting connections within the visible layer and within the hidden layer (see Fig. 5). Because of these missing intra-layer connections, the network architecture of an RBM is bipartite.
In RBMs, updates for visible and hidden units are performed alternately. Because there are no connections within each layer, we can update all units within each layer in parallel. Specifically, visible unit $v_{i}$ is activated with conditional probability
$$
p(v_{i}=1|\{h_{j}\})=\sigma\Bigl{(}\sum_{j}w_{ij}h_{j}+b_{i}\Bigr{)}\,, \tag{15}
$$
where $b_{i}$ is the bias of visible unit $v_{i}$ and $\{h_{j}\}$ is a given configuration of hidden units. We then activate all hidden units based on their conditional probabilities
$$
p(h_{j}=1|\{v_{i}\})=\sigma\Bigl{(}\sum_{i}w_{ij}v_{i}+c_{j}\Bigr{)}\,, \tag{16}
$$
where $h_{j}$ represents hidden unit $j$, $c_{j}$ is its associated bias, and $\{v_{i}\}$ denotes a specific configuration of visible units. This technique of sampling is referred to as "block Gibbs sampling".
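A minimal sketch of block Gibbs sampling for an RBM (our own, assuming NumPy; units take values in $\{0,1\}$, and the weight matrix $W$ has shape $n_v \times n_h$ with entries $w_{ij}$). All units of one layer are sampled in parallel from Eqs. (15) and (16).

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v, W, c):
    """Sample all hidden units in parallel from p(h_j = 1 | {v_i}), Eq. (16)."""
    p = sigmoid(v @ W + c)
    return (rng.random(p.shape) < p).astype(float)

def sample_visible(h, W, b):
    """Sample all visible units in parallel from p(v_i = 1 | {h_j}), Eq. (15)."""
    p = sigmoid(h @ W.T + b)
    return (rng.random(p.shape) < p).astype(float)

# a few steps of block Gibbs sampling for an RBM with 6 visible and 4 hidden units
n_v, n_h = 6, 4
W, b, c = 0.1 * rng.standard_normal((n_v, n_h)), np.zeros(n_v), np.zeros(n_h)
v = rng.choice([0.0, 1.0], size=n_v)
for _ in range(3):
    h = sample_hidden(v, W, c)
    v = sample_visible(h, W, b)
print(v, h)
```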
Training an RBM shares similarities with training a BM. The key difference lies in the need to consider the bipartite network structure in the weight update equation (14). For an RBM, the weight updates are
$$
\Delta w_{ij}=\epsilon(\langle\nu_{i}h_{j}\rangle_{\text{data}}-\langle\nu_{i}h_{j}\rangle_{\text{model}})\,. \tag{17}
$$
Instead of sampling configurations to compute $\langle\nu_{i}h_{j}\rangle_{\text{data}}$ and $\langle\nu_{i}h_{j}\rangle_{\text{model}}$ at thermal equilibrium, we can employ only a few relaxation steps. This approach is known as "contrastive divergence". [70, 113] The corresponding weight updates are
$$
\Delta w_{ij}^{\text{CD}}=\epsilon(\langle\nu_{i}h_{j}\rangle_{\text{data}}-\langle\nu_{i}h_{j}\rangle_{\text{model}}^{k})\,. \tag{18}
$$
The superscript $k$ indicates the number of block Gibbs updates performed. For a more detailed discussion on the contrastive divergence training of RBMs, see Ref. 114.
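Below is a sketch of a single CD-$k$ update [Eq. (18)] for one binary visible vector (our own illustration, assuming NumPy; units take values in $\{0,1\}$, bias updates are omitted for brevity, and the helper names are ours).

```python
import numpy as np

rng = np.random.default_rng(4)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def gibbs_h(v, W, c):                              # p(h_j = 1 | v), Eq. (16)
    return (rng.random(c.shape) < sigmoid(v @ W + c)).astype(float)

def gibbs_v(h, W, b):                              # p(v_i = 1 | h), Eq. (15)
    return (rng.random(b.shape) < sigmoid(h @ W.T + b)).astype(float)

def cd_k_update(v0, W, b, c, eps=0.01, k=1):
    """One CD-k weight update [Eq. (18)] for a single binary visible vector v0."""
    h0 = gibbs_h(v0, W, c)
    vk, hk = v0, h0
    for _ in range(k):                             # k block Gibbs steps
        vk = gibbs_v(hk, W, b)
        hk = gibbs_h(vk, W, c)
    return W + eps * (np.outer(v0, h0) - np.outer(vk, hk))   # bias updates omitted

# toy usage: 6 visible units, 4 hidden units, one data vector
n_v, n_h = 6, 4
W, b, c = 0.01 * rng.standard_normal((n_v, n_h)), np.zeros(n_v), np.zeros(n_h)
W = cd_k_update(rng.choice([0.0, 1.0], size=n_v), W, b, c, k=1)
```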
Figure 6: Snapshots of $32\times 32$ Ising configurations are shown for $T\in\{1.5,2.5,4\}$. These configurations are derived from both M(RT)$^2$ and RBM samples. The quantity $T_{c}=2/\ln(1+\sqrt{2})\approx 2.269$ is the critical temperature of the two-dimensional Ising model.
Restricted Boltzmann machines have been employed in diverse contexts, including dimensionality reduction of datasets [72], the study of phase transitions [73, 74, 3, 75], and the representation of wave functions [76, 77]. In Fig. 6, we show three snapshots of Ising configurations, generated using both M(RT)$^2$ sampling and RBMs, each comprising $32\times 32$ spins. The RBM was trained using $20\times 10^{4}$ realizations of Ising configurations sampled at various temperatures. [75]
3.3 Derivation of the learning algorithm
To derive Eq. (13), we follow the approach outlined in Ref. 68. Notice that the environment distribution $P\left(\{\nu\}\right)$ does not depend on $w_{ij}$. Hence, we have
$$
\frac{\partial G}{\partial w_{ij}}=-\sum_{\{\nu\}}\frac{P\left(\{\nu\}\right)}{P^{\prime}\left(\{\nu\}\right)}\frac{\partial P^{\prime}\left(\{\nu\}\right)}{\partial w_{ij}}\,. \tag{19}
$$
Next, we wish to compute the gradient $\partial P^{\prime}(\{\nu\})/\partial w_{ij}$. In a freely running BM, the equilibrium distribution of the visible units follows a Boltzmann distribution. That is,
$$
P^{\prime}(\{\nu\})=\sum_{\{h\}}P^{\prime}\left(\{\nu\},\{h\}\right)=\frac{\sum_{\{h\}}e^{-E\left({\{\nu,h\}}\right)/T}}{\sum_{\{\nu,h\}}e^{-E\left({\{\nu,h\}}\right)/T}}\,. \tag{20}
$$
Here, the quantity
$$
E\left({\{\nu,h\}}\right)=-\frac{1}{2}\sum_{i,j}w_{ij}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}} \tag{21}
$$
is the Ising-type energy function in which field (or bias) terms are absorbed in the weights $w_{ij}$ [see Eqs. (3) and (7)]. For a BM in state ${\{\nu,h\}}$ , we denote the state of neuron $i$ by $x_{i}^{\{\nu,h\}}$ . Using
$$
\frac{\partial e^{-E\left({\{\nu,h\}}\right)/T}}{\partial w_{ij}}=\frac{1}{T}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}e^{-E\left({\{\nu,h\}}\right)/T} \tag{22}
$$
yields
$$
\displaystyle\begin{split}\frac{\partial P^{\prime}\left(\{\nu\}\right)}{\partial w_{ij}}&=\frac{\frac{1}{T}\sum_{\{h\}}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}e^{-E\left({\{\nu,h\}}\right)/T}}{\sum_{\{\nu,h\}}e^{-E\left({\{\nu,h\}}\right)/T}}\\
&-\frac{\sum_{\{h\}}e^{-E\left({\{\nu,h\}}\right)/T}\frac{1}{T}\sum_{\{\nu,h\}}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}e^{-E\left({\{\nu,h\}}\right)/T}}{\left(\sum_{\{\nu,h\}}e^{-E\left({\{\nu,h\}}\right)/T}\right)^{2}}\\
&=\frac{1}{T}\left[\sum_{\{h\}}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}P^{\prime}\left(\{\nu\},\{h\}\right)\right.\\
&\hskip 28.45274pt\left.-P^{\prime}(\{\nu\})\sum_{\{\nu,h\}}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}P^{\prime}\left(\{\nu\},\{h\}\right)\right]\,.\end{split} \tag{23}
$$
We will now substitute this result into Eq. (19) to obtain
$$
\displaystyle\begin{split}\frac{\partial G}{\partial w_{ij}}&=-\sum_{\{\nu\}}\frac{P\left(\{\nu\}\right)}{P^{\prime}\left(\{\nu\}\right)}\frac{1}{T}\left[\sum_{\{h\}}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}P^{\prime}\left(\{\nu\},\{h\}\right)\right.\\
&\hskip 91.04872pt\left.-P^{\prime}\left(\{\nu\}\right)\sum_{\{\nu,h\}}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}P^{\prime}\left(\{\nu\},\{h\}\right)\right]\\
&=-\frac{1}{T}\left[\sum_{\{\nu,h\}}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}P\left(\{\nu\},\{h\}\right)-\sum_{\{\nu,h\}}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}P^{\prime}\left(\{\nu\},\{h\}\right)\right]\,,\end{split} \tag{24}
$$
where we used that $\sum_{\{\nu\}}P\left(\{\nu\}\right)=1$ and $\left[P\left(\{\nu\}\right)/P^{\prime}\left(\{\nu\}\right)\right]P^{\prime}\left(\{\nu\},\{h\}\right)=P\left(\{\nu\},\{h\}\right)$. The latter equation follows from the definition of joint probability distributions
$$
P\left(\{\nu\},\{h\}\right)=P\left(\{h\}|\{\nu\}\right)P\left(\{\nu\}\right) \tag{25}
$$
and
$$
P^{\prime}\left(\{\nu\},\{h\}\right)=P^{\prime}\left(\{h\}|\{\nu\}\right)P^{\prime}\left(\{\nu\}\right)\,, \tag{26}
$$
with $P\left(\{h\}|\{\nu\}\right)=P^{\prime}\left(\{h\}|\{\nu\}\right)$ . The conditional distributions $P(\{h\}|\{\nu\})$ and $P^{\prime}(\{h\}|\{\nu\})$ are identical, as the probability of observing a certain hidden state given a visible one is independent of the origin of the visible state in an equilibrated system. In other words, for the conditional distributions in an equilibrated system, it is irrelevant whether the visible state is provided by the environment or generated by a freely running machine. Defining
$$
p_{ij}\coloneqq\sum_{\{\nu,h\}}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}P\left(\{\nu\},\{h\}\right) \tag{27}
$$
and
$$
p_{ij}^{\prime}\coloneqq\sum_{\{\nu,h\}}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}P^{\prime}\left(\{\nu\},\{h\}\right)\,, \tag{28}
$$
we finally obtain Eq. (13).
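As a sanity check of Eq. (13) (our own sketch, assuming NumPy; the tiny network size and the environment distribution are arbitrary choices), the following code enumerates all states of a small Boltzmann machine, computes $p_{ij}$ and $p^{\prime}_{ij}$ exactly from Eqs. (27) and (28), and compares $-(p_{ij}-p^{\prime}_{ij})/T$ with a finite-difference gradient of the KL divergence (12). The entries $w_{ij}$ and $w_{ji}$ are perturbed together, matching the convention of Eq. (22).

```python
import itertools
import numpy as np

rng = np.random.default_rng(5)
T = 1.0
n_v, n_h = 2, 1
n = n_v + n_h

def energy(x, W):
    """Ising-type energy of Eq. (21); biases omitted."""
    return -0.5 * x @ W @ x

def distributions(W):
    """Exact free joint P'({v},{h}), free marginal P'({v}), and clamped joint P({v},{h})."""
    states = np.array(list(itertools.product([-1.0, 1.0], repeat=n)))
    boltz = np.exp(-np.array([energy(x, W) for x in states]) / T)
    P_free = boltz / boltz.sum()                                   # Eq. (20)
    vis_states = np.array(list(itertools.product([-1.0, 1.0], repeat=n_v)))
    P_env = np.array([0.4, 0.1, 0.2, 0.3])                         # chosen environment P({v})
    P_marg = np.array([P_free[np.all(states[:, :n_v] == v, axis=1)].sum()
                       for v in vis_states])                       # Eq. (11)
    P_clamped = np.zeros_like(P_free)
    for k, s in enumerate(states):                                 # P({v},{h}) = P'({h}|{v}) P({v})
        m = np.flatnonzero(np.all(vis_states == s[:n_v], axis=1))[0]
        P_clamped[k] = P_free[k] / P_marg[m] * P_env[m]
    return states, P_free, P_clamped, P_env, P_marg

def kl(W):
    _, _, _, P_env, P_marg = distributions(W)
    return np.sum(P_env * np.log(P_env / P_marg))                  # Eq. (12)

W = 0.1 * rng.standard_normal((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

states, P_free, P_clamped, *_ = distributions(W)
i, j = 0, 2                                                        # one visible and one hidden unit
p_ij = np.sum(states[:, i] * states[:, j] * P_clamped)             # Eq. (27)
pp_ij = np.sum(states[:, i] * states[:, j] * P_free)               # Eq. (28)
analytic = -(p_ij - pp_ij) / T                                     # Eq. (13)

eps = 1e-5                                                         # perturb w_ij and w_ji together
Wp, Wm = W.copy(), W.copy()
Wp[i, j] += eps; Wp[j, i] += eps
Wm[i, j] -= eps; Wm[j, i] -= eps
numeric = (kl(Wp) - kl(Wm)) / (2 * eps)
print(analytic, numeric)                                           # the two values agree
```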
4 Loss landscapes of artificial neural networks
Training an ANN involves minimizing a given loss function such as the KL divergence [see Eq. (12)]. The loss landscapes of ANNs are affected by several factors, including structural properties [86, 87, 115] and a range of implementation attributes [116, 117, 118]. Knowledge of their precise effects on learning performance, however, remains incomplete.
One path to a better understanding of the relationships between ANN structure, implementation attributes, and learning performance is through a more in-depth analysis of the geometric properties of loss landscapes. For instance, Keskar et al. [119] analyze the local curvature around candidate minimizers via the spectrum of the underlying Hessian to characterize the flatness and sharpness of loss minima, and Dinh et al. [120] demonstrate that reparameterizations can render flat minima sharp without affecting generalization properties. Even so, one challenge to the study of geometric properties of loss landscapes is high dimensionality. To meet that challenge, some works propose to visualize the curvature around a given point by projecting the high-dimensional loss function onto a lower-dimensional (and often random) subspace of two or three dimensions [121, 122, 123, 124, 125]. Horoi et al. [125], building on this approach, pursue improvements in learning by dynamically sampling points in projected low-loss regions surrounding local minima during training.
However, visualizing the curvature of high-dimensional loss landscapes relies on accurate projections of curvature properties to lower dimensions. Unfortunately, random projections do not preserve curvature information and thus do not afford accurate low-dimensional representations of it. This argument is given in Sec. 4.1 and illustrated with a simulation example in Sec. 4.2. The principal curvatures in a low-dimensional projection are nevertheless given by functions of weighted ensemble means of the Hessian elements in the original, high-dimensional space, a result also established in Sec. 4.1. Instead of using random projections to visualize loss functions, we propose to analyze projections along dominant Hessian directions associated with the largest-magnitude positive and negative principal curvatures.
4.1 Principal curvature in random projections
To describe the connection between the principal curvature of a loss function $L(\theta)$ with ANN parameters $\theta\in\mathbb{R}^{N}$ and that associated with a lower-dimensional, random projection, we first provide an overview of concepts from differential geometry [126, 127, 128] that are useful for mathematically describing curvature in high-dimensional spaces. We then show the relationship between principal curvature and random projections. Finally, we highlight a relationship between the curvature measures introduced in this section and Hessian trace estimates, which affords an alternative to Hutchinson's method for computing unbiased Hessian trace estimates.
Differential and information geometry concepts
In differential geometry, the principal curvatures are the eigenvalues of the shape operator (or Weingarten map; some authors distinguish between the shape operator and the Weingarten map depending on whether the change of the underlying tangent vector is described in the original manifold or in Euclidean space, see, e.g., chapter 3 in [129]), a linear endomorphism defined on the tangent space $T_{p}$ of $L$ at a point $p$. For the original high-dimensional space, we have $(\theta,L(\theta))\subseteq\mathbb{R}^{N+1}$ and there are $N$ principal curvatures $\kappa_{1}^{\theta}\geq\kappa_{2}^{\theta}\geq\dots\geq\kappa_{N}^{\theta}$. At a non-degenerate critical point $\theta^{*}$ where the gradient $\nabla_{\theta}L$ vanishes, the matrix of the shape operator is given by the Hessian $H_{\theta}$ with elements $(H_{\theta})_{ij}=\partial^{2}L/(\partial\theta_{i}\partial\theta_{j})$ ($i,j\in\{1,...,N\}$). A critical point is degenerate if the Hessian $H_{\theta}$ at this point is singular (i.e., $\mathrm{det}(H_{\theta})=0$). At degenerate critical points, one cannot use the eigenvalues of $H_{\theta}$ to determine if the critical point is a minimum (positive definite $H_{\theta}$) or a maximum (negative definite $H_{\theta}$). Geometrically, at a degenerate critical point, a quadratic approximation fails to capture the local behavior of the function that one wishes to study. Some works refer to the Hessian as the "curvature matrix" [130] or use it to characterize the curvature properties of $L(\theta)$ [122]. In the vicinity of a non-degenerate critical point $\theta^{*}$, the eigenvalues of the Hessian $H_{\theta}$ are the principal curvatures and describe the loss function in the eigenbasis of $H_{\theta}$ according to
$$
L(\theta^{*}+\Delta\theta)=L(\theta^{*})+\frac{1}{2}\sum_{i=1}^{N}\kappa_{i}^{\theta}\Delta\theta_{i}^{2}\,. \tag{29}
$$
The Morse lemma states that, if a critical point $\theta^{*}$ of $L(\theta)$ is non-degenerate, then there exists a chart $(\tilde{\theta}_{1},...,\tilde{\theta}_{N})$ in a neighborhood of $\theta^{*}$ such that
$$
L(\tilde{\theta})=-\tilde{\theta}^{2}_{1}-\cdots-\tilde{\theta}^{2}_{i}+\tilde{\theta}^{2}_{i+1}+\cdots+\tilde{\theta}^{2}_{N}+L(\theta^{*})\,, \tag{30}
$$
where $\tilde{\theta}_{i}(\theta^{*})=0$ for $i\in\{1,...,N\}$. The loss function $L(\tilde{\theta})$ in Eq. (30) is decreasing along the first $i$ directions and increasing along the remaining directions $i+1,\dots,N$. Further, the index $i$ of a critical point $\theta^{*}$ is the number of negative eigenvalues of the Hessian $H_{\theta}$ at that point.
In the standard basis, the Hessian is
$$
H_{\theta}\coloneqq\nabla_{\theta}\nabla_{\theta}L(\theta)=\begin{pmatrix}\frac{\partial^{2}L}{\partial\theta_{1}^{2}}&\cdots&\frac{\partial^{2}L}{\partial\theta_{1}\partial\theta_{N}}\\
\vdots&\ddots&\vdots\\
\frac{\partial^{2}L}{\partial\theta_{N}\partial\theta_{1}}&\cdots&\frac{\partial^{2}L}{\partial\theta_{N}^{2}}\end{pmatrix}\,. \tag{31}
$$
Random projections
To graphically explore an $N$ -dimensional loss function $L$ around a critical point $\theta^{*}$ , one may wish to work in a lower-dimensional representation. For example, a two-dimensional projection of $L$ around $\theta^{*}$ is provided by
$$
L(\theta^{*}+\alpha\eta+\beta\delta)\,, \tag{32}
$$
where the parameters $\alpha,\beta\in\mathbb{R}$ scale the directions $\eta,\delta\in\mathbb{R}^{N}$. The corresponding graph representation is $(\alpha,\beta,L(\alpha,\beta))\subseteq\mathbb{R}^{3}$.
In high-dimensional spaces, there exist vastly more almost-orthogonal directions than orthogonal ones. In fact, if the dimension of our space is large enough, random vectors will, with high probability, be sufficiently close to orthogonal [131]. Following this result, many related works [122, 124, 125] use random Gaussian directions with independent and identically distributed vector elements $\eta_{i},\delta_{i}\sim\mathcal{N}(0,1)$ ($i\in\{1,...,N\}$).
The scalar product of random Gaussian vectors $\eta,\delta$ can be written as a sum of differences of chi-squared distributed random variables because
$$
\sum_{i=1}^{N}\eta_{i}\delta_{i}=\sum_{i=1}^{N}\frac{1}{4}(\eta_{i}+\delta_{i})^{2}-\frac{1}{4}(\eta_{i}-\delta_{i})^{2}=\sum_{i=1}^{N}\frac{1}{2}X_{i}^{2}-\frac{1}{2}Y_{i}^{2}\,, \tag{33}
$$
where $X_{i},Y_{i}\sim\mathcal{N}(0,1)$ .
Notice that $\eta$ and $\delta$ are almost orthogonal with high probability, which can be shown using a concentration inequality for chi-squared distributed random variables. For further details, see Ref. 105.
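A quick numerical illustration of this near-orthogonality (our own sketch, assuming NumPy): the cosine of the angle between two random Gaussian directions concentrates around zero as the dimension $N$ grows.

```python
import numpy as np

rng = np.random.default_rng(6)
for N in (10, 1_000, 100_000):
    eta, delta = rng.standard_normal(N), rng.standard_normal(N)
    cos = eta @ delta / (np.linalg.norm(eta) * np.linalg.norm(delta))
    print(f"N = {N:>7d}: cos(angle) = {cos:+.4f}")
# the cosine shrinks roughly like 1/sqrt(N), i.e., the directions become almost orthogonal
```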
Principal curvature
With the form of random Gaussian projections in hand, we now analyze the principal curvatures in both the original and lower-dimensional spaces. The Hessian associated with the two-dimensional loss projection (32) is
$$
\displaystyle\begin{split}H_{\alpha,\beta}&=\begin{pmatrix}\frac{\partial^{2}L}{\partial\alpha^{2}}&\frac{\partial^{2}L}{\partial\alpha\partial\beta}\\
\frac{\partial^{2}L}{\partial\beta\partial\alpha}&\frac{\partial^{2}L}{\partial\beta^{2}}\\
\end{pmatrix}\\
&=\begin{pmatrix}\sum_{i,j}\eta_{i}\eta_{j}\frac{\partial^{2}L}{\partial\theta_{i}\partial\theta_{j}}&\sum_{i,j}\eta_{i}\delta_{j}\frac{\partial^{2}L}{\partial\theta_{i}\partial\theta_{j}}\\
\sum_{i,j}\eta_{i}\delta_{j}\frac{\partial^{2}L}{\partial\theta_{i}\partial\theta_{j}}&\sum_{i,j}\delta_{i}\delta_{j}\frac{\partial^{2}L}{\partial\theta_{i}\partial\theta_{j}}\\
\end{pmatrix}\\
&=\begin{pmatrix}(H_{\theta})_{ij}\eta^{i}\eta^{j}&(H_{\theta})_{ij}\eta^{i}\delta^{j}\\
(H_{\theta})_{ij}\eta^{i}\delta^{j}&(H_{\theta})_{ij}\delta^{i}\delta^{j}\\
\end{pmatrix}\,,\end{split} \tag{34}
$$
where we use Einstein notation in the last equality.
Because the elements of $\delta,\eta$ are distributed according to a standard normal distribution, the second derivatives of the loss function $L$ in Eq. (34) have prefactors that are products of standard normal variables and, hence, can be expressed as sums of chi-squared distributed random variables as in Eq. (33). To summarize, elements of $H_{\alpha,\beta}$ are sums of second derivatives of $L$ in the original space weighted with chi-squared distributed prefactors.
The principal curvatures $\kappa_{\pm}^{\alpha,\beta}$ (i.e., the eigenvalues of $H_{\alpha,\beta}$) are
$$
\kappa_{\pm}^{\alpha,\beta}=\frac{1}{2}\left(A+C\pm\sqrt{4B^{2}+(A-C)^{2}}\right)\,, \tag{35}
$$
where $A=(H_{\theta})_{ij}\eta^{i}\eta^{j}$ , $B=(H_{\theta})_{ij}\eta^{i}\delta^{j}$ , and $C=(H_{\theta})_{ij}\delta^{i}\delta^{j}$ . To the best of our knowledge, a closed, analytic expression for the distribution of the quantities $A,B,C$ is not yet known [132, 133, 134, 135].
Returning to principal curvature, since $\sum_{i,j}a_{ij}\eta^{i}\eta^{j}=\sum_{i}a_{ii}\eta^{i}\eta^{i}+\sum_{i\neq j}a_{ij}\eta^{i}\eta^{j}$ ($a_{ij}\in\mathbb{R}$), we find that $\mathds{E}[A]=\mathds{E}[C]={(H_{\theta})^{i}}_{i}$ and $\mathds{E}[B]=0$, where ${(H_{\theta})^{i}}_{i}\equiv\mathrm{tr}(H_{\theta})=\sum_{i=1}^{N}\kappa_{i}^{\theta}$. The expected values of the quantities $A$, $B$, and $C$ correspond to ensemble means (43) in the limit $S\to\infty$, where $S$ is the number of independent realizations of the underlying random variable. To show that the expected values of $a_{ij}\eta^{i}\eta^{j}$ ($i\neq j$) or $a_{ij}\eta^{i}\delta^{j}$ vanish, one can either invoke independence of $\eta^{i},\eta^{j}$ ($i\neq j$) and $\eta^{i},\delta^{j}$ or transform both products into corresponding differences of two chi-squared random variables with the same mean [see Eq. (33)].
Hence, the expected, dimension-reduced Hessian (34) is
$$
\mathds{E}[H_{\alpha,\beta}]=\begin{pmatrix}{(H_{\theta})^{i}}_{i}&0\\
0&{(H_{\theta})^{i}}_{i}\\
\end{pmatrix}\,. \tag{36}
$$
The corresponding eigenvalue (or principal curvature) $\bar{\kappa}^{\alpha,\beta}$ is therefore given by the sum over all principal curvatures in the original space (i.e., $\bar{\kappa}^{\alpha,\beta}=\sum_{i=1}^{N}\kappa_{i}^{\theta}$). Hence, the value of the principal curvature $\bar{\kappa}^{\alpha,\beta}$ in the expected dimension-reduced space will be either positive (if the positive principal curvatures in the original space dominate), negative (if the negative principal curvatures in the original space dominate), or close to zero (if positive and negative principal curvatures in the original space cancel each other out). As a result, saddle points will not appear as such in the expected random projection.
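The following Monte Carlo sketch (our own, assuming NumPy; the synthetic Hessian is an arbitrary symmetric matrix with eigenvalues of both signs) illustrates Eq. (36): the ensemble mean of the projected Hessian (34) approaches $\mathrm{tr}(H_{\theta})$ times the identity, so the sign structure of the saddle is lost.

```python
import numpy as np

rng = np.random.default_rng(7)
N, S = 50, 20_000

# synthetic symmetric "Hessian" with eigenvalues of both signs (a saddle);
# the diagonal shift makes tr(H_theta) clearly nonzero
A = rng.standard_normal((N, N))
H_theta = (A + A.T) / 2 + 2.0 * np.eye(N)

H_proj = np.zeros((2, 2))
for _ in range(S):
    eta, delta = rng.standard_normal(N), rng.standard_normal(N)
    H_proj += np.array([[eta @ H_theta @ eta,   eta @ H_theta @ delta],
                        [eta @ H_theta @ delta, delta @ H_theta @ delta]])  # Eq. (34)
H_proj /= S

print("ensemble-mean projected Hessian:\n", np.round(H_proj, 2))
print("tr(H_theta) =", round(np.trace(H_theta), 2))  # compare with the diagonal entries, Eq. (36)
```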
In addition to the connection between $\bar{\kappa}^{\alpha,\beta}$ and the principal curvatures $\kappa_{i}^{\theta}$ , we now provide an overview of additional mathematical relations between different curvature measures that are useful to quantify curvature properties of high-dimensional loss functions and their two-dimensional random projections.
Table 1: Quantities to characterize curvature.

| Symbol | Definition |
| --- | --- |
| $H_{\theta}\in\mathbb{R}^{N\times N}$ | Hessian in original loss space |
| $\kappa_{i}^{\theta}\in\mathbb{R}$ | principal curvatures in original loss space with $i\in\{1,...,N\}$ (i.e., the eigenvalues of $H_{\theta}$) |
| $H_{\alpha,\beta}\in\mathbb{R}^{2\times 2}$ | Hessian in a two-dimensional projection of an $N$-dimensional loss function |
| $\kappa_{\pm}^{\alpha,\beta}\in\mathbb{R}$ | principal curvatures in a two-dimensional loss projection (i.e., the eigenvalues of $H_{\alpha,\beta}$) |
| $\bar{\kappa}^{\alpha,\beta}\in\mathbb{R}$ | principal curvature in the expected, two-dimensional loss projection (i.e., the eigenvalues of $\mathbb{E}[H_{\alpha,\beta}]$) |
| $H\in\mathbb{R}$ | mean curvature (i.e., $\sum_{i=1}^{N}\kappa_{i}^{\theta}/N=\bar{\kappa}^{\alpha,\beta}/N$) |
Invoking Eq. (35), we can relate $\bar{\kappa}^{\alpha,\beta}$ to $\mathrm{tr}(H_{\theta})$ and $\kappa_{\pm}^{\alpha,\beta}$. Because $\kappa_{+}^{\alpha,\beta}+\kappa_{-}^{\alpha,\beta}=A+C$, we have
$$
\mathrm{tr}(H_{\theta})=\bar{\kappa}^{\alpha,\beta}=\sum_{i=1}^{N}\kappa_{i}^{\theta}=\frac{1}{2}\left(\mathbb{E}[\kappa_{-}^{\alpha,\beta}]+\mathbb{E}[\kappa_{+}^{\alpha,\beta}]\right)\,. \tag{37}
$$
The mean curvature $H$ in the original space is related to $\bar{\kappa}^{\alpha,\beta}$ via
$$
H=\frac{1}{N}\bar{\kappa}^{\alpha,\beta}=\frac{1}{N}\sum_{i=1}^{N}\kappa_{i}^{\theta}\,. \tag{38}
$$
We summarize the definitions of the employed Hessians and curvature measures in Tab. 1.
The appeal of random projections is that pairwise distances between points in a high-dimensional space can be nearly preserved by a lower-dimensional linear embedding, affording a low-dimensional representation of mean and variance information with minimal distortion [136]. The relationship between random Gaussian directions and principal curvature is less straightforward. Our results show that the principal curvatures $\kappa_{\pm}^{\alpha,\beta}$ in a two-dimensional loss projection are weighted averages of the Hessian elements $(H_{\theta})_{ij}$ in the original space, not weighted averages of the principal curvatures $\kappa_{i}^{\theta}$ as claimed by Ref. 122. Similar arguments apply to projections with dimension larger than 2.
Hessian trace estimates
Finally, we point to a connection between curvature measures and Hessian trace estimates. A common way of estimating $\mathrm{tr}(H_{\theta})$ without explicitly computing all eigenvalues of $H_{\theta}$ is based on Hutchinson's method [137] and randomized numerical linear algebra [138, 139]. The basic idea behind this approach is to (i) use a random vector $z\in\mathbb{R}^{N}$ with elements $z_{i}$ that are distributed according to a distribution function with zero mean and unit variance (e.g., a Rademacher distribution with $\Pr\left(z_{i}=\pm 1\right)=1/2$), and (ii) compute $z^{\top}H_{\theta}z$, an unbiased estimator of $\mathrm{tr}(H_{\theta})$. That is,
$$
\mathrm{tr}(H_{\theta})=\mathbb{E}[z^{\top}H_{\theta}z]\,. \tag{39}
$$
Recall that Eq. (37) shows that the principal curvature of the expected random loss projection, $\bar{\kappa}^{\alpha,\beta}$, is equal to $\mathrm{tr}(H_{\theta})$. Instead of estimating $\mathrm{tr}(H_{\theta})$ using Hutchinson's method (39), an alternative Hutchinson-type estimate of this quantity is provided by the mean of the expected values of $\kappa_{-}^{\alpha,\beta}$ and $\kappa_{+}^{\alpha,\beta}$ [see Eq. (37)].
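A short numerical comparison (our own sketch, assuming NumPy; the synthetic Hessian is arbitrary) of Hutchinson's estimator (39) and the curvature-based estimate of Eq. (37):

```python
import numpy as np

rng = np.random.default_rng(8)
N, S = 50, 5_000
A = rng.standard_normal((N, N))
H_theta = (A + A.T) / 2 + 2.0 * np.eye(N)     # synthetic symmetric Hessian

# (i) Hutchinson's estimator with Rademacher probe vectors, Eq. (39)
z = rng.choice([-1.0, 1.0], size=(S, N))
hutchinson = np.mean(np.einsum("si,ij,sj->s", z, H_theta, z))

# (ii) curvature-based estimate: tr(H_theta) = (E[k_-] + E[k_+]) / 2, Eq. (37)
curvatures = []
for _ in range(S):
    eta, delta = rng.standard_normal(N), rng.standard_normal(N)
    H_ab = np.array([[eta @ H_theta @ eta,   eta @ H_theta @ delta],
                     [eta @ H_theta @ delta, delta @ H_theta @ delta]])
    curvatures.append(np.linalg.eigvalsh(H_ab))
curvature_based = 0.5 * np.sum(np.mean(curvatures, axis=0))

print(np.trace(H_theta), hutchinson, curvature_based)
```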
4.2 Extracting curvature information
We now study two examples that will help build intuition. In the first example, we study a critical point $\theta^{*}$ of an $N$-dimensional loss function $L(\theta)$ for which (i) all principal curvatures have the same magnitude and (ii) the number of positive curvature directions is equal to the number of negative curvature directions. In this example, saddles can be correctly detected by ensemble means, but success depends on the averaging process used. In the second example, we use a loss function associated with an unequal number of negative and positive curvature directions. For the different curvature measures derived in Sec. 4.1, we find that random projections cannot identify the underlying saddle point. At the end of this section, we use the two example loss functions to discuss how curvature-based Hessian trace estimates relate to those obtained with Hutchinson's original method.
Equal number of curvature directions
The loss function of our first example is
$$
L(\theta)=\frac{1}{2}\theta_{2n+1}\left(\sum_{i=1}^{n}\theta_{i}^{2}-\theta_{i+n}^{2}\right)\,,\quad n\in\mathbb{Z}_{+}\,, \tag{40}
$$
where we set $N=2n+1$ . A critical point $\theta^{*}$ of the loss function (40) satisfies
$$
(\nabla_{\theta}L)(\theta^{*})=\left(\begin{array}[]{c}\theta^{*}_{1}\theta^{*}_{2n+1}\\
\vdots\\
\theta^{*}_{n}\theta^{*}_{2n+1}\\
-\theta^{*}_{n+1}\theta^{*}_{2n+1}\\
\vdots\\
-\theta^{*}_{2n}\theta^{*}_{2n+1}\\
\frac{1}{2}\left(\sum_{i=1}^{n}{\theta^{*}_{i}}^{2}-{\theta^{*}_{i+n}}^{2}\right)\end{array}\right)=0\,. \tag{41}
$$
The Hessian at the critical point $\theta^{*}=(\theta^{*}_{1},...,\theta^{*}_{2n},\theta^{*}_{2n+1})=(0,...,0,1)$ is
$$
H_{\theta}=\mathrm{diag}(\underbrace{1,\dots,1}_{n~\mathrm{times}},\underbrace{-1,\dots,-1}_{n~\mathrm{times}},0)\,. \tag{42}
$$
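A small numerical check of Eq. (42) (our own sketch, assuming NumPy; the finite-difference step size and $n$ are arbitrary): a central-difference Hessian of the loss (40) at the critical point $(0,\dots,0,1)$ reproduces the diagonal structure above.

```python
import numpy as np

def loss(theta, n):
    """Example loss of Eq. (40) with N = 2n + 1 parameters."""
    return 0.5 * theta[2 * n] * np.sum(theta[:n] ** 2 - theta[n:2 * n] ** 2)

def numerical_hessian(f, theta, eps=1e-5):
    """Central finite-difference Hessian (for small N only)."""
    N = len(theta)
    H = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            t = theta.copy(); t[i] += eps; t[j] += eps; fpp = f(t)
            t = theta.copy(); t[i] += eps; t[j] -= eps; fpm = f(t)
            t = theta.copy(); t[i] -= eps; t[j] += eps; fmp = f(t)
            t = theta.copy(); t[i] -= eps; t[j] -= eps; fmm = f(t)
            H[i, j] = (fpp - fpm - fmp + fmm) / (4 * eps ** 2)
    return H

n = 3
theta_star = np.zeros(2 * n + 1)
theta_star[-1] = 1.0                       # critical point (0, ..., 0, 1)
H = numerical_hessian(lambda t: loss(t, n), theta_star)
print(np.round(H, 3))                      # approx diag(1, 1, 1, -1, -1, -1, 0), Eq. (42)
```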
Figure 7: Convergence of the ensemble mean (43) of Hessian elements and curvature measures as a function of the number of random projections $S$ . (a,c) The deviation of the ensemble means $\langle(H_{\alpha,\beta})_{ij}\rangle$ ($i,j\in\{1,2\}$) of Hessian elements from the corresponding expected values as a function of $S$ . Notice that the expected value of the diagonal elements $(H_{\alpha,\beta})_{11}$ and $(H_{\alpha,\beta})_{22}$ is equal to $\bar{\kappa}^{\alpha,\beta}$ (i.e., to the sum of principal curvatures in the original space) [see Eqs. (36) and (37)]. A relatively large number of random projections between $10^{3}$ and $10^{4}$ is required to keep the deviations at values smaller than about 2–4. (b,d) The ensemble means $\langle\kappa^{\alpha,\beta}_{±}\rangle$ [see Eq. (35)] and $\langle\tilde{\kappa}^{\alpha,\beta}_{±}\rangle$ [see Eq. (44)] as a function of $S$ . Dashed grey lines represent $\bar{\kappa}^{\alpha,\beta}=\mathrm{tr}(H_{\theta})$ . In panels (a,b) and (c,d), the $N$ -dimensional loss functions are given by Eqs. (40) and (45), respectively. We evaluate the corresponding Hessians (42) and (46) at the saddle point $\theta^{*}=(\theta^{*}_{1},...,\theta^{*}_{2n},\theta^{*}_{2n+1})=(0,...,0,1)$ . In both loss functions, we set $n=500$ and in loss function (45) we set $\tilde{n}=800$ .
Because $H_{\theta}$ has positive and negative eigenvalues, the critical point is a saddle. The corresponding principal curvatures are $\kappa_{i}^{\theta}\in\{-1,1\}$ ($i\in\{1,...,N-1\}$) and $\kappa^{\theta}_{N}=0$ . In this example, the mean curvature $H$ , as defined in Eq. (38), is equal to 0. According to Eq. (36), the principal curvature $\bar{\kappa}^{\alpha,\beta}$ associated with the expected, dimension-reduced Hessian $H_{\alpha,\beta}$ is also equal to 0, erroneously indicating an apparently flat loss landscape if one were to use $\bar{\kappa}^{\alpha,\beta}$ as the main measure of curvature. To compare the convergence of different curvature measures as a function of the number of loss projections $S$ , we now study the ensemble mean
$$
\langle X\rangle=\frac{1}{S}\sum_{k=1}^{S}X^{(k)} \tag{43}
$$
of different quantities of interest $X$ such as Hessian elements and principal curvature measures in dimension-reduced space. Here, $X^{(k)}$ is the $k$ -th realization (or sample) of $X$ .
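A minimal sketch of how such ensemble means can be computed for the Hessian (42) is given below. It assumes that the dimension-reduced Hessian $H_{\alpha,\beta}$ has entries $\eta^{\top}H_{\theta}\eta$ , $\eta^{\top}H_{\theta}\delta$ , and $\delta^{\top}H_{\theta}\delta$ for random Gaussian directions $\eta,\delta$ , and that $\kappa_{±}^{\alpha,\beta}$ are its eigenvalues; the variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
H_diag = np.concatenate([np.ones(n), -np.ones(n), [0.0]])   # Hessian (42)
N = H_diag.size

S = 10_000
A, B, C, kappa_plus, kappa_minus = [], [], [], [], []
for _ in range(S):
    eta = rng.standard_normal(N)
    delta = rng.standard_normal(N)
    a = eta @ (H_diag * eta)          # (H_{alpha,beta})_{11}
    b = eta @ (H_diag * delta)        # (H_{alpha,beta})_{12} = (H_{alpha,beta})_{21}
    c = delta @ (H_diag * delta)      # (H_{alpha,beta})_{22}
    A.append(a); B.append(b); C.append(c)
    disc = np.sqrt(4 * b ** 2 + (a - c) ** 2)
    kappa_plus.append(0.5 * (a + c + disc))    # principal curvatures of the
    kappa_minus.append(0.5 * (a + c - disc))   # 2x2 dimension-reduced Hessian

# Ensemble means taken after computing the curvatures [cf. Eq. (43)] ...
print(np.mean(kappa_plus), np.mean(kappa_minus))
# ... versus curvatures obtained from ensemble-averaged Hessian elements [cf. Eq. (44)].
Am, Bm, Cm = np.mean(A), np.mean(B), np.mean(C)
disc = np.sqrt(4 * Bm ** 2 + (Am - Cm) ** 2)
print(0.5 * (Am + Cm + disc), 0.5 * (Am + Cm - disc))
# Fraction of projections in which both curvatures share a sign [cf. Fig. 8 (a)].
print(np.mean(np.array(kappa_plus) * np.array(kappa_minus) > 0))
```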
Figure 8: Distribution of principal curvatures $\kappa_{-}^{\alpha,\beta}$ (red bars) and $\kappa_{+}^{\alpha,\beta}$ (black bars). In panels (a) and (b), the loss functions are given by Eqs. (40) and (45), respectively. We evaluate the corresponding Hessians (42) and (46) at the saddle point $\theta^{*}=(\theta^{*}_{1},...,\theta^{*}_{2n},\theta^{*}_{2n+1})=(0,...,0,1)$ . In both loss functions, we set $n=500$ and in loss function (45) we set $\tilde{n}=800$ . In panel (a), the probability $\Pr(\kappa_{+}^{\alpha,\beta}\kappa_{-}^{\alpha,\beta}>0)$ that the critical point in the lower-dimensional, random projection does not appear as a saddle is about 0.3, and it is 1 in panel (b). Histograms are based on 10,000 random projections used to compute $\kappa_{±}^{\alpha,\beta}$ . Solid grey lines are Gaussian approximations of the empirical distributions.
We first study the dependence of the ensemble means $\langle(H_{\alpha,\beta})_{ij}\rangle$ ($i,j\in\{1,2\}$) of elements of the dimension-reduced Hessian $H_{\alpha,\beta}$ on the number of samples $S$ . According to Eq. (36), the diagonal elements of $\mathds{E}[H_{\alpha,\beta}]$ are proportional to the mean curvature of the original high-dimensional space and are thus useful for examining curvature properties of high-dimensional loss functions. Figure 7 (a) shows the convergence of the ensemble means $\langle(H_{\alpha,\beta})_{ij}\rangle$ toward the expected values $\mathds{E}[(H_{\alpha,\beta})_{ij}]$ as a function of $S$ . Note that $\mathds{E}[(H_{\alpha,\beta})_{ij}]=0$ for all $i,j$ . For a few dozen loss projections, the deviations of some of the ensemble means from the corresponding expected values exceed 20. A relatively large number of loss projections $S$ between $10^{3}$ and $10^{4}$ is required to keep these deviations smaller than about 2–4. The solid black and red lines in Fig. 7 (b) show, respectively, the ensemble means $\langle\kappa_{±}^{\alpha,\beta}\rangle$ and
$$
\langle\tilde{\kappa}_{\pm}^{\alpha,\beta}\rangle=\frac{1}{2}\left(\langle A\rangle+\langle C\rangle\pm\sqrt{4\langle B\rangle^{2}+(\langle A\rangle-\langle C\rangle)^{2}}\right) \tag{44}
$$
as a function of $S$ . Since $\mathds{E}[A]=\mathds{E}[C]=\mathrm{tr}(H_{\theta})$ and $\mathds{E}[B]=0$ [see Eq. (36)], we have $\langle\tilde{\kappa}_{±}^{\alpha,\beta}\rangle\to\bar{\kappa}^{\alpha,\beta}$ as $S\to\infty$ . In the current example, the ensemble means $\langle\tilde{\kappa}_{±}^{\alpha,\beta}\rangle$ thus approach $\bar{\kappa}^{\alpha,\beta}=0$ for large numbers of samples $S$ , as shown by the dashed red lines in Fig. 7 (b). The ensemble means $\langle\kappa_{±}^{\alpha,\beta}\rangle$ , in contrast, converge towards values of opposite sign, indicating a saddle point.
For a sample size of $S=10^{4}$ , we show the distribution of the principal curvatures $\kappa_{±}^{\alpha,\beta}$ in Fig. 8 (a). We observe that the distributions are plausibly Gaussian. We also calculate the probability $\Pr(\kappa_{+}^{\alpha,\beta}\kappa_{-}^{\alpha,\beta}>0)$ that the critical point in the lower-dimensional, random projection does not appear as a saddle (i.e., $\kappa_{+}^{\alpha,\beta}\kappa_{-}^{\alpha,\beta}>0$ ). For the example shown in Fig. 8 (a), we find that $\Pr(\kappa_{+}^{\alpha,\beta}\kappa_{-}^{\alpha,\beta}>0)\approx 0.3$ . That is, in about 30% of the simulated projections, the lower-dimensional loss landscape wrongly indicates that the critical point is not a saddle.
Our first example, which is based on the loss function (40), shows that the principal curvatures in the lower-dimensional representation of $L(\theta)$ may capture the saddle behavior in the original space if one computes ensemble means $\langle\kappa_{±}^{\alpha,\beta}\rangle$ in the lower-dimensional space [see Fig. 7 (b)]. However, if one first calculates ensemble means of the elements of the dimension-reduced Hessian $H_{\alpha,\beta}$ to infer $\langle\tilde{\kappa}_{±}^{\alpha,\beta}\rangle$ , the loss landscape appears to be flat in this example. We thus conclude that different ways of computing ensemble means (either before or after calculating the principal curvatures) may lead to different results with respect to the “flatness” of a dimension-reduced loss landscape.
Unequal number of curvature directions
In the next example, we will show that random projections cannot identify certain saddle points regardless of the underlying averaging process. We consider the loss function
$$
L(\theta)=\frac{1}{2}\theta_{2n+1}\left(\sum_{i=1}^{\tilde{n}}\theta_{i}^{2}-\sum_{i=\tilde{n}+1}^{2n}\theta_{i}^{2}\right)\,,\quad n\in\mathbb{Z}_{+}\,,n<\tilde{n}\leq 2n\,, \tag{45}
$$
where we use the convention $\sum_{i=a}^{b}(\cdot)=0$ if $a>b$ . The Hessian at the critical point $(\theta^{*}_{1},...,\theta^{*}_{2n},\theta^{*}_{2n+1})=(0,...,0,1)$ is
$$
H_{\theta}=\mathrm{diag}(\underbrace{1,\dots,1}_{\tilde{n}~\mathrm{times}},\underbrace{-1,\dots,-1}_{2n-\tilde{n}~\mathrm{times}},0)\,. \tag{46}
$$
As in the previous example, the critical point is again a saddle, but the mean curvature is ${H=2(\tilde{n}-n)/N>0}$ . In the following numerical experiments, we set $n=500$ and $\tilde{n}=800$ . Figure 7 (c) shows that the ensemble means $\langle(H_{\alpha,\beta})_{ij}\rangle$ converge towards the expected value $\mathds{E}[(H_{\alpha,\beta})_{ij}]$ as the number of samples increases. We again observe that a relatively large number of random loss projections $S$ between $10^{3}$ and $10^{4}$ is required to keep the deviations of ensemble means from their corresponding expected values small. Because of the dominance of positive principal curvatures $\kappa_{i}^{\theta}$ in the original space, the corresponding ensemble means of principal curvatures (i.e., $\langle\kappa_{±}^{\alpha,\beta}\rangle$ , $\langle\tilde{\kappa}_{±}^{\alpha,\beta}\rangle$ ) in the lower-dimensional representation approach positive values [see Fig. 7 (d)]. The distribution of $\kappa_{±}^{\alpha,\beta}$ indicates that the probability of observing a saddle in the lower-dimensional loss landscape is vanishingly small [see Fig. 8 (b)]. In this second example, both ways of computing ensemble means, before and after calculating the lower-dimensional principal curvatures, mistakenly suggest that the saddle in the original space is a minimum in dimension-reduced space.
To summarize, for both loss functions (40) and (45), the saddle point $\theta^{*}=(0,...,0,1)$ of the original loss function $L(\theta)$ is often misrepresented in lower-dimensional representations $L(\theta^{*}+\alpha\eta+\beta\delta)$ if random directions are used. Depending on (i) the employed curvature measure and (ii) the index of the underlying Hessian $H_{\theta}$ in the original space, the saddle $\theta^{*}=(0,...,0,1)$ may erroneously appear as a minimum, a maximum, or an almost flat region.
If the critical point were a minimum or maximum (i.e., a critical point associated with a positive definite or negative definite Hessian $H_{\theta}$ ), it would be correctly represented in the corresponding expected random projection because the sign of its principal curvature $\bar{\kappa}^{\alpha,\beta}$ , which is proportional to the sum of all eigenvalues $\kappa_{i}^{\theta}$ of the Hessian $H_{\theta}$ in the original loss space, would be equal to the sign of the principal curvatures $\kappa_{i}^{\theta}$ . However, such points are scarce in high-dimensional loss spaces [140, 141].
Finally, because random projections cannot reliably identify saddle information, it does not appear advisable to use “flatness” around a critical point in a lower-dimensional random loss projection as a measure of generalization error [122].
Curvature-based Hessian trace estimation
Figure 9: Estimating the trace of the Hessian $H_{\theta}$ . In panels (a) and (b), the loss functions are given by Eqs. (40) and (45), respectively. We evaluate the corresponding Hessians (42) and (46) at the saddle point $\theta^{*}=(\theta^{*}_{1},...,\theta^{*}_{2n},\theta^{*}_{2n+1})=(0,...,0,1)$ . In both loss functions, we set $n=500$ and in loss function (45) we set $\tilde{n}=800$ . Solid black and dash-dotted red lines represent Hutchinson [ $\langle z^{\top}H_{\theta}z\rangle$ ; see Eq. (39)] and curvature-based ( $\langle\kappa^{\alpha}\rangle$ ) estimates of $\mathrm{tr}(H_{\theta})$ , respectively. We compute ensemble means $\langle\cdot\rangle$ as defined in Eq. (43) for different numbers of random projections $S$ . The trace estimates in panels (a) and (b), respectively, converge towards the true trace values $\mathrm{tr}(H_{\theta})=0$ and $\mathrm{tr}(H_{\theta})=600$ , which are indicated by dashed grey lines. In both methods, the same random vectors with elements that are distributed according to a standard normal distribution are used. For the curvature-based estimation of $\mathrm{tr}(H_{\theta})$ , we perform least-squares fits of $L(\theta^{*}+\alpha\eta)$ over an interval $\alpha\in[-0.05,0.05]$ .
In accordance with Sec. 1, we now use the loss functions (40) and (45) to compare the convergence behavior between the original Hutchinson method (39) and the curvature-based trace estimation (37). Instead of two random directions, we use one random Gaussian direction $\eta$ and perform least-squares fits for 50 equidistant values of $\alpha$ in the interval $[-0.05,0.05]$ to extract estimates of $\mathrm{tr}(H_{\theta})$ from $L(\theta^{*}+\alpha\eta)$ . We use the same random Gaussian directions in Hutchinsonâs method.
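A minimal sketch of this curvature-based estimate for the loss (45) is shown below; it assumes the quadratic approximation $L(\theta^{*}+\alpha\eta)\approx L(\theta^{*})+\tfrac{\alpha^{2}}{2}\eta^{\top}H_{\theta}\eta$ on the fitting interval, so that twice the fitted quadratic coefficient plays the role of $z^{\top}H_{\theta}z$ in Hutchinson's method. Function and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_tilde = 500, 800
N = 2 * n + 1

def loss(theta):
    """Loss of Eq. (45)."""
    return 0.5 * theta[-1] * (np.sum(theta[:n_tilde] ** 2)
                              - np.sum(theta[n_tilde:2 * n] ** 2))

theta_star = np.zeros(N)
theta_star[-1] = 1.0
alphas = np.linspace(-0.05, 0.05, 50)   # 50 equidistant values of alpha

def curvature_trace_sample(eta):
    """Fit L(theta* + alpha*eta) with a quadratic; 2*c2 estimates eta^T H eta."""
    values = [loss(theta_star + a * eta) for a in alphas]
    c2 = np.polyfit(alphas, values, deg=2)[0]    # leading quadratic coefficient
    return 2.0 * c2

S = 1000
estimates = [curvature_trace_sample(rng.standard_normal(N)) for _ in range(S)]
print(np.mean(estimates))   # should approach tr(H_theta) = n_tilde - (2*n - n_tilde) = 600
```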
Figure 9 shows how the Hutchinson and curvature-based trace estimates converge towards the true trace values, 0 for the loss function (40) and 600 for the loss function (45) with $n=500$ and $\tilde{n}=800$ . Given that we use the same random vectors in both methods, their convergence behavior towards the true trace value is similar.
With the curvature-based method, one can produce Hutchinson-type trace estimates without computing Hessian-vector products. However, it requires the user to specify an appropriate interval for $\alpha$ so that the mean curvature can be properly estimated in a quadratic-approximation regime. It also requires a sufficiently large number of points in this interval. Therefore, it may be less accurate than the original Hutchinson method.
3 Hessian directions
Figure 10: Dimensionality-reduced loss $L(\theta^{*}+\alpha\eta+\beta\delta)$ of Eq. (45) with $n=900,\tilde{n}=1000$ for different directions $\eta,\delta$ . (a–c) The directions $\eta,\delta$ correspond to eigenvectors of the Hessian $H_{\theta}$ of Eq. (45). If the eigenvalues associated with $\eta,\delta$ have different signs, the corresponding loss landscape is a saddle as depicted in panel (a). If the eigenvalues associated with $\eta,\delta$ have the same sign, the corresponding loss landscape is either a minimum (both signs are positive) as shown in panel (b) or a maximum (both signs are negative) as shown in panel (c). Because there is an excess of $\tilde{n}-n=100$ positive eigenvalues in $H_{\theta}$ , a projection onto a dimension-reduced space that is spanned by the random directions $\eta,\delta$ is often associated with an apparently convex loss landscape. An example of such an apparent minimum is shown in panel (d).
Given the described shortcomings of random projections in identifying saddles of high-dimensional loss functions, we suggest using Hessian directions (i.e., the eigenbasis of $H_{\theta}$ ) as the directions $\eta,\delta$ in $L(\theta^{*}+\alpha\eta+\beta\delta)$ .
For Eq. (45) with $n=900,\tilde{n}=1000$ , we show projections along different Hessian directions in Fig. 10 (a–c). We observe that different Hessian directions indicate different types of critical points in dimension-reduced space. If the eigenvalues associated with the Hessian directions $\eta,\delta$ have different signs, the corresponding lower-dimensional loss landscape is a saddle [see Fig. 10 (a)]. If both eigenvalues have the same sign, the loss landscape is either a minimum [see Fig. 10 (b): both signs are positive] or a maximum [see Fig. 10 (c): both signs are negative]. If one uses a random projection instead, the resulting lower-dimensional loss landscape often appears to be a minimum in this example [see Fig. 10 (d)]. To quantify the proportion of random projections that correctly identify the saddle with $n=900,\tilde{n}=1000$ , we generated 10,000 realizations of $\kappa_{±}^{\alpha,\beta}$ [see Eq. (35)]. The signs of $\kappa_{±}^{\alpha,\beta}$ differed in only about 0.5% of all simulated realizations; that is, the principal curvatures $\kappa_{±}^{\alpha,\beta}$ indicate a saddle in only about 0.5% of the studied projections.
In accordance with related works that use Hessian information [142, 143], our proposal uses Hessian-vector products (HVPs). We first compute the largest-magnitude eigenvalue and then use an annihilation method [144] to compute the largest-magnitude eigenvalue of the opposite sign (see Algorithm 1). The corresponding eigenvectors are the dominant Hessian directions. Other deflation techniques can be used to find additional Hessian directions.
We employ HVPs to compute Hessian directions without an explicit representation of $H_{\theta}$ using the identity
$$
\nabla_{\theta}[(\nabla_{\theta}L)^{\top}v]=(\nabla_{\theta}\nabla_{\theta}L)v+(\nabla_{\theta}L)^{\top}\nabla_{\theta}v=H_{\theta}v\,. \tag{47}
$$
In the first step, the gradient $(\nabla_{\theta}L)^{\top}$ is computed using reverse-mode automatic differentiation (AD), and the scalar product $(\nabla_{\theta}L)^{\top}v$ is formed. In the second step, we again apply reverse-mode AD to the computational graph associated with the scalar product $(\nabla_{\theta}L)^{\top}v$ . Because the vector $v$ does not depend on $\theta$ (i.e., $\nabla_{\theta}v=0$ ), the result is $H_{\theta}v$ . One may also use forward-mode AD in the second step to obtain a more memory-efficient implementation.
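A minimal PyTorch sketch of this double application of reverse-mode AD is given below, with a small quadratic loss serving as a stand-in for an ANN loss; `hvp` and `loss_fn` are our own names.

```python
import torch

def hvp(loss_fn, theta, v):
    """Hessian-vector product H_theta v via two applications of reverse-mode AD."""
    theta = theta.detach().requires_grad_(True)
    grad = torch.autograd.grad(loss_fn(theta), theta, create_graph=True)[0]
    # Differentiate the scalar grad^T v; since v is constant, the result is H_theta v.
    return torch.autograd.grad(grad @ v, theta)[0]

# Example with the quadratic loss L = 0.5 * theta^T A theta, so that H_theta = A.
A = torch.tensor([[2.0, 1.0], [1.0, 3.0]])
loss_fn = lambda th: 0.5 * th @ A @ th
theta = torch.tensor([0.3, -0.7])
v = torch.tensor([1.0, 2.0])
print(hvp(loss_fn, theta, v), A @ v)   # both should equal A v
```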
Algorithm 1 Compute dominant Hessian directions
1: $L_{1}$ = LinearOperator((N, N), matvec=hvp) ▷ initialize linear operator for HVP calculation
2: eigval1, eigvec1 = solve_lm_evp($L_{1}$) ▷ compute the largest-magnitude eigenvalue and corresponding eigenvector of $L_{1}$
3: shifted_hvp(vec) = hvp(vec) - eigval1*vec ▷ define shifted HVP
4: $L_{2}$ = LinearOperator((N, N), matvec=shifted_hvp) ▷ initialize linear operator for shifted HVP calculation
5: eigval2, eigvec2 = solve_lm_evp($L_{2}$) ▷ compute the largest-magnitude eigenvalue and corresponding eigenvector of $L_{2}$
6: eigval2 += eigval1 ▷ undo the spectral shift
7: if eigval1 >= 0 then
8: maxeigval, maxeigvec = eigval1, eigvec1
9: mineigval, mineigvec = eigval2, eigvec2
10: else
11: maxeigval, maxeigvec = eigval2, eigvec2
12: mineigval, mineigvec = eigval1, eigvec1
13: end if
14: return maxeigval, maxeigvec, mineigval, mineigvec
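A possible SciPy realization of Algorithm 1 is sketched below, where `solve_lm_evp` is taken to correspond to `scipy.sparse.linalg.eigsh` with `which='LM'` and `hvp` is assumed to be available (for example, implemented as in the AD-based sketch above). The small diagonal test Hessian is purely illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def dominant_hessian_directions(hvp, N):
    """Extreme Hessian eigenpairs computed from HVPs only (cf. Algorithm 1)."""
    op1 = LinearOperator((N, N), matvec=hvp)
    eigval1, eigvec1 = eigsh(op1, k=1, which="LM")        # largest-magnitude eigenpair
    eigval1 = eigval1[0]

    shifted = LinearOperator((N, N), matvec=lambda v: hvp(v) - eigval1 * v)
    eigval2, eigvec2 = eigsh(shifted, k=1, which="LM")     # opposite end of the spectrum
    eigval2 = eigval2[0] + eigval1                         # undo the spectral shift

    if eigval1 >= eigval2:
        return eigval1, eigvec1[:, 0], eigval2, eigvec2[:, 0]
    return eigval2, eigvec2[:, 0], eigval1, eigvec1[:, 0]

# Illustrative diagonal Hessian with largest eigenvalue 5.0 and smallest -2.0.
H_diag = np.concatenate([np.linspace(0.5, 5.0, 8), [-2.0, 0.0]])
maxval, _, minval, _ = dominant_hessian_directions(lambda v: H_diag * v, H_diag.size)
print(maxval, minval)
```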
Image classification
Figure 11: Loss landscape projections for ResNet-56. (a,c) The projection directions $\eta,\delta$ are given by the eigenvectors associated with the largest and smallest eigenvalues of the Hessian $H_{\theta}$ , respectively. The zoomed inset in panel (c) shows the loss landscape for $(\alpha,\beta)\in[-0.01,0.005]\times[-0.05,0.05]$ . We observe a decreasing loss along the negative $\beta$ -axis. (b,d) The projection directions $\eta,\delta$ are given by random vectors. The domains of $(\alpha,\beta)$ in panels (a,b) and (c,d) are $[-1,1]\times[-1,1]$ and $[-0.05,0.05]\times[-0.05,0.05]$ , respectively. All shown cross-entropy loss landscapes are based on evaluating the CIFAR-10 training dataset that consists of 50,000 images.
As an illustration of a loss landscape of an ANN employed in image classification, we focus on the ResNet-56 architecture, trained as detailed in Ref. 122 on CIFAR-10 using stochastic gradient descent (SGD) with Nesterov momentum. The number of parameters of this ANN is 855,770. The training and test losses at the local optimum found by SGD are $9.20\times 10^{-4}$ and 0.29, respectively; the corresponding accuracies are 100.00% and 93.66%, respectively.
In accordance with Ref. 122, we apply filter normalization to the random directions. This and related normalization methods are often employed when generating random projections. One reason is that simply adding random vectors to the parameters of a neural network loss function (or of other functions) does not account for the different scales of different parameter groups. As a result, random perturbations may be too small or too large to properly resolve the influence of certain parameters on a given function.
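The sketch below outlines one common form of such a filter-wise normalization: each filter of the random direction is rescaled to match the norm of the corresponding filter of the trained parameters. It is only an illustration of the idea; Ref. 122 specifies the exact treatment of biases and batch-normalization parameters.

```python
import torch

def filter_normalize(direction, params):
    """Rescale each filter of a random direction to match the norm of the
    corresponding filter of the trained parameters (cf. Ref. 122)."""
    normalized = []
    for d, p in zip(direction, params):
        d = d.clone()
        if p.dim() > 1:                        # conv/linear weights: normalize per filter
            for i in range(p.size(0)):
                d[i] *= p[i].norm() / (d[i].norm() + 1e-10)
        else:                                   # 1-D parameters: match the overall norm
            d *= p.norm() / (d.norm() + 1e-10)
        normalized.append(d)
    return normalized

# Hypothetical usage with a trained model:
# params = [p.detach() for p in model.parameters()]
# direction = filter_normalize([torch.randn_like(p) for p in params], params)
```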
When calculating Hessian directions, we are directly taking into account the parameterization of the underlying functions that we want to visualize. Therefore, there is no need for an additional rescaling of different parts of the perturbation vector. Still, reparameterizations of a neural network can result in changes of curvature properties (see, e.g., Theorem 4 in Ref. 120).
Figure 11 shows two-dimensional projections of the loss function (cross-entropy loss) around a local critical point. The smallest and largest eigenvalues are $-16.4$ and $5007.9$ , respectively. The critical point found by the optimizer is therefore a saddle whose largest negative curvature is, in absolute value, more than two orders of magnitude smaller than the largest positive curvature at that point. The saddle is clearly visible in Fig. 11 (a,c). The zoomed inset in Fig. 11 (c) shows that the loss decreases along the negative $\beta$ -axis.
If random directions are used, the corresponding projections indicate that the optimizer converged to a local minimum and not to a saddle point [see Fig. 11 (b,d)]. Overall, the ResNet-56 visualizations that we show in Fig. 11 exhibit structural similarities to those that we generated using the simple loss model (45) [see Fig. 10 (a,d)].
Function approximation
Figure 12: Heatmaps of the mean squared error (MSE) loss along random and dominant Hessian directions for a function-approximation task. (a–d) Loss heatmaps for a fully connected neural network (FCNN) with 2 layers and 100 ReLU activations per layer [(a,c): training data, (b,d): test data]. (e–h) Loss heatmaps for an FCNN with 10 layers and 100 ReLU activations per layer [(e,g): training data, (f,h): test data]. Random directions are used in panels (a,b,e,f) while Hessian directions are used in panels (c,d,g,h). Green and red regions indicate small and large MSE loss values, respectively. Black crosses indicate the positions of loss minima in the shown domain. Both neural networks, FCNN 1 and FCNN 2, are trained to approximate the smooth one-dimensional function (48).
As another example, we compare loss function visualizations that are based on random and Hessian directions in a function-approximation task. In accordance with Ref. 145, we consider the smooth one-dimensional function
$$
f(x)=\log(\sin(10x)+2)+\sin(x)\,, \tag{48}
$$
where $x \in [-1,1)$. To approximate $f(x)$, we use two fully connected neural networks (FCNNs) with 2 and 10 layers, respectively. Each layer has 100 ReLU activations. The 2- and 10-layer architectures have 20,501 and 101,301 parameters, respectively. The training data consist of 50 points sampled uniformly at random from the interval $[-1,1)$. We train both ANNs using a mean squared error (MSE) loss function and SGD with a learning rate of 0.1. The 2- and 10-layer architectures are trained for 100,000 and 50,000 epochs, respectively, to reach loss values of less than $10^{-4}$. The best model is saved and evaluated by calculating the MSE loss for 1,000 points sampled uniformly at random from the interval $[-1,1)$. An animation illustrating the evolution of random and Hessian loss projections during training is available at https://vimeo.com/787174225.
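To make this setup concrete, the following PyTorch sketch trains a small FCNN on the target function (48) with an MSE loss and SGD. It is a minimal illustration under simplifying assumptions: the exact layer bookkeeping that yields the parameter counts quoted above, the model-saving logic, and the full training budget are not reproduced, and names such as `f`, `model`, and `x_train` are placeholders rather than the authors' code.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Target function from Eq. (48).
def f(x):
    return torch.log(torch.sin(10 * x) + 2) + torch.sin(x)

# A small FCNN with hidden layers of 100 ReLU units each (illustrative
# architecture; the chapter's 2- and 10-layer FCNNs may be wired differently).
model = nn.Sequential(
    nn.Linear(1, 100), nn.ReLU(),
    nn.Linear(100, 100), nn.ReLU(),
    nn.Linear(100, 1),
)

# 50 training points sampled uniformly at random from [-1, 1).
x_train = 2 * torch.rand(50, 1) - 1
y_train = f(x_train)

loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

n_epochs = 10_000  # the chapter trains for 100,000 (2-layer) and 50,000 (10-layer) epochs
for epoch in range(n_epochs):
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    optimizer.step()

# Evaluate on 1,000 test points sampled uniformly at random from [-1, 1).
x_test = 2 * torch.rand(1000, 1) - 1
with torch.no_grad():
    test_loss = loss_fn(model(x_test), f(x_test))
print(f"train MSE: {loss.item():.2e}, test MSE: {test_loss.item():.2e}")
```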
Figure 12 shows heatmaps of the training and test loss landscapes of both ANNs along random and dominant Hessian directions. Black crosses in Fig. 12 indicate loss minima. In line with the previous example, which focused on an image classification ANN, we observe for both FCNNs that random projections are associated with loss values that increase along both directions $\delta,\eta$ and for both training and test data [see Fig. 12 (a,b,e,f)]. For these projections, we find that the loss minima are very close to or at the origin of the loss space. The situation is different in the loss projections based on Hessian directions. Figure 12 (c) shows that the loss minimum for the 2-layer FCNN is not located at the origin but at $(\alpha,\beta)\approx(0,0.04)$.
We find that the value of the training loss at that point is more than 9% smaller than the smallest training loss found in a random projection plot. The corresponding test loss is about 1% smaller than the test loss associated with the smallest training loss in the random projection plot. For the 10-layer FCNN, the smallest training loss in the Hessian direction projection is more than 26% smaller than the smallest training loss in the randomly projected loss landscape [see Fig. 12 (e,g)]. The corresponding test loss in the Hessian direction plot is about 6% smaller than the corresponding test loss minimum in the random direction plot [see Fig. 12 (f,h)]. Notice that both the training and test loss minima in the random direction heatmaps in Fig. 12 (e,f) are located at the origin while they are located at $(\alpha,\beta)\approx(0,-0.05)$ in the Hessian direction heatmaps in Fig. 12 (g,h).
These results show that the training loss can be reduced further by moving along the dominant negative-curvature direction. Smaller loss values may also be achievable with additional gradient-based training iterations, so one has to weigh the tradeoffs between gradient-based and curvature-based optimization methods.
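To illustrate how such two-dimensional projections can be computed, the following sketch continues from the previous example and approximates the two Hessian eigenvectors of largest eigenvalue magnitude by power iteration with Hessian-vector products, then evaluates the loss on a grid of perturbed parameters $\theta^* + \alpha v_1 + \beta v_2$. The helper names (`hvp`, `top_hessian_directions`, `loss_at`) are hypothetical, and this is a rough outline under stated assumptions rather than the procedure used to produce Fig. 12.

```python
import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters

def hvp(model, loss_fn, x, y, v):
    """Hessian-vector product at the current parameters via double backprop."""
    params = list(model.parameters())
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    hv = torch.autograd.grad(flat_grad @ v, params)
    return torch.cat([h.reshape(-1) for h in hv])

def top_hessian_directions(model, loss_fn, x, y, n_dirs=2, iters=100):
    """Power iteration with projection-based deflation; a rough estimate of
    the Hessian eigenvectors with the largest eigenvalue magnitudes."""
    n = parameters_to_vector(model.parameters()).numel()
    directions = []
    for _ in range(n_dirs):
        v = torch.randn(n)
        v /= v.norm()
        for _ in range(iters):
            hv = hvp(model, loss_fn, x, y, v)
            for u in directions:           # project out previously found directions
                hv -= (hv @ u) * u
            v = hv / (hv.norm() + 1e-12)
        directions.append(v.detach())
    return directions

def loss_at(model, theta, loss_fn, x, y):
    """Evaluate the loss at a flat parameter vector theta, then restore the model."""
    backup = parameters_to_vector(model.parameters()).detach().clone()
    vector_to_parameters(theta, model.parameters())
    with torch.no_grad():
        value = loss_fn(model(x), y).item()
    vector_to_parameters(backup, model.parameters())
    return value

# Loss surface L(alpha, beta) = loss(theta* + alpha * v1 + beta * v2).
theta_star = parameters_to_vector(model.parameters()).detach().clone()
v1, v2 = top_hessian_directions(model, loss_fn, x_train, y_train)
alphas = betas = torch.linspace(-0.1, 0.1, 41)
heatmap = [[loss_at(model, theta_star + a * v1 + b * v2, loss_fn, x_train, y_train)
            for b in betas] for a in alphas]
```

Replacing `v1` and `v2` with random unit vectors of the same dimension yields the random projections discussed above, which makes the comparison between the two visualization strategies straightforward.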
5 Conclusions
Over several decades, research at the intersection of statistical mechanics, neuroscience, and computer science has significantly enhanced our understanding of information processing in both living systems and machines. With increasing computing power, numerous learning algorithms conceived in the latter half of the 20th century have found widespread application in tasks such as image classification, natural language processing, and anomaly detection.
The Ising model, one of the most thoroughly studied models in statistical mechanics, shares close connections with ANNs such as Hopfield networks and Boltzmann machines. In the first part of this chapter, we described these connections and provided an illustrative example of a Boltzmann machine that learned to generate new Ising configurations based on a set of training samples.
Despite decades of active research in artificial intelligence and machine learning, providing mechanistic insights into the representational capacity and learning dynamics of modern ANNs remains challenging due to their high dimensionality and complex, non-linear structure. Visualizing the loss landscapes associated with ANNs poses a particular challenge, as one must reduce dimensionality appropriately to obtain two- or three-dimensional representations. To put it in the words of Ker-Chau Li, “There are too many directions to project a high-dimensional data set and unguided plotting can be time-consuming and fruitless”. One approach is to employ random projections. However, this method often inaccurately depicts saddle points in high-dimensional loss landscapes as apparent minima in low-dimensional random projections.
In the second part of this chapter, we discussed various curvature measures that can be employed to characterize critical points in high-dimensional loss landscapes. Additionally, we outlined how Hessian directions (i.e., the eigenvectors of the Hessian at a critical point) can inform a more structured approach to projecting high-dimensional functions onto lower-dimensional representations.
While much of contemporary research in machine learning is driven by practical applications, incorporating theoretical contributions rooted in concepts from statistical mechanics and related disciplines holds promise for continuing to guide the development of learning algorithms.
Acknowledgements
LB acknowledges financial support from hessian.AI and the ARO through grant W911NF-23-1-0129.
References
- 1. L. Zdeborová and F. Krzakala, Statistical physics of inference: Thresholds and algorithms, Advances in Physics. 65 (5), 453–552 (2016).
- 2. G. Carleo, I. Cirac, K. Cranmer, L. Daudet, M. Schuld, N. Tishby, L. Vogt-Maranto, and L. Zdeborová, Machine learning and the physical sciences, Reviews of Modern Physics. 91 (4), 045002 (2019).
- 3. P. Mehta, M. Bukov, C.-H. Wang, A. G. Day, C. Richardson, C. K. Fisher, and D. J. Schwab, A high-bias, low-variance introduction to machine learning for physicists, Physics Reports. 810, 1–124 (2019).
- 4. D. J. Amit, H. Gutfreund, and H. Sompolinsky, Spin-glass models of neural networks, Physical Review A. 32 (2), 1007 (1985).
- 5. T. Ising, R. Folk, R. Kenna, B. Berche, and Y. Holovatch, The fate of Ernst Ising and the fate of his model, Journal of Physical Studies. 21, 3002 (2017).
- 6. R. Folk. The survival of Ernst Ising and the struggle to solve his model. In ed. Y. Holovatch, Order, Disorder and Criticality: Advanced Problems of Phase Transition Theory, pp. 1–77. World Scientific, Singapore (2023).
- 7. A. Rosenblueth, N. Wiener, and J. Bigelow, Behavior, purpose and teleology, Philosophy of Science. 10 (1), 18–24 (1943).
- 8. N. Wiener, Cybernetics or Control and Communication in the Animal and the Machine. MIT Press, Boston, MA, USA (1948).
- 9. E. Thorndike, Fundamentals of Learning. Teachers College, Columbia University, New York City, NY, USA (1932).
- 10. I. S. Berkeley, The curious case of connectionism, Open Philosophy. 2 (1), 190–205 (2019).
- 11. M. Minsky and S. Papert, Perceptrons: An Introduction to Computational Geometry, 2nd edn. MIT Press, Cambridge, MA, USA (1972).
- 12. S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 3rd edn. Prentice Hall, Upper Saddle River, NJ, USA (2009).
- 13. V. Vapnik and A. Chervonenkis, Theory of Pattern Recognition: Statistical Learning Problems. Nauka, Moscow, Russia (1974). In Russian.
- 14. T. M. Mitchell, Machine Learning. McGraw-Hill, New York City, NY, USA (1997).
- 15. W. S. McCulloch and W. Pitts, A logical calculus of the ideas immanent in nervous activity, The Bulletin of Mathematical Biophysics. 5, 115–133 (1943).
- 16. T. H. Abraham, Nicolas Rashevsky's mathematical biophysics, Journal of the History of Biology. 37 (2), 333–385 (2004).
- 17. A. Turing, On computable numbers with an application to the Entscheidungsproblem, Proceedings of the London Mathematical Society. 58, 230–265 (1937).
- 18. S. C. Kleene, Representation of Events in Nerve Nets and Finite Automata. In eds. C. E. Shannon and J. McCarthy, Automata Studies. (AM-34), vol. 34, pp. 3–42. Princeton University Press, Princeton, NJ, USA (1956).
- 19. D. O. Hebb, The organization of behavior: A neuropsychological theory. Wiley & Sons, New York City, NY, USA (1949).
- 20. S. Löwel and W. Singer, Selection of intrinsic horizontal connections in the visual cortex by correlated neuronal activity, Science. 255 (5041), 209–212 (1992).
- 21. F. Rosenblatt, The perceptron: a probabilistic model for information storage and organization in the brain, Psychological Review. 65 (6), 386 (1958).
- 22. F. Rosenblatt et al., Principles of neurodynamics: Perceptrons and the theory of brain mechanisms. vol. 55, Spartan Books, Washington, DC, USA (1962).
- 23. B. Widrow and M. Hoff. Adaptive switching circuits. In 1960 IRE WESCON Convention Record, pp. 96–104 (1960).
- 24. S. Amari, A theory of adaptive pattern classifiers, IEEE Transactions on Electronic Computers. EC-16 (3), 299–307 (1967).
- 25. A. G. Ivakhnenko and V. G. Lapa, Cybernetics and Forecasting Techniques. American Elsevier Publishing Company, Philadelphia, PA, USA (1967).
- 26. A. G. Ivakhnenko, Polynomial theory of complex systems, IEEE Transactions on Systems, Man, and Cybernetics. (4), 364–378 (1971).
- 27. H. J. Kelley, Gradient theory of optimal flight paths, Ars Journal. 30 (10), 947–954 (1960).
- 28. S. Linnainmaa. The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master's thesis, University of Helsinki (1970).
- 29. S. Linnainmaa, Taylor expansion of the accumulated rounding error, BIT Numerical Mathematics. 16 (2), 146–160 (1976).
- 30. P. J. Werbos. Applications of advances in nonlinear sensitivity analysis. In eds. R. F. Drenick and F. Kozin, System Modeling and Optimization, pp. 762–770, Springer Berlin Heidelberg, Berlin, Heidelberg, DE (1982).
- 31. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Learning representations by back-propagating errors, Nature. 323 (6088), 533–536 (1986).
- 32. A. J. Noest. Phasor neural networks. In ed. D. Z. Anderson, Neural Information Processing Systems, Denver, Colorado, USA, 1987, pp. 584–591, American Institute of Physics (1987).
- 33. A. J. Noest, Discrete-state phasor neural networks, Physical Review A. 38 (4), 2196 (1988).
- 34. A. J. Noest, Associative memory in sparse phasor neural networks, Europhysics Letters. 6 (5), 469 (1988).
- 35. M. Kobayashi, Exceptional reducibility of complex-valued neural networks, IEEE Transactions on Neural Networks. 21 (7), 1060–1072 (2010).
- 36. T. Minemoto, T. Isokawa, H. Nishimura, and N. Matsui, Quaternionic multistate Hopfield neural network with extended projection rule, Artificial Life and Robotics. 21, 106–111 (2016).
- 37. H. Zhang, M. Gu, X. Jiang, J. Thompson, H. Cai, S. Paesani, R. Santagati, A. Laing, Y. Zhang, M. Yung, et al., An optical neural chip for implementing complex-valued neural network, Nature Communications. 12 (1), 457 (2021).
- 38. L. Böttcher and M. A. Porter, Complex networks with complex weights, Physical Review E. 109 (2), 024314 (2024).
- 39. T. Bayes, An essay towards solving a problem in the doctrine of chances, Philosophical Transactions of the Royal Society of London. 53, 370–418 (1763).
- 40. P. Laplace, Mémoire sur la probabilité des causes par les événements, Mémoires de Mathématique et de Physique Présentés à l'Académie Royale des Sciences, par Divers Savans, & Lûs dans ses Assemblées. 6, 621–656 (1774).
- 41. T. Seidenfeld, R. A. Fisher's fiducial argument and Bayes' theorem, Statistical Science. 7, 358–368 (1992).
- 42. T. Seidenfeld. Forbidden fruit: When Epistemic Probability may not take a bite of the Bayesian apple. In eds. W. Harper and G. Wheeler, Probability and Inference: Essays in Honor of Henry E. Kyburg, Jr. King's College Publications, London (2007).
- 43. H. Jeffreys, The Theory of Probability, 3rd edn. Oxford Classic Texts in the Physical Sciences, Oxford University Press, Oxford, UK (1939).
- 44. C. E. Shannon, A mathematical theory of communication, The Bell System Technical Journal. 27 (3), 379–423 (1948).
- 45. E. T. Jaynes, Information theory and statistical mechanics, Physical Review. 106 (4), 620 (1957).
- 46. E. T. Jaynes, Information theory and statistical mechanics. II, Physical Review. 108 (2), 171 (1957).
- 47. E. T. Jaynes, The well-posed problem, Foundations of Physics. 3 (4), 477–493 (1973).
- 48. E. T. Jaynes, Probability Theory: The Logic of Science. Cambridge University Press, Cambridge, UK (2003).
- 49. T. Seidenfeld, Why I am not an objective Bayesian, Theory and Decision. 11 (4), 413–440 (1979).
- 50. G. Wheeler, Objective Bayesianism and the problem of non-convex evidence, The British Journal for the Philosophy of Science. 63 (3), 841–850 (2012).
- 51. I. Hacking. Let's not talk about objectivity. In eds. F. Padovani, A. Richardson, and J. Y. Tsou, Objectivity in Science, vol. 310, Boston Studies in the Philosophy and History of Science, pp. 19–33. Springer, Cham, CH (2015).
- 52. A. Gelman, Beyond subjective and objective in statistics, Journal of the Royal Statistical Society. Series A. 180 (4), 967–1033 (2017).
- 53. R. Haenni, J.-W. Romeijn, G. Wheeler, and J. Williamson. Logical relations in a statistical problem. In eds. B. Löwe, E. Pacuit, and J.-W. Romeijn, Foundations of the Formal Sciences VI: Reasoning about probabilities and probabilistic reasoning, vol. 16, Studies in Logic, pp. 49–79, College Publications, London, UK (2009).
- 54. G. de Cooman, Belief models: An order-theoretic investigation, Annals of Mathematics and Artificial Intelligence. 45, 5–34 (2005).
- 55. R. Haenni, J.-W. Romeijn, G. Wheeler, and J. Williamson, Probabilistic Logics and Probabilistic Networks. Synthese Library, Springer, Dordrecht (2011).
- 56. D. R. Cox, The regression analysis of binary sequences, Journal of the Royal Statistical Society. 20 (2), 215–242 (1958).
- 57. G. E. Hinton and D. van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In ed. L. Pitt, Proceedings of the Sixth Annual ACM Conference on Computational Learning Theory, COLT 1993, Santa Cruz, CA, USA, July 26-28, 1993, pp. 5–13, ACM (1993).
- 58. R. M. Neal, Bayesian learning for neural networks. vol. 118, Lecture Notes in Statistics, Springer-Verlag, New York City, NY, USA (1996).
- 59. S.-I. Amari, Learning patterns and pattern sequences by self-organizing nets of threshold elements, IEEE Transactions on Computers. 100 (11), 1197–1206 (1972).
- 60. J. J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proceedings of the National Academy of Sciences of the United States of America. 79 (8), 2554–2558 (1982).
- 61. T. Isokawa, H. Nishimura, N. Kamiura, and N. Matsui, Associative memory in quaternionic Hopfield neural network, International Journal of Neural Systems. 18 (02), 135–145 (2008).
- 62. H. Ramsauer, B. Schäfl, J. Lehner, P. Seidl, M. Widrich, L. Gruber, M. Holzleitner, T. Adler, D. P. Kreil, M. K. Kopp, G. Klambauer, J. Brandstetter, and S. Hochreiter. Hopfield networks is all you need. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021 (2021).
- 63. M. Benedetti, E. Ventura, E. Marinari, G. Ruocco, and F. Zamponi, Supervised perceptron learning vs unsupervised Hebbian unlearning: Approaching optimal memory retrieval in Hopfield-like networks, The Journal of Chemical Physics. 156 (10) (2022).
- 64. L. Böttcher and H. J. Herrmann, Computational Statistical Physics. Cambridge University Press, Cambridge, UK (2021).
- 65. S. E. Fahlman, G. E. Hinton, and T. J. Sejnowski. Massively Parallel Architectures for AI: NETL, Thistle, and Boltzmann Machines. In ed. M. R. Genesereth, Proceedings of the National Conference on Artificial Intelligence, Washington, DC, USA, August 22-26, 1983, pp. 109–113, AAAI Press (1983).
- 66. G. E. Hinton and T. J. Sejnowski. Analyzing cooperative computation. In Proceedings of the Fifth Annual Conference of the Cognitive Science Society, Rochester, NY, USA (1983).
- 67. G. E. Hinton and T. J. Sejnowski. Optimal perceptual inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, vol. 448, pp. 448–453 (1983).
- 68. D. H. Ackley, G. E. Hinton, and T. J. Sejnowski, A learning algorithm for Boltzmann machines, Cognitive Science. 9 (1), 147–169 (1985).
- 69. P. Smolensky. Information processing in dynamical systems: Foundations of harmony theory. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, pp. 194–281. MIT Press, Cambridge, MA, USA (1986).
- 70. G. E. Hinton, Training products of experts by minimizing contrastive divergence, Neural Computation. 14 (8), 1771–1800 (2002).
- 71. G. E. Hinton, S. Osindero, and Y.-W. Teh, A fast learning algorithm for deep belief nets, Neural Computation. 18 (7), 1527–1554 (2006).
- 72. G. E. Hinton and R. R. Salakhutdinov, Reducing the dimensionality of data with neural networks, Science. 313 (5786), 504–507 (2006).
- 73. D. Kim and D.-H. Kim, Smallest neural network to learn the Ising criticality, Physical Review E. 98 (2), 022138 (2018).
- 74. S. Efthymiou, M. J. Beach, and R. G. Melko, Super-resolving the Ising model with convolutional neural networks, Physical Review B. 99 (7), 075113 (2019).
- 75. F. D'Angelo and L. Böttcher, Learning the Ising model with generative neural networks, Physical Review Research. 2 (2), 023266 (2020).
- 76. G. Carleo and M. Troyer, Solving the quantum many-body problem with artificial neural networks, Science. 355 (6325), 602–606 (2017).
- 77. G. Torlai, G. Mazzola, J. Carrasquilla, M. Troyer, R. Melko, and G. Carleo, Neural-network quantum state tomography, Nature Physics. 14 (5), 447–450 (2018).
- 78. J. Schmidhuber, Deep learning in neural networks: An overview, Neural Networks. 61, 85–117 (2015).
- 79. J. Dean, A golden decade of deep learning: Computing systems & applications, Daedalus. 151 (2), 58–74 (2022).
- 80. S. Hochreiter and J. Schmidhuber, Long short-term memory, Neural Computation. 9 (8), 1735–1780 (1997).
- 81. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE. 86 (11), 2278–2324 (1998).
- 82. I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio. Generative adversarial nets. In eds. Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pp. 2672–2680 (2014).
- 83. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. In eds. I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, R. Fergus, S. V. N. Vishwanathan, and R. Garnett, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 5998–6008 (2017).
- 84. A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, Automatic differentiation in PyTorch (2017).
- 85. G. Cybenko, Approximation by superpositions of a sigmoidal function, Mathematics of Control, Signals and Systems. 2 (4), 303–314 (1989).
- 86. K. Hornik, M. Stinchcombe, and H. White, Multilayer feedforward networks are universal approximators, Neural Networks. 2 (5), 359–366 (1989).
- 87. K. Hornik, Approximation capabilities of multilayer feedforward networks, Neural Networks. 4 (2), 251–257 (1991).
- 88. Y.-J. Wang and C.-T. Lin, Runge-Kutta neural network for identification of dynamical systems in high accuracy, IEEE Transactions on Neural Networks. 9 (2), 294–307 (1998).
- 89. T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. Duvenaud. Neural ordinary differential equations. In eds. S. Bengio, H. M. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 6572–6583 (2018).
- 90. E. Kaiser, J. N. Kutz, and S. L. Brunton, Data-driven discovery of Koopman eigenfunctions for control, Machine Learning: Science and Technology. 2 (3), 035023 (2021).
- 91. F. Schäfer, P. Sekatski, M. Koppenhöfer, C. Bruder, and M. Kloc, Control of stochastic quantum dynamics by differentiable programming, Machine Learning: Science and Technology. 2 (3), 035004 (2021).
- 92. L. Böttcher, N. Antulov-Fantulin, and T. Asikis, AI Pontryagin or how artificial neural networks learn to control dynamical systems, Nature Communications. 13 (1), 333 (2022).
- 93. T. Asikis, L. Böttcher, and N. Antulov-Fantulin, Neural ordinary differential equation control of dynamics on graphs, Physical Review Research. 4 (1), 013221 (2022).
- 94. L. Böttcher and T. Asikis, Near-optimal control of dynamical systems with neural ordinary differential equations, Machine Learning: Science and Technology. 3 (4), 045004 (2022).
- 95. L. Böttcher, T. Asikis, and I. Fragkos, Control of dual-sourcing inventory systems using recurrent neural networks, INFORMS Journal on Computing. 35 (6), 1308–1328 (2023).
- 96. L. Böttcher. Gradient-free training of neural ODEs for system identification and control using ensemble Kalman inversion. In ICML Workshop on New Frontiers in Learning, Control, and Dynamical Systems, Honolulu, HI, USA, 2023 (2023).
- 97. D. Ruiz-Balet and E. Zuazua, Neural ODE control for classification, approximation, and transport, SIAM Review. 65 (3), 735–773 (2023).
- 98. T. Asikis. Towards recommendations for value sensitive sustainable consumption. In NeurIPS 2023 Workshop on Tackling Climate Change with Machine Learning: Blending New and Existing Knowledge Systems (2023). URL https://nips.cc/virtual/2023/76939.
- 99. M. Raissi, P. Perdikaris, and G. E. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, Journal of Computational Physics. 378, 686–707 (2019).
- 100. S. L. Brunton and J. N. Kutz, Data-driven science and engineering: Machine learning, dynamical systems, and control. Cambridge University Press, Cambridge, UK (2022).
- 101. S. Mowlavi and S. Nabi, Optimal control of PDEs using physics-informed neural networks, Journal of Computational Physics. 473, 111731 (2023).
- 102. C. Fronk and L. Petzold, Interpretable polynomial neural ordinary differential equations, Chaos: An Interdisciplinary Journal of Nonlinear Science. 33 (4) (2023).
- 103. M. Xia, L. Böttcher, and T. Chou, Spectrally adapted physics-informed neural networks for solving unbounded domain problems, Machine Learning: Science and Technology. 4 (2), 025024 (2023).
- 104. L. Böttcher, L. L. Fonseca, and R. C. Laubenbacher, Control of medical digital twins with artificial neural networks, arXiv preprint arXiv:2403.13851 (2024).
- 105. L. Böttcher and G. Wheeler, Visualizing high-dimensional loss landscapes with Hessian directions, Journal of Statistical Mechanics: Theory and Experiment. p. 023401 (2024).
- 106. M. Thoma. LaTeX Examples: Hopfield Network. URL https://github.com/MartinThoma/LaTeX-examples/tree/master/tikz/hopfield-network (Accessed: September 16, 2023).
- 107. J. Hertz, R. G. Palmer, and A. S. Krogh, Introduction to the Theory of Neural Computation, 1st edn. Perseus Publishing, New York City, New York, USA (1991).
- 108. A. J. Storkey. Increasing the capacity of a Hopfield network without sacrificing functionality. In eds. W. Gerstner, A. Germond, M. Hasler, and J. Nicoud, Artificial Neural Networks - ICANN '97, 7th International Conference, Lausanne, Switzerland, October 8-10, 1997, Proceedings, vol. 1327, Lecture Notes in Computer Science, pp. 451–456, Springer (1997).
- 109. N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, Equation of state calculations by fast computing machines, The Journal of Chemical Physics. 21 (6), 1087–1092 (1953).
- 110. J. E. Gubernatis, Marshall Rosenbluth and the Metropolis algorithm, Physics of Plasmas. 12 (5) (2005).
- 111. S. Kirkpatrick, C. D. Gelatt Jr, and M. P. Vecchi, Optimization by simulated annealing, Science. 220 (4598), 671–680 (1983).
- 112. R. J. Glauber, Time-dependent statistics of the Ising model, Journal of Mathematical Physics. 4 (2), 294–307 (1963).
- 113. M. Á. Carreira-Perpiñán and G. E. Hinton. On contrastive divergence learning. In eds. R. G. Cowell and Z. Ghahramani, Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, AISTATS 2005, Bridgetown, Barbados, January 6-8, 2005, Society for Artificial Intelligence and Statistics (2005).
- 114. G. Hinton, A practical guide to training restricted Boltzmann machines, Momentum. 9 (1), 926 (2010).
- 115. S. Park, C. Yun, J. Lee, and J. Shin. Minimum width for universal approximation. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021 (2021).
- 116. M. Hardt, B. Recht, and Y. Singer. Train faster, generalize better: Stability of stochastic gradient descent. In eds. M. Balcan and K. Q. Weinberger, Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, vol. 48, JMLR Workshop and Conference Proceedings, pp. 1225–1234, JMLR.org (2016).
- 117. A. C. Wilson, R. Roelofs, M. Stern, N. Srebro, and B. Recht. The marginal value of adaptive gradient methods in machine learning. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 4148–4158 (2017).
- 118. D. Choi, C. J. Shallue, Z. Nado, J. Lee, C. J. Maddison, and G. E. Dahl, On empirical comparisons of optimizers for deep learning, arXiv preprint arXiv:1910.05446 (2019).
- 119. N. S. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, and P. T. P. Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings (2017).
- 120. L. Dinh, R. Pascanu, S. Bengio, and Y. Bengio. Sharp minima can generalize for deep nets. In eds. D. Precup and Y. W. Teh, Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, vol. 70, Proceedings of Machine Learning Research, pp. 1019–1028 (2017).
- 121. I. J. Goodfellow and O. Vinyals. Qualitatively characterizing neural network optimization problems. In eds. Y. Bengio and Y. LeCun, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings (2015).
- 122. H. Li, Z. Xu, G. Taylor, C. Studer, and T. Goldstein. Visualizing the loss landscape of neural nets. In eds. S. Bengio, H. M. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 6391–6401 (2018).
- 123. Y. Hao, L. Dong, F. Wei, and K. Xu. Visualizing and understanding the effectiveness of BERT. In eds. K. Inui, J. Jiang, V. Ng, and X. Wan, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pp. 4141–4150, Association for Computational Linguistics (2019).
- 124. D. Wu, S. Xia, and Y. Wang. Adversarial weight perturbation helps robust generalization. In eds. H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual (2020).
- 125. S. Horoi, J. Huang, B. Rieck, G. Lajoie, G. Wolf, and S. Krishnaswamy. Exploring the geometry and topology of neural network loss landscapes. In eds. T. Bouadi, É. Fromont, and E. Hüllermeier, Advances in Intelligent Data Analysis XX - 20th International Symposium on Intelligent Data Analysis, IDA 2022, Rennes, France, April 20-22, 2022, Proceedings, vol. 13205, Lecture Notes in Computer Science, pp. 171–184, Springer (2022).
- 126. J. M. Lee, Riemannian manifolds: An introduction to curvature. vol. 176, Springer Science & Business Media, New York City, NY, USA (2006).
- 127. M. Berger and B. Gostiaux, Differential Geometry: Manifolds, Curves, and Surfaces. Springer Science & Business Media, New York City, NY, USA (2012).
- 128. W. Kühnel, Differential Geometry. American Mathematical Society, Providence, RI, USA (2015).
- 129. K. Crane, Discrete differential geometry: An applied introduction, Notices of the AMS, Communication. pp. 1153–1159 (2018).
- 130. J. Martens, New insights and perspectives on the natural gradient method, Journal of Machine Learning Research. 21 (146), 1–76 (2020).
- 131. R. Hecht-Nielsen, Context vectors: general purpose approximate meaning representations self-organized from raw data, Computational Intelligence: Imitating Life, IEEE Press. pp. 43–56 (1994).
- 132. R. B. Davies, Algorithm AS 155: The distribution of a linear combination of $\chi^{2}$ random variables, Applied Statistics. 29 (3), 323–333 (1980).
- 133. B. Laurent and P. Massart, Adaptive estimation of a quadratic functional by model selection, Annals of Statistics. 28 (5), 1302–1338 (2000).
- 134. J. Bausch, On the efficient calculation of a linear combination of chi-square random variables with an application in counting string vacua, Journal of Physics A: Mathematical and Theoretical. 46 (50), 505202 (2013).
- 135. T. Chen and T. Lumley, Numerical evaluation of methods approximating the distribution of a large quadratic form in normal variables, Computational Statistics & Data Analysis. 139, 75–81 (2019).
- 136. J. Matoušek, On variants of the Johnson–Lindenstrauss lemma, Random Structures and Algorithms. 33 (2), 142–156 (2008).
- 137. M. F. Hutchinson, A stochastic estimator of the trace of the influence matrix for Laplacian smoothing splines, Communications in Statistics-Simulation and Computation. 18 (3), 1059–1076 (1989).
- 138. Z. Bai, G. Fahey, and G. Golub, Some large-scale matrix computation problems, Journal of Computational and Applied Mathematics. 74 (1-2), 71–89 (1996).
- 139. H. Avron and S. Toledo, Randomized algorithms for estimating the trace of an implicit symmetric positive semi-definite matrix, Journal of the ACM (JACM). 58 (2), 1–34 (2011).
- 140. Y. N. Dauphin, R. Pascanu, Ç. Gülçehre, K. Cho, S. Ganguli, and Y. Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In eds. Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pp. 2933–2941 (2014).
- 141. P. Baldi and K. Hornik, Neural networks and principal component analysis: Learning from examples without local minima, Neural Networks. 2 (1), 53–58 (1989).
- 142. Z. Yao, A. Gholami, K. Keutzer, and M. W. Mahoney. PyHessian: Neural networks through the lens of the Hessian. In eds. X. Wu, C. Jermaine, L. Xiong, X. Hu, O. Kotevska, S. Lu, W. Xu, S. Aluru, C. Zhai, E. Al-Masri, Z. Chen, and J. Saltz, 2020 IEEE International Conference on Big Data (IEEE BigData 2020), Atlanta, GA, USA, December 10-13, 2020, pp. 581–590, IEEE (2020).
- 143. Z. Liao and M. W. Mahoney. Hessian eigenspectra of more realistic nonlinear models. In eds. M. Ranzato, A. Beygelzimer, Y. N. Dauphin, P. Liang, and J. W. Vaughan, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 20104–20117 (2021).
- 144. R. L. Burden, J. D. Faires, and A. M. Burden, Numerical Analysis, 10th edn. Cengage Learning, Boston, MA, USA (2015).
- 145. B. Adcock and N. Dexter, The gap between theory and practice in function approximation with deep neural networks, SIAM Journal on Mathematics of Data Science. 3 (2), 624–655 (2021).