# Chapter 0 Statistical Mechanics and Artificial Neural Networks: Principles, Models, and Applications
## Abstract
The field of neuroscience and the development of artificial neural networks (ANNs) have mutually influenced each other, drawing from and contributing to many concepts initially developed in statistical mechanics. Notably, Hopfield networks and Boltzmann machines are versions of the Ising model, a model extensively studied in statistical mechanics for over a century. In the first part of this chapter, we provide an overview of the principles, models, and applications of ANNs, highlighting their connections to statistical mechanics and statistical learning theory.
Artificial neural networks can be seen as high-dimensional mathematical functions, and understanding the geometric properties of their loss landscapes (i.e., the high-dimensional space on which one wishes to find extrema or saddles) can provide valuable insights into their optimization behavior, generalization abilities, and overall performance. Visualizing these functions can help us design better optimization methods and improve their generalization abilities. Thus, the second part of this chapter focuses on quantifying geometric properties and visualizing loss functions associated with deep ANNs.
Contents
- 1 Introduction
  - 1.1 The McCulloch–Pitts calculus and Hebbian learning
  - 1.2 The Perceptron
  - 1.3 Objective Bayes, entropy, and information theory
  - 1.4 Connections to the Ising model
- 2 Hopfield network
- 3 Boltzmann machine learning
  - 3.1 Boltzmann machines
  - 3.2 Restricted Boltzmann machines
  - 3.3 Derivation of the learning algorithm
- 4 Loss landscapes of artificial neural networks
  - 4.1 Principal curvature in random projections
  - 4.2 Extracting curvature information
  - 4.3 Hessian directions
- 5 Conclusions
### 1 Introduction
There is a long tradition of studying learning systems within statistical mechanics. [1, 2, 3] Some versions of artificial neural networks (ANNs) were, in fact, inspired by the Ising model, which was proposed in 1920 by Wilhelm Lenz as a model for ferromagnetism and solved in one dimension by his doctoral student, Ernst Ising, in 1924. [4, 5, 6] In the language of machine learning, the Ising model can be considered a non-learning recurrent neural network (RNN).
Contributions to ANN research extend beyond statistical mechanics, however, requiring a broad range of scientific disciplines. Historically, these efforts have been marked by eras of interdisciplinary collaboration referred to as cybernetics [7, 8], connectionism [9, 10], artificial intelligence [11, 12], and machine learning [13, 14]. To emphasize the importance of multidisciplinary contributions in ANN research, we will often highlight the research disciplines of individuals who have played pivotal roles in the advancement of ANNs.
#### 1 The McCulloch–Pitts calculus and Hebbian learning
An influential paper by the neuroscientist Warren S. McCulloch and the logician Walter Pitts, published in 1943, is often regarded as inaugurating neural network research, [15] but in fact there was an active research community at that time working on the mathematics of neural activity. [16] McCulloch and Pitts's pivotal contribution was to join propositional logic with Alan Turing's mathematical notion of computation [17] and propose a calculus for creating compositional representations of neural behavior. Their work was later taken up by the mathematician Stephen C. Kleene [18], who made some of it more accessible. Kleene comments on the work of McCulloch and Pitts in his 1956 article [18]: "The present memorandum is partly an exposition of the McCulloch–Pitts results; but we found the part of their paper which treats of arbitrary nets obscure; so we have proceeded independently here."
Contemporaneously, the psychologist Donald O. Hebb formulated in his 1949 book "The Organization of Behavior" [19] a neurophysiological postulate, known today as "Hebb's rule" or "Hebbian learning":

> Let us assume then that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability. […] When an axon of cell $A$ is near enough to excite a cell $B$ and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that $A$'s efficiency, as one of the cells firing $B$, is increased.
>
> Donald O. Hebb in "The Organization of Behavior" (1949)

In the modern literature, Hebbian learning is often summarized by the slogan "neurons wire together if they fire together". [20]
#### 2 The Perceptron
In 1958, the psychologist Frank Rosenblatt developed the perceptron [21, 22], which is essentially a binary classifier based on a single artificial neuron and a Hebbian-type learning rule. An artificial neuron of this type is sometimes referred to as a "McCulloch–Pitts neuron", owing to its connection to the foundational work of McCulloch and Pitts [15]. Some authors distinguish McCulloch–Pitts neurons from perceptrons by noting that the former handle binary signals, while the latter can process real-valued inputs. In parallel to Rosenblatt's work, the electrical engineers Bernard Widrow and Ted Hoff developed a similar ANN named ADALINE (Adaptive Linear Neuron). [23]
We show a schematic of a perceptron in Fig. 1. It receives inputs $x_{1},x_{2},\dots,x_{n}$ that are weighted with $w_{1},w_{2},\dots,w_{n}$. Within the summer (indicated by a $\Sigma$ symbol), the artificial neuron computes $\sum_{i}w_{i}x_{i}+b$, where $b$ is a bias term. This quantity is then used as input of a step activation function (indicated by a step symbol). The output of this function is $y$. Such simple artificial neurons can be used to implement logical functions including AND, OR, and NOT. For example, consider a perceptron with Heaviside activation, two binary inputs $x_{1},x_{2}\in\{0,1\}$, weights $w_{1}=w_{2}=1$, and bias $b=-1.5$. We can readily verify that $y=x_{1}\wedge x_{2}$. An example of a logical function that cannot be represented by a single perceptron is XOR. This was shown by the computer scientists Marvin L. Minsky and Seymour A. Papert in their 1969 book "Perceptrons: An Introduction to Computational Geometry" [11]. Based on their analysis of perceptrons, they considered the extension of research on this topic to be "sterile".
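The AND example above can be checked directly. Here is a minimal sketch (the function name `perceptron` is our own choice):

```python
import numpy as np

def perceptron(x, w, b):
    """McCulloch-Pitts-style neuron with a Heaviside step activation."""
    return 1 if np.dot(w, x) + b >= 0 else 0

# AND gate from the text: weights w1 = w2 = 1, bias b = -1.5.
w = np.array([1.0, 1.0])
b = -1.5

for x1 in (0, 1):
    for x2 in (0, 1):
        y = perceptron(np.array([x1, x2]), w, b)
        print(f"{x1} AND {x2} = {y}")  # only (1, 1) yields 1
```

Changing the bias to $b=-0.5$ with the same weights implements OR instead.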
Problems like the one associated with representing XOR using a single perceptron can be overcome by employing multilayer perceptrons (MLPs) or, in a broader sense, multilayer feedforward networks equipped with various types of activation functions. [24, 25, 26] One effective way to train such multilayered structures is reverse mode automatic differentiation (i.e., backpropagation). [27, 28, 29, 30, 31]
Notice that perceptrons and other ANNs predominantly operate on real-valued data. Nevertheless, there is an expanding body of research focused on ANNs that are designed to handle complex-valued and, in some cases, quaternion-based information. [32, 33, 34, 35, 36, 37, 38]
Figure 1: A schematic of a perceptron with output $y$ , inputs $x_{1},x_{2},\dots,x_{n}$ , corresponding weights $w_{1},w_{2},\dots,w_{n}$ , and a bias term $b$ . Traditionally, the inputs and outputs of perceptrons were considered to be binary (e.g., $\{-1,1\}$ or $\{0,1\}$ ), while weights and biases are generally represented as real numbers. The symbol $\Sigma$ signifies the summation process wherein the artificial neuron computes $\sum_{i}w_{i}x_{i}+b$ . This computed value then becomes the input for a step activation function, denoted by a step symbol. We denote the final output as $y$ . While the original perceptron employed a step activation function, contemporary applications frequently utilize a variety of functions including sigmoid, $\tanh$ , and rectified linear unit (ReLU).
#### 3 Objective Bayes, entropy, and information theory
Over the same period of time, seminal work in probability and statistics appeared that would later prove crucial to the development of ANNs. In the early 20th century, a renewed interest in inverse "Bayesian" inference [39, 40] met strong resistance from the emerging frequentist school of statistics led by Jerzy Neyman, Karl Pearson, and especially Ronald Fisher, who advocated his own fiducial inference as a better alternative to Bayesian inference [41, 42]. In response to criticism that Bayesian methods were irredeemably subjective, Harold Jeffreys presented a methodology for "objectively" assigning prior probabilities based on symmetry principles and invariance properties. [43]
Edwin Jaynes, building on Jeffreys's ideas for objective Bayesian priors and Shannon's quantification of uncertainty [44], proposed the Maximum Entropy (MaxEnt) principle as a rationality criterion for assigning prior probabilities that are at once consistent with known information and maximally uncommitted about what remains uncertain [45, 46, 47]. Jaynes envisioned the MaxEnt principle as the basis for a "logic of science" [48]. For instance, he proposed a reinterpretation of the principles of thermodynamics in information-theoretic terms, where macrostates of a physical system are understood through the partial information they provide about the possible underlying microstates, showing how the original Boltzmann–Gibbs interpretation of entropy could be replaced by a more general framework based on Shannon entropy. That said, objective Bayesianism does not remove subjective judgment [49], MaxEnt is far from a universal principle for encoding prior information [50], and the terms "objective" and "subjective" are outdated. [51, 52]
For example, the MaxEnt principle can be used to derive many common exponential family distributions as least-biased choices by imposing constraints that match the expectation values of their sufficient statistics. That is, given constraints on the expected values of some functions $T(x)$ of the data, the probability distribution that maximizes entropy under those constraints takes the form of an exponential family distribution
$$
p(x\mid\theta)=h(x)\,\exp\!\left(\theta^{\top}T(x)-A(\theta)\right)\,, \tag{1}
$$
where $h(x)$ is the underlying base measure, $\theta$ is the natural parameter of the distribution, $A(\theta)$ is the log normalizer, and $T(x)$ is the sufficient statistic determined by MaxEnt under the given constraints. However, MaxEnt-derived distributions depend on contingencies of the data and on the availability of the right constraints for the derivation to go through, contingencies that undermine viewing MaxEnt as a fundamental principle of rational inference.
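As a concrete illustration (a toy example of our own, not from the text): the Bernoulli distribution arises from Eq. (1) with $h(x)=1$, $T(x)=x$, and $A(\theta)=\ln(1+e^{\theta})$, and constraining the mean $\mathbb{E}[x]=\mu$ fixes the natural parameter $\theta=\ln[\mu/(1-\mu)]$.

```python
import numpy as np

def bernoulli_expfam(x, theta):
    """Bernoulli in exponential family form: p(x|theta) = exp(theta*x - A(theta))."""
    A = np.log1p(np.exp(theta))  # log normalizer A(theta) = ln(1 + e^theta)
    return np.exp(theta * x - A)

mu = 0.3                       # constrained mean E[x]
theta = np.log(mu / (1 - mu))  # natural parameter implied by the constraint

p0 = bernoulli_expfam(0, theta)
p1 = bernoulli_expfam(1, theta)
print(p0 + p1)  # normalization: 1
print(p1)       # reproduces the mean constraint: 0.3
```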
Nevertheless, Jaynes's notion of treating probability as an extension of logic [48, 53] and probabilistic inference as governed by coherence conditions on information states [54, 55] has proved useful in optimization (e.g., cross-entropy loss [56]), entropy-based regularization of ANNs [57], and Bayesian neural networks. [58]
#### 4 Connections to the Ising model
Returning to the Ising model, a modified version of it was endowed with a learning algorithm by the mathematical neuroscientist Shun'ichi Amari in 1972 and has been employed for learning patterns and sequences of patterns. [59] Amari's self-organizing ANN is an early model of associative memory that was later popularized by the physicist John Hopfield. [60] Despite its limited representational capacity and limited use in practical applications, the study of Hopfield-type learning systems remains an active research area. [61, 36, 62, 63]
The Hopfield network shares its Hamiltonian with the Ising model. Its learning algorithm is deterministic and aims at minimizing the "energy" of the system. One problem with the deterministic learning algorithm of Hopfield networks is that it cannot escape local minima. A solution to this problem is to employ a stochastic learning rule akin to the Monte Carlo methods used to generate Ising configurations. [64] In 1985, David H. Ackley, Geoffrey E. Hinton, and Terrence J. Sejnowski proposed such a learning algorithm for an Ising-type ANN, which they called a "Boltzmann machine" (BM). [65, 66, 67, 68] In 1986, Smolensky introduced the concept of the "Harmonium" [69], which later evolved into restricted Boltzmann machines (RBMs). In contrast to Boltzmann machines, RBMs benefit from more efficient training algorithms that were developed in the early 2000s. [70, 71] Since then, RBMs have been applied in various contexts, such as reducing the dimensionality of datasets [72], studying phase transitions [73, 74, 3, 75], and representing wave functions [76, 77].
Our historical overview of ANNs aims at emphasizing the strong interplay between statistical mechanics and related fields in the investigation of learning systems. Given the extensive span of over eight decades since the pioneering work of McCulloch and Pitts, summarizing the history of ANNs within a few pages cannot do justice to its depth and significance. We therefore refer the reader to the excellent review by Jürgen Schmidhuber (see Ref. 78) for a more detailed overview of the history of deep learning in ANNs.
As computing power continues to increase, deep (i.e., multilayer) ANNs have, over the past decade, become pervasive across various scientific domains. [79] Different established and newly developed ANN architectures have not only been employed to tackle scientific problems but have also found extensive applications in tasks such as image classification and natural language processing. [80, 81, 82, 83]
Finally, we would also like to emphasize an area of research that has significantly advanced due to the availability of automatic differentiation [84] and the utilization of ANNs as universal function approximators. [85, 86, 87] This area resides at the intersection of non-linear dynamics [88, 89], control theory [90, 91, 92, 93, 94, 95, 96, 97, 98], and dynamics-informed learning, where researchers often aim to integrate ANNs and their associated learning algorithms with domain-specific knowledge. [99, 100, 101, 102, 103, 104] This integration introduces an inductive bias that facilitates effective learning.
The outline of this chapter is as follows. To better illustrate the connections between the Ising model and ANNs, we devote Secs. 2 and 3 to the Hopfield model and Boltzmann machines, respectively. In Sec. 4, we discuss how mathematical tools from high-dimensional probability and differential geometry can help us study the loss landscape of deep ANNs. [105] We conclude this chapter in Sec. 5.
### 2 Hopfield network
A Hopfield network is a complete (i.e., fully connected) undirected graph in which each node is an artificial neuron of McCulloch–Pitts type (see Fig. 1). We show an example of a Hopfield network with $n=5$ neurons in Fig. 2. We use $x_{i}\in\{-1,1\}$ to denote the state of neuron $i$ ($i\in\{1,\dots,n\}$). Because the underlying graph is fully connected, neuron $i$ receives $n-1$ inputs $x_{j}$ ($j\neq i$). The inputs $x_{j}$ associated with neuron $i$ are assigned weights $w_{ij}\in\mathbb{R}$. In a Hopfield network, weights are assumed to be symmetric (i.e., $w_{ij}=w_{ji}$), and self-weights are considered to be absent (i.e., $w_{ii}=0$).
Figure 2: An example of a Hopfield network with $n=5$ neurons. Each neuron is connected to all other neurons through black edges, representing both inputs and outputs. (Adapted from Ref. 106.)
With these definitions in place, the activation of neuron $i$ is given by
$$
a_{i}=\sum_{j}w_{ij}x_{j}+b_{i}\,, \tag{2}
$$
where $b_{i}$ is the bias term of neuron $i$ .
Regarding connections between the Ising model and Hopfield networks, the states of artificial neurons align with the binary values employed in the Ising model to model the orientations of elementary magnets, often referred to as classical âspinsâ. The weights in a Hopfield network represent the potentially diverse couplings between different elementary magnets. Lastly, the bias terms assume the function of external magnetic fields that act on these elementary magnets. Given these structural similarities between the Ising model and Hopfield networks, it is natural to assign them the Ising-type energy function
$$
E=-\frac{1}{2}\sum_{i,j}w_{ij}x_{i}x_{j}-\sum_{i}b_{i}x_{i}\,. \tag{3}
$$
Hopfield networks are not just a collection of interconnected artificial neurons, but they are dynamical systems whose states $x_{i}$ evolve in discrete time according to
$$
x_{i}\leftarrow\begin{cases}1\,,&\text{if }a_{i}\geq 0\\
-1\,,&\text{otherwise}\,,\end{cases} \tag{4}
$$
where $a_{i}$ is the activation of neuron $i$ [see Eq. (2)].
For asynchronous updates in which the state of one neuron is updated at a time, the update rule (4) has an interesting property: Under this update rule, the energy $E$ [see Eq. (3)] of a Hopfield network never increases.
The proof of this statement is straightforward. The energy only changes when an update of $x_{i}$ causes it to change sign; otherwise, the energy stays constant. Consider the two cases (i) $a_{i}=\sum_{j}w_{ij}x_{j}+b_{i}<0$ with $x_{i}=1$, and (ii) $a_{i}\geq 0$ with $x_{i}=-1$. In the first case, the energy difference is
$$
\Delta E_{i}=E(x_{i}=-1)-E(x_{i}=1)=2\Bigl{(}b_{i}+\sum_{j}w_{ij}x_{j}\Bigr{)}=2a_{i}\,. \tag{5}
$$
Notice that $\Delta E_{i}<0$ because $a_{i}=\sum_{j}w_{ij}x_{j}+b_{i}<0$ .
In the second case, we have
$$
\Delta E_{i}=E(x_{i}=1)-E(x_{i}=-1)=-2\Bigl{(}b_{i}+\sum_{j}w_{ij}x_{j}\Bigr{)}=-2a_{i}\,. \tag{6}
$$
The energy difference satisfies $\Delta E_{i}\leq 0$ because $a_{i}\geq 0$ . In summary, we have shown that $\Delta E_{i}\leq 0$ for all changes of the state variable $x_{i}$ according to update rule (4).
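The monotonicity just proved can also be verified numerically. The following sketch (with randomly chosen symmetric weights, zero diagonal, and a random initial state) tracks the energy of Eq. (3) under asynchronous updates of rule (4):

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x, W, b):
    """Ising-type energy of Eq. (3)."""
    return -0.5 * x @ W @ x - b @ x

# Random symmetric weights with zero self-weights, random biases and state.
n = 20
W = rng.normal(size=(n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
b = rng.normal(size=n)
x = rng.choice([-1.0, 1.0], size=n)

# Asynchronous updates, Eq. (4): one randomly chosen neuron at a time.
energies = [energy(x, W, b)]
for _ in range(200):
    i = rng.integers(n)
    a = W[i] @ x + b[i]          # activation a_i of Eq. (2)
    x[i] = 1.0 if a >= 0 else -1.0
    energies.append(energy(x, W, b))

# The energy never increases along the trajectory.
assert all(e2 <= e1 + 1e-9 for e1, e2 in zip(energies, energies[1:]))
print(energies[0], energies[-1])
```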
Equations (5) and (6) also show that the energy difference associated with a change of sign in $x_{i}$ is $2a_{i}$ and $-2a_{i}$ for $a_{i}<0$ and $a_{i}\geq 0$ , respectively. Absorbing the bias term $b_{i}$ in the weights $w_{ij}$ by associating an extra active unit to every node in the network yields for the corresponding energy differences
$$
\Delta E_{i}=\pm 2\sum_{j}w_{ij}x_{j}\,. \tag{7}
$$
One potential application of Hopfield networks is to store and retrieve information in local minima by adjusting their weights. For instance, consider the task of storing $N$ binary (black and white) patterns in a Hopfield network. Let these $N$ patterns be represented by the set $\{p_{i}^{\nu}=\pm 1|1\leq i\leq n\}$ with $\nu\in\{1,\dots,N\}$ .
To learn the weights that represent our binary patterns, we can apply the Hebbian-type rule
$$
w_{ij}=w_{ji}=\frac{1}{N}\sum_{\nu=1}^{N}p_{i}^{\nu}p_{j}^{\nu}\,. \tag{8}
$$
In Hopfield networks with a large number of neurons, the capacity for storing and retrieving patterns is limited to approximately 14% of the total number of neurons. [107] However, it is possible to enhance this capacity through the use of modified versions of the learning rule (8). [108]
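As an illustration of rule (8) (a toy example of ours, with biases set to zero), one can store a few random patterns and check that update rule (4) leaves them essentially unchanged when $N\ll n$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 100, 3  # n neurons, N stored patterns (well below the ~0.14 n capacity)

patterns = rng.choice([-1.0, 1.0], size=(N, n))

# Hebbian rule of Eq. (8), with self-weights removed afterwards.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def update(x, W):
    """One synchronous sweep of update rule (4) with b_i = 0."""
    return np.where(W @ x >= 0, 1.0, -1.0)

# Fraction of neurons left unchanged by one sweep, for each stored pattern.
overlaps = [np.mean(update(p, W) == p) for p in patterns]
print(min(overlaps))  # close to 1 when N << n
```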
After applying the Hebbian learning rule to adjust the weights of a given Hopfield network, we may be interested in studying the evolution of different initial configurations $\{x_{i}\}$ according to update rule (4). A Hopfield network accurately represents a pattern $\{p_{i}^{\nu}=\pm 1|1\leq i\leq n\}$ if $x_{i}$ remains equal to $p_{i}^{\nu}$ both before and after an update for all $i$ (meaning $\{p_{i}^{\nu}\}$ is a fixed point of the dynamics). Depending on the number of stored patterns and the chosen initial configuration, the Hopfield network may converge to a local minimum that does not align with the desired pattern. To mitigate this behavior, one can employ a stochastic update rule instead of the deterministic rule (4). This is the topic of the following section.
### 3 Boltzmann machine learning
In Hopfield networks, if we start with an initial configuration that is close enough to a desired local energy minimum, we can recover the corresponding pattern $\{p_{i}^{\nu}\}$. However, for certain applications, relying solely on the deterministic update rule (4) may not be enough as it never accepts transitions that are associated with an increase in energy. In constraint satisfaction tasks, we often require learning algorithms to have the capability to occasionally accept transitions to configurations of higher energy and move the system under consideration away from local minima towards a global minimum. A method that is commonly used in this context is the M(RT)$^{2}$ algorithm introduced by Nicholas Metropolis, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller, and Edward Teller in their seminal 1953 paper "Equation of State Calculations by Fast Computing Machines". [109, 110] This algorithm became the basis of many optimization methods, such as simulated annealing. [111]
In the 1980s, the M(RT)$^{2}$ algorithm was adapted to equip Hopfield-type systems with a stochastic update rule, in which neuron $i$ is activated (set to $1$), regardless of its current state, with probability
$$
\sigma_{i}\equiv\sigma(\Delta E_{i}/T)=\sigma(2a_{i}/T)=\frac{1}{1+\exp(-\Delta E_{i}/T)}\,. \tag{9}
$$
Otherwise, it is set to $-1$. The corresponding ANNs were dubbed "Boltzmann machines". [65, 66, 67, 68]
Figure 3: The sigmoid function $\sigma_{i}\equiv\sigma(\Delta E_{i}/T)$ [see Eq. (9)] as function of $\Delta E_{i}$ for $T=0.5,1,2$ .
In Eq. (9), the quantity $\Delta E_{i}=2a_{i}$ is the energy difference between an inactive neuron $i$ and an active one [see Eq. (5)]. We use the convention employed in Refs. 67, 68. The authors of Ref. 66 use the convention that $\Delta E_{i}=-2a_{i}$ and $\sigma_{i}\equiv\sigma(\Delta E_{i}/T)=1/(1+\exp(\Delta E_{i}/T))$. The function $\sigma(x)=1/\left(1+\exp(-x)\right)$ is the sigmoid function, and the parameter $T$ serves as an equivalent of temperature. In the limit $T\rightarrow 0$, we recover the deterministic update rule (4). In Fig. 3, we show $\sigma(\Delta E_{i}/T)$ as a function of $\Delta E_{i}$ for $T=0.5,1,2$.
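A short numerical sketch (our own) of the stochastic rule (9), illustrating how the deterministic rule (4) is recovered as $T\rightarrow 0$:

```python
import numpy as np

def sigma(x):
    """Sigmoid function."""
    return 1.0 / (1.0 + np.exp(-x))

def activation_prob(a_i, T):
    """Probability of setting x_i to 1, Eq. (9) with Delta E_i = 2 a_i."""
    return sigma(2.0 * a_i / T)

a = 1.5
print(activation_prob(a, T=2.0))   # moderately biased towards activation
print(activation_prob(a, T=0.1))   # ~1: approaches deterministic rule (4)
print(activation_prob(-a, T=0.1))  # ~0: negative activation is rejected
```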
Examining Eqs. (3) and (9), we notice that we are simulating a system akin to the Ising model with Glauber (heat bath) dynamics. [112, 64] Because Glauber dynamics satisfy the detailed balance condition, Boltzmann machines will eventually reach thermal equilibrium. The corresponding probabilities $p_{\mathrm{eq}}(X)$ and $p_{\mathrm{eq}}(Y)$ for the ANN to be in states $X$ and $Y$, respectively, satisfy (setting the Boltzmann constant $k_{B}$ to 1)
$$
\frac{p_{\mathrm{eq}}(Y)}{p_{\mathrm{eq}}(X)}=\exp\left(-\frac{E(Y)-E(X)}{T}\right)\,. \tag{10}
$$
In other words, the Boltzmann distribution provides the relative probability $p_{\mathrm{eq}}(Y)/p_{\mathrm{eq}}(X)$ associated with the states $X$ and $Y$ of a âthermalizedâ Boltzmann machine. Regardless of the initial configuration, at a given temperature $T$ , the stochastic update rule in which neurons are activated with probability $\sigma_{i}$ always leads to a thermal equilibrium configuration that is solely determined by its energy.
#### 1 Boltzmann machines
Unlike Hopfield networks, Boltzmann machines have two types of nodes: visible units and hidden units. We denote the corresponding sets of visible and hidden units by $V$ and $H$ , respectively. Notice that the set $H$ may be empty. In Fig. 4, we show an example of a Boltzmann machine with five visible units and three hidden units.
During the training of a Boltzmann machine, the visible units are "clamped" to the environment, which means they are set to binary vectors drawn from an empirical distribution. Hidden units may be used to account for constraints involving more than two visible units.
Figure 4: Boltzmann machines consist of visible units (blue) and hidden units (red). In the shown example, there are five visible units $\left\{v_{i}\right\}$ $({i\in\{1,\dots,5\}})$ and three hidden units $\left\{h_{j}\right\}$ $(j\in\{1,2,3\})$ . Similar to Hopfield networks, the network architecture in a Boltzmann machine is complete.
We denote the probability distribution of all configurations of visible units $\{\nu\}$ in a freely running network as $P^{\prime}\left(\{\nu\}\right)$. Here, "freely running" means that no external inputs are fixed (or "clamped") to the visible units. We derive the distribution $P^{\prime}\left(\{\nu\}\right)$ by summing (i.e., marginalizing) over the corresponding joint probability distribution. That is,
$$
P^{\prime}\left(\{\nu\}\right)=\sum_{\{h\}}P^{\prime}\left(\{\nu\},\{h\}\right)\,, \tag{11}
$$
where the summation is performed over all possible configurations of hidden units $\{h\}$ .
Our objective is now to devise a method such that $P^{\prime}\left(\{\nu\}\right)$ converges to the unknown environment (i.e., data) distribution $P\left(\{\nu\}\right)$. To do so, we quantify the disparity between $P^{\prime}\left(\{\nu\}\right)$ and $P\left(\{\nu\}\right)$ using the Kullback–Leibler (KL) divergence (or relative entropy)
$$
G(P,P^{\prime})=\sum_{\{\nu\}}P\left(\{\nu\}\right)\ln\left[\frac{P\left(\{\nu\}\right)}{P^{\prime}\left(\{\nu\}\right)}\right]\,. \tag{12}
$$
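For discrete distributions, Eq. (12) is straightforward to evaluate. A minimal sketch with toy distributions of our own choosing:

```python
import numpy as np

def kl_divergence(P, Q):
    """G(P, P') of Eq. (12) for discrete distributions (natural log)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    mask = P > 0  # terms with P = 0 contribute zero
    return float(np.sum(P[mask] * np.log(P[mask] / Q[mask])))

P = np.array([0.5, 0.25, 0.25])  # "environment" (data) distribution
Q = np.array([0.4, 0.4, 0.2])    # freely running network's distribution

print(kl_divergence(P, P))       # 0: the distributions coincide
print(kl_divergence(P, Q) >= 0)  # True: KL divergence is non-negative
```

Note that $G(P,P^{\prime})$ is not symmetric in its arguments, which is why the gradient descent below targets $P^{\prime}$ while $P$ stays fixed.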
To minimize the KL divergence $G(P,P^{\prime})$ , we perform a gradient descent according to
$$
\frac{\partial G}{\partial w_{ij}}=-\frac{1}{T}\left(p_{ij}-p^{\prime}_{ij}\right)\,, \tag{13}
$$
where $p_{ij}$ represents the probability of both units $i$ and $j$ being active when the environment dictates the states of the visible units, and $p^{\prime}_{ij}$ is the corresponding probability in a freely running network without any connection to the environment. [66, 67, 68] We derive Eq. (13) in Sec. 3.
Both probabilities $p_{ij}$ and $p_{ij}^{\prime}$ are measured once the Boltzmann machine has reached thermal equilibrium. Subsequently, the weights $w_{ij}$ of the network are updated according to
$$
\Delta w_{ij}=\epsilon\left(p_{ij}-p^{\prime}_{ij}\right)\,, \tag{14}
$$
where $\epsilon$ is the learning rate. To reach thermal equilibrium, the states of the visible and hidden units are updated using the update probability (9).
In summary, the steps relevant for training a Boltzmann machine are as follows.

1. Clamp the input data (environment distribution) to the visible units.
2. Update the state of all hidden units according to Eq. (9) until the system reaches thermal equilibrium and then compute $p_{ij}$.
3. Unclamp the input data from the visible units.
4. Update the state of all neurons according to Eq. (9) until the system reaches thermal equilibrium and then compute $p_{ij}^{\prime}$.
5. Update all weights according to Eq. (14) and return to step 1, or stop if the weight updates are sufficiently small.

After training a Boltzmann machine, we can unclamp the visible units from the environment and generate samples to evaluate their quality. To do this, we use various initial configurations and activate neurons according to Eq. (9) until thermal equilibrium is reached. If the Boltzmann machine was trained successfully, the distribution of states of the unclamped visible units should align with the environment distribution.
#### 2 Restricted Boltzmann machines
Boltzmann machines are not widely used in general learning tasks. Their impracticality arises from the significant computational burden associated with achieving thermal equilibrium, especially in instances involving large system sizes.
Figure 5: Restricted Boltzmann machines consist of a visible layer (blue) and a hidden layer (red). In the shown example, the respective layers comprise six visible units $\left\{v_{i}\right\}$ $(i\in\{1,\dots,6\})$ and four hidden units $\left\{h_{j}\right\}$ $(j\in\{1,\dots,4\})$ . The network structure of an RBM is bipartite and undirected.
Restricted Boltzmann machines provide an ANN structure that can be trained more efficiently by omitting connections within the visible layer and within the hidden layer (see Fig. 5). Because of these missing intra-layer connections, the network architecture of an RBM is bipartite.
In RBMs, updates for visible and hidden units are performed alternately. Because there are no connections within each layer, we can update all units within each layer in parallel. Specifically, visible unit $v_{i}$ is activated with conditional probability
$$
p(v_{i}=1|\{h_{j}\})=\sigma\Bigl{(}\sum_{j}w_{ij}h_{j}+b_{i}\Bigr{)}\,, \tag{15}
$$
where $b_{i}$ is the bias of visible unit $v_{i}$ and $\{h_{j}\}$ is a given configuration of hidden units. We then activate all hidden units based on their conditional probabilities
$$
p(h_{j}=1|\{v_{i}\})=\sigma\Bigl{(}\sum_{i}w_{ij}v_{i}+c_{j}\Bigr{)}\,, \tag{16}
$$
where $h_{j}$ represents hidden unit $j$ , $c_{j}$ is its associated bias, and $\{v_{i}\}$ denotes a specific configuration of visible units. This technique of sampling is referred to as âblock Gibbs samplingâ.
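The alternating updates of Eqs. (15) and (16) can be sketched as follows (a minimal example of ours with random parameters; units take values in $\{0,1\}$, and the layer sizes match Fig. 5):

```python
import numpy as np

rng = np.random.default_rng(2)

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

# Small RBM with random parameters: 6 visible and 4 hidden units.
n_v, n_h = 6, 4
W = 0.1 * rng.normal(size=(n_v, n_h))
b = np.zeros(n_v)  # visible biases b_i
c = np.zeros(n_h)  # hidden biases c_j

def sample_hidden(v):
    """Eq. (16): all hidden units updated in parallel."""
    p = sigma(v @ W + c)
    return (rng.random(n_h) < p).astype(float)

def sample_visible(h):
    """Eq. (15): all visible units updated in parallel."""
    p = sigma(W @ h + b)
    return (rng.random(n_v) < p).astype(float)

# One block Gibbs step: v -> h -> v'.
v = rng.integers(0, 2, size=n_v).astype(float)
h = sample_hidden(v)
v_new = sample_visible(h)
print(v, h, v_new)
```

Because each layer is conditionally independent given the other, these parallel updates are exact, which is what makes block Gibbs sampling efficient for RBMs.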
Training an RBM shares similarities with training a BM. The key difference lies in the need to consider the bipartite network structure in the weight update equation (14). For an RBM, the weight updates are
$$
\Delta w_{ij}=\epsilon(\langle\nu_{i}h_{j}\rangle_{\text{data}}-\langle\nu_{i}h_{j}\rangle_{\text{model}})\,. \tag{17}
$$
Instead of sampling configurations to compute $\langle\nu_{i}h_{j}\rangle_{\text{data}}$ and $\langle\nu_{i}h_{j}\rangle_{\text{model}}$ at thermal equilibrium, we can employ only a few relaxation steps. This approach is known as "contrastive divergence". [70, 113] The corresponding weight updates are
$$
\Delta w_{ij}^{\text{CD}}=\epsilon(\langle\nu_{i}h_{j}\rangle_{\text{data}}-\langle\nu_{i}h_{j}\rangle_{\text{model}}^{k})\,. \tag{18}
$$
The superscript $k$ indicates the number of block Gibbs updates performed. For a more detailed discussion on the contrastive divergence training of RBMs, see Ref. 114.
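The CD-$k$ update of Eq. (18) can be sketched for a single data vector as follows (a simplification of ours: biases are omitted and one sample replaces the batch averages; in practice, the expectations are estimated over a mini-batch):

```python
import numpy as np

rng = np.random.default_rng(3)

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd_k_update(W, v_data, k=1, eps=0.1):
    """One contrastive-divergence weight update in the spirit of Eq. (18),
    for a bias-free RBM with weight matrix W of shape (n_v, n_h)."""
    n_v, n_h = W.shape
    # Positive phase: hidden probabilities given the clamped data.
    ph_data = sigma(v_data @ W)
    # Negative phase: k steps of block Gibbs sampling, starting from the data.
    v = v_data.copy()
    for _ in range(k):
        h = (rng.random(n_h) < sigma(v @ W)).astype(float)
        v = (rng.random(n_v) < sigma(W @ h)).astype(float)
    ph_model = sigma(v @ W)
    # <v h>_data - <v h>_model^k, here as outer products for one sample.
    return W + eps * (np.outer(v_data, ph_data) - np.outer(v, ph_model))

W = 0.01 * rng.normal(size=(6, 4))
v_data = np.array([1, 0, 1, 1, 0, 0], dtype=float)
W_new = cd_k_update(W, v_data, k=1)
print(W_new.shape)
```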
Figure 6: Snapshots of $32\times 32$ Ising configurations are shown for $T\in\{1.5,2.5,4\}$. These configurations are derived from both M(RT)$^2$ and RBM samples. The quantity $T_{c}=2/\ln(1+\sqrt{2})\approx 2.269$ is the critical temperature of the two-dimensional Ising model.
Restricted Boltzmann machines have been employed in diverse contexts, including dimensionality reduction of datasets [72], the study of phase transitions [73, 74, 3, 75], and the representation of wave functions [76, 77]. In Fig. 6, we show three snapshots of Ising configurations, generated using both M(RT)$^2$ sampling and RBMs, each comprising $32\times 32$ spins. The RBM was trained using $20\times 10^{4}$ realizations of Ising configurations sampled at various temperatures. [75]
#### 3 Derivation of the learning algorithm
To derive Eq. (13), we follow the approach outlined in Ref. 68. Notice that the environment distribution $P\left(\{\nu\}\right)$ does not depend on $w_{ij}$ . Hence, we have
$$
\frac{\partial G}{\partial w_{ij}}=-\sum_{\{\nu\}}\frac{P\left(\{\nu\}\right)}{P^{\prime}\left(\{\nu\}\right)}\frac{\partial P^{\prime}\left(\{\nu\}\right)}{\partial w_{ij}}\,. \tag{19}
$$
Next, we wish to compute the gradient $\partial P^{\prime}(\{\nu\})/\partial w_{ij}$ . In a freely running BM, the equilibrium distribution of the visible units follows a Boltzmann distribution. That is,
$$
P^{\prime}(\{\nu\})=\sum_{\{h\}}P^{\prime}\left(\{\nu\},\{h\}\right)=\frac{\sum_{\{h\}}e^{-E\left({\{\nu,h\}}\right)/T}}{\sum_{\{\nu,h\}}e^{-E\left({\{\nu,h\}}\right)/T}}\,. \tag{20}
$$
Here, the quantity
$$
E\left({\{\nu,h\}}\right)=-\frac{1}{2}\sum_{i,j}w_{ij}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}} \tag{21}
$$
is the Ising-type energy function in which field (or bias) terms are absorbed in the weights $w_{ij}$ [see Eqs. (3) and (7)]. For a BM in state ${\{\nu,h\}}$ , we denote the state of neuron $i$ by $x_{i}^{\{\nu,h\}}$ . Using
$$
\frac{\partial e^{-E\left({\{\nu,h\}}\right)/T}}{\partial w_{ij}}=\frac{1}{T}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}e^{-E\left({\{\nu,h\}}\right)/T} \tag{22}
$$
yields
$$
\displaystyle\begin{split}\frac{\partial P^{\prime}\left(\{\nu\}\right)}{\partial w_{ij}}&=\frac{\frac{1}{T}\sum_{\{h\}}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}e^{-E\left({\{\nu,h\}}\right)/T}}{\sum_{\{\nu,h\}}e^{-E\left({\{\nu,h\}}\right)/T}}\\
&-\frac{\sum_{\{h\}}e^{-E\left({\{\nu,h\}}\right)/T}\frac{1}{T}\sum_{\{\nu,h\}}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}e^{-E\left({\{\nu,h\}}\right)/T}}{\left(\sum_{\{\nu,h\}}e^{-E\left({\{\nu,h\}}\right)/T}\right)^{2}}\\
&=\frac{1}{T}\left[\sum_{\{h\}}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}P^{\prime}\left(\{\nu\},\{h\}\right)\right.\\
&\hskip 28.45274pt\left.-P^{\prime}(\{\nu\})\sum_{\{\nu,h\}}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}P^{\prime}\left(\{\nu\},\{h\}\right)\right]\,.\end{split} \tag{23}
$$
We will now substitute this result into Eq. (19) to obtain
$$
\displaystyle\begin{split}\frac{\partial G}{\partial w_{ij}}&=-\sum_{\{\nu\}}\frac{P\left(\{\nu\}\right)}{P^{\prime}\left(\{\nu\}\right)}\frac{1}{T}\left[\sum_{\{h\}}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}P^{\prime}\left(\{\nu\},\{h\}\right)\right.\\
&\hskip 91.04872pt\left.-P^{\prime}\left(\{\nu\}\right)\sum_{\{\nu,h\}}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}P^{\prime}\left(\{\nu\},\{h\}\right)\right]\\
&=-\frac{1}{T}\left[\sum_{\{\nu,h\}}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}P\left(\{\nu\},\{h\}\right)-\sum_{\{\nu,h\}}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}P^{\prime}\left(\{\nu\},\{h\}\right)\right]\,,\end{split} \tag{24}
$$
where we used that $\sum_{\{\nu\}}P\left(\{\nu\}\right)=1$ and $\left[P\left(\{\nu\}\right)/P^{\prime}\left(\{\nu\}\right)\right]P^{\prime}\left(\{\nu\},\{h\}\right)=P\left(\{\nu\},\{h\}\right)\,.$ The latter equation follows from the definition of joint probability distributions
$$
P\left(\{\nu\},\{h\}\right)=P\left(\{h\}|\{\nu\}\right)P\left(\{\nu\}\right) \tag{25}
$$
and
$$
P^{\prime}\left(\{\nu\},\{h\}\right)=P^{\prime}\left(\{h\}|\{\nu\}\right)P^{\prime}\left(\{\nu\}\right)\,, \tag{26}
$$
with $P\left(\{h\}|\{\nu\}\right)=P^{\prime}\left(\{h\}|\{\nu\}\right)$ . The conditional distributions $P(\{h\}|\{\nu\})$ and $P^{\prime}(\{h\}|\{\nu\})$ are identical, as the probability of observing a certain hidden state given a visible one is independent of the origin of the visible state in an equilibrated system. In other words, for the conditional distributions in an equilibrated system, it is irrelevant whether the visible state is provided by the environment or generated by a freely running machine. Defining
$$
p_{ij}\coloneqq\sum_{\{\nu,h\}}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}P\left(\{\nu\},\{h\}\right) \tag{27}
$$
and
$$
p_{ij}^{\prime}\coloneqq\sum_{\{\nu,h\}}x_{i}^{\{\nu,h\}}x_{j}^{\{\nu,h\}}P^{\prime}\left(\{\nu\},\{h\}\right)\,, \tag{28}
$$
we finally obtain Eq. (13).
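The result can be verified numerically on a Boltzmann machine small enough for exhaustive state enumeration. The sketch below checks $\partial G/\partial w_{ij}=-(p_{ij}-p^{\prime}_{ij})/T$ against a central finite difference; the network size, the randomly drawn environment distribution, and the function name are illustrative choices:

```python
import itertools
import numpy as np

def check_bm_gradient(T=1.0, n_vis=2, n_hid=1, seed=1):
    """Compare the analytic gradient -(p_ij - p'_ij)/T of the KL divergence G
    with a finite-difference estimate, on a tiny fully enumerable BM."""
    rng = np.random.default_rng(seed)
    n = n_vis + n_hid
    W = rng.normal(scale=0.5, size=(n, n))
    W = (W + W.T) / 2                    # symmetric couplings
    np.fill_diagonal(W, 0.0)

    # All binary states {0,1}^n; first n_vis components are the visible units.
    states = np.array(list(itertools.product([0, 1], repeat=n)), float)
    vis_idx = (states[:, :n_vis] @ (2 ** np.arange(n_vis))).astype(int)

    def model_joint(W):
        # Boltzmann distribution for E = -(1/2) sum_ij w_ij x_i x_j [Eq. (21)].
        E = -0.5 * np.einsum('si,ij,sj->s', states, W, states)
        p = np.exp(-E / T)
        return p / p.sum()

    # Arbitrary environment distribution over the visible states.
    P_env = rng.random(2 ** n_vis)
    P_env /= P_env.sum()

    def G(W):
        pj = model_joint(W)
        P_model = np.bincount(vis_idx, weights=pj, minlength=2 ** n_vis)
        return np.sum(P_env * np.log(P_env / P_model))

    # Joint P(v,h) = P'(h|v) P(v), i.e., the model joint reweighted by P/P'.
    pj = model_joint(W)
    P_model = np.bincount(vis_idx, weights=pj, minlength=2 ** n_vis)
    P_data_joint = pj * P_env[vis_idx] / P_model[vis_idx]

    p = np.einsum('s,si,sj->ij', P_data_joint, states, states)   # p_ij, Eq. (27)
    p_prime = np.einsum('s,si,sj->ij', pj, states, states)       # p'_ij, Eq. (28)
    analytic = -(p - p_prime) / T

    # Central difference, perturbing w_ij and w_ji together so that
    # dE/dw_ij = -x_i x_j, matching the convention of Eq. (22).
    i, j, eps = 0, 1, 1e-5
    Wp, Wm = W.copy(), W.copy()
    Wp[i, j] += eps; Wp[j, i] += eps
    Wm[i, j] -= eps; Wm[j, i] -= eps
    numeric = (G(Wp) - G(Wm)) / (2 * eps)
    return analytic[i, j], numeric
```

The two returned values agree to within the finite-difference truncation error, confirming the derivation on this toy instance.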
### 4 Loss landscapes of artificial neural networks
Training an ANN involves minimizing a given loss function such as the KL divergence [see Eq. (12)]. The loss landscapes of ANNs are affected by several factors, including structural properties [86, 87, 115] and a range of implementation attributes [116, 117, 118]. Knowledge of their precise effects on learning performance, however, remains incomplete.
One path to a better understanding of the relationships between ANN structure, implementation attributes, and learning performance is a more in-depth analysis of the geometric properties of loss landscapes. For instance, Keskar et al. [119] analyze the local curvature around candidate minimizers via the spectrum of the underlying Hessian to characterize the flatness and sharpness of loss minima, and Dinh et al. [120] demonstrate that reparameterizations can render flat minima sharp without affecting generalization properties. Even so, one challenge in the study of geometric properties of loss landscapes is their high dimensionality. To meet that challenge, several works propose visualizing the curvature around a given point by projecting the high-dimensional loss function onto a lower-dimensional (and often random) subspace of two or three dimensions [121, 122, 123, 124, 125]. Horoi et al. [125], building on this approach, pursue improvements in learning by dynamically sampling points in projected low-loss regions surrounding local minima during training.
However, visualizing high-dimensional loss landscape curvature relies on accurate projections of curvature properties to lower dimensions. Unfortunately, random projections do not preserve curvature information and thus do not afford accurate low-dimensional representations of it. This argument is given in Sec. 1 and illustrated with a simulation example in Sec. 2. Principal curvatures in a low-dimensional projection are nevertheless given by functions of weighted ensemble means of the Hessian elements in the original, high-dimensional space, a result also established in Sec. 1. Instead of using random projections to visualize loss functions, we propose to analyze projections along dominant Hessian directions associated with the largest-magnitude positive and negative principal curvatures.
#### 1 Principal curvature in random projections
To describe the connection between the principal curvature of a loss function $L(\theta)$ with ANN parameters $\theta\in\mathbb{R}^{N}$ and that associated with a lower-dimensional, random projection, we first provide an overview of concepts from differential geometry [126, 127, 128] that are useful to mathematically describe curvature in high-dimensional spaces. We then derive the relationship between principal curvature and random projections. Finally, we highlight a relationship between the resulting measures of curvature and Hessian trace estimates, which affords an alternative to Hutchinson's method for computing unbiased Hessian trace estimates.
Differential and information geometry concepts
In differential geometry, the principal curvatures are the eigenvalues of the shape operator (or Weingarten map; some authors distinguish between the shape operator and the Weingarten map depending on whether the change of the underlying tangent vector is described in the original manifold or in Euclidean space; see, e.g., chapter 3 in [129]), a linear endomorphism defined on the tangent space $T_{p}$ of $L$ at a point $p$ . For the original high-dimensional space, we have $(\theta,L(\theta))\subseteq\mathbb{R}^{N+1}$ and there are $N$ principal curvatures $\kappa_{1}^{\theta}\geq\kappa_{2}^{\theta}\geq\dots\geq\kappa_{N}^{\theta}$ . At a non-degenerate critical point $\theta^{*}$ where the gradient $\nabla_{\theta}L$ vanishes, the matrix of the shape operator is given by the Hessian $H_{\theta}$ with elements $(H_{\theta})_{ij}=\partial^{2}L/(\partial\theta_{i}\partial\theta_{j})$ ( $i,j\in\{1,\dots,N\}$ ). A critical point is degenerate if the Hessian $H_{\theta}$ at this point is singular (i.e., $\mathrm{det}(H_{\theta})=0$ ). At degenerate critical points, one cannot use the eigenvalues of $H_{\theta}$ to determine if the critical point is a minimum (positive-definite $H_{\theta}$ ) or a maximum (negative-definite $H_{\theta}$ ). Geometrically, at a degenerate critical point, a quadratic approximation fails to capture the local behavior of the function that one wishes to study. Some works refer to the Hessian as the "curvature matrix" [130] or use it to characterize the curvature properties of $L(\theta)$ [122]. In the vicinity of a non-degenerate critical point $\theta^{*}$ , the eigenvalues of the Hessian $H_{\theta}$ are the principal curvatures and describe the loss function in the eigenbasis of $H_{\theta}$ according to
$$
L(\theta^{*}+\Delta\theta)=L(\theta^{*})+\frac{1}{2}\sum_{i=1}^{N}\kappa_{i}^{\theta}\Delta\theta_{i}^{2}\,. \tag{29}
$$
The Morse lemma states that, if a critical point $\theta^{*}$ of $L(\theta)$ is non-degenerate, then there exists a chart $(\tilde{\theta}_{1},\dots,\tilde{\theta}_{N})$ in a neighborhood of $\theta^{*}$ such that
$$
L(\tilde{\theta})=-\tilde{\theta}^{2}_{1}-\cdots-\tilde{\theta}^{2}_{i}+\tilde{\theta}^{2}_{i+1}+\cdots+\tilde{\theta}^{2}_{N}+L(\theta^{*})\,, \tag{30}
$$
where $\tilde{\theta}_{i}(\theta^{*})=0$ for $i\in\{1,\dots,N\}$ . The loss function $L(\tilde{\theta})$ in Eq. (30) is decreasing along the first $i$ directions and increasing along the remaining $N-i$ directions ($i+1$ to $N$). Further, the index $i$ of a critical point $\theta^{*}$ is the number of negative eigenvalues of the Hessian $H_{\theta}$ at that point.
In the standard basis, the Hessian is
$$
H_{\theta}\coloneqq\nabla_{\theta}\nabla_{\theta}L(\theta)=\begin{pmatrix}\frac{\partial^{2}L}{\partial\theta_{1}^{2}}&\cdots&\frac{\partial^{2}L}{\partial\theta_{1}\partial\theta_{N}}\\
\vdots&\ddots&\vdots\\
\frac{\partial^{2}L}{\partial\theta_{N}\partial\theta_{1}}&\cdots&\frac{\partial^{2}L}{\partial\theta_{N}^{2}}\end{pmatrix}\,. \tag{31}
$$
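Where an analytic Hessian is unavailable, $H_{\theta}$ can be approximated entry-wise by central finite differences. The following generic sketch illustrates this; the helper name, step size, and test function are our own choices, not part of the text:

```python
import numpy as np

def numerical_hessian(L, theta, h=1e-4):
    """Central-difference approximation of the Hessian in Eq. (31).

    L     : callable mapping a parameter vector to a scalar loss
    theta : point at which the Hessian is evaluated
    h     : finite-difference step size
    """
    N = len(theta)
    H = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            def L_shift(si, sj):
                # Evaluate L with theta_i and theta_j shifted by si*h, sj*h.
                t = np.array(theta, dtype=float)
                t[i] += si * h
                t[j] += sj * h
                return L(t)
            H[i, j] = (L_shift(1, 1) - L_shift(1, -1)
                       - L_shift(-1, 1) + L_shift(-1, -1)) / (4.0 * h * h)
    return H
```

For a quadratic loss the scheme is exact up to floating-point roundoff; for general losses the error is $O(h^{2})$. The $O(N^{2})$ function evaluations make this practical only for small $N$, which is one motivation for the trace estimators discussed below.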
Random projections
To graphically explore an $N$ -dimensional loss function $L$ around a critical point $\theta^{*}$ , one may wish to work in a lower-dimensional representation. For example, a two-dimensional projection of $L$ around $\theta^{*}$ is provided by
$$
L(\theta^{*}+\alpha\eta+\beta\delta)\,, \tag{32}
$$
where the parameters $\alpha,\beta\in\mathbb{R}$ scale the directions $\eta,\delta\in\mathbb{R}^{N}$ . The corresponding graph representation is $(\alpha,\beta,L(\alpha,\beta))\subseteq\mathbb{R}^{3}$ .
In high-dimensional spaces, there exist vastly more almost-orthogonal directions than orthogonal ones. In fact, if the dimension of the space is large enough, random vectors will, with high probability, be close to orthogonal [131]. Following this result, many related works [122, 124, 125] use random Gaussian directions with independent and identically distributed vector elements $\eta_{i},\delta_{i}\sim\mathcal{N}(0,1)$ ( $i\in\{1,\dots,N\}$ ).
The scalar product of the random Gaussian vectors $\eta,\delta$ can be written as a scaled difference of two chi-squared distributed random variables because
$$
\sum_{i=1}^{N}\eta_{i}\delta_{i}=\sum_{i=1}^{N}\frac{1}{4}(\eta_{i}+\delta_{i})^{2}-\frac{1}{4}(\eta_{i}-\delta_{i})^{2}=\sum_{i=1}^{N}\frac{1}{2}X_{i}^{2}-\frac{1}{2}Y_{i}^{2}\,, \tag{33}
$$
where $X_{i},Y_{i}\sim\mathcal{N}(0,1)$ .
Notice that $\eta,\delta$ are almost orthogonal, which can be proved using a concentration inequality for chi-squared distributed random variables. For further details, see Ref. 105.
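The near-orthogonality of random Gaussian directions is easy to check numerically; the dimensions, trial count, and function name below are illustrative choices:

```python
import numpy as np

def mean_abs_cosine(N, trials=200, seed=0):
    """Average |cos(angle)| between independent pairs of i.i.d.
    standard-normal vectors in R^N."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(trials):
        eta, delta = rng.standard_normal((2, N))
        out.append(abs(eta @ delta)
                   / (np.linalg.norm(eta) * np.linalg.norm(delta)))
    return float(np.mean(out))
```

The average $|\cos\angle(\eta,\delta)|$ shrinks roughly like $1/\sqrt{N}$, so for $N$ in the range typical of ANN parameter counts, two random Gaussian directions are almost orthogonal with overwhelming probability.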
Principal curvature
With the form of random Gaussian projections in hand, we now analyze the principal curvatures in both the original and lower-dimensional spaces. The Hessian associated with the two-dimensional loss projection (32) is
$$
\displaystyle\begin{split}H_{\alpha,\beta}&=\begin{pmatrix}\frac{\partial^{2}L}{\partial\alpha^{2}}&\frac{\partial^{2}L}{\partial\alpha\partial\beta}\\
\frac{\partial^{2}L}{\partial\beta\partial\alpha}&\frac{\partial^{2}L}{\partial\beta^{2}}\\
\end{pmatrix}\\
&=\begin{pmatrix}\sum_{i,j}\eta_{i}\eta_{j}\frac{\partial^{2}L}{\partial\theta_{i}\partial\theta_{j}}&\sum_{i,j}\eta_{i}\delta_{j}\frac{\partial^{2}L}{\partial\theta_{i}\partial\theta_{j}}\\
\sum_{i,j}\eta_{i}\delta_{j}\frac{\partial^{2}L}{\partial\theta_{i}\partial\theta_{j}}&\sum_{i,j}\delta_{i}\delta_{j}\frac{\partial^{2}L}{\partial\theta_{i}\partial\theta_{j}}\\
\end{pmatrix}\\
&=\begin{pmatrix}(H_{\theta})_{ij}\eta^{i}\eta^{j}&(H_{\theta})_{ij}\eta^{i}\delta^{j}\\
(H_{\theta})_{ij}\eta^{i}\delta^{j}&(H_{\theta})_{ij}\delta^{i}\delta^{j}\\
\end{pmatrix}\,,\end{split} \tag{34}
$$
where we use Einstein notation in the last equality.
Because the elements of $\delta,\eta$ are distributed according to a standard normal distribution, the second derivatives of the loss function $L$ in Eq. (34) have prefactors that are products of standard normal variables and can hence be expressed as differences of chi-squared distributed random variables as in Eq. (33). To summarize, the elements of $H_{\alpha,\beta}$ are sums of second derivatives of $L$ in the original space, weighted with prefactors built from chi-squared distributed random variables.
The principal curvatures $\kappa_{\pm}^{\alpha,\beta}$ (i.e., the eigenvalues of $H_{\alpha,\beta}$ ) are
$$
\kappa_{\pm}^{\alpha,\beta}=\frac{1}{2}\left(A+C\pm\sqrt{4B^{2}+(A-C)^{2}}\right)\,, \tag{35}
$$
where $A=(H_{\theta})_{ij}\eta^{i}\eta^{j}$ , $B=(H_{\theta})_{ij}\eta^{i}\delta^{j}$ , and $C=(H_{\theta})_{ij}\delta^{i}\delta^{j}$ . To the best of our knowledge, a closed, analytic expression for the distribution of the quantities $A,B,C$ is not yet known [132, 133, 134, 135].
Returning to principal curvature, since $\sum_{i,j}a_{ij}\eta^{i}\eta^{j}=\sum_{i}a_{ii}\eta^{i}\eta^{i}+\sum_{i\neq j}a_{ij}\eta^{i}\eta^{j}$ ( $a_{ij}\in\mathbb{R}$ ), we find that $\mathbb{E}[A]=\mathbb{E}[C]={(H_{\theta})^{i}}_{i}$ and $\mathbb{E}[B]=0$ , where ${(H_{\theta})^{i}}_{i}\equiv\mathrm{tr}(H_{\theta})=\sum_{i=1}^{N}\kappa_{i}^{\theta}$ . The expected values of the quantities $A$ , $B$ , and $C$ correspond to ensemble means (43) in the limit $S\rightarrow\infty$ , where $S$ is the number of independent realizations of the underlying random variable. To show that the expected values of $a_{ij}\eta^{i}\eta^{j}$ ( $i\neq j$ ) or $a_{ij}\eta^{i}\delta^{j}$ vanish, one can either invoke independence of $\eta^{i},\eta^{j}$ ( $i\neq j$ ) and $\eta^{i},\delta^{j}$ or transform both products into corresponding differences of two chi-squared random variables with the same mean [see Eq. (33)].
Hence, the expected, dimension-reduced Hessian (34) is
$$
\mathbb{E}[H_{\alpha,\beta}]=\begin{pmatrix}{(H_{\theta})^{i}}_{i}&0\\
0&{(H_{\theta})^{i}}_{i}\\
\end{pmatrix}\,. \tag{36}
$$
The corresponding eigenvalue (or principal curvature) $\bar{\kappa}^{\alpha,\beta}$ is therefore given by the sum over all principal curvatures in the original space (i.e., $\bar{\kappa}^{\alpha,\beta}=\sum_{i=1}^{N}\kappa_{i}^{\theta}$ ). Hence, the value of the principal curvature $\bar{\kappa}^{\alpha,\beta}$ in the expected dimension-reduced space will be either positive (if the positive principal curvatures in the original space dominate), negative (if the negative principal curvatures dominate), or close to zero (if positive and negative principal curvatures cancel each other out). As a result, saddle points will not appear as such in the expected random projection.
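As a sanity check of Eq. (36), one can estimate $\mathbb{E}[H_{\alpha,\beta}]$ by Monte Carlo sampling of Gaussian direction pairs. The small stand-in Hessian, sample count, and function name below are arbitrary choices:

```python
import numpy as np

def projected_hessian_mean(H, S=40000, seed=0):
    """Monte Carlo estimate of E[H_{alpha,beta}] from Eq. (34), using S
    independent pairs of random Gaussian directions (eta, delta)."""
    rng = np.random.default_rng(seed)
    N = H.shape[0]
    eta = rng.standard_normal((S, N))
    delta = rng.standard_normal((S, N))
    A = np.einsum('si,ij,sj->s', eta, H, eta)      # (H_theta)_ij eta^i eta^j
    B = np.einsum('si,ij,sj->s', eta, H, delta)    # (H_theta)_ij eta^i delta^j
    C = np.einsum('si,ij,sj->s', delta, H, delta)  # (H_theta)_ij delta^i delta^j
    return np.array([[A.mean(), B.mean()],
                     [B.mean(), C.mean()]])
```

For any symmetric $H$, the sampled mean approaches $\mathrm{tr}(H)\,I_{2}$ as $S\to\infty$, in line with Eq. (36).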
In addition to the connection between $\bar{\kappa}^{\alpha,\beta}$ and the principal curvatures $\kappa_{i}^{\theta}$ , we now provide an overview of additional mathematical relations between different curvature measures that are useful to quantify curvature properties of high-dimensional loss functions and their two-dimensional random projections.
Table 1: Quantities to characterize curvature.

| Symbol | Definition |
| --- | --- |
| $H_{\theta}\in\mathbb{R}^{N\times N}$ | Hessian in the original loss space |
| $\kappa_{i}^{\theta}\in\mathbb{R}$ | principal curvatures in the original loss space, $i\in\{1,\dots,N\}$ (i.e., the eigenvalues of $H_{\theta}$) |
| $H_{\alpha,\beta}\in\mathbb{R}^{2\times 2}$ | Hessian in a two-dimensional projection of an $N$-dimensional loss function |
| $\kappa_{\pm}^{\alpha,\beta}\in\mathbb{R}$ | principal curvatures in a two-dimensional loss projection (i.e., the eigenvalues of $H_{\alpha,\beta}$) |
| $\bar{\kappa}^{\alpha,\beta}\in\mathbb{R}$ | principal curvature in the expected two-dimensional loss projection (i.e., the eigenvalue of $\mathbb{E}[H_{\alpha,\beta}]$) |
| $H\in\mathbb{R}$ | mean curvature (i.e., $\sum_{i=1}^{N}\kappa_{i}^{\theta}/N=\bar{\kappa}^{\alpha,\beta}/N$) |
Invoking Eq. (35), we can relate $\bar{\kappa}^{\alpha,\beta}$ to $\mathrm{tr}(H_{\theta})$ and $\kappa_{\pm}^{\alpha,\beta}$ . Because $\kappa_{+}^{\alpha,\beta}+\kappa_{-}^{\alpha,\beta}=A+C$ , we have
$$
\mathrm{tr}(H_{\theta})=\bar{\kappa}^{\alpha,\beta}=\sum_{i=1}^{N}\kappa_{i}^{\theta}=\frac{1}{2}\left(\mathbb{E}[\kappa_{-}^{\alpha,\beta}]+\mathbb{E}[\kappa_{+}^{\alpha,\beta}]\right)\,. \tag{37}
$$
The mean curvature $H$ in the original space is related to $\bar{\kappa}^{\alpha,\beta}$ via
$$
H=\frac{1}{N}\bar{\kappa}^{\alpha,\beta}=\frac{1}{N}\sum_{i=1}^{N}\kappa_{i}^{\theta}\,. \tag{38}
$$
We summarize the definitions of the employed Hessians and curvature measures in Tab. 1.
The appeal of random projections is that pairwise distances between points in a high-dimensional space can be nearly preserved by a lower-dimensional linear embedding, affording a low-dimensional representation of mean and variance information with minimal distortion [136]. The relationship between random Gaussian directions and principal curvature is less straightforward. Our results show that the principal curvatures $\kappa_{\pm}^{\alpha,\beta}$ in a two-dimensional loss projection are weighted averages of the Hessian elements $(H_{\theta})_{ij}$ in the original space, not weighted averages of the principal curvatures $\kappa_{i}^{\theta}$ as claimed by Ref. 122. Similar arguments apply to projections with dimension larger than 2.
Hessian trace estimates
Finally, we point to a connection between curvature measures and Hessian trace estimates. A common way of estimating $\mathrm{tr}(H_{\theta})$ without explicitly computing all eigenvalues of $H_{\theta}$ is based on Hutchinson's method [137] and random numerical linear algebra [138, 139]. The basic idea behind this approach is to (i) use a random vector $z\in\mathbb{R}^{N}$ with elements $z_{i}$ that are distributed according to a distribution function with zero mean and unit variance (e.g., a Rademacher distribution with $\Pr\left(z_{i}=\pm 1\right)=1/2$ ), and (ii) compute $z^{\top}H_{\theta}z$ , an unbiased estimator of $\mathrm{tr}(H_{\theta})$ . That is,
$$
\mathrm{tr}(H_{\theta})=\mathbb{E}[z^{\top}H_{\theta}z]\,. \tag{39}
$$
Recall that Eq. (37) shows that the principal curvature of the expected random loss projection, $\bar{\kappa}^{\alpha,\beta}$ , is equal to $\mathrm{tr}(H_{\theta})$ . Instead of estimating $\mathrm{tr}(H_{\theta})$ using Hutchinson's method (39), an alternative Hutchinson-type estimate of this quantity is provided by the mean of the expected values of $\kappa_{-}^{\alpha,\beta}$ and $\kappa_{+}^{\alpha,\beta}$ [see Eq. (37)].
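A small numerical comparison of the two estimators might look as follows; the matrix size, sample counts, and function names are illustrative assumptions:

```python
import numpy as np

def hutchinson_trace(H, S=4000, seed=0):
    """Hutchinson estimator: average of z^T H z over Rademacher vectors z
    [Eq. (39)]."""
    rng = np.random.default_rng(seed)
    Z = 2.0 * rng.integers(0, 2, size=(S, H.shape[0])) - 1.0  # z_i in {-1,+1}
    return np.einsum('si,ij,sj->s', Z, H, Z).mean()

def curvature_trace(H, S=4000, seed=0):
    """Curvature-based estimate: mean of the sampled principal curvatures
    kappa_+/- of random two-dimensional projections [Eqs. (35) and (37)]."""
    rng = np.random.default_rng(seed)
    N = H.shape[0]
    eta = rng.standard_normal((S, N))
    delta = rng.standard_normal((S, N))
    A = np.einsum('si,ij,sj->s', eta, H, eta)
    B = np.einsum('si,ij,sj->s', eta, H, delta)
    C = np.einsum('si,ij,sj->s', delta, H, delta)
    disc = np.sqrt(4 * B**2 + (A - C)**2)
    k_plus = 0.5 * (A + C + disc)        # kappa_+ of Eq. (35)
    k_minus = 0.5 * (A + C - disc)       # kappa_- of Eq. (35)
    return 0.5 * (k_plus.mean() + k_minus.mean())
```

Both estimators are unbiased for $\mathrm{tr}(H)$; their variances differ, with the Gaussian-projection variant generally noisier than the Rademacher-based Hutchinson estimator at equal sample counts.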
#### 2 Extracting curvature information
We now study two examples that help build intuition. In the first example, we study a critical point $\theta^{*}$ of an $N$ -dimensional loss function $L(\theta)$ for which (i) all principal curvatures have the same magnitude and (ii) the number of positive-curvature directions is equal to the number of negative-curvature directions. In this example, saddles can be correctly detected by ensemble means, but success depends on the averaging process used. In the second example, we use a loss function associated with an unequal number of negative and positive curvature directions. For the different curvature measures derived in Sec. 1, we find that random projections cannot identify the underlying saddle point. Finally, we use the two example loss functions to discuss how curvature-based Hessian trace estimates relate to those obtained with the original Hutchinson method.
Equal number of curvature directions
The loss function of our first example is
$$
L(\theta)=\frac{1}{2}\theta_{2n+1}\sum_{i=1}^{n}\left(\theta_{i}^{2}-\theta_{i+n}^{2}\right)\,,\quad n\in\mathbb{Z}_{+}\,, \tag{40}
$$
where we set $N=2n+1$ . A critical point $\theta^{*}$ of the loss function (40) satisfies
$$
(\nabla_{\theta}L)(\theta^{*})=\left(\begin{array}[]{c}\theta^{*}_{1}\theta^{*}_{2n+1}\\
\vdots\\
\theta^{*}_{n}\theta^{*}_{2n+1}\\
-\theta^{*}_{n+1}\theta^{*}_{2n+1}\\
\vdots\\
-\theta^{*}_{2n}\theta^{*}_{2n+1}\\
\frac{1}{2}\sum_{i=1}^{n}\left({\theta^{*}_{i}}^{2}-{\theta^{*}_{i+n}}^{2}\right)\end{array}\right)=0\,. \tag{41}
$$
The Hessian at the critical point $\theta^{*}=(\theta^{*}_{1},\dots,\theta^{*}_{2n},\theta^{*}_{2n+1})$ $=(0,\dots,0,1)$ is
$$
H_{\theta}=\mathrm{diag}(\underbrace{1,\dots,1}_{n~\mathrm{times}},\underbrace{-1,\dots,-1}_{n~\mathrm{times}},0)\,. \tag{42}
$$
Figure 7: Convergence of the ensemble mean (43) of Hessian elements and curvature measures as a function of the number of random projections $S$ . (a,c) The deviation of the ensemble means $\langle(H_{\alpha,\beta})_{ij}\rangle$ ( ${i,j\in\{1,2\}}$ ) of Hessian elements from the corresponding expected values as a function of $S$ . Notice that the expected value of the diagonal elements $(H_{\alpha,\beta})_{11}$ and $(H_{\alpha,\beta})_{22}$ is equal to $\bar{\kappa}^{\alpha,\beta}$ (i.e., to the sum of principal curvatures in the original space) [see Eqs. (36) and (37)]. A relatively large number of random projections between $10^{3}$ and $10^{4}$ is required to keep the deviations at values smaller than about 2–4. (b,d) The ensemble means $\langle\kappa^{\alpha,\beta}_{\pm}\rangle$ [see Eq. (35)] and $\langle\tilde{\kappa}^{\alpha,\beta}_{\pm}\rangle$ [see Eq. (44)] as a function of $S$ . Dashed grey lines represent $\bar{\kappa}^{\alpha,\beta}=\mathrm{tr}(H_{\theta})$ . In panels (a,b) and (c,d), the $N$ -dimensional loss functions are given by Eqs. (40) and (45), respectively. We evaluate the corresponding Hessians (42) and (46) at the saddle point $\theta^{*}=(\theta^{*}_{1},\dots,\theta^{*}_{2n},\theta^{*}_{2n+1})=(0,\dots,0,1)$ . In both loss functions, we set $n=500$ and in loss function (45) we set $\tilde{n}=800$ .
Because $H_{\theta}$ has positive and negative eigenvalues, the critical point is a saddle. The corresponding principal curvatures are $\kappa_{i}^{\theta}\in\{-1,1\}$ ( $i\in\{1,\dots,N-1\}$ ) and $\kappa^{\theta}_{N}=0$ . In this example, the mean curvature $H$ , as defined in Eq. (38), is equal to 0. According to Eq. (36), the principal curvature $\bar{\kappa}^{\alpha,\beta}$ associated with the expected, dimension-reduced Hessian $H_{\alpha,\beta}$ is also equal to 0, erroneously indicating a flat loss landscape if one were to use $\bar{\kappa}^{\alpha,\beta}$ as the main measure of curvature. To compare the convergence of different curvature measures as a function of the number of loss projections $S$ , we now study the ensemble mean
$$
\langle X\rangle=\frac{1}{S}\sum_{k=1}^{S}X^{(k)} \tag{43}
$$
of different quantities of interest $X$ such as Hessian elements and principal curvature measures in dimension-reduced space. Here, $X^{(k)}$ is the $k$ -th realization (or sample) of $X$ .
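As a minimal numpy sketch of this procedure (the diagonal form of the Hessian (42) follows the description above; sample size and seed are our own illustrative choices), one can estimate the ensemble mean (43) of the diagonal element $(H_{\alpha,\beta})_{11}=\eta^{\top}H_{\theta}\eta$ by sampling standard-normal directions $\eta$; by Eq. (36), it converges to $\mathrm{tr}(H_{\theta})$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
N = 2 * n + 1
# Hessian (42): n eigenvalues +1, n eigenvalues -1, one zero eigenvalue
H_diag = np.concatenate([np.ones(n), -np.ones(n), [0.0]])

S = 4000
samples = np.empty(S)
for k in range(S):
    eta = rng.standard_normal(N)       # random projection direction
    samples[k] = eta @ (H_diag * eta)  # (H_{alpha,beta})_{11} = eta^T H_theta eta

# the ensemble mean (43) converges to E[eta^T H_theta eta] = tr(H_theta) = 0
print(np.mean(samples))
```

Individual samples fluctuate strongly (their standard deviation is of order $\sqrt{2\,\mathrm{tr}(H_{\theta}^{2})}\approx 45$ here), which is why the text observes that thousands of projections are needed before the ensemble mean settles near its expected value.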
Figure 8: Distribution of principal curvatures $\kappa_{-}^{\alpha,\beta}$ (red bars) and $\kappa_{+}^{\alpha,\beta}$ (black bars). In panels (a) and (b), the loss functions are given by Eqs. (40) and (45), respectively. We evaluate the corresponding Hessians (42) and (46) at the saddle point $\theta^{*}=(\theta^{*}_{1},\dots,\theta^{*}_{2n},\theta^{*}_{2n+1})=(0,\dots,0,1)$ . In both loss functions, we set $n=500$ and in loss function (45) we set $\tilde{n}=800$ . In panel (a), the probability $\Pr(\kappa_{+}^{\alpha,\beta}\kappa_{-}^{\alpha,\beta}>0)$ that the critical point in the lower-dimensional, random projection does not appear as a saddle is about 0.3, and it is 1 in panel (b). Histograms are based on 10,000 random projections used to compute $\kappa_{\pm}^{\alpha,\beta}$ . Solid grey lines are Gaussian approximations of the empirical distributions.
We first study the dependence of ensemble means $\langle(H_{\alpha,\beta})_{ij}\rangle$ ( $i,j\in\{1,2\}$ ) of elements of the dimension-reduced Hessian $H_{\alpha,\beta}$ on the number of samples $S$ . According to Eq. (36), the diagonal elements of $\mathds{E}[H_{\alpha,\beta}]$ are proportional to the mean curvature of the original high-dimensional space and are thus useful to examine curvature properties of high-dimensional loss functions. Figure 7 (a) shows the convergence of the ensemble means $\langle(H_{\alpha,\beta})_{ij}\rangle$ toward the expected values $\mathds{E}[(H_{\alpha,\beta})_{ij}]$ as a function of $S$ . Note that $\mathds{E}[(H_{\alpha,\beta})_{ij}]=0$ for all $i,j$ . For a few dozen loss projections, the deviations of some of the ensemble means from the corresponding expected values reach values larger than 20. A relatively large number of loss projections $S$ between $10^{3}$ and $10^{4}$ is required to keep these deviations at values that are smaller than about 2–4. The solid black and red lines in Fig. 7 (b), respectively, show the ensemble means $\langle\kappa_{\pm}^{\alpha,\beta}\rangle$ and
$$
\langle\tilde{\kappa}_{\pm}^{\alpha,\beta}\rangle=\frac{1}{2}\left(\langle A\rangle+\langle C\rangle\pm\sqrt{4\langle B\rangle^{2}+(\langle A\rangle-\langle C\rangle)^{2}}\right) \tag{44}
$$
as a function of $S$ . Since $\mathds{E}[A]=\mathds{E}[C]={(H_{\theta})^{i}}_{i}$ and $\mathds{E}[B]=0$ [see Eq. (36)], we have that $\langle\tilde{\kappa}_{\pm}^{\alpha,\beta}\rangle=\bar{\kappa}^{\alpha,\beta}$ in the limit $S\rightarrow\infty$ . In the current example, the ensemble means $\langle\tilde{\kappa}_{\pm}^{\alpha,\beta}\rangle$ thus approach $\bar{\kappa}^{\alpha,\beta}=0$ for large numbers of samples $S$ , represented by the dashed red lines in Fig. 7 (b). The ensemble means $\langle\kappa_{\pm}^{\alpha,\beta}\rangle$ converge towards values of opposite sign, indicating a saddle point.
For a sample size of $S=10^{4}$ , we show the distribution of the principal curvatures $\kappa_{\pm}^{\alpha,\beta}$ in Fig. 8 (a). We observe that the distributions are plausibly Gaussian. We also calculate the probability $\Pr(\kappa_{+}^{\alpha,\beta}\kappa_{-}^{\alpha,\beta}>0)$ that the critical point in the lower-dimensional, random projection does not appear as a saddle (i.e., $\kappa_{+}^{\alpha,\beta}\kappa_{-}^{\alpha,\beta}>0$ ). For the example shown in Fig. 8 (a), we find that $\Pr(\kappa_{+}^{\alpha,\beta}\kappa_{-}^{\alpha,\beta}>0)\approx 0.3$ . That is, in about 30% of the simulated projections, the lower-dimensional loss landscape wrongly indicates that it does not correspond to a saddle.
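This probability can be reproduced with a short Monte Carlo sketch (our own numpy illustration; we sample the projected $2\times 2$ Hessian directly from the diagonal Hessian (42) instead of fitting the loss, and the sample size and seed are arbitrary choices). The principal curvatures follow the same eigenvalue formula as Eq. (44):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
N = 2 * n + 1
H_diag = np.concatenate([np.ones(n), -np.ones(n), [0.0]])  # Hessian (42)

S = 2000
same_sign = 0
for _ in range(S):
    eta = rng.standard_normal(N)
    delta = rng.standard_normal(N)
    A = eta @ (H_diag * eta)       # elements of the 2x2 projected Hessian
    B = eta @ (H_diag * delta)
    C = delta @ (H_diag * delta)
    # principal curvatures of the dimension-reduced landscape [Eq. (35)]
    root = np.sqrt(4.0 * B**2 + (A - C) ** 2)
    kp, km = 0.5 * (A + C + root), 0.5 * (A + C - root)
    same_sign += kp * km > 0       # both curvatures share a sign: saddle hidden

print(same_sign / S)  # close to the reported value of about 0.3
```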
Our first example, which is based on the loss function (40), shows that the principal curvatures in the lower-dimensional representation of $L(\theta)$ may capture the saddle behavior in the original space if one computes ensemble means $\langle\kappa_{\pm}^{\alpha,\beta}\rangle$ in the lower-dimensional space [see Fig. 7 (b)]. However, if one first calculates ensemble means of the elements of the dimension-reduced Hessian $H_{\alpha,\beta}$ to infer $\langle\tilde{\kappa}_{\pm}^{\alpha,\beta}\rangle$ , the loss landscape appears to be flat in this example. We thus conclude that different ways of computing ensemble means (either before or after calculating the principal curvatures) may lead to different results with respect to the “flatness” of a dimension-reduced loss landscape.
Unequal number of curvature directions
In the next example, we will show that random projections cannot identify certain saddle points regardless of the underlying averaging process. We consider the loss function
$$
L(\theta)=\frac{1}{2}\theta_{2n+1}\left(\sum_{i=1}^{\tilde{n}}\theta_{i}^{2}-\sum_{i=\tilde{n}+1}^{2n}\theta_{i}^{2}\right)\,,\quad n\in\mathbb{Z}_{+}\,,n<\tilde{n}\leq 2n\,, \tag{45}
$$
where we use the convention $\sum_{i=a}^{b}(\cdot)=0$ if $a>b$ . The Hessian at the critical point $(\theta^{*}_{1},\dots,\theta^{*}_{2n},\theta^{*}_{2n+1})=(0,\dots,0,1)$ is
$$
H_{\theta}=\mathrm{diag}(\underbrace{1,\dots,1}_{\tilde{n}~\mathrm{times}},\underbrace{-1,\dots,-1}_{2n-\tilde{n}~\mathrm{times}},0)\,. \tag{46}
$$
As in the previous example, the critical point is again a saddle, but the mean curvature is ${H=2(\tilde{n}-n)/N>0}$ . In the following numerical experiments, we set $n=500$ and $\tilde{n}=800$ . Figure 7 (c) shows that the ensemble means $\langle(H_{\alpha,\beta})_{ij}\rangle$ converge towards the expected value $\mathds{E}[(H_{\alpha,\beta})_{ij}]$ as the number of samples increases. We again observe that a relatively large number of random loss projections $S$ between $10^{3}$ and $10^{4}$ is required to keep the deviations of ensemble means from their corresponding expected values small. Because of the dominance of positive principal curvatures $\kappa_{i}^{\theta}$ in the original space, the corresponding ensemble means of principal curvatures (i.e., $\langle\kappa_{\pm}^{\alpha,\beta}\rangle$ , $\langle\tilde{\kappa}_{\pm}^{\alpha,\beta}\rangle$ ) in the lower-dimensional representation approach positive values [see Fig. 7 (d)]. The distribution of $\kappa_{\pm}^{\alpha,\beta}$ indicates that the probability of observing a saddle in the lower-dimensional loss landscape is vanishingly small [see Fig. 8 (b)]. In this second example, both ways of computing ensemble means, before and after calculating the lower-dimensional principal curvatures, mistakenly suggest that the saddle in the original space is a minimum in dimension-reduced space.
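The dominance of positive curvatures in this second example can be checked numerically (a hypothetical numpy sketch of our own, again sampling the projected $2\times 2$ Hessian directly from the diagonal Hessian (46); sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, nt = 500, 800
N = 2 * n + 1
# Hessian (46): nt eigenvalues +1, 2n - nt eigenvalues -1, one zero
H_diag = np.concatenate([np.ones(nt), -np.ones(2 * n - nt), [0.0]])

S = 2000
kp_vals, km_vals = np.empty(S), np.empty(S)
for k in range(S):
    eta, delta = rng.standard_normal(N), rng.standard_normal(N)
    A = eta @ (H_diag * eta)
    B = eta @ (H_diag * delta)
    C = delta @ (H_diag * delta)
    root = np.sqrt(4.0 * B**2 + (A - C) ** 2)
    kp_vals[k], km_vals[k] = 0.5 * (A + C + root), 0.5 * (A + C - root)

saddle_frac = np.mean(kp_vals * km_vals < 0)  # fraction of projections showing a saddle
print(np.mean(kp_vals), np.mean(km_vals), saddle_frac)
```

Both ensemble means come out positive and the saddle fraction is essentially zero, consistent with the apparent minimum discussed in the text.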
To summarize, for both loss functions (40) and (45), the saddle point $\theta^{*}=(0,\dots,0,1)$ in the original loss function $L(\theta)$ is often misrepresented in lower-dimensional representations $L(\theta+\alpha\eta+\beta\delta)$ if random directions are used. Depending on (i) the employed curvature measure and (ii) the index of the underlying Hessian $H_{\theta}$ in the original space, the saddle $\theta^{*}=(0,\dots,0,1)$ may appear erroneously as either a minimum, maximum, or an almost flat region.
If the critical point were a minimum or maximum (i.e., a critical point associated with a positive definite or negative definite Hessian $H_{\theta}$ ), it would be correctly represented in the corresponding expected random projection because the sign of its principal curvature $\bar{\kappa}^{\alpha,\beta}$ , which is proportional to the sum of all eigenvalues $\kappa_{i}^{\theta}$ of the Hessian $H_{\theta}$ in the original loss space, would be equal to the sign of the principal curvatures $\kappa_{i}^{\theta}$ . However, such points are scarce in high-dimensional loss spaces [140, 141].
Finally, because of the confounding factors associated with the inability of random projections to correctly identify saddle information, it does not appear advisable to use “flatness” around a critical point in a lower-dimensional random loss projection as a measure of generalization error [122].
Curvature-based Hessian trace estimation
Figure 9: Estimating the trace of the Hessian $H_{\theta}$ . In panels (a) and (b), the loss functions are given by Eqs. (40) and (45), respectively. We evaluate the corresponding Hessians (42) and (46) at the saddle point $\theta^{*}=(\theta^{*}_{1},\dots,\theta^{*}_{2n},\theta^{*}_{2n+1})=(0,\dots,0,1)$ . In both loss functions, we set $n=500$ and in loss function (45) we set $\tilde{n}=800$ . Solid black and dash-dotted red lines represent Hutchinson [ $\langle z^{\top}H_{\theta}z\rangle$ ; see Eq. (39)] and curvature-based ( $\langle\kappa^{\alpha}\rangle$ ) estimates of $\mathrm{tr}(H_{\theta})$ , respectively. We compute ensemble means $\langle\cdot\rangle$ as defined in Eq. (43) for different numbers of random projections $S$ . The trace estimates in panels (a) and (b), respectively, converge towards the true trace values $\mathrm{tr}(H_{\theta})=0$ and $\mathrm{tr}(H_{\theta})=600$ that are indicated by dashed grey lines. In both methods, the same random vectors with elements that are distributed according to a standard normal distribution are used. For the curvature-based estimation of $\mathrm{tr}(H_{\theta})$ , we perform least-squares fits of $L(\theta^{*}+\alpha\eta)$ over an interval $\alpha\in[-0.05,0.05]$ .
In accordance with Sec. 1, we now use the loss functions (40) and (45) to compare the convergence behavior between the original Hutchinson method (39) and the curvature-based trace estimation (37). Instead of two random directions, we use one random Gaussian direction $\eta$ and perform least-squares fits for 50 equidistant values of $\alpha$ in the interval $[-0.05,0.05]$ to extract estimates of $\mathrm{tr}(H_{\theta})$ from $L(\theta^{*}+\alpha\eta)$ . We use the same random Gaussian directions in Hutchinson's method.
Figure 9 shows how the Hutchinson and curvature-based trace estimates converge towards the true trace values, 0 for the loss function (40) and 600 for the loss function (45) with $n=500$ and $\tilde{n}=800$ . Given that we use the same random vectors in both methods, their convergence behavior towards the true trace value is similar.
With the curvature-based method, one can produce Hutchinson-type trace estimates without computing Hessian-vector products. However, it requires the user to specify an appropriate interval for $\alpha$ so that the mean curvature can be properly estimated in a quadratic-approximation regime. It also requires a sufficiently large number of points in this interval. Therefore, it may be less accurate than the original Hutchinson method.
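The curvature-based procedure described above can be sketched as follows (our own numpy implementation, using the loss (45) and the same fit interval and number of fit points stated in the text; the sample count and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, nt = 500, 800
N = 2 * n + 1

def loss(theta):  # loss function (45)
    return 0.5 * theta[-1] * (np.sum(theta[:nt] ** 2)
                              - np.sum(theta[nt:2 * n] ** 2))

theta_star = np.zeros(N)
theta_star[-1] = 1.0
alphas = np.linspace(-0.05, 0.05, 50)    # fit interval from the text

S = 400
estimates = np.empty(S)
for k in range(S):
    eta = rng.standard_normal(N)
    vals = np.array([loss(theta_star + a * eta) for a in alphas])
    c2 = np.polyfit(alphas, vals, 2)[0]  # quadratic coefficient of the fit
    estimates[k] = 2.0 * c2              # curvature eta^T H_theta eta along eta

print(np.mean(estimates))  # converges to tr(H_theta) = 600
```

Because the fit interval is symmetric about 0, the cubic term of the loss along $\alpha$ does not bias the quadratic coefficient, and averaging $\eta^{\top}H_{\theta}\eta$ over standard-normal directions yields a Hutchinson-type estimate of $\mathrm{tr}(H_{\theta})$ without any Hessian-vector products.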
#### 3 Hessian directions
Figure 10: Dimensionality-reduced loss $L(\theta^{*}+\alpha\eta+\beta\delta)$ of Eq. (45) with $n=900,\tilde{n}=1000$ for different directions $\eta,\delta$ . (a–c) The directions $\eta,\delta$ correspond to eigenvectors of the Hessian $H_{\theta}$ of Eq. (45). If the eigenvalues associated with $\eta,\delta$ have different signs, the corresponding loss landscape is a saddle as depicted in panel (a). If the eigenvalues associated with $\eta,\delta$ have the same sign, the corresponding loss landscape is either a minimum (both signs are positive) as shown in panel (b) or a maximum (both signs are negative) as shown in panel (c). Because there is an excess of $\tilde{n}-n=100$ positive eigenvalues in $H_{\theta}$ , a projection onto a dimension-reduced space that is spanned by the random directions $\eta,\delta$ is often associated with an apparently convex loss landscape. An example of such an apparent minimum is shown in panel (d).
Given the described shortcomings of random projections incorrectly identifying saddles of high-dimensional loss functions, we suggest using Hessian directions (i.e., the eigenbasis of $H_{\theta}$ ) as directions $\eta,\delta$ in $L(\theta^{*}+\alpha\eta+\beta\delta)$ .
For Eq. (45) with $n=900,\tilde{n}=1000$ , we show projections along different Hessian directions in Fig. 10 (a–c). We observe that different Hessian directions indicate different types of critical points in dimension-reduced space. If the eigenvalues associated with the Hessian directions $\eta,\delta$ have different signs, the corresponding lower-dimensional loss landscape is a saddle [see Fig. 10 (a)]. If both eigenvalues have the same sign, the loss landscape is either a minimum [see Fig. 10 (b): both signs are positive] or it is a maximum [see Fig. 10 (c): both signs are negative]. If one uses a random projection instead, the resulting lower-dimensional loss landscape often appears to be a minimum in this example [see Fig. 10 (d)]. To quantify the proportion of random projections that correctly identify the saddle with $n=900,\tilde{n}=1000$ , we generated 10,000 realizations of $\kappa_{\pm}^{\alpha,\beta}$ [see Eq. (35)]. We find that the signs of $\kappa_{\pm}^{\alpha,\beta}$ were different in only about 0.5% of all simulated realizations. That is, in this example the principal curvatures $\kappa_{\pm}^{\alpha,\beta}$ indicate a saddle in only about 0.5% of the studied projections.
In accordance with related works that use Hessian information [142, 143], our proposal uses Hessian-vector products (HVPs). We first compute the largest-magnitude eigenvalue and then use an annihilation method [144] to compute the second largest-magnitude eigenvalue of opposite sign (see Algorithm 1). The corresponding eigenvectors are the dominant Hessian directions. Other deflation techniques can be used to find additional Hessian directions.
We employ HVPs to compute Hessian directions without an explicit representation of $H_{\theta}$ using the identity
$$
\nabla_{\theta}[(\nabla_{\theta}L)^{\top}v]=(\nabla_{\theta}\nabla_{\theta}L)v+(\nabla_{\theta}L)^{\top}\nabla_{\theta}v=H_{\theta}v\,. \tag{47}
$$
In the first step, the gradient $(\nabla_{\theta}L)^{\top}$ is computed using reverse-mode autodifferentiation (AD) to then compute the scalar product $[(\nabla_{\theta}L)^{\top}v]$ . In the second step, we again apply reverse-mode AD to the computational graph associated with the scalar product $[(\nabla_{\theta}L)^{\top}v]$ . Because the vector $v$ does not depend on $\theta$ (i.e., $\nabla_{\theta}v=0$ ), the result is $H_{\theta}v$ . One may also use forward-mode AD in the second step to provide a more memory efficient implementation.
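Identity (47) can be sanity-checked numerically. The sketch below (our own) replaces the reverse-mode AD step with a central-difference surrogate applied to the analytic gradient of loss (45); the step size `eps` is an arbitrary choice, and the result is compared against the exact Hessian-vector product from Eq. (46):

```python
import numpy as np

n, nt = 500, 800
N = 2 * n + 1

def grad(theta):  # analytic gradient of loss (45)
    g = np.empty(N)
    g[:nt] = theta[-1] * theta[:nt]
    g[nt:2 * n] = -theta[-1] * theta[nt:2 * n]
    g[-1] = 0.5 * (np.sum(theta[:nt] ** 2) - np.sum(theta[nt:2 * n] ** 2))
    return g

def hvp(theta, v, eps=1e-6):
    # central-difference surrogate for the AD identity (47): returns H_theta v
    return (grad(theta + eps * v) - grad(theta - eps * v)) / (2.0 * eps)

theta_star = np.zeros(N)
theta_star[-1] = 1.0
H_diag = np.concatenate([np.ones(nt), -np.ones(2 * n - nt), [0.0]])  # Hessian (46)

rng = np.random.default_rng(4)
v = rng.standard_normal(N)
err = np.max(np.abs(hvp(theta_star, v) - H_diag * v))
print(err)  # agrees with the exact Hessian-vector product up to rounding
```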
Algorithm 1 Compute dominant Hessian directions
1: $L_{1}$ = LinearOperator((N, N), matvec=hvp) ⊳ initialize linear operator for HVP calculation
2: eigval1, eigvec1 = solve_lm_evp($L_{1}$) ⊳ largest-magnitude eigenvalue and corresponding eigenvector of $L_{1}$
3: shifted_hvp(vec) = hvp(vec) - eigval1*vec ⊳ define shifted HVP
4: $L_{2}$ = LinearOperator((N, N), matvec=shifted_hvp) ⊳ initialize linear operator for shifted HVP calculation
5: eigval2, eigvec2 = solve_lm_evp($L_{2}$) ⊳ largest-magnitude eigenvalue and corresponding eigenvector of $L_{2}$
6: eigval2 += eigval1 ⊳ undo the spectral shift
7: if eigval1 $\geq$ 0 then
8: maxeigval, maxeigvec = eigval1, eigvec1
9: mineigval, mineigvec = eigval2, eigvec2
10: else
11: maxeigval, maxeigvec = eigval2, eigvec2
12: mineigval, mineigvec = eigval1, eigvec1
13: end if
14: return maxeigval, maxeigvec, mineigval, mineigvec
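Algorithm 1 can be sketched in a dependency-free form by substituting a simple power iteration for the abstract eigensolver `solve_lm_evp` (a minimal illustration of the annihilation step, not the production implementation; the tiny diagonal Hessian below is a hypothetical stand-in chosen only to keep the example self-contained and well separated in magnitude):

```python
import numpy as np

rng = np.random.default_rng(5)
H_diag = np.array([-1.0, 0.5, 2.0, -3.0, 1.0])  # toy Hessian spectrum (hypothetical)
N = len(H_diag)

def hvp(v):  # Hessian-vector product; here the Hessian is simply diagonal
    return H_diag * v

def solve_lm_evp(matvec, iters=500):
    # power iteration: largest-magnitude eigenpair of a symmetric operator
    v = rng.standard_normal(N)
    for _ in range(iters):
        w = matvec(v)
        v = w / np.linalg.norm(w)
    return v @ matvec(v), v  # Rayleigh quotient and eigenvector

eigval1, eigvec1 = solve_lm_evp(hvp)
# annihilation: shift the spectrum so the opposite-sign extreme dominates
eigval2, eigvec2 = solve_lm_evp(lambda v: hvp(v) - eigval1 * v)
eigval2 += eigval1  # undo the spectral shift

maxeigval, mineigval = (eigval1, eigval2) if eigval1 >= 0 else (eigval2, eigval1)
print(maxeigval, mineigval)  # 2.0 and -3.0 for this spectrum
```

The same skeleton applies unchanged when `hvp` is implemented via identity (47) and `solve_lm_evp` is a Lanczos-type solver operating on a matrix-free linear operator.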
Image classification
Figure 11: Loss landscape projections for ResNet-56. (a,c) The projection directions $\eta,\delta$ are given by the eigenvectors associated with the largest and smallest eigenvalues of the Hessian $H_{\theta}$ , respectively. The zoomed inset in panel (c) shows the loss landscape for $(\alpha,\beta)\in[-0.01,0.005]\times[-0.05,0.05]$ . We observe a decreasing loss along the negative $\beta$ -axis. (b,d) The projection directions $\eta,\delta$ are given by random vectors. The domains of $(\alpha,\beta)$ in panels (a,b) and (c,d) are $[-1,1]\times[-1,1]$ and $[-0.05,0.05]\times[-0.05,0.05]$ , respectively. All shown cross-entropy loss landscapes are based on evaluating the CIFAR-10 training dataset that consists of 50,000 images.
As an illustration of an ANN loss landscape in image classification, we focus on the ResNet-56 architecture, trained as detailed in Ref. 122 on CIFAR-10 using stochastic gradient descent (SGD) with Nesterov momentum. This ANN has 855,770 parameters. At the local optimum found by SGD, the training and test losses are ${9.20\times 10^{-4}}$ and 0.29, and the corresponding accuracies are 100.00% and 93.66%, respectively.
In accordance with Ref. 122, we apply filter normalization to the random directions. This and related normalization methods are often employed when generating random projections because simply adding random vectors to the parameters of a neural-network loss function (or of any other function) ignores the different scales of the parameters associated with different parts of that function. As a result, random perturbations may be too small or too large to properly resolve the influence of certain parameters on a given function.
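A minimal sketch of such a normalization, following the filter-normalization idea of Ref. 122: each filter of the random direction is rescaled to match the norm of the corresponding filter of the trained parameters. Here a "filter" is taken to be a row of a 2D weight array; the exact grouping is architecture-dependent, and the function name is illustrative:

```python
import numpy as np

def filter_normalize(direction, params):
    """Rescale each filter of a random direction so that its norm matches
    the norm of the corresponding filter of the trained parameters.
    direction and params are lists of 2D weight arrays of equal shapes."""
    normalized = []
    for d, p in zip(direction, params):
        d = d.copy()
        for i in range(d.shape[0]):
            # d_i <- (d_i / ||d_i||) * ||p_i||, with a small epsilon
            # guarding against division by zero.
            d[i] *= np.linalg.norm(p[i]) / (np.linalg.norm(d[i]) + 1e-10)
        normalized.append(d)
    return normalized
```

After normalization, a unit step in the random direction perturbs every filter by an amount proportional to that filter's own scale, making loss values along different random directions comparable.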
When calculating Hessian directions, we directly take the parameterization of the underlying function into account, so there is no need for an additional rescaling of different parts of the perturbation vector. Still, reparameterizations of a neural network can change its curvature properties (see, e.g., Theorem 4 in Ref. 120).
Figure 11 shows two-dimensional projections of the cross-entropy loss around a critical point. The smallest and largest Hessian eigenvalues are $-16.4$ and $5007.9$ , respectively. The critical point found by the optimizer is therefore a saddle whose largest negative curvature is, in magnitude, more than two orders of magnitude smaller than its largest positive curvature. The saddle point is clearly visible in Fig. 11 (a,c), and the zoomed inset in Fig. 11 (c) shows that the loss decreases along the negative $\beta$ -axis.
If random directions are used, the corresponding projections indicate that the optimizer converged to a local minimum and not to a saddle point [see Fig. 11 (b,d)]. Overall, the ResNet-56 visualizations that we show in Fig. 11 exhibit structural similarities to those that we generated using the simple loss model (45) [see Fig. 10 (a,d)].
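The projections in Fig. 11 amount to evaluating the loss on a two-dimensional affine slice of parameter space, $L(\theta^{*}+\alpha\eta+\beta\delta)$ . A generic sketch of this evaluation, illustrated with a toy quadratic saddle whose curvatures are known (all names are illustrative):

```python
import numpy as np

def loss_surface(loss_fn, theta, eta, delta, alphas, betas):
    """Evaluate loss_fn(theta + alpha*eta + beta*delta) on a 2D grid.
    eta and delta are the projection directions, e.g., Hessian
    eigenvectors or (normalized) random vectors."""
    surface = np.empty((len(alphas), len(betas)))
    for i, a in enumerate(alphas):
        for j, b in enumerate(betas):
            surface[i, j] = loss_fn(theta + a * eta + b * delta)
    return surface

# Example: a quadratic saddle with one positive and one negative curvature.
H = np.diag([5.0, -1.0])

def quad_loss(th):
    return 0.5 * th @ H @ th

theta_star = np.zeros(2)
eta, delta = np.array([1.0, 0.0]), np.array([0.0, 1.0])
grid = np.linspace(-1.0, 1.0, 3)
Z = loss_surface(quad_loss, theta_star, eta, delta, grid, grid)
```

When $\eta$ and $\delta$ are the extreme Hessian eigenvectors, the slice exposes the saddle; random directions mix curvature contributions and tend to produce a bowl-like surface instead.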
Function approximation
Figure 12: Heatmaps of the mean squared error (MSE) loss along random and dominant Hessian directions for a function-approximation task. (a–d) Loss heatmaps for a fully connected neural network (FCNN) with 2 layers and 100 ReLU activations per layer [(a,c): training data; (b,d): test data]. (e–h) Loss heatmaps for an FCNN with 10 layers and 100 ReLU activations per layer [(e,g): training data; (f,h): test data]. Random directions are used in panels (a,b,e,f) and Hessian directions in panels (c,d,g,h). Green and red regions indicate small and large MSE values, respectively. Black crosses indicate the positions of loss minima in the shown domain. Both neural networks, FCNN 1 and FCNN 2, are trained to approximate the smooth one-dimensional function (48).
As another example, we compare loss function visualizations that are based on random and Hessian directions in a function-approximation task. In accordance with Ref. 145, we consider the smooth one-dimensional function
$$
f(x)=\log(\sin(10x)+2)+\sin(x)\,, \tag{48}
$$
where $x\in[-1,1)$ . To approximate $f(x)$ , we use two fully connected neural networks (FCNNs) with 2 and 10 layers, respectively; each layer has 100 ReLU activations. The 2-layer and 10-layer architectures have 20,501 and 101,301 parameters, respectively. The training data consists of 50 points sampled uniformly at random from the interval $[-1,1)$ . We train both ANNs using a mean squared error (MSE) loss function and SGD with a learning rate of 0.1. The 2-layer and 10-layer architectures are trained for 100,000 and 50,000 epochs, respectively, to reach loss values below $10^{-4}$ . The best model was saved and evaluated by computing the MSE loss on 1,000 points sampled uniformly at random from $[-1,1)$ . We present an animation illustrating the evolution of random and Hessian loss projections during training at https://vimeo.com/787174225.
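The target function (48) and the training data described above can be set up as follows; the sample size and sampling interval follow the text, while the random seed is an illustrative choice:

```python
import numpy as np

def f(x):
    # Target function (48): smooth and one-dimensional.
    return np.log(np.sin(10 * x) + 2) + np.sin(x)

# 50 training points sampled uniformly at random from [-1, 1).
rng = np.random.default_rng(0)
x_train = rng.uniform(-1.0, 1.0, size=50)
y_train = f(x_train)
```

Because $\sin(10x)+2\geq 1$ , the logarithm is always well defined on the sampling interval.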
Figure 12 shows heatmaps of the training and test loss landscapes of both ANNs along random and dominant Hessian directions. Black crosses in Fig. 12 indicate loss minima. In line with the previous example, which focused on an image classification ANN, we observe for both FCNNs that random projections are associated with loss values that increase along both directions $\delta,\eta$ and for both training and test data [see Fig. 12 (a,b,e,f)]. For these projections, we find that the loss minima are very close to or at the origin of the loss space. The situation is different in the loss projections that are based on Hessian directions. Figure 12 (c) shows that the loss minimum for the 2-layer FCNN is not located at the origin but at $(\alpha,\beta)\approx(0,0.04)$ .
We find that the value of the training loss at that point is more than 9% smaller than the smallest training loss found in a random projection plot. The corresponding test loss is about 1% smaller than the test loss associated with the smallest training loss in the random projection plot. For the 10-layer FCNN, the smallest training loss in the Hessian direction projection is more than 26% smaller than the smallest training loss in the randomly projected loss landscape [see Fig. 12 (e,g)]. The corresponding test loss in the Hessian direction plot is about 6% smaller than the corresponding test loss minimum in the random direction plot [see Fig. 12 (f,h)]. Notice that both the training and test loss minima in the random direction heatmaps in Fig. 12 (e,f) are located at the origin while they are located at $(\alpha,\beta)\approx(0,-0.05)$ in the Hessian direction heatmaps in Fig. 12 (g,h).
These results show that it is possible to improve the training loss along the dominant negative-curvature direction. Smaller loss values may also be achievable with further gradient-based training iterations, so one has to weigh the tradeoffs between gradient-based and curvature-based optimization methods.
### 5 Conclusions
Over several decades, research at the intersection of statistical mechanics, neuroscience, and computer science has significantly enhanced our understanding of information processing in both living systems and machines. With increasing computing power, numerous learning algorithms conceived in the latter half of the 20th century have found widespread application in tasks such as image classification, natural language processing, and anomaly detection.
The Ising model, one of the most thoroughly studied models in statistical mechanics, shares close connections with ANNs such as Hopfield networks and Boltzmann machines. In the first part of this chapter, we described these connections and provided an illustrative example of a Boltzmann machine that learned to generate new Ising configurations based on a set of training samples.
Despite decades of active research in artificial intelligence and machine learning, providing mechanistic insights into the representational capacity and learning dynamics of modern ANNs remains challenging due to their high dimensionality and complex, non-linear structure. Visualizing the loss landscapes associated with ANNs poses a particular challenge, as one must appropriately reduce dimensionality to visualize landscapes in two or three dimensions. To put it in the words of Ker-Chau Li, "There are too many directions to project a high-dimensional data set and unguided plotting can be time-consuming and fruitless". One approach is to employ random projections. However, this method often inaccurately depicts saddle points in high-dimensional loss landscapes as apparent minima in low-dimensional random projections.
In the second part of this chapter, we discussed various curvature measures that can be employed to characterize critical points in high-dimensional loss landscapes. Additionally, we outlined how Hessian directions (i.e., the eigenvectors of the Hessian at a critical point) can inform a more structured approach to projecting high-dimensional functions onto lower-dimensional representations.
While much of contemporary research in machine learning is driven by practical applications, incorporating theoretical contributions rooted in concepts from statistical mechanics and related disciplines holds promise for continuing to guide the development of learning algorithms.
### Acknowledgements
LB acknowledges financial support from hessian.AI and the ARO through grant W911NF-23-1-0129.
## References
- 1. L. Zdeborová and F. Krzakala, Statistical physics of inference: Thresholds and algorithms, Advances in Physics. 65 (5), 453–552 (2016).
- 2. G. Carleo, I. Cirac, K. Cranmer, L. Daudet, M. Schuld, N. Tishby, L. Vogt-Maranto, and L. Zdeborová, Machine learning and the physical sciences, Reviews of Modern Physics. 91 (4), 045002 (2019).
- 3. P. Mehta, M. Bukov, C.-H. Wang, A. G. Day, C. Richardson, C. K. Fisher, and D. J. Schwab, A high-bias, low-variance introduction to machine learning for physicists, Physics Reports. 810, 1–124 (2019).
- 4. D. J. Amit, H. Gutfreund, and H. Sompolinsky, Spin-glass models of neural networks, Physical Review A. 32 (2), 1007 (1985).
- 5. T. Ising, R. Folk, R. Kenna, B. Berche, and Y. Holovatch, The fate of Ernst Ising and the fate of his model, Journal of Physical Studies. 21, 3002 (2017).
- 6. R. Folk. The survival of Ernst Ising and the struggle to solve his model. In ed. Y. Holovatch, Order, Disorder and Criticality: Advanced Problems of Phase Transition Theory, pp. 1–77. World Scientific, Singapore (2023).
- 7. A. Rosenblueth, N. Wiener, and J. Bigelow, Behavior, purpose and teleology, Philosophy of Science. 10 (1), 18–24 (1943).
- 8. N. Wiener, Cybernetics or Control and Communication in the Animal and the Machine. MIT Press, Boston, MA, USA (1948).
- 9. E. Thorndike, Fundamentals of Learning. Teachers College, Columbia University, New York City, NY, USA (1932).
- 10. I. S. Berkeley, The curious case of connectionism, Open Philosophy. 2 (1), 190–205 (2019).
- 11. M. Minsky and S. Papert, Perceptrons: An Introduction to Computational Geometry, 2nd edn. MIT Press, Cambridge, MA, USA (1972).
- 12. S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 3rd edn. Prentice Hall, Upper Saddle River, NJ, USA (2009).
- 13. V. Vapnik and A. Chervonenkis, Theory of Pattern Recognition: Statistical Learning Problems. Nauka, Moscow, Russia (1974). In Russian.
- 14. T. M. Mitchell, Machine Learning. McGraw-Hill, New York City, NY, USA (1997).
- 15. W. S. McCulloch and W. Pitts, A logical calculus of the ideas immanent in nervous activity, The Bulletin of Mathematical Biophysics. 5, 115–133 (1943).
- 16. T. H. Abraham, Nicolas Rashevsky's mathematical biophysics, Journal of the History of Biology. 37 (2), 333–385 (2004).
- 17. A. Turing, On computable numbers with an application to the Entscheidungsproblem, Proceedings of the London Mathematical Society. 58, 230–265 (1937).
- 18. S. C. Kleene, Representation of Events in Nerve Nets and Finite Automata, In eds. C. E. Shannon and J. McCarthy, Automata Studies. (AM-34), vol. 34, pp. 3–42. Princeton University Press, Princeton, NJ, USA (1956).
- 19. D. O. Hebb, The organization of behavior: A neuropsychological theory. Wiley & Sons, New York City, NY, USA (1949).
- 20. S. Löwel and W. Singer, Selection of intrinsic horizontal connections in the visual cortex by correlated neuronal activity, Science. 255 (5041), 209–212 (1992).
- 21. F. Rosenblatt, The perceptron: a probabilistic model for information storage and organization in the brain, Psychological Review. 65 (6), 386 (1958).
- 22. F. Rosenblatt et al., Principles of neurodynamics: Perceptrons and the theory of brain mechanisms. vol. 55, Spartan Books, Washington, DC, USA (1962).
- 23. B. Widrow and M. Hoff. Adaptive switching circuits. In 1960 IRE WESCON Convention Record, pp. 96–104 (1960).
- 24. S. Amari, A theory of adaptive pattern classifiers, IEEE Transactions on Electronic Computers. EC-16 (3), 299–307 (1967).
- 25. A. G. Ivakhnenko and V. G. Lapa, Cybernetics and Forecasting Techniques. American Elsevier Publishing Company, Philadelphia, PA, USA (1967).
- 26. A. G. Ivakhnenko, Polynomial theory of complex systems, IEEE Transactions on Systems, Man, and Cybernetics. (4), 364–378 (1971).
- 27. H. J. Kelley, Gradient theory of optimal flight paths, Ars Journal. 30 (10), 947–954 (1960).
- 28. S. Linnainmaa. The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master's thesis, University of Helsinki (1970).
- 29. S. Linnainmaa, Taylor expansion of the accumulated rounding error, BIT Numerical Mathematics. 16 (2), 146–160 (1976).
- 30. P. J. Werbos. Applications of advances in nonlinear sensitivity analysis. In eds. R. F. Drenick and F. Kozin, System Modeling and Optimization, pp. 762–770, Springer Berlin Heidelberg, Berlin, Heidelberg, DE (1982).
- 31. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Learning representations by back-propagating errors, Nature. 323 (6088), 533–536 (1986).
- 32. A. J. Noest. Phasor neural networks. In ed. D. Z. Anderson, Neural Information Processing Systems, Denver, Colorado, USA, 1987, pp. 584–591, American Institute of Physics (1987).
- 33. A. J. Noest, Discrete-state phasor neural networks, Physical Review A. 38 (4), 2196 (1988).
- 34. A. J. Noest, Associative memory in sparse phasor neural networks, Europhysics Letters. 6 (5), 469 (1988).
- 35. M. Kobayashi, Exceptional reducibility of complex-valued neural networks, IEEE Transactions on Neural Networks. 21 (7), 1060–1072 (2010).
- 36. T. Minemoto, T. Isokawa, H. Nishimura, and N. Matsui, Quaternionic multistate Hopfield neural network with extended projection rule, Artificial Life and Robotics. 21, 106–111 (2016).
- 37. H. Zhang, M. Gu, X. Jiang, J. Thompson, H. Cai, S. Paesani, R. Santagati, A. Laing, Y. Zhang, M. Yung, et al., An optical neural chip for implementing complex-valued neural network, Nature Communications. 12 (1), 457 (2021).
- 38. L. Böttcher and M. A. Porter, Complex networks with complex weights, Physical Review E. 109 (2), 024314 (2024).
- 39. T. Bayes, An essay towards solving a problem in the doctrine of chances, Philosophical Transactions of the Royal Society of London. 53, 370–418 (1763).
- 40. P. Laplace, Mémoire sur la probabilité des causes par les événements, Mémoires de Mathématique et de Physique Présentés à l'Académie Royale des Sciences, par Divers Savans, Lus dans ses Assemblées. 6, 621–656 (1774).
- 41. T. Seidenfeld, R. A. Fisher's fiducial argument and Bayes' theorem, Statistical Science. 7, 358–368 (1992).
- 42. T. Seidenfeld. Forbidden fruit: When Epistemic Probability may not take a bite of the Bayesian apple. In eds. W. Harper and G. Wheeler, Probability and Inference: Essays in Honor of Henry E. Kyburg, Jr. King's College Publications, London (2007).
- 43. H. Jeffreys, The Theory of Probability, 3rd edn. Oxford Classic Texts in the Physical Sciences, Oxford University Press, Oxford, UK (1939).
- 44. C. E. Shannon, A mathematical theory of communication, The Bell System Technical Journal. 27 (3), 379–423 (1948).
- 45. E. T. Jaynes, Information theory and statistical mechanics, Physical Review. 106 (4), 620 (1957).
- 46. E. T. Jaynes, Information theory and statistical mechanics. II, Physical Review. 108 (2), 171 (1957).
- 47. E. T. Jaynes, The well-posed problem, Foundations of Physics. 3 (4), 477–493 (1973).
- 48. E. T. Jaynes, Probability Theory: The Logic of Science. Cambridge University Press, Cambridge, UK (2003).
- 49. T. Seidenfeld, Why I am not an objective Bayesian, Theory and Decision. 11 (4), 413–440 (1979).
- 50. G. Wheeler, Objective Bayesianism and the problem of non-convex evidence, The British Journal for the Philosophy of Science. 63 (3), 841–850 (2012).
- 51. I. Hacking. Letâs not talk about objectivity. In eds. F. Padovani, A. Richardson, and J. Y. Tsou, Objectivity in Science, vol. 310, Boston Studies in the Philosophy and History of Science, pp. 19â33. Springer, Cham, CH (2015).
- 52. A. Gelman, Beyond subjective and objective in statistics, Journal of the Royal Statistical Society. Series A. 180 (4), 967–1033 (2017).
- 53. R. Haenni, J.-W. Romeijn, G. Wheeler, and J. Williamson. Logical relations in a statistical problem. In eds. B. Löwe, E. Pacuit, and J.-W. Romeijn, Foundations of the Formal Sciences VI: Reasoning about probabilities and probabilistic reasoning, vol. 16, Studies in Logic, pp. 49–79, College Publications, London, UK (2009).
- 54. G. de Cooman, Belief models: An order-theoretic investigation, Annals of Mathematics and Artificial Intelligence. 45, 5–34 (2005).
- 55. R. Haenni, J.-W. Romeijn, G. Wheeler, and J. Williamson, Probabilistic Logics and Probabilistic Networks. Synthese Library, Springer, Dordrecht (2011).
- 56. D. R. Cox, The regression analysis of binary sequences, Journal of the Royal Statistical Society. 20 (2), 215–242 (1958).
- 57. G. E. Hinton and D. van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In ed. L. Pitt, Proceedings of the Sixth Annual ACM Conference on Computational Learning Theory, COLT 1993, Santa Cruz, CA, USA, July 26-28, 1993, pp. 5–13, ACM (1993).
- 58. R. M. Neal, Bayesian learning for neural networks. vol. 118, Lecture Notes in Statistics, Springer Verlag, New York City, NY, USA (1996).
- 59. S.-I. Amari, Learning patterns and pattern sequences by self-organizing nets of threshold elements, IEEE Transactions on Computers. 100 (11), 1197–1206 (1972).
- 60. J. J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proceedings of the National Academy of Sciences of the United States of America. 79 (8), 2554–2558 (1982).
- 61. T. Isokawa, H. Nishimura, N. Kamiura, and N. Matsui, Associative memory in quaternionic Hopfield neural network, International Journal of Neural Systems. 18 (02), 135–145 (2008).
- 62. H. Ramsauer, B. SchÀfl, J. Lehner, P. Seidl, M. Widrich, L. Gruber, M. Holzleitner, T. Adler, D. P. Kreil, M. K. Kopp, G. Klambauer, J. Brandstetter, and S. Hochreiter. Hopfield networks is all you need. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021 (2021).
- 63. M. Benedetti, E. Ventura, E. Marinari, G. Ruocco, and F. Zamponi, Supervised perceptron learning vs unsupervised Hebbian unlearning: Approaching optimal memory retrieval in Hopfield-like networks, The Journal of Chemical Physics. 156 (10) (2022).
- 64. L. Böttcher and H. J. Herrmann, Computational Statistical Physics. Cambridge University Press, Cambridge, UK (2021).
- 65. S. E. Fahlman, G. E. Hinton, and T. J. Sejnowski. Massively Parallel Architectures for AI: NETL, Thistle, and Boltzmann Machines. In ed. M. R. Genesereth, Proceedings of the National Conference on Artificial Intelligence, Washington, DC, USA, August 22-26, 1983, pp. 109–113, AAAI Press (1983).
- 66. G. E. Hinton and T. J. Sejnowski. Analyzing cooperative computation. In Proceedings of the Fifth Annual Conference of the Cognitive Science Society, Rochester, NY, USA (1983).
- 67. G. E. Hinton and T. J. Sejnowski. Optimal perceptual inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, vol. 448, pp. 448â453 (1983).
- 68. D. H. Ackley, G. E. Hinton, and T. J. Sejnowski, A learning algorithm for Boltzmann machines, Cognitive Science. 9 (1), 147–169 (1985).
- 69. P. Smolensky. Information processing in dynamical systems: Foundations of harmony theory. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, pp. 194–281. MIT Press, Cambridge, MA, USA (1986).
- 70. G. E. Hinton, Training products of experts by minimizing contrastive divergence, Neural Computation. 14 (8), 1771–1800 (2002).
- 71. G. E. Hinton, S. Osindero, and Y.-W. Teh, A fast learning algorithm for deep belief nets, Neural Computation. 18 (7), 1527–1554 (2006).
- 72. G. E. Hinton and R. R. Salakhutdinov, Reducing the dimensionality of data with neural networks, Science. 313 (5786), 504–507 (2006).
- 73. D. Kim and D.-H. Kim, Smallest neural network to learn the Ising criticality, Physical Review E. 98 (2), 022138 (2018).
- 74. S. Efthymiou, M. J. Beach, and R. G. Melko, Super-resolving the Ising model with convolutional neural networks, Physical Review B. 99 (7), 075113 (2019).
- 75. F. DâAngelo and L. Böttcher, Learning the Ising model with generative neural networks, Physical Review Research. 2 (2), 023266 (2020).
- 76. G. Carleo and M. Troyer, Solving the quantum many-body problem with artificial neural networks, Science. 355 (6325), 602–606 (2017).
- 77. G. Torlai, G. Mazzola, J. Carrasquilla, M. Troyer, R. Melko, and G. Carleo, Neural-network quantum state tomography, Nature Physics. 14 (5), 447–450 (2018).
- 78. J. Schmidhuber, Deep learning in neural networks: An overview, Neural Networks. 61, 85–117 (2015).
- 79. J. Dean, A golden decade of deep learning: Computing systems & applications, Daedalus. 151 (2), 58–74 (2022).
- 80. S. Hochreiter and J. Schmidhuber, Long short-term memory, Neural Computation. 9 (8), 1735–1780 (1997).
- 81. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE. 86 (11), 2278–2324 (1998).
- 82. I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio. Generative adversarial nets. In eds. Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pp. 2672–2680 (2014).
- 83. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. In eds. I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, R. Fergus, S. V. N. Vishwanathan, and R. Garnett, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 5998–6008 (2017).
- 84. A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, Automatic differentiation in PyTorch (2017).
- 85. G. Cybenko, Approximation by superpositions of a sigmoidal function, Mathematics of Control, Signals and Systems. 2 (4), 303â314 (1989).
- 86. K. Hornik, M. Stinchcombe, and H. White, Multilayer feedforward networks are universal approximators, Neural Networks. 2 (5), 359â366 (1989).
- 87. K. Hornik, Approximation capabilities of multilayer feedforward networks, Neural Networks. 4 (2), 251–257 (1991).
- 88. Y.-J. Wang and C.-T. Lin, Runge-Kutta neural network for identification of dynamical systems in high accuracy, IEEE Transactions on Neural Networks. 9 (2), 294–307 (1998).
- 89. T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. Duvenaud. Neural ordinary differential equations. In eds. S. Bengio, H. M. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 6572–6583 (2018).
- 90. E. Kaiser, J. N. Kutz, and S. L. Brunton, Data-driven discovery of Koopman eigenfunctions for control, Machine Learning: Science and Technology. 2 (3), 035023 (2021).
- 91. F. Schäfer, P. Sekatski, M. Koppenhöfer, C. Bruder, and M. Kloc, Control of stochastic quantum dynamics by differentiable programming, Machine Learning: Science and Technology. 2 (3), 035004 (2021).
- 92. L. Böttcher, N. Antulov-Fantulin, and T. Asikis, AI Pontryagin or how artificial neural networks learn to control dynamical systems, Nature Communications. 13 (1), 333 (2022).
- 93. T. Asikis, L. Böttcher, and N. Antulov-Fantulin, Neural ordinary differential equation control of dynamics on graphs, Physical Review Research. 4 (1), 013221 (2022).
- 94. L. Böttcher and T. Asikis, Near-optimal control of dynamical systems with neural ordinary differential equations, Machine Learning: Science and Technology. 3 (4), 045004 (2022).
- 95. L. Böttcher, T. Asikis, and I. Fragkos, Control of dual-sourcing inventory systems using recurrent neural networks, INFORMS Journal on Computing. 35 (6), 1308–1328 (2023).
- 96. L. Böttcher. Gradient-free training of neural ODEs for system identification and control using ensemble Kalman inversion. In ICML Workshop on New Frontiers in Learning, Control, and Dynamical Systems, Honolulu, HI, USA, 2023 (2023).
- 97. D. Ruiz-Balet and E. Zuazua, Neural ODE control for classification, approximation, and transport, SIAM Review. 65 (3), 735–773 (2023).
- 98. T. Asikis. Towards recommendations for value sensitive sustainable consumption. In NeurIPS 2023 Workshop on Tackling Climate Change with Machine Learning: Blending New and Existing Knowledge Systems (2023). URL https://nips.cc/virtual/2023/76939.
- 99. M. Raissi, P. Perdikaris, and G. E. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, Journal of Computational Physics. 378, 686–707 (2019).
- 100. S. L. Brunton and J. N. Kutz, Data-driven science and engineering: Machine learning, dynamical systems, and control. Cambridge University Press, Cambridge, UK (2022).
- 101. S. Mowlavi and S. Nabi, Optimal control of PDEs using physics-informed neural networks, Journal of Computational Physics. 473, 111731 (2023).
- 102. C. Fronk and L. Petzold, Interpretable polynomial neural ordinary differential equations, Chaos: An Interdisciplinary Journal of Nonlinear Science. 33 (4) (2023).
- 103. M. Xia, L. Böttcher, and T. Chou, Spectrally adapted physics-informed neural networks for solving unbounded domain problems, Machine Learning: Science and Technology. 4 (2), 025024 (2023).
- 104. L. Böttcher, L. L. Fonseca, and R. C. Laubenbacher, Control of medical digital twins with artificial neural networks, arXiv preprint arXiv:2403.13851 (2024).
- 105. L. Böttcher and G. Wheeler, Visualizing high-dimensional loss landscapes with Hessian directions, Journal of Statistical Mechanics: Theory and Experiment. p. 023401 (2024).
- 106. M. Thoma. LaTeX Examples: Hopfield Network. URL https://github.com/MartinThoma/LaTeX-examples/tree/master/tikz/hopfield-network (Accessed: September 16, 2023).
- 107. J. Hertz, R. G. Palmer, and A. S. Krogh, Introduction to the Theory of Neural Computation, 1st edn. Perseus Publishing, New York City, New York, USA (1991).
- 108. A. J. Storkey. Increasing the capacity of a Hopfield network without sacrificing functionality. In eds. W. Gerstner, A. Germond, M. Hasler, and J. Nicoud, Artificial Neural Networks - ICANN '97, 7th International Conference, Lausanne, Switzerland, October 8-10, 1997, Proceedings, vol. 1327, Lecture Notes in Computer Science, pp. 451–456, Springer (1997).
- 109. N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, Equation of state calculations by fast computing machines, The Journal of Chemical Physics. 21 (6), 1087–1092 (1953).
- 110. J. E. Gubernatis, Marshall Rosenbluth and the Metropolis algorithm, Physics of Plasmas. 12 (5) (2005).
- 111. S. Kirkpatrick, C. D. Gelatt Jr, and M. P. Vecchi, Optimization by simulated annealing, Science. 220 (4598), 671–680 (1983).
- 112. R. J. Glauber, Time-dependent statistics of the Ising model, Journal of Mathematical Physics. 4 (2), 294–307 (1963).
- 113. M. Á. Carreira-Perpiñán and G. E. Hinton. On contrastive divergence learning. In eds. R. G. Cowell and Z. Ghahramani, Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, AISTATS 2005, Bridgetown, Barbados, January 6-8, 2005, Society for Artificial Intelligence and Statistics (2005).
- 114. G. Hinton, A practical guide to training restricted Boltzmann machines, Momentum. 9 (1), 926 (2010).
- 115. S. Park, C. Yun, J. Lee, and J. Shin. Minimum width for universal approximation. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021 (2021).
- 116. M. Hardt, B. Recht, and Y. Singer. Train faster, generalize better: Stability of stochastic gradient descent. In eds. M. Balcan and K. Q. Weinberger, Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, vol. 48, JMLR Workshop and Conference Proceedings, pp. 1225–1234, JMLR.org (2016).
- 117. A. C. Wilson, R. Roelofs, M. Stern, N. Srebro, and B. Recht. The marginal value of adaptive gradient methods in machine learning. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 4148–4158 (2017).
- 118. D. Choi, C. J. Shallue, Z. Nado, J. Lee, C. J. Maddison, and G. E. Dahl, On empirical comparisons of optimizers for deep learning, arXiv preprint arXiv:1910.05446 (2019).
- 119. N. S. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, and P. T. P. Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings (2017).
- 120. L. Dinh, R. Pascanu, S. Bengio, and Y. Bengio. Sharp minima can generalize for deep nets. In eds. D. Precup and Y. W. Teh, Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, vol. 70, Proceedings of Machine Learning Research, pp. 1019–1028 (2017).
- 121. I. J. Goodfellow and O. Vinyals. Qualitatively characterizing neural network optimization problems. In eds. Y. Bengio and Y. LeCun, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings (2015).
- 122. H. Li, Z. Xu, G. Taylor, C. Studer, and T. Goldstein. Visualizing the loss landscape of neural nets. In eds. S. Bengio, H. M. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 6391–6401 (2018).
- 123. Y. Hao, L. Dong, F. Wei, and K. Xu. Visualizing and understanding the effectiveness of BERT. In eds. K. Inui, J. Jiang, V. Ng, and X. Wan, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pp. 4141–4150, Association for Computational Linguistics (2019).
- 124. D. Wu, S. Xia, and Y. Wang. Adversarial weight perturbation helps robust generalization. In eds. H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual (2020).
- 125. S. Horoi, J. Huang, B. Rieck, G. Lajoie, G. Wolf, and S. Krishnaswamy. Exploring the geometry and topology of neural network loss landscapes. In eds. T. Bouadi, É. Fromont, and E. Hüllermeier, Advances in Intelligent Data Analysis XX - 20th International Symposium on Intelligent Data Analysis, IDA 2022, Rennes, France, April 20-22, 2022, Proceedings, vol. 13205, Lecture Notes in Computer Science, pp. 171–184, Springer (2022).
- 126. J. M. Lee, Riemannian manifolds: An introduction to curvature. vol. 176, Springer Science & Business Media, New York City, NY, USA (2006).
- 127. M. Berger and B. Gostiaux, Differential Geometry: Manifolds, Curves, and Surfaces. Springer Science & Business Media, New York City, NY, USA (2012).
- 128. W. Kühnel, Differential Geometry. American Mathematical Society, Providence, RI, USA (2015).
- 129. K. Crane, Discrete differential geometry: An applied introduction, Notices of the AMS, Communication. pp. 1153–1159 (2018).
- 130. J. Martens, New insights and perspectives on the natural gradient method, Journal of Machine Learning Research. 21 (146), 1–76 (2020).
- 131. R. Hecht-Nielsen, Context vectors: general purpose approximate meaning representations self-organized from raw data, Computational Intelligence: Imitating Life, IEEE Press. pp. 43–56 (1994).
- 132. R. B. Davies, Algorithm AS 155: The distribution of a linear combination of $\chi^{2}$ random variables, Applied Statistics. 29 (3), 323–333 (1980).
- 133. B. Laurent and P. Massart, Adaptive estimation of a quadratic functional by model selection, Annals of Statistics. 28 (5), 1302–1338 (2000).
- 134. J. Bausch, On the efficient calculation of a linear combination of chi-square random variables with an application in counting string vacua, Journal of Physics A: Mathematical and Theoretical. 46 (50), 505202 (2013).
- 135. T. Chen and T. Lumley, Numerical evaluation of methods approximating the distribution of a large quadratic form in normal variables, Computational Statistics & Data Analysis. 139, 75–81 (2019).
- 136. J. Matoušek, On variants of the Johnson–Lindenstrauss lemma, Random Structures and Algorithms. 33 (2), 142–156 (2008).
- 137. M. F. Hutchinson, A stochastic estimator of the trace of the influence matrix for Laplacian smoothing splines, Communications in Statistics-Simulation and Computation. 18 (3), 1059–1076 (1989).
- 138. Z. Bai, G. Fahey, and G. Golub, Some large-scale matrix computation problems, Journal of Computational and Applied Mathematics. 74 (1-2), 71–89 (1996).
- 139. H. Avron and S. Toledo, Randomized algorithms for estimating the trace of an implicit symmetric positive semi-definite matrix, Journal of the ACM (JACM). 58 (2), 1–34 (2011).
- 140. Y. N. Dauphin, R. Pascanu, Ç. Gülçehre, K. Cho, S. Ganguli, and Y. Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In eds. Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pp. 2933–2941 (2014).
- 141. P. Baldi and K. Hornik, Neural networks and principal component analysis: Learning from examples without local minima, Neural Networks. 2 (1), 53–58 (1989).
- 142. Z. Yao, A. Gholami, K. Keutzer, and M. W. Mahoney. PyHessian: Neural networks through the lens of the Hessian. In eds. X. Wu, C. Jermaine, L. Xiong, X. Hu, O. Kotevska, S. Lu, W. Xu, S. Aluru, C. Zhai, E. Al-Masri, Z. Chen, and J. Saltz, 2020 IEEE International Conference on Big Data (IEEE BigData 2020), Atlanta, GA, USA, December 10-13, 2020, pp. 581–590, IEEE (2020).
- 143. Z. Liao and M. W. Mahoney. Hessian eigenspectra of more realistic nonlinear models. In eds. M. Ranzato, A. Beygelzimer, Y. N. Dauphin, P. Liang, and J. W. Vaughan, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 20104–20117 (2021).
- 144. R. L. Burden, J. D. Faires, and A. M. Burden, Numerical Analysis; 10th edition. Cengage Learning, Boston, MA, USA (2015).
- 145. B. Adcock and N. Dexter, The gap between theory and practice in function approximation with deep neural networks, SIAM Journal on Mathematics of Data Science. 3 (2), 624–655 (2021).