# Coherent Ising machines: quantum optics and neural network perspectives
Y. Yamamoto 1 , T. Leleu 2,3 , S. Ganguli 4 , and H. Mabuchi 4
- 1 Physics & Informatics Laboratories, NTT Research, Inc., 1950 University Ave. #600, East Palo Alto, CA 94303, USA
- 2 Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan
- 3 International Research Center for Neurointelligence, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
- 4 Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
Author to whom correspondence should be addressed: yoshihisa.yamamoto@ntt-research.com
## ABSTRACT
A coherent Ising machine (CIM) is a network of optical parametric oscillators (OPOs), in which the 'strongest' collective mode of oscillation well above threshold corresponds to an optimum solution of a given Ising problem. When the pump rate or network coupling rate is increased from below to above threshold, however, the eigenvector with the smallest eigenvalue of the Ising coupling matrix $[J_{ij}]$ emerges near threshold and impedes the machine from relaxing to true ground states. Two complementary approaches to attacking this problem are described here. One approach utilizes the squeezed/anti-squeezed vacuum noise of OPOs below threshold to produce coherent spreading over numerous local minima via quantum noise correlation, which could enable the machine to access above threshold either true ground states or excited states with eigen-energies close enough to those of the ground states. The other approach implements a real-time error-correction feedback loop so that the machine migrates from one local minimum to another during an explorative search for ground states. Finally, a set of qualitative analogies connecting the CIM and traditional computer science techniques is pointed out; in particular, belief propagation and survey propagation used in combinatorial optimization are touched upon.
## Introduction
Recently, various heuristics and hardware platforms have been proposed and demonstrated to solve hard combinatorial or continuous optimization problems. The cost functions to be minimized
in those problems are either the Ising Hamiltonian, $\mathcal{H}_{\rm Ising} = -\sum J_{ij}\sigma_i\sigma_j$, for binary variables $\sigma_i = \pm 1$, or the XY Hamiltonian, $\mathcal{H}_{\rm XY} = -\sum J_{ij}\cos(\theta_i - \theta_j)$, for continuous variables $\theta_i \in [0, 2\pi)$, which is mapped to the energy landscape of classical spins, [1][2][3] quantum spins, [4][5] solid state devices [6][7][8] or neural networks. [9][10] Convergence to a ground state is assured for a slow enough decrease of the temperature in simulated annealing. [11] An alternative approach based on networks of optical parametric oscillators (OPOs) [12][13][14][15][16][17][18] and Bose-Einstein condensates [19][20] has also been actively pursued, in which the target function is mapped to a loss landscape. Intuitively, by increasing the gain of such an open-dissipative network slowly enough by ramping an external pump source, a lowest-loss ground state is expected to emerge as a single oscillation/condensation mode. [13][21] In practice, ramping the gain of such a system results in a complex series of bifurcations that may guide or divert evolution towards optimal solution states.
One of the unique theoretical advantages of the second approach, for instance in a coherent Ising machine (CIM), [12][13][14][15][16] is that quantum noise correlation formed among OPOs below the oscillation threshold could in principle facilitate quantum parallel search across multiple regions of phase space. [22] Another unique advantage is that following the oscillation-threshold transition, exponential amplification of the amplitude of a selected ground state is realized on a relatively short time scale of the order of a photon lifetime. In a non-dissipative degenerate parametric oscillator, two stable states above the bifurcation point co-exist as a linear superposition state. [23][24] On the other hand, the network of dissipative OPOs [13][14][15][16][17] changes its character from a quantum analog device below threshold to a classical digital device above threshold. Such quantum-to-classical crossover behavior of the CIM guarantees a robust classical output as a computational result, which is in sharp contrast to a standard quantum computer based on linear amplitude amplification realized by the Grover algorithm and projective measurement. [25]
A CIM based on coupled OPOs, however, has one serious drawback as an engine for solving combinatorial optimization problems: mapping of a cost function to the network loss landscape often fails due to the fundamentally analog nature of the constituent spins, i.e., the possibility for constituent OPOs to oscillate with unequal amplitudes. This problem is particularly serious for frustrated spin models. The network may spontaneously find an excited state of the target Hamiltonian with lower effective loss than a true ground state by self-adjusting oscillator amplitudes. [13] An oscillator configuration with frustration, and thus higher loss, may retain only a small probability amplitude, while an oscillator configuration with no frustration, and thus smaller loss, acquires a large probability amplitude. In this way, an excited state can achieve a smaller overall loss than a ground state (see Fig. 6 of [13]). Recently, the use of an error detection and correction feedback loop has been proposed to suppress this amplitude heterogeneity problem, [20] and the improved performance of such a feedback-controlled CIM has been numerically confirmed. [26] The proposed system has a recurrent neural network configuration with asymmetric weights ($J_{ij} \neq J_{ji}$), so that it is no longer a simple gradient-descent system. The machine can escape from a local minimum via a diverging error-correction field and migrate from one local minimum to another. The ground state can be identified during such a random exploration by the machine.
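The error-correction feedback idea above can be caricatured as a classical ODE model in the spirit of [20][26]. The sketch below is an illustrative assumption, not the exact published equations: the specific form of the error dynamics, the parameter values, and the function names are all hypothetical.

```python
import numpy as np

def cim_error_correction(J, a=1.0, alpha=1.0, beta=0.5,
                         dt=0.01, steps=20000, seed=0):
    """Sketch of CIM dynamics with amplitude-heterogeneity error correction.

    Each amplitude x_i sees gain/saturation plus an Ising coupling term
    scaled by an auxiliary error variable e_i, which grows whenever
    |x_i|^2 falls below the target value alpha.  The effective weights
    e_i * J_ij are asymmetric, so the system is not gradient descent and
    can escape local minima; we record the best spin configuration seen.
    """
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = 1e-3 * rng.standard_normal(n)  # near-vacuum initial condition
    e = np.ones(n)                     # error-correction variables
    best_energy, best_spins = np.inf, None
    for _ in range(steps):
        x_new = x + dt * ((a - x**2) * x - e * (J @ x))
        e += dt * (-beta * (x**2 - alpha) * e)
        x = x_new
        s = np.sign(x)
        energy = 0.5 * s @ J @ s       # Ising coupling energy to be minimized
        if energy < best_energy:
            best_energy, best_spins = energy, s.copy()
    return best_energy, best_spins
```

Here the coupling enters with the same minus sign as the ODE model used later in the text, so lower values of $\tfrac{1}{2}\,s^{\top} J s$ are better; the best configuration encountered during the exploration is returned, mirroring the ground-state identification strategy described above.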
In this letter, we present several complementary perspectives for this computing machine, which are based on diverse, interdisciplinary viewpoints spanning quantum optics, neural networks and message passing. Along the way we will touch upon connections between the CIM and foundational concepts spanning the fields of statistical physics, mathematics, and computer science, including dynamical systems theory, bifurcation theory, chaos, spin glasses, belief propagation and survey propagation. We hope the bridges we build in this article between such diverse fields will provide the inspiration for future directions of interdisciplinary research that can benefit from the crosspollination of ideas across multifaceted classical, quantum and neural approaches to combinatorial optimization.
## Optimization dynamics in continuous variable space
CIM studies today could well be characterized as experimentally-driven computer science, much like contemporary deep learning research and in contrast to the current scenario of mainstream quantum computing. Large-scale measurement-feedback coupled coherent Ising machine (MFB-CIM) prototypes constructed by NTT Basic Research Laboratories [15] are reaching intriguing levels of computational performance that, in a fundamental theoretical sense, we do not really understand. While we can thoroughly analyze some quantum-optical aspects of CIM component device behavior in the small-size regime, [27][28][29] we lack a crisp understanding of how the physical dynamics of large CIMs relate to the computational complexity of combinatorial optimization. Promising experimental benchmarking results [30] are thus driving theoretical studies aimed at better elucidating fundamental operating principles of the CIM architecture and at enabling confident predictions of future scaling potential. We thus face complementary obstacles to those of mainstream quantum computing, in which we have long had theoretical analyses pointing to exponential speedups while even small-scale implementations have required sustained laboratory efforts over several decades.
What is the effective search mechanism of large-scale CIM? Are quantum effects decisive for the performance of current and near-term MFB-CIM prototypes, and if not, could existing architectures and algorithms be generalized to realize quantum performance enhancements? Can we
relate exponential gain (as understood from a quantum optics perspective) to features of the phase portraits of CIMs viewed as dynamical systems, and thereby rationalize its role in facilitating rapid evolution towards states with low Ising energy? Can we rationally design better strategies for varying the pump strength?
Generally speaking, CIM may be viewed as an approach to mapping combinatorial (discrete variable) optimization problems into physical dynamics on a continuous variable space, in which the dynamics can furthermore be modulated to evolve/bifurcate the phase portrait during an individual optimization trajectory. The overarching problem of CIM algorithm design could thus be posed as choosing initial conditions for the phase-space variables together with a modulation scheme for the dynamics, such that we maximize the probability and minimize the time required to converge to states from which we can infer very good solutions to a combinatorial optimization problem instance encoded in parameters of the dynamics. While our initialization and modulation scheme obviously cannot require prior knowledge of what these very good solutions are, it should be admissible to consider strategies that depend upon inexpensive structural analyses of a given problem instance and/or real-time feedback during dynamic optimization. The structure of near-term-feasible CIM hardware places constraints on the practicable set of algorithms, while limits on our capacity to prove theorems about such complex dynamical scenarios generally restricts us to the development of heuristics rather than algorithms with performance guarantees.
We may note in passing that in addition to lifting combinatorial problems into continuous variable spaces, analog physics-based engines such as CIMs generally also embed them in larger model spaces that can be traversed in real time. The canonical CIM algorithm implicitly transitions from a linear solver to a soft-spin Ising model, and a recently-developed generalized CIM algorithm with feedback control can access a regime of fixed-amplitude Ising dynamics as well. [26] Given the central role of the optical parametric amplifier (OPA) in the CIM architecture, it stands to reason that it could be possible to transition smoothly between XY-type and Ising-type models by adjusting hardware parameters that tune the OPA between non-degenerate and degenerate operation. [31] Analog physics-based engines thus motivate a broader study of relationships among the landscapes of Ising-type optimization problems with fixed coupling coefficients but different variable types, which could further help to inform the development of generalized CIM algorithms.
The dynamics of a classical, noiseless CIM can be modeled using coupled ordinary differential equations (ODEs):
$$\frac{dx_i}{dt} = -x_i^3 + a x_i - \sum_j J_{ij} x_j,$$
where $x_i$ is the (quadrature) amplitude of the $i$-th OPO mode (spin), $J_{ij}$ are the coupling coefficients defining an Ising optimization problem of interest (here we will assume $J_{ii} = 0$), and $a$ is a gain-loss parameter corresponding to the difference between the CIM's parametric (OPA) gain and its round-trip (passive linear) optical losses (e.g., [13][32]). We note that similar equations appear in the neuroscience literature for modeling neural networks (e.g., [33]). In the absence of couplings among the spins ($J_{ij} \to 0$) each OPO mode independently exhibits a pitchfork bifurcation as the gain-loss parameter $a$ crosses through zero (increasing from negative to positive values), corresponding to the usual OPO 'lasing' transition. With non-zero couplings, however, the bifurcation set of the model is much more complicated.
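As a concrete illustration, the noiseless model above can be integrated directly. The sketch below uses an assumed Euler discretization with an illustrative ramp schedule and problem size (not taken from the original work), ramping $a$ from the loss-dominated regime through the first bifurcation for an SK-type instance:

```python
import numpy as np

def simulate_cim(J, a_schedule, dt=0.01, seed=1):
    """Euler integration of dx_i/dt = -x_i^3 + a x_i - sum_j J_ij x_j."""
    rng = np.random.default_rng(seed)
    x = 1e-4 * rng.standard_normal(J.shape[0])  # stand-in for vacuum noise
    for a in a_schedule:
        x += dt * (-x**3 + a * x - J @ x)
    return x

# SK instance: symmetric couplings, zero diagonal, variance 1/n
n = 50
rng = np.random.default_rng(42)
U = np.triu(rng.standard_normal((n, n)), k=1) / np.sqrt(n)
J = U + U.T

# Ramp a slowly from below the smallest eigenvalue of J (where the
# origin is the unique stable fixed point) into the high-pump regime.
x_final = simulate_cim(J, a_schedule=np.linspace(-2.0, 2.0, 40000))
spins = np.sign(x_final)
ising_energy = 0.5 * spins @ J @ spins  # minimized in this sign convention
```

With the minus sign on the coupling term, the dynamics descend a potential containing $+\tfrac{1}{2}x^{\top}Jx$, so the rounded configuration heuristically minimizes $\tfrac{1}{2}\,s^{\top} J s$.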
In the standard CIM algorithm the $J_{ij}$ matrix is chosen to be (real) symmetric, although current hardware architectures would easily permit asymmetric implementations. With $J_{ij}$ symmetric it is possible to view the overall CIM dynamics as gradient descent in a landscape determined jointly by the individual OPO terms and the Ising potential energy. Following recent practice in related fields, [33][34] we may assess generic behavior of the above model for large problem size (large number of spins, $N$) by treating $J_{ij}$ as a random matrix whose elements are drawn i.i.d. from a zero-mean Gaussian with variance $1/N$. This choice corresponds to the famous Sherrington-Kirkpatrick (SK) Ising spin glass model. [35] The origin $x_i = 0$ is clearly a fixed point of the dynamics for all parameter values, and in the loss-dominated regime ($a$ negative and less than the smallest eigenvalue of the $J_{ij}$ matrix) it is the unique stable fixed point. Assuming $J_{ij}$ is symmetric as implemented, the first bifurcation as $a$ is increased (pump power is increased) necessarily occurs as $a$ crosses the smallest eigenvalue of $J_{ij}$ and results in destabilization of the origin, with a pair of local minima emerging along the positive and negative directions aligned with the eigenvector of $J_{ij}$ corresponding to this lowest eigenvalue. If we assume that the CIM is initialized at the origin (all OPO modes in vacuum) and the pump is increased gradually from zero, we may expect the spin amplitudes to adiabatically follow this bifurcation and thus take values such that the $x_i$ are proportional to the eigenvector with the smallest eigenvalue of $J_{ij}$ just after $a$ crosses that eigenvalue. The sign structure of this eigenvector is known to be a simple (although not necessarily very good) heuristic for a low-energy solution of the corresponding Ising optimization problem.
For example, for the SK model, the spin configuration obtained by rounding the eigenvector with the smallest eigenvalue of $J_{ij}$ is thought to have a 16% higher energy density (energy per spin) than that of the ground-state spin configuration. [36]
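The spectral heuristic just mentioned is easy to state in code. The sketch below (illustrative problem size, not taken from [36]) rounds the minimal eigenvector of an SK instance and evaluates its Ising coupling energy against the Rayleigh-quotient lower bound:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
U = np.triu(rng.standard_normal((n, n)), k=1) / np.sqrt(n)
J = U + U.T                   # SK couplings: symmetric, zero diagonal

w, V = np.linalg.eigh(J)      # eigenvalues in ascending order
s = np.sign(V[:, 0])          # sign structure of the minimal eigenvector
E_spectral = 0.5 * s @ J @ s  # Ising coupling energy of the rounded state

# Rayleigh-quotient bound: s^T J s >= lambda_min * ||s||^2 = lambda_min * n
E_bound = 0.5 * w[0] * n
```

The rounded configuration always satisfies `E_spectral >= E_bound`; per the estimate quoted above, it typically sits well above the true SK ground-state energy density.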
In the opposite regime of high pump amplitude, $a \gg |J_{ij}|$, we can infer the existence of a set of fixed points determined by the independent OPO dynamics (ignoring the $J_{ij}$ terms), with each of the $x_i$ assuming one of three possible values $\{0, \pm\sqrt{a}\}$. The leading-order effect of the coupling terms can then be considered perturbatively, leading to the conclusion [37] that the subset of fixed points without any zero values among the $x_i$ are local minima having energies
$$E(\overline{x}) = -\frac{a^2}{4} + \sum_{i,j} J_{ij} \frac{\overline{x}_i}{|\overline{x}_i|} \frac{\overline{x}_j}{|\overline{x}_j|} + O(a^{-3}),$$
with energy-distance relation
$$E ( \bar { x } ) = - \frac { 1 } { 4 } \sum _ { i } \bar { x } _ { i } ^ { 4 } .$$
Here the bar above $x$ denotes an ensemble average over many trajectories when there exists stochastic noise in the system.
It follows that the global minimum spin configuration for the Ising problem instance encoded by $J_{ij}$ can be inferred from the sign structure of the local minimum lying at the greatest distance from the origin, and that very good solutions can similarly be inferred from local minima at large squared radius. We may see in this some validation of the foundational physical intuition that in a network of OPOs coupled according to a set of $J_{ij}$ coefficients, the 'strongest' (largest-amplitude, for a given pump strength) collective mode of oscillation should correspond somehow with an optimum solution (having the minimum value of the $J_{ij}$ coupling term) of an Ising problem defined by these $J_{ij}$.
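The energy-distance relation can be checked numerically. In the sketch below (assumed Euler relaxation, illustrative parameters) we relax the noiseless dynamics to many high-pump fixed points from random initial conditions and verify that lower Ising coupling energy accompanies greater distance from the origin:

```python
import numpy as np

def relax(J, a, x0, dt=0.005, steps=10000):
    """Euler relaxation to a fixed point of dx/dt = -x^3 + a x - J x."""
    x = x0.copy()
    for _ in range(steps):
        x += dt * (-x**3 + a * x - J @ x)
    return x

rng = np.random.default_rng(3)
n = 40
U = np.triu(rng.standard_normal((n, n)), k=1) / np.sqrt(n)
J = U + U.T
a = 5.0                                # high-pump regime, a >> |eigenvalues of J|

energies, radii = [], []
for _ in range(20):
    xbar = relax(J, a, rng.standard_normal(n))
    s = np.sign(xbar)
    energies.append(0.5 * s @ J @ s)   # Ising coupling energy of the sign structure
    radii.append(np.sum(xbar**4))      # distance measure from the text
# Lower coupling energy should accompany larger distance from the origin.
r = np.corrcoef(energies, radii)[0, 1]
```

At a noiseless fixed point one can show $V(\bar{x}) = -\tfrac{1}{4}\sum_i \bar{x}_i^4$ exactly, so the correlation coefficient `r` between the Ising energies and the quartic radii should come out negative.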
A big picture thus emerges in which initialization at the origin (all OPOs in vacuum) and adiabatic increase of the pump amplitude induces a transition between a low-pump regime, in which the spin amplitudes assume a sign structure determined by the minimum eigenvector of $J_{ij}$, and a high-pump regime, in which good Ising solutions are encoded in the sign structures of minima sitting at the greatest distance from the origin. [37] Apparently, complex things happen in the intermediate regime. Qualitatively speaking, the gradual increase of $a$ in the above equations of motion induces a sequence of bifurcations that modify the phase portrait in which the CIM state evolves. In simple cases, the state variables $x_i$ could follow an 'adiabatic trajectory' that connects the origin (at zero pump amplitude) to a fixed point in the high-pump regime (asymptotic in large $a$) whose sign structure yields a heuristic solution of the Ising optimization. In general, one observes that such adiabatic trajectories include sign flips relative to the first-bifurcated state proportional to the eigenvector with the smallest eigenvalue of $J_{ij}$. In a non-negligible fraction of cases, as revealed by numerical characterization of the bifurcation set for randomly generated $J_{ij}$ with $N \sim 10^2$, the adiabatic trajectory starting from the origin is at some point interrupted by a subcritical bifurcation that destabilizes the local minimum being followed without creating other local minima in the immediate neighborhood. (Indeed, some period of evolution along an unstable manifold would seem to be required for the observation of a lasing transition with exponential gain.) For such problem instances, a fiduciary evolution of the CIM state cannot be directly inferred from computation of fixed-point trajectories as a function of $a$.
Generally speaking, in the 'near-threshold' regime with $a \sim 0$ we may expect the CIM to exhibit 'glassy' dynamics with pervasive marginally-stable local minima, and as a consequence the actual solution trajectory followed in a real experimental run could depend strongly on exogenous factors such as technical noise and instabilities. Hence it is not clear whether we should expect the type of adiabatic trajectory described above to occur commonly in practice. Indeed, fluctuations could potentially induce accidental asymmetries in the implementation of the $J_{ij}$ coupling term, which could in turn induce chaotic transients that significantly affect the optimization dynamics. We note that the existence of a chaotic phase has been predicted [33] on the basis of mean-field theory (in the sense of statistical mechanics) for a model similar to the CIM model considered here, but with a fully random coupling matrix without symmetry constraint. Characterization of the phase diagram for near-symmetric $J_{ij}$ (nominally symmetric but with small asymmetric perturbations) seems feasible and is currently being studied. [38] It is tempting to ask whether a glassy phase portrait for the classical ODE model in the near-threshold regime could correspond in some way with non-classical behavior observed in full quantum simulations of optical-delay-line coupled coherent Ising machine (ODL-CIM) models near threshold, as reviewed in the next section. It seems natural to conjecture that quantum uncertainties associated with anti-squeezing below threshold could induce coherent spreading over a glassy landscape with numerous marginal minima, with an associated buildup of quantum correlation among spin amplitudes.
The above picture calls attention to a need to understand the topological nature of the phase portrait and its evolution as the pump amplitude $a$ is varied. Indeed, we may restate in some sense the abstract formulation of the CIM algorithm design problem: Can we find a strategy for modulating the CIM dynamics in a way that enables us to predict (without prior knowledge of actual solutions) how to initialize the spin amplitudes such that they are guided into the basin of attraction of the largest-radius minimum in the high-pump regime? Or into one of the basins of attraction of a class of acceptably large-radius minima (corresponding to very good solutions)? Of course, an additional auxiliary design goal will be to guide the CIM state evolution in such a way that the asymptotic sign structure is reached quickly.
In the near/below-threshold regime, we may anticipate at least two general features of the phase portrait that could present obstacles to rapid equilibration. One would be the aforementioned prevalence of marginal local minima (having eigenvalues with very small or vanishing real part), but another would be a prevalence of low-index saddle points. Trajectories within either type of phase portrait could display intermittent dynamics that impedes gradient descent towards states of lower energy. Focusing on the below-threshold regime, in which the Ising-interaction energy term may still dominate the phase portrait topology, we might infer from works such as [39] that for large $N$ with $J_{ij}$ symmetric random Gaussian, fixed points lying well above the minimum energy could dominantly be saddles, with strong correlation between the energy of a fixed point and its index (fraction of unstable eigenvalues), as discussed in [34]. If such features of the landscape for random-Gaussian $J_{ij}$ generalize to instances of practical interest, then, as a gradient-descent trajectory approaches phase-space regions of lower and lower energy, results from [34][39] suggest that the rate of descent could become limited by escape times from low-index saddles whose eigenvalues are not necessarily small, but whose local unstable manifold may have dimension small relative to $N$.
One wonders whether there might be CIM dynamical regimes in which the gradient-descent trajectory takes on the character of an 'instanton cascade' that visits (neighborhoods of) a sequence of saddle points with decreasing index, [40] leading finally to a local minimum at low energy. If such dynamics actually occur in relevant operating regimes for the CIM, we may speculate as to whether the overall gradient-descent process, including stochastic driving terms (caused by classical-technical or quantum noise), could reasonably be abstracted as probability (or quantum probability-amplitude) flow on a graph. Here the nodes of the graph would represent fixed points and the edges would represent heteroclinic orbits, with the precise structure of the graph of course determined by $a$ and $J_{ij}$. If the graph for a given problem instance exhibits loops, we could ask whether interference effects might lead to different transport rates for quantum versus classical flows (as in quantum random walks [41]). Such effects, if they exist, would provide intriguing examples of ways in which limited transient entanglement (localized to the phase space-time volume of traversing a graph loop) could impact optimization dynamics in a computational architecture.
## Quantum noise correlation for parallel search
The first CIM demonstrated in 2014 uses $(N-1)$ optical delay lines to couple all-to-all the $N$ OPO pulses circulating in a ring cavity according to the target Hamiltonian $\mathcal{H} = -\sum J_{ij} x_i x_j$, where $x_i$ is the in-phase amplitude of the $i$-th OPO pulse (see Fig. 1 in [14]). A coupling field $I_i$ is chosen as the gradient of the potential, $I_i \propto -\partial\mathcal{H}/\partial x_i$, where the analog (quadrature) amplitude $x_i$ represents the binary spin variable via $x_i = \sigma_i |x_i|$. When an Ising coupling coefficient $J_{ij}$ between the $i$-th and $j$-th OPO pulses is positive (ferromagnetic coupling), an optical delay line realizes in-phase coupling between the (internal) $i$-th and (externally injected) $j$-th OPO pulse amplitudes (see Fig. 1(a)). The OPO pulses incident upon an extraction beam splitter (XBS) carry anti-squeezed vacuum noise along a generalized coordinate $x$ below threshold, which produces a positive noise correlation between the transmitted and reflected OPO pulses after the XBS, while vacuum noise from the XBS open port is negligible compared to the anti-squeezed noise of the incident OPO pulse. Positive noise correlation is similarly established between the $i$-th and $j$-th OPO pulses after combining the injected $j$-th OPO pulse and the internal $i$-th OPO pulse at an injection beam splitter (IBS), as shown in Fig. 1(a). When the Ising coupling coefficient $J_{ij}$ is negative (anti-ferromagnetic coupling), the optical delay line realizes out-of-phase coupling between the $i$-th and $j$-th OPO pulse amplitudes. This setup of the optical delay line results in a negative noise correlation between the two OPO pulse amplitudes.
Below threshold, each OPO pulse is in an anti-squeezed vacuum state, which can be interpreted as a linear superposition (not a statistical mixture) of generalized coordinate eigenstates, $\sum_k c_k |x_k\rangle$, if the decoherence effect of linear cavity loss is neglected. In fact, quantum coherence between different $|x\rangle$ eigenstates is very robust against small linear loss. [23] Figure 1(b) shows the quantum noise trajectory in the $\langle\Delta\hat{X}^2\rangle$ versus $\langle\Delta\hat{P}^2\rangle$ phase space. The uncertainty product stays close to the Heisenberg limit, with a very small excess factor of less than 30%, during the entire computation process, which suggests the purity of an OPO state is well maintained. [42] Therefore, the above-mentioned positive/negative noise correlation between two OPO pulses, depending on ferromagnetic/anti-ferromagnetic coupling, implements a sort of quantum parallel search. That is, if the two OPO pulses couple ferromagnetically, the formed positive quantum noise correlation prefers the ferromagnetic phase states $|0\rangle_i |0\rangle_j$ and $|\pi\rangle_i |\pi\rangle_j$, where $|0\rangle = \int_0^{\infty} c_x |x\rangle \, dx$ and $|\pi\rangle = \int_{-\infty}^{0} c_x |x\rangle \, dx$. If the two OPO pulses couple anti-ferromagnetically, the formed negative quantum noise correlation prefers the anti-ferromagnetic phase states $|0\rangle_i |\pi\rangle_j$ and $|\pi\rangle_i |0\rangle_j$.
Entanglement and quantum discord between two OPO pulses can be computed to demonstrate such quantum noise correlations. [27][28][29] Figures 1(c) and 1(d) show the degrees of entanglement and quantum discord versus the normalized pump rate $p = F_p/F_{\rm th}$, where $F_p$ is the pump field amplitude incident upon the nonlinear crystal and $F_{\rm th}$ is that at the oscillation threshold for a solitary OPO, for an ODL-CIM with $N = 2$ pulses. [29] Figure 1(c) shows that the Duan-Giedke-Cirac-Zoller entanglement criterion [43] is satisfied at all pump rates. Figure 1(d) shows that the Adesso-Datta quantum discord criterion [44] is also satisfied at all pump rates. [29] Both results on entanglement and quantum discord demonstrate the maximal quantum noise correlation formed at the threshold pump rate $p = 1$. On the other hand, if a (fictitious) mean field without quantum noise is assumed to couple the two OPO pulses, there exists no quantum correlation below or above threshold, as shown by the open circles.
FIG. 1. (a) An optical delay line couples two OPO pulses in the ODL-CIM. [14] (b) Variances $\langle\Delta\hat{X}^2\rangle$ and $\langle\Delta\hat{P}^2\rangle$ in an MFB-CIM with $N$ OPO pulses. The uncertainty product deviates from the Heisenberg limit by less than 30%. [42] (c) Duan-Giedke-Cirac-Zoller inseparability criterion ($E/2 < 1$) vs. normalized pump rate $p = F_p/F_{\rm th}$. Numerical simulations are performed with the positive-$P$, truncated-Wigner and truncated-Husimi stochastic differential equations (SDEs). The dashed line represents an analytical solution. [29] (d) Adesso-Datta quantum discord criterion ($D > 0$) vs. normalized pump rate $p$. The above three SDEs and the analytical result predict identical quantum discord, while the mean-field coupling approximation (MF-A) predicts no quantum discord. [29]
Note that vacuum noise incident from the open port of XBS (see Fig. 1(a)) creates an opposite noise correlation between the internal and external OPO pulses, so that it always degrades the preferred quantum noise correlation between the two OPO pulses after IBS. Thus, squeezing the vacuum noise at the open port of XBS is expected to improve the quantum search performance of an ODL-CIM, which is indeed confirmed by numerical simulation. [28]
The second generation of CIM, demonstrated in 2016, employs a measurement-feedback circuit to couple all N OPO pulses to one another (see Fig. 1 of [16]). The (quadrature) amplitude $x_j$ of a reflected OPO pulse $j$ after XBS is measured by an optical homodyne detector; the measurement result (inferred amplitude) $\tilde{x}_j$ is multiplied by the Ising coupling coefficient $J_{ij}$ and summed over all pulses $j$ in electronic digital circuitry, which produces an overall feedback signal $\sum_j J_{ij}\tilde{x}_j$ for the $i$-th internal OPO pulse. This analog electrical signal is imposed on the amplitude of a coherent optical feedback signal, which is injected into the target OPO pulse $i$ by IBS. In this MFB-CIM operating below threshold, if a homodyne measurement result $\tilde{x}_j$ is positive and the incident vacuum noise from the open port of XBS is negligible, the average amplitude of the internal OPO pulse $j$ is shifted (jumped) in the positive direction by the projection property of such an indirect quantum measurement [45], as shown in Fig. 2. Depending on the value of the feedback signal $J_{ij}\tilde{x}_j$, we can introduce either a positive or a negative displacement of the center position of the target OPO pulse $i$. In this way, depending on the sign of $J_{ij}$, we can implement either a positive or a negative correlation between the two average amplitudes $\langle x_i\rangle$ and $\langle x_j\rangle$, corresponding to ferromagnetic or antiferromagnetic coupling, respectively. Note that an MFB-CIM does not produce entanglement among the OPO pulses but does generate quantum discord if the density operator is defined as an ensemble over many measurement records. [46] A normalized correlation function $C = \langle \Delta\hat{X}_1 \Delta\hat{X}_2\rangle / \sqrt{\langle \Delta\hat{X}_1^2\rangle \langle \Delta\hat{X}_2^2\rangle}$ is an appropriate metric for quantifying such measurement-feedback-induced correlation; its magnitude is shown to govern the final success probability of the MFB-CIM more directly than the quantum discord does.
In general, a MFB-CIM has a larger normalized correlation function and higher success probability than an ODL-CIM. [46]
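As an illustration of the feedback computation described above, the following minimal sketch shows one measurement-feedback round: homodyne readout of the reflected pulses, digital computation of $\sum_j J_{ij}\tilde{x}_j$, and injection of the resulting displacement. The coupling matrix, noise levels, and injection gain are made-up toy values, and the OPO gain, loss, and round-trip dynamics are omitted.

```python
import numpy as np

# Toy single round of MFB-CIM feedback (illustrative constants only):
# measure the reflected pulses, form f_i = sum_j J_ij * x~_j digitally,
# and displace each internal pulse by a small fraction of f_i.
rng = np.random.default_rng(0)
N = 4
J = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)          # toy coupling matrix

x = rng.normal(0.0, 0.1, N)                  # internal in-phase amplitudes
x_meas = x + rng.normal(0.0, 0.05, N)        # homodyne result with vacuum noise

feedback = J @ x_meas                        # f_i = sum_j J_ij * x~_j (digital step)
x = x + 0.1 * feedback                       # displacement injected through IBS
```

With $J_{ij} > 0$ between neighbors, a positive readout $\tilde{x}_j$ displaces pulse $i$ in the positive direction, seeding the ferromagnetic correlation illustrated in Fig. 2.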
FIG. 2. Formation of a ferromagnetic correlation between two OPO pulses in MFB-CIM. [15][16] This example illustrates the noise distributions of the two OPO pulses when the Ising coupling is ferromagnetic ($J_{ij} > 0$) and the measurement result for the $j$-th pulse is $\tilde{x}_j > 0$.
In both ODL-CIM and MFB-CIM, anti-squeezed noise below threshold makes it possible to search for the lowest-loss ground state as well as low-loss excited states before the OPO network reaches threshold. The numerical simulation result shown in Fig. 3 demonstrates the three-step computation of CIM. [28] We study an $N = 16$ one-dimensional lattice with nearest-neighbor antiferromagnetic coupling and periodic boundary condition ($x_1 = x_{17}$), for which the two degenerate ground states are $|0\rangle_1 |\pi\rangle_2 \cdots |0\rangle_{15} |\pi\rangle_{16}$ and $|\pi\rangle_1 |0\rangle_2 \cdots |\pi\rangle_{15} |0\rangle_{16}$. Before the pump field $\varepsilon$ is switched on ($t \le 0$ in Fig. 3), all OPOs are in vacuum states, for which optical homodyne measurement results provide a simple random guess for each spin; the corresponding success rate is $P_s = 1/2^{16} \sim 10^{-5}$, as shown by the horizontal solid line in Fig. 3. We assume that the vacuum noise incident from the open port of XBS is squeezed by 10 dB in the ODL-CIM. When the external pump rate is linearly increased from below to above threshold, the probability of finding the two degenerate ground states increases by two orders of magnitude above the initial success probability. This enhanced success probability stems from the formation of quantum noise correlation among the 16 OPO pulses below threshold. The probability of finding high-loss excited states, which are not shown in Fig. 3, is decreased to below the initial value. This 'quantum preparation' is rewarded at the threshold bifurcation point. When the pump rate reaches threshold, one of the ground states ($|0\rangle_1 |\pi\rangle_2 \cdots |\pi\rangle_{16}$ in the case of Fig. 3) is selected as a single oscillation mode, while the other ground state ($|\pi\rangle_1 |0\rangle_2 \cdots |0\rangle_{16}$) as well as all excited states are not selected. This is not a standard single-oscillator bifurcation but a collective phenomenon among the $N = 16$ OPO pulses due to the existence of anti-ferromagnetic noise correlation. Above threshold, the probability of finding the selected ground state is exponentially increased, while the probabilities of finding the unselected ground state as well as all excited states are exponentially suppressed on a time scale of the order of the signal photon lifetime. Such exponential amplification and attenuation of the probabilities is a unique advantage of a gain-dissipative computing machine, which is absent in a standard (unitary-gate-based) quantum computing system. For example, the Grover search algorithm utilizes a unitary rotation of state vectors and can amplify the target state amplitude only linearly. [25] However, this difference does not mean the computational time of CIM is sub-exponential for hard instances; recent numerical studies suggest that CIM has an improved but still exponentially scaling time. [30] Note that if we stop increasing the pump rate just above threshold, the probability of finding either one of the ground states is less than 1%. Pitchfork bifurcation followed by exponential amplitude amplification plays a crucial role in realizing a high success probability in a short time.
For hard instances of combinatorial optimization problems, in which excited states form numerous local minima, the above quantum search alone is not sufficient to guarantee a high success probability. [30] In the next section, another CIM with error correction feedback is introduced to cope with such hard instances. [26] An alternative approach has been recently proposed. [42] If the pump rate is held just below threshold (corresponding to $t \simeq 60$ in Fig. 3), the lowest-loss ground states and low-loss excited states (fine solutions) have enhanced probabilities while high-loss excited states have suppressed probabilities. By using an MFB-CIM, the optimum as well as good sub-optimal solutions are selectively sampled through an indirect measurement in each round trip of the OPO pulses. This latter approach is particularly attractive if the computational goal is to sample not only optimum solutions but also semi-optimum solutions.
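For the small ring instance discussed above, the ground-state structure and the random-guess baseline can be checked directly. The following brute-force sketch (pure enumeration, feasible only because $N = 16$) confirms that exactly two degenerate alternating configurations minimize the Ising energy of the antiferromagnetic ring:

```python
from itertools import product

# Brute-force check that the N = 16 antiferromagnetic ring (J_ij = -1 between
# nearest neighbors, periodic boundary) has exactly two degenerate ground
# states: the two alternating spin configurations.
N = 16

def ising_energy(s):
    # H = -(1/2) sum_ij J_ij s_i s_j with J = -1 on ring bonds
    return sum(s[i] * s[(i + 1) % N] for i in range(N))

best = None
ground = []
for bits in product((-1, 1), repeat=N):
    e = ising_energy(bits)
    if best is None or e < best:
        best, ground = e, [bits]
    elif e == best:
        ground.append(bits)

print(len(ground), best)   # prints: 2 -16
```

A random guess hits either ground state with probability $2/2^{16} \approx 3 \times 10^{-5}$, consistent with the $P_s = 1/2^{16}$ per post-selected ground state quoted above.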
FIG. 3. The probabilities of finding the two degenerate ground states in ODL-CIM for a one-dimensional lattice with nearest-neighbor anti-ferromagnetic coupling and periodic boundary condition ($x_1 = x_{17}$). [28] Many trajectories produced by the numerical simulation with truncated-Wigner SDE are post-selected by the final state $|0\rangle_1 |\pi\rangle_2 \cdots |0\rangle_{15} |\pi\rangle_{16}$ and ensemble averaged.
## Destabilization of local minima
The measurement-feedback coherent Ising machine has been previously described as a quantum analog device that finishes computation in a classical digital device, in which the amplitude of a selected low-energy spin configuration is exponentially amplified. [22][23] During computation, the sign of the measured in-phase component, denoted $\tilde{x}_i$ with $\tilde{x}_i \in \mathbb{R}$, is associated with the Boolean variable $\sigma_i$ of an Ising problem (whereas the quadrature-phase component decays to zero). A detailed model of the system's dynamics is given by the master equation of the density operator $\rho$ conditioned on measurement results, [47][48] which describes the processes of parametric amplification (exchange of one pump photon into two signal photons), saturation (signal photons are converted back into pump photons), wavepacket reduction due to measurement, and feedback injection that is used for implementing the Ising coupling. For the sake of computational tractability, the truncated Wigner [28] or the positive-P representation [49] can be used with Itô calculus to approximate the quantum state by a probability distribution $P(x_i)$ with $P(x_i) \propto \mathrm{Tr}[\,|x_i\rangle\langle x_i|\,\rho\,]$, from which the measured in-phase component $\tilde{x}_i$ can be calculated as $\tilde{x}_i = \langle x_i\rangle + \gamma n_i$, where $\langle x_i\rangle = \int x_i P(x_i)\,\mathrm{d}x_i$ and the $n_i$ are uncorrelated noise increments with amplitude $\gamma > 0$.
Although gain saturation and dissipation can, in principle, induce squeezing and non-Gaussian states [50] that would justify describing the time evolution of the higher moments of the probability distribution $P$, it is insightful to limit our description to its first moment (the average $\langle x_i\rangle$) in order to explain the computation achieved by the machine in the classical regime. This approximation is justified when the state of each OPO remains sufficiently close to a coherent state during the whole computation process. In this case, the effect of gain saturation and dissipation on the average $\langle x_i\rangle$ can be modeled as a nonlinear function $x \mapsto f(x)$, and the feedback injection is given as $\beta_i \sum_j J_{ij}\, g(\langle x_j\rangle + \gamma n_j)$, where $f$ and $g$ are sigmoid functions, $J_{ij}$ the Ising couplings, and $\beta_i$ the amplitude of the coupling. When the amplitudes $|\langle x_i\rangle|$ of the OPO signals are much larger than the noise amplitude $\gamma$, the system can be described by simple differential equations given as $(\mathrm{d}/\mathrm{d}t)\langle x_i\rangle = f(\langle x_i\rangle) + \beta \sum_j J_{ij}\, g(\langle x_j\rangle)$, where we set $\beta_i = \beta,\ \forall i$. It can be shown that the time evolution of the system is a motion in state space that seeks out minima of a potential (or Lyapunov) function $V$ given as $V = V_b(\boldsymbol{y}) + \beta H(\boldsymbol{y})$, where $V_b$ is a bistable potential with $V_b(\boldsymbol{y}) = -\sum_i \int^{y_i} f(g^{-1}(y))\,\mathrm{d}y$, and $H(\boldsymbol{y}) = -(1/2)\sum_{ij} J_{ij}\, y_i y_j$ is the extension of the Ising Hamiltonian to real space with $y_i = g(\langle x_i\rangle)$. [21][51] The connection between such nonlinear differential equations and the Ising Hamiltonian has been used in various models, such as the 'soft' spin description of frustrated spin systems [52] or the Hopfield-Tank neural networks [51] for solving NP-hard combinatorial optimization problems. Moreover, an analogy with the mean-field theory of spin glasses can be made by recognizing that the steady states of these nonlinear equations correspond to the solutions of the 'naive' Thouless-Anderson-Palmer (TAP) equations, [53] which arise from the mean-field description of Sherrington-Kirkpatrick spin glasses in the limit of a large number of spins $N$ and are given as $\langle \sigma_i\rangle = \tanh((1/T)\sum_j J_{ij}\langle \sigma_j\rangle)$, with $\langle \sigma_i\rangle$ the thermal average of the Ising spin at temperature $T$ (obtained by setting $g(x) = \tanh(x)$ and $f(x) = -x$). This analogy suggests that the parameter $\beta$ can be interpreted as an inverse temperature in the thermodynamic limit when the Onsager reaction term is discarded. [53] At $\beta = 0$ ($T \to \infty$), the only stable state of the CIM is $\langle x_i\rangle = 0$, for which any spin configuration is equiprobable, whereas at $\beta \to \infty$ ($T = 0$) the state remains trapped for an infinite time in local minima. We will discuss in much more detail the analogies between CIM dynamics and TAP equations, and also belief and survey propagation, in the special case of the SK model in the next section.
In the case of spin glasses, statistical analysis of the TAP equations suggests that the free-energy landscape has an exponentially large number of solutions near zero temperature, [54] and we can expect similar statistics for the potential $V$ when $\beta \to \infty$. In order to reduce the probability of the CIM becoming trapped in one of the local minima of $V$, it has been proposed to gradually increase the coupling strength $\beta$ during computation. [16] This heuristic, which we call the open-loop CIM in the following, is similar to mean-field annealing [55] and consists in letting the system seek out minima of a potential function $V$ that is gradually transformed from monostable to multistable (see Fig. 4(a) and (b1)). In contrast to the quantum adiabatic theorem [56] or the convergence theorem of simulated annealing, [57] there is however no guarantee that a sufficiently slow deformation of $V$ will ensure convergence to the configuration of lowest Ising Hamiltonian. In fact, linear stability analysis suggests, on the contrary, that the first state other than the vacuum state ($\langle x_i\rangle = 0,\ \forall i$) to become stable as $\beta$ is increased does not correspond to the ground state. Moreover, the added noise $\gamma n_i$ may not be sufficient for ensuring convergence: [58] global convergence to the minima of the potential $V$ can be sought by gradually reducing the noise amplitude $\gamma$ (with $\gamma(t)^2 \sim c/\log(2+t)$ and $c$ a sufficiently large real constant [59]), but the global minima of the potential $V(\boldsymbol{y})$ do not generally correspond to those of the Ising Hamiltonian $H(\boldsymbol{\sigma})$ at fixed $\beta$. [13][21] This discrepancy between the minima of the potential $V$ and the Ising Hamiltonian $H$ can be understood by noting that the field amplitudes $\langle x_i\rangle$ are not all equal (or homogeneous) at the steady state, that is, $\langle x_i\rangle = \sigma_i\sqrt{a} + \delta_i$, where $\delta_i$ is the variation of the $i$-th OPO amplitude with $\delta_i \neq \delta_j$, and $\sqrt{a}$ is a reference amplitude defined such that $\sum_i \delta_i = 0$. Because of this heterogeneity in amplitude, the minima of $V(\langle \boldsymbol{x}\rangle) = V(\boldsymbol{\sigma}\sqrt{a} + \boldsymbol{\delta})$ do not correspond to those of $H(\boldsymbol{\sigma})$ in general. Consequently, it is necessary in practice to run the open-loop CIM from multiple initial conditions in order to find the ground-state configuration.
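The two schedules just described, a gradual ramp of the coupling $\beta$ and a slowly decaying noise amplitude $\gamma(t)^2 \sim c/\log(2+t)$, can be written down directly; the ramp shape and the constant $c$ below are illustrative placeholders:

```python
import numpy as np

# Illustrative open-loop annealing schedules: beta(t) ramps up, deforming V
# from monostable to multistable, while gamma(t) decays logarithmically as
# suggested for global convergence. All constants are made up.
T, steps = 200.0, 20000
t = np.linspace(0.0, T, steps)
beta_max, c = 0.5, 1.0
beta = beta_max * t / T                     # linear coupling ramp
gamma = np.sqrt(c / np.log(2.0 + t))        # gamma(t)^2 = c / log(2 + t)
```

The logarithmic decay is extremely slow, which is precisely why, as noted above, it does not translate into a practical convergence guarantee for the Ising problem itself.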
FIG. 4. (a) Schematic representation of the open- and closed-loop measurement-feedback coherent Ising machines, whose computational principles in the mean-field approximation are based on two different types of
dynamical systems: gradual deformation of a potential function $V$ (b1) and chaotic-like dynamics (b2), respectively.
Because the benefits of using an analog state for finding the ground-state spin configurations of the Ising Hamiltonian are offset by the negative impact of its improper mapping to the potential function $V$, we have proposed to utilize supplementary dynamics that are not related to the gradient descent of a potential function but ensure that the global minima of $H$ are reached rapidly. In Ref. [26], an error correction feedback loop has been proposed whose role is to reduce the amplitude heterogeneity $\delta_i$ by forcing the squared amplitudes $\langle x_i\rangle^2$ to become all equal to a target value $a$, thus forcing the measurement-feedback coupling $\{\sum_j J_{ij}\, g(\langle x_j\rangle)\}_i$ to be collinear with the Ising internal field $\boldsymbol{h}$, with $h_i = \sum_j J_{ij}\sigma_j$. This can notably be achieved by introducing error signals, denoted $e_i$ with $e_i \in \mathbb{R}$, that modulate the coupling strength $\beta_i$ (or 'effective' inverse temperature) of the $i$-th OPO such that $\beta_i = \beta e_i(t)$, with the time evolution of $e_i$ given as $(\mathrm{d}/\mathrm{d}t)\, e_i = -\xi\,(g(\langle x_i\rangle)^2 - a)\,e_i$, where $\xi$ is the rate of change of the error variables with respect to the signal field. This mode of operation is called the closed-loop CIM and can be realized experimentally by simulating the dynamics of the error variables $e_i$ on the FPGA that the measurement-feedback CIM already uses for calculating the Ising coupling [16] (see Fig. 4(a)). Note that the concept of amplitude-heterogeneity error correction has also been recently proposed in [20] and extended to other systems such as the XY model. [20][60][61]
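A minimal sketch of these closed-loop dynamics is given below, again with the stand-in nonlinearities $f(x) = x - x^3$ and $g(x) = x$ and made-up parameters; it tracks the lowest Ising energy visited while the error variables push the squared amplitudes toward the target $a$ and destabilize local minima:

```python
import numpy as np

# Closed-loop CIM sketch (assumed nonlinearities, toy parameters):
#   dx_i/dt = f(x_i) + beta*e_i * sum_j J_ij x_j
#   de_i/dt = -xi*(x_i^2 - a)*e_i    (amplitude-heterogeneity correction)
rng = np.random.default_rng(2)
N, beta, xi, a, dt = 8, 0.3, 0.5, 1.0, 0.01
J = np.triu(np.sign(rng.normal(0.0, 1.0, (N, N))), 1)
J = J + J.T                                  # random +/-1 symmetric couplings

def H(s):
    return -0.5 * s @ J @ s                  # Ising energy of a spin configuration

x = rng.normal(0.0, 0.1, N)
e = np.ones(N)                               # error variables, start at 1
best = H(np.sign(x))
for _ in range(5000):
    dx = (x - x**3) + beta * e * (J @ x)
    de = -xi * (x**2 - a) * e
    x, e = x + dt * dx, e + dt * de
    best = min(best, H(np.sign(x)))          # lowest Ising energy visited so far
```

Unlike the open-loop run, the trajectory does not settle into the first stable configuration it meets; the modulated gains keep the state moving through successive low-energy sign configurations.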
In the case of the closed-loop CIM, the system exhibits steady states only at the local minima of $H$. [26] The stability of each local minimum can be controlled by setting the target amplitude $a$ as follows: the dimension $d_u$ of the unstable manifold (where $d_u$ is the number of unstable directions) at a fixed point corresponding to a local minimum $\boldsymbol{\sigma}$ of the Ising Hamiltonian is equal to the number of eigenvalues $\lambda_k(\boldsymbol{\sigma})$ such that $\lambda_k(\boldsymbol{\sigma}) > F(a)$, where the $\lambda_k(\boldsymbol{\sigma})$ are the eigenvalues of the matrix $\{J_{ij}/|h_i|\}_{ij}$ (with internal field $h_i = \sum_j J_{ij}\sigma_j$) and $F$ is a function shown in Fig. 5(a). The parameter $a$ can be set such that all local minima (including the ground state) are unstable, so that the dynamics cannot become trapped in any fixed-point attractor. The system then exhibits chaotic dynamics that successively explores local minima. Note that the use of chaotic dynamics for solving Ising problems has been discussed previously, [24][62] notably in the context of neural networks, and it has been argued that chaotic fluctuations may possess better properties than Brownian noise for escaping from local-minima traps. In the case of the closed-loop CIM, the chaotic dynamics is not merely used as a replacement for noise. Rather, the interaction between nonlinear gain saturation and error correction allows a greater reduction of the unstable-manifold dimension for states associated with lower Ising Hamiltonian (see Fig. 5(b)). Comparison between Fig. 5(c1,d1,e1) and (c2,d2,e2) indeed shows that the dynamics of the closed-loop CIM samples more efficiently from lower-energy states when the gain saturation is nonlinear than when it is not.
FIG. 5. (a) Stability of local minima as a function of the target amplitude $\sqrt{a}$ for $f$ sigmoid and linear, i.e., with and without gain saturation, respectively. The red and black crosses correspond to the maximum values of $\lambda_k(\sigma)$ for the ground state and excited states, respectively, of an example spin-glass instance of $N$ spins. (b) Dimension of the unstable manifold vs. the target amplitude $\sqrt{a}$ for the various local minima, with the Ising Hamiltonian $H$ shown by the color gradient, in the case of $f$ sigmoid. (c1), (d1), and (e1) show the time evolution of the in-phase components $\langle x_i\rangle$, error variables $e_i$, and Ising Hamiltonian in the case of $f$ sigmoid. (c2), (d2), and (e2): same as (c1), (d1), and (e1) in the case of $f$ linear. In (e1) and (e2), the red lines show the Ising Hamiltonian of the local minima.
Generally, the asymmetric coupling between the in-phase components and the error signals can result in the creation of limit cycles or chaotic attractors that trap the dynamics in a region that does not include the global minima of the Ising Hamiltonian. A possible approach to prevent the system from getting trapped in such non-trivial attractors is to dynamically modulate the target amplitude such that the divergence of the velocity vector field remains positive. [26] This implies that volumes along the flow never contract, which in turn prevents the existence of any attractor.
Figure 6(a) shows that the closed-loop CIM can find the ground states of Sherrington-Kirkpatrick spin glass problems with high success probability in a single run, even for larger system sizes. Moreover, the correction of amplitude heterogeneity allows a significant decrease in the time-to-solution compared to the open-loop case, which is evaluated by calculating the number of cavity round trips of the OPO pulses (called the number of iterations and denoted $n_s$) needed to find the ground-state configurations with 99% success probability (see Fig. 6(b)). Because there is no theoretical guarantee that the system will find a configuration whose Ising Hamiltonian lies within a given ratio of the ground-state energy after a given computational time, the closed-loop CIM is classified as a heuristic method. In order to compare it with other state-of-the-art heuristics, the proposed scheme has been applied to instances of standard benchmarks (such as the G-set), comparing times-to-solution for reaching a predefined target such as the ground-state energy, if it is known, or the smallest known (i.e., published) energy otherwise. The amplitude-heterogeneity error-correction scheme can in particular find low-energy configurations of MAXCUT problems from the G-set of similar quality to those of the state-of-the-art solver called BLS [63] (see the supplementary material of Ref. [26] for details). Moreover, the average times-to-solution obtained using the proposed scheme are similar to those obtained using BLS when simulated on a desktop computer, but are expected to be 100-1000 times smaller for an implementation on the coherent Ising machine.
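The 99%-success-probability metric can be made concrete with a short sketch. Assuming a single run of $n$ iterations succeeds with probability $p_s$, the number of iterations needed to reach 99% overall success by independent restarts follows the standard time-to-solution rescaling (the function name is illustrative):

```python
import math

def iterations_to_solution(n, p_s, target=0.99):
    """Iterations needed to find the ground state with probability `target`,
    given a single run of n iterations that succeeds with probability p_s."""
    if p_s >= target:
        return float(n)               # a single run already suffices
    return n * math.log(1.0 - target) / math.log(1.0 - p_s)
```

For example, a run of $10^3$ iterations with a 50% single-run success probability needs roughly $6.6 \times 10^3$ iterations in total; minimizing this quantity over $n$ gives the $n_s$ plotted in Fig. 6(b).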
FIG. 6. (a) Success probability $\langle p_0(n)\rangle$ of finding the ground-state configuration after $n$ iterations of a single run, averaged over 100 randomly generated Sherrington-Kirkpatrick spin glass instances, and (b) 50th, 80th, and 90th percentiles of the iteration-to-solution distribution, where $n_s$ of a given instance is given as $n_s = \min_n\{n_s(n)\}$ with $n_s(n) = n\,\log(1-0.99)/\log(1-p_0(n))$, for the open-loop (red) and closed-loop (green) CIM.
## Qualitative parallels between the CIM, belief propagation and survey propagation
As we have noted above, the CIM approach to solving combinatorial optimization problems over binary-valued spin variables $\sigma_i = \pm 1$ can be understood in terms of two key steps. First, in the classical limit of the CIM, the binary-valued spin variables $\sigma_i$ are promoted to analog variables $x_i$ reflecting the (quadrature) amplitude of the $i$-th OPO mode, and the classical CIM dynamics over the variables $x_i$ can be described by a nonlinear differential equation (Eq. 1). Second, in a more quantum regime, the CIM implements a quantum parallel search over this space that focuses quantum amplitudes on the ground state. A qualitatively similar two-step approach of state augmentation followed by parallel search has also been pursued in statistics- and computer-science-based approaches to combinatorial optimization, specifically in the forms of the algorithms known as belief propagation (BP) [64] and survey propagation (SP). [65] Here we outline similarities and differences between the CIM, BP, and SP. Forming a bridge between these fields can help progress through the cross-pollination of ideas in two distinct ways. First, our theoretical understanding of BP and SP may provide further tools, beyond the dynamical-systems approaches described above, to develop a theoretical understanding of CIM dynamics. Second, differences between CIM dynamics and BP or SP dynamics may provide further inspiration for the rational engineering design of modified CIM dynamics that could lead to improved performance. Indeed, there is a rich literature connecting BP and SP to other ideas in statistical physics, such as the Bethe approximation, the replica method, the cavity method, and the TAP equations. [66][67][68][69][70] It may also be interesting to explore connections between these ideas and the theory of CIM dynamics.
We begin by discussing BP, which is a general method for computing the marginal distribution $P(\sigma_i) = \sum_{\sigma_{/i}} P(\sigma_1, \ldots, \sigma_N)$ from a complex joint distribution over $N$ variables $\sigma_i$ for $i = 1, \ldots, N$. Here the sum over $\sigma_{/i}$ denotes a sum over all variables $\sigma_j$ for $j \neq i$. For binary spin variables $\sigma_i = \pm 1$, and for general joint distributions $P(\sigma_1, \ldots, \sigma_N)$, the computation of this marginal distribution is intractable, as it involves a sum over $O(2^N)$ spin configurations. However, consider the case where the joint distribution has a locally factorizable structure of the form

$$P ( \sigma _ { 1 } , \ldots , \sigma _ { N } ) \, = \, \frac { 1 } { Z } \prod _ { a = 1 } ^ { M } \psi _ { a } ( \sigma _ { a } ) .$$

Here we have $M$ interaction terms indexed by $a = 1, \ldots, M$, and each interaction term, or factor, $\psi_a$ depends on a subset of the spins denoted by $\sigma_a$. (Note that $a$ indexes an interaction term in this section, while it denoted a target intensity in the previous section.) For example, in the case of Ising spin systems with pairwise interactions at inverse temperature $\beta$, each subset $a$ corresponds to a pair of spins $a = \{i, j\}$, the corresponding factor is given by $\psi_a(\sigma_a) = e^{\beta J_{ij}\sigma_i\sigma_j}$, and $Z$ is the usual partition function that normalizes the distribution. The structure of this joint distribution can be visualized as a factor graph, with $N$ circular nodes denoting the variables $\sigma_i$ and $M$ square factor nodes denoting the interactions $\psi_a$ (Fig. 7(a)). A variable node $i$ is connected to a factor node $a$ if and only if variable $i$ belongs to the subset $a$, or equivalently if the interaction term $\psi_a$ depends on $\sigma_i$.
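To make the cost of direct marginalization concrete, the following sketch computes $P(\sigma_i = +1)$ for a pairwise Ising distribution by exhaustive enumeration; the $O(2^N)$ loop is exactly what BP is designed to avoid. The function name is illustrative, and the optional external field `h` is our own small extension for testing purposes (the factorized model above has no field term).

```python
import itertools
import math
import numpy as np

def brute_force_marginal(J, h, beta, i):
    """Exact marginal P(sigma_i = +1) of an Ising distribution
    P(sigma) ~ exp(beta * (0.5 * sigma @ J @ sigma + h @ sigma)),
    computed by summing over all 2^N configurations (tractable only
    for small N). The factor 0.5 counts each pair once."""
    n = J.shape[0]
    Z, num = 0.0, 0.0
    for sigma in itertools.product([-1, 1], repeat=n):
        s = np.array(sigma, dtype=float)
        w = math.exp(beta * (0.5 * s @ J @ s + h @ s))
        Z += w
        if s[i] == 1:
            num += w
    return num / Z
```

With no field, the global spin-flip symmetry forces every marginal to $1/2$; a field on one spin shifts the marginals in a way that can be checked against the closed-form two-spin answer.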
FIG. 7. (a) A factor graph representation of a joint probability distribution, where each square is a factor node encoding an interaction $\psi_a(\sigma_a)$. Each factor node is connected to a subset $a$ of variable nodes (circles), where each variable $\sigma_i$ is connected to factor $a$ if and only if $i \in a$, or equivalently if variable $\sigma_i$ participates in the interaction $\psi_a$. (b) The flow of messages contributing to the BP update of the message $M^{t+1}_{b\to i}(\sigma_i)$. (c) For the special case of binary Ising variables $\sigma_i$, the BP messages $M^t_{i\to b}(\sigma_i)$ and $M^{t+1}_{b\to i}(\sigma_i)$ can be parameterized in terms of the magnetizations $m^t_{i\to b}$ and $m^{t+1}_{b\to i}$. Furthermore, in the special case of pairwise interactions, $m^{t+1}_{b\to i}$ is solely a function of $m^t_{i\to b}$ (in addition to the coupling constant $J_{ij}$ and the temperature), so we can write the BP updates solely in terms of $m^t_{i\to b}$, which we rename $m^t_{i\to j}$; it can be thought of as the magnetization of spin $\sigma_i$ in a cavity system with the coupling $J_{ij}$ removed. (d) The flow of messages contributing to the BP update of the cavity magnetization $m^{t+1}_{i\to j}$ in Eq. 3, again specialized to the case of Ising spins with pairwise interactions.
BP can then be viewed as an iterative dynamical algorithm for computing a marginal $P(\sigma_i)$ by passing messages along the factor graph. In the case of combinatorial optimization, we can focus on the zero-temperature limit $\beta \to \infty$. We will first describe the BP algorithm intuitively, and later give justification for it. BP employs two types of messages: one from variables to factors and another from factors to variables. Each message is a probability distribution over a single variable. We denote by $M^t_{i\to a}(\sigma_i)$ the message from variable $i$ to factor $a$ at iteration time $t$. It can be thought of intuitively as an approximation to the marginal distribution of $\sigma_i$ induced by all other interactions $b \neq a$. Thus $M^t_{i\to a}(\sigma_i)$ models the distribution over $\sigma_i$ in a cavity system in which interaction $a$ has been removed. Furthermore, we denote by $M^t_{a\to i}(\sigma_i)$ the message from factor $a$ to variable $i$ at iteration time $t$. Intuitively, we can think of $M^t_{a\to i}(\sigma_i)$ as the distribution on $\sigma_i$ induced by the direct influence of interaction $a$. These messages are called beliefs, as they indicate the probabilities that different interactions believe a single variable should assume. The BP equations amount to updating these beliefs to make them self-consistent with each other. For example, the (unnormalized) update equation for messages from factors to variables (see Fig. 7(b)) takes the form $M^{t+1}_{a\to i}(\sigma_i) = \sum_{\sigma_{a/i}} \psi_a(\sigma_a) \prod_{j \in a/i} M^t_{j\to a}(\sigma_j)$. Here, $a/i$ denotes the set of all variables in set $a$ with variable $i$ removed. Intuitively, the direct influence $M^{t+1}_{a\to i}(\sigma_i)$ of interaction $a$ on variable $i$ is computed by integrating out of the factor $\psi_a(\sigma_a)$ all variables $j$ participating in interaction $a$ other than $i$, supplemented by accounting for the effect of all other interactions besides $a$ through the product of beliefs $M^t_{j\to a}(\sigma_j)$ over all variables $j \in a/i$.
This product structure essentially encodes an implicit assumption that the variables $j \in a$ would be independent of each other if interaction $a$ were removed. This would be exactly true if the factor graph were a tree with no loops. Similarly, the messages from variables to factors are updated via $M^{t+1}_{i\to a}(\sigma_i) = \prod_{b \in i/a} M^{t+1}_{b\to i}(\sigma_i)$ (see Fig. 7(b)). Intuitively, the belief $M^{t+1}_{i\to a}(\sigma_i)$ on variable $i$ induced by all other interactions besides $a$ is simply the product of the direct influences $M^{t+1}_{b\to i}(\sigma_i)$ of all interactions $b$ that involve variable $i$ besides interaction $a$ (this set of interactions is denoted by $i/a$). Belief propagation involves randomly initializing the messages and repeating these iterations until convergence. If the messages converge, the marginal distribution of any variable can be computed as $P(\sigma_i) \propto \prod_{a \in i} M^t_{a\to i}(\sigma_i)$. In essence, the marginal is the product of all direct influences $M^t_{a\to i}(\sigma_i)$ over all interactions $a$ that contain variable $i$.
For a general factor graph, there is no guarantee that the BP update equations will converge in finite time, and even if they do, there is no guarantee that the converged messages will yield accurate marginal distributions. However, if the factor graph is a tree, then it can be proven that the BP update equations do indeed converge, and moreover they converge to the correct marginals. [64] Moreover, even in graphs with loops, the fixed points of the BP update equations were shown to be in one-to-one correspondence with extrema of a certain Bethe free energy approximation to the true free energy associated with the factor graph distribution. [71] This observation yielded a seminal connection between BP in computer science and the Bethe approximation in statistical physics. The exactness of BP on tree graphs, as well as the variational connection between BP and the Bethe free energy on graphs with loops, motivated the further study of BP updates in sparsely connected random factor graphs in which loops are of size $O(\log N)$. In many such settings BP updates converge and yield good approximate marginals. [66] In particular, if correlations between the variables $j \in a$ adjacent to a factor $a$ are weak upon removal of that factor, then BP is thought to work well.
Now specializing to the case of Ising spins in which each variable $\sigma = \pm 1$, every message $M(\sigma)$ is a distribution over a single spin, and can be uniquely characterized by a field $h$ via the relation $M(\sigma) \propto e^{\beta h \sigma}$, or equivalently through the magnetization $m = \tanh(\beta h)$. Thus the BP update equations for the beliefs $M^t_{i\to a}(\sigma_i)$ and $M^t_{a\to i}(\sigma_i)$ can be viewed as a discrete-time dynamical system on a collection of real-valued fields $h^t_{i\to a}$ and $h^t_{a\to i}$, or equivalently magnetizations $m^t_{i\to a}$ and $m^t_{a\to i}$. Furthermore, in the case of pairwise interactions, where between any pair of spins there is at most one interaction, we can further simplify the BP equations. For instance, along any directed edge from spin $\sigma_i$ to spin $\sigma_j$ passing through an interaction $a$ with coupling constant $J_{ij}$, BP maintains two messages, parameterized by $m^t_{i\to a}$ and $m^{t+1}_{a\to j}$, where $m^{t+1}_{a\to j}$ is solely a function of $m^t_{i\to a}$, $J_{ij}$, and $\beta$ (Fig. 7(c)). Thus we can write the BP update equations for Ising systems solely in terms of one of the messages, which we rename $m^t_{i\to a} \equiv m^t_{i\to j}$. For each connection in the Ising system there are now two magnetizations, $m^t_{i\to j}$ and $m^t_{j\to i}$, corresponding to messages flowing along the two directions of the connection. Intuitively, $m^t_{i\to j}$ is the magnetization of spin $\sigma_i$ in a cavity system where the coupling $J_{ij}$ has been removed. Similarly, $m^t_{j\to i}$ is the magnetization of spin $\sigma_j$ in the same cavity system with coupling $J_{ij}$ removed. Some algebra reveals [66][68] that the BP equations in terms of the cavity magnetizations $m^t_{i\to j}$ are given by
$$m _ { i \rightarrow j } ^ { t + 1 } \, = \, \tanh \left [ \sum _ { k \in i / j } \tanh ^ { - 1 } \{ \tanh \left ( \beta J _ { i k } \right ) m _ { k \rightarrow i } ^ { t } \} \right ] .$$
Here the sum over $k \in i/j$ denotes a sum over all neighbors of spin $i$ other than spin $j$. See Fig. 7(d) for a visualization of the flow of messages underlying Eq. 3. These BP equations maintain two magnetizations associated with each connection in the Ising system, corresponding to the two directions of flow across each edge. The messages are initialized randomly and updated according to equation (3), with the magnetizations thus flowing bidirectionally through the Ising network, hopefully converging to a set of fixed-point magnetizations. The marginal of a spin $\sigma_i$ at iteration $t$ is then given by the magnetization $m^t_i = \tanh\left[\sum_{j \in i} \tanh^{-1}\{\tanh(\beta J_{ij})\, m^t_{j\to i}\}\right]$. While this dynamics promotes the spin variables to analog variables, like the classical CIM dynamics in Eq. 1, it also bears three salient differences from the CIM dynamics: it operates in discrete time, maintains two analog variables per edge rather than one analog variable per spin, and uses a different nonlinearity. Of course, the specialized BP dynamics are expected to work well specifically for classes of sparsely connected, tree-like Ising systems in which removing an interaction $J_{ij}$ from the system makes the spins $\sigma_i$ and $\sigma_j$ approximately independent, but they are not guaranteed to yield accurate marginals in more general settings.
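For pairwise models, Eq. 3 vectorizes naturally: keeping an $N \times N$ array whose $(i,j)$ entry is $m_{i\to j}$, one sweep updates all directed edges at once. A minimal sketch follows (the function name is illustrative; absent couplings $J_{ij} = 0$ contribute nothing, since $\tanh(0) = 0$):

```python
import numpy as np

def bp_cavity_sweeps(J, beta, m0, steps=50):
    """Iterate Eq. 3 for a pairwise Ising model. m[i, j] holds the cavity
    magnetization m_{i->j} (spin i with coupling J_ij removed); m0 entries
    must lie in (-1, 1) so that arctanh stays in its domain."""
    m = m0.copy()
    for _ in range(steps):
        # h[i, k] = atanh( tanh(beta * J_ik) * m_{k->i} )
        h = np.arctanh(np.tanh(beta * J) * m.T)
        m = np.tanh(h.sum(axis=1)[:, None] - h)   # exclude k = j from the sum
    h = np.arctanh(np.tanh(beta * J) * m.T)
    marginals = np.tanh(h.sum(axis=1))            # m_i at the final iterate
    return m, marginals
```

On an open chain with zero external field the exact magnetizations vanish, and BP recovers this exactly: the chain ends send messages with an empty cavity sum, so zeros propagate inward within a few sweeps.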
The BP equations for Ising systems can also be used to derive the famous TAP equations [72] for the Sherrington-Kirkpatrick (SK) model, [35] where each coupling constant $J_{ij}$ is chosen i.i.d. from a zero-mean Gaussian distribution with variance $1/N$. Because the couplings are now $O(1/\sqrt{N})$, we can Taylor expand Eq. 3 to obtain $m^{t+1}_{i\to j} = \tanh\left[\sum_{k \in i/j} \beta J_{ik}\, m^t_{k\to i}\right]$. This update equation is still written in terms of cavity magnetizations $m^{t+1}_{i\to j}$ in a system in which a single coupling $J_{ij}$ is removed. Because the SK model connectivity is dense, with each individual coupling $O(1/\sqrt{N})$, each cavity magnetization $m^{t+1}_{i\to j}$ with $J_{ij}$ removed is close to the actual magnetization $m^{t+1}_i$ when $J_{ij}$ is present. By Taylor expanding in the small difference between $m^{t+1}_{i\to j}$ and $m^{t+1}_i$, one can write the BP update equations for the case of dense mean-field connectivity solely in terms of the variables $m^{t+1}_i$ (see [68] for a derivation of the TAP equations from this BP perspective):
$$m_{i}^{t+1} \, = \, \tanh \left[ \sum_{j} \beta J_{ij}\, m_{j}^{t} \, - \, \beta^{2}\, m_{i}^{t-1} \sum_{j} J_{ij}^{2} \left\{ 1 - \left( m_{j}^{t} \right)^{2} \right\} \right] \qquad (4)$$
This achieves a dramatic simplification of the dynamics in Eq. 3, from tracking $O(N^2)$ cavity messages to tracking only $N$ variables, and as such is more similar to the CIM dynamics in Eq. 1. Again, several differences remain: the dynamics in Eq. 4 operates in discrete time, uses a different nonlinearity, and has an interesting structured history dependence extending over two time steps. Remarkably, although BP was derived with the setting of sparse random graphs in mind, this particular form of the approximate BP equations for the dense mean-field SK model can be proven to converge to the correct magnetizations as long as the SK model is outside of the spin glass phase. [73]
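A minimal sketch of iterating Eq. 4 for an SK-like coupling matrix follows; the initialization scale, iteration count, and the paramagnetic-phase check in the usage note are illustrative assumptions, not part of the cited analysis. Note that the Onsager (second) term uses the magnetization from the previous time step, which is the structured history dependence mentioned above and is essential for convergence. [73]

```python
import numpy as np

def tap_iteration(J, beta, iters=300, seed=0):
    """Iterate the TAP equations of Eq. 4 for a dense coupling matrix J.

    m^{t+1} = tanh[ beta*J@m^t - beta^2 * m^{t-1} * (J^2 @ (1 - (m^t)^2)) ]
    """
    n = J.shape[0]
    rng = np.random.default_rng(seed)
    m_prev = np.zeros(n)
    m = 0.01 * rng.standard_normal(n)   # small random start breaks symmetry
    for _ in range(iters):
        onsager = beta**2 * m_prev * ((J**2) @ (1.0 - m**2))
        m_next = np.tanh(beta * (J @ m) - onsager)
        m_prev, m = m, m_next
    return m

def sk_couplings(n, seed=0):
    """Symmetric zero-mean Gaussian couplings with variance 1/n, zero diagonal."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n)) / np.sqrt(n)
    J = (A + A.T) / np.sqrt(2)          # symmetrize while keeping variance 1/n
    np.fill_diagonal(J, 0.0)
    return J
```

As a sanity check, deep in the high-temperature phase (small $\beta$, zero field) the iteration should relax to the paramagnetic fixed point $m_i \approx 0$; in the spin glass phase no such convergence is guaranteed.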
So far, we have seen a set of analog approaches to solving Ising systems in specialized cases (sparse random and dense mean-field connectivities). However, these local update rules do not work well when such connectivities exhibit spin glass behavior. The key impediment to local algorithms in the spin glass regime is thought to be the existence of multiple minima in the free energy landscape over spin configurations. [66] This multiplicity yields a high reactivity of the spin system to the addition or flip of a single spin. For example, if a configuration lies within a valley with low free energy and one forces a single spin flip, this external force might slightly raise the energy of the current valley and lower the energy of another valley that is far away in spin configuration space but nearby in energy, thereby making those distant spin configurations preferable from an optimization perspective. In such a highly reactive situation, flipping one spin at a time will not enable one to jump from a valley that was optimal (lower energy) before the spin flip to a far-away valley that is more optimal (even lower energy) after the spin flip. This physical picture of multiple valleys that are well separated in spin configuration space, but whose energies are near each other and can therefore reshuffle their energy order upon flips of individual spins, motivated the invention of alternative algorithms that extend belief propagation to survey propagation (SP). The key idea, in the context of an Ising system, is that the magnetizations $m_{i\to j}$ of BP now correspond to the magnetizations of spin configurations within a single free energy valley (still in a cavity system with the coupling $J_{ij}$ removed). SP goes beyond this to keep track of the distribution of BP messages across all the free energy valleys. We denote this distribution at iteration $t$ by $P^t(m_{i\to j})$. The distribution over BP beliefs is called a survey.
SP propagates these surveys, or distributions over the BP messages across different valleys, taking into account changes in the free energy of the various valleys before and after the addition of a coupling $J_{ij}$. This more nonlocal SP algorithm can find solutions to hard constraint satisfaction problems in situations where the local BP algorithm fails. [65] Furthermore, recent work going beyond SP, but specialized to the SK model, yields message passing equations that can provably find near-ground-state spin configurations of the SK model (under certain widely believed assumptions about the geometry of the SK model's free energy landscape), with a running time that grows as the energy gap between the found solution and the ground state shrinks. [36]
Interestingly, the promotion of the analog magnetizations $m_{i\to j}^{t+1}$ of BP to distributions $P^t(m_{i\to j})$ over these magnetizations is qualitatively reminiscent of the promotion of the classical analog variables of the CIM to quantum wavefunctions over these variables. However, this is merely an analogy, to be used as a potential inspiration for both understanding and augmenting current quantum CIM dynamics. Moreover, the SP picture cannot account for quantum correlations. Overall, much further theoretical and empirical work needs to be done to obtain a quantitative understanding of the behavior of the CIM in the quantum regime and of the behavior of SP for diverse combinatorial Ising spin systems beyond the SK model, as well as potential relations between the two approaches. An intriguing possibility is that the quantum CIM dynamics enables a nonlocal parallel search over multiple free energy valleys in a manner that may be more powerful than the SP dynamics due to the quantum nature of the CIM.
## Future Outlook
While current MFB-CIM hardware implementations would not seem capable of sustaining even limited transient entanglement because of their continual projection of each spin-amplitude on each round trip, it is possible that near-term prototypes could probe quantum-perturbed CIM dynamics at least in the small-$N$ regime. A recent analysis [74] of a modified MFB-CIM architecture utilizing entanglement-swapping-type measurements shows that it should be possible to populate entangled states of the spin-amplitudes (of specific structure determined by the measurement configuration), if the round-trip optical losses can be made sufficiently small. This type of setup could be used to enable certain entanglement structures to be created by transient non-local flow of quantum states through phase space, or to create specific entangled initial states for future CIM algorithms that exploit quantum interference in some more directed way. One may speculate that the impact of quantum phenomena could become more pronounced in CIMs with extremely low pump threshold, for which quantum uncertainties could potentially be larger relative to the scale of topological structures in the mean-field (in a quantum-optical sense) phase space in the critical near-threshold regime. Prospects for realizing such low-threshold CIM hardware have recently been boosted by progress towards the construction of optical parametric oscillators using dispersion-engineered nanophotonic lithium niobate waveguides and ultra-fast pump pulses. [75]
For methods that rely on the relaxation of a potential function, either a Lyapunov function for dynamical systems or a free energy landscape for Monte Carlo simulations, it is generally believed that the exponential increase in the number of local minima is responsible for the difficulty of finding the ground states. It has been suggested that the presence of an even greater number of critical points may prevent the dynamics from descending rapidly to lower energy states. [76] On the other hand, several recently proposed methods that rely on chaotic dynamics instead of a potential function have achieved good performance in solving hard combinatorial problems, [24][26][62][77][78][79] but a theoretical description of the number of non-trivial traps (limit cycles or chaotic attractors) in their dynamics is lacking. It is of great interest to extend the study of complexity [76] (that is, the enumeration of local minima and critical points) to the case of chaotic dynamics, both to identify the mechanisms that prevent these heuristics from finding optimal solutions of combinatorial optimization problems and to derive convergence theorems and guarantees of returning solutions within a bounded ratio of the ground-state energy.
The closed-loop CIM has been proposed to improve the mapping of the Ising Hamiltonian when the time evolution of the system is approximated by the first moment of the in-phase component distribution. Because the CIM has the potential for quantum parallel search [22] if dissipation can be reduced experimentally, it is important to extend the description of the closed-loop CIM to higher moments in order to identify possible computational benefits of squeezed or non-Gaussian states. To investigate this possibility while avoiding the experimental difficulty of reaching sufficiently low dissipation, simulation of the CIM on digital hardware is necessary.
Another interesting prospect of the CIM is its extension to neuroscience research. One possibility is a merged quantum and neural computing concept. In the quantum theory of the CIM, we start with a density operator master equation which takes into account parametric gain, linear loss, gain saturation (or back-conversion loss), and dissipative mutual coupling. By expanding the density operator with either a positive $P$-function (off-diagonal coherent state expansion), a truncated Wigner function, or a Husimi function, we obtain quantum mechanical Fokker-Planck equations. Applying the Ito rule to the Fokker-Planck equations, we finally derive the c-number stochastic differential equations (c-SDEs). [27][28][49] We can use them for numerical simulation of the CIM on classical digital computers. This phase space method of quantum optics can be readily modified for numerical simulation of an open-dissipative classical neural network embedded in thermal reservoirs, where vacuum noise is replaced by thermal noise. We note that an ensemble average over many identical classical neural networks driven by independent thermal noise can reproduce the analog of quantum dynamics (entanglement and quantum discord) across the bifurcation point. This scenario suggests that a form of 'quantum inspired computation' might already be implemented in the brain. Using the c-SDE of the CIM as a heuristic algorithm on a classical neural network platform, we can perform a virtual quantum parallel search in cyber space. In order to compute the dynamic evolution of the density operator, we have to generate numerous trajectories by c-SDE. This can be done by ensemble averaging or time averaging.
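A single c-SDE trajectory can be integrated with a standard Euler-Maruyama scheme. The following is a minimal sketch using a simplified classical-limit drift for the normalized in-phase amplitudes, $dc_i = [(p - 1 - c_i^2)c_i + \epsilon \sum_j J_{ij} c_j]\,dt + \sigma\, dW_i$; the specific saturation form and the pump $p$, coupling $\epsilon$, and noise $\sigma$ values are illustrative assumptions, not the full quantum model of [27][28][49].

```python
import numpy as np

def cim_csde(J, p=1.5, eps=0.3, sigma=0.01, dt=0.01, steps=5000, seed=0):
    """Euler-Maruyama integration of one trajectory of a simplified CIM c-SDE.

    dc_i = [(p - 1 - c_i^2) c_i + eps * sum_j J_ij c_j] dt + sigma dW_i
    Returns the sign pattern of the final amplitudes as the Ising spins,
    together with the analog amplitudes themselves.
    """
    n = J.shape[0]
    rng = np.random.default_rng(seed)
    c = np.zeros(n)                      # start from (near) vacuum
    for _ in range(steps):
        drift = (p - 1.0 - c**2) * c + eps * (J @ c)
        c += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    return np.sign(c), c

def ising_energy(J, s):
    """Ising energy E = -(1/2) sum_ij J_ij s_i s_j for spins s in {-1, +1}."""
    return -0.5 * s @ J @ s
```

For a small all-to-all ferromagnet the collective in-phase mode has the largest parametric gain, so a single trajectory typically relaxes to one of the two degenerate all-aligned ground states, illustrating the "one trajectory is enough" point made below.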
However, what we need in the end is only the CIM final state, which is one of the degenerate ground states, and in such a case producing just one trajectory by c-SDE is enough. This unique advantage of the CIM approach stems from the fact that the system starts computation as a quantum analog device and finishes it as a classical digital device. It is an interesting open question whether the classical neural network in the brain implements such c-SDE dynamics driven by thermal reservoir noise. One of the important challenges in theoretical neuroscience is to explain how large numbers of neurons collectively interact to produce macroscopic and emergent order, such as decision making, cognition, and consciousness, via noise injected from thermal reservoirs and critical phenomena at a phase transition point. [80][81][82][83] The quantum theory of the CIM may shed light on this interesting frontier at the interface of physics and neuroscience.
Above we also reviewed a set of qualitative analogies connecting the CIM approach to combinatorial optimization with other approaches in computer science. In particular, we noted that just as the CIM dynamics involves a promotion of the original binary spin variables to classical analog variables, and then to quantum wave functions over these classical variables, computer science based approaches to combinatorial optimization also involve a promotion of the spin variables to analog variables (cavity magnetizations in BP for sparse random connectivities and magnetizations in TAP for dense mean-field connectivities), and then to distributions over magnetizations in SP. These analogies form a bridge between two previously separate strands of intellectual inquiry, and the cross-pollination of ideas between these strands could yield insights in both fields. In particular, such cross-pollination may advance both the scientific understanding of, and engineering improvements upon, CIM dynamics.
For example, can convergence proofs of the BP or TAP equations for special classes of Ising systems shed light on CIM dynamics in these same systems? Can differences between BP, TAP, or SP dynamics and CIM dynamics suggest the rational design of better hardware modifications to the CIM, and would such modifications yield improved performance in Ising systems where BP, TAP, and SP are known to converge? How would such modifications fare in more difficult settings where BP/TAP and even SP do not converge? Can the success of the CIM on problems other than the random Ising systems for which BP, TAP, and SP are specialized suggest more general algorithms that work in other classes of Ising systems? Can the success of the CIM motivate other types of annealing schedules on the energy landscape that could serve as improvements on existing BP, TAP, or SP algorithms? Overall, the parallels and differences between BP, TAP, SP, and the CIM motivate many intriguing directions for future research that are as yet completely unexplored.
Recently, a variety of different experimental platforms have been proposed and demonstrated for the implementation of Ising spin models. [84][85][86][87][88][89][90] It is expected that theoretical and experimental studies of coherent Ising machines will mutually motivate and accelerate the advancement of the field.
More generally, we hope this article provides a sense of the rich possibilities for future interdisciplinary research focused around a multifaceted theoretical and experimental approach to combinatorial optimization uniting perspectives from statistics, computer science, statistical physics,
and quantum optics, and making contact with diverse topics like dynamical systems theory, chaos, spin glasses, and belief and survey propagation.
## ACKNOWLEDGEMENTS
The authors wish to thank Z. Toroczkai, R. L. Byer, M. Fejer, B. Lev, A. Safavi-Naeini, A. Marandi, P. McMahon, D. Englund, W. Oliver, E. Rieffel, P. Ronagh, P. Drummond, and M. Reid for useful discussions.
## DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request.
## REFERENCES
- [1] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, Science 220 , 671 (1983).
- [2] W. Wang, J. Machta, and H. G. Katzgraber, Phys. Rev. E 92 , 013303 (2015).
- [3] G. ZarΓ‘nd, F. PΓ‘zmΓ‘ndi, K. F. PΓ‘l, and G. T. ZimΓ‘nyi, Phys. Rev. Lett. 89 , 150201 (2002).
- [4] E. Farhi, J. Goldstone, S. Gutmann, J. Lapan, A. Lundgren, and D. Preda, Science 292 , 472 (2001).
- [5] T. Kadowaki and H. Nishimori, Phys. Rev. E 58 , 5355 (1998).
- [6] T. Wang and J. Roychowdhury, arXiv:1709.08102 (2017).
- [7] G. Csaba, A. Raychowdhury, S. Datta, and W. Porod, in 2018 IEEE International Symposium on Circuits and Systems (2018) pp. 1-5.
- [8] F. Cai, S. Kumar, T. Van Vaerenbergh, R. Liu, C. Li, S. Yu, Q. Xia, J. J. Yang, R. Beausoleil, W. Lu, and J. P. Strachan, arXiv:1903.11194 (2019).
- [9] J. J. Hopfield and D. W. Tank, Biol. Cybern. 52 , 141 (1985).
- [10] M. Mezard and A. Montanari, Information, physics, and computation (Oxford University Press, 2009).
- [11] S. Geman and D. Geman, IEEE Trans. Pattern Anal. Mach. Intell., 721 (1984).
- [12] S. Utsunomiya, K. Takata, and Y. Yamamoto, Optics Express 19 , 18091 (2011).
- [13] Z. Wang, A. Marandi, K. Wen, R. L. Byer, and Y. Yamamoto, Phys. Rev. A 88 , 063853 (2013).
- [14] A. Marandi, Z. Wang, K. Takata, R. L. Byer, and Y. Yamamoto, Nature Photonics 8 , 937 (2014).
- [15] T. Inagaki, Y. Haribara, K. Igarashi, T. Sonobe, S. Tamate, et al., Science 354 , 603 (2016).
- [16] P. L. McMahon, A. Marandi, Y. Haribara, R. Hamerly, C. Langrock, S. Tamate, T. Inagaki, H. Takesue, S. Utsunomiya, K. Aihara, et al., Science 354 , 614 (2016).
- [17] Y. Takeda, S. Tamate, Y. Yamamoto, H. Takesue, T. Inagaki, and S. Utsunomiya, Quantum Science and Technology 3 , 014004 (2017).
- [18] M. Nixon, E. Ronen, A. A. Friesem, and N. Davidson, Phys. Rev. Lett. 110 , 184102 (2013).
- [19] N. G. Berloff, M. Silva, K. Kalinin, A. Askitopoulos, J. D. TΓΆpfer, P. Cilibrizzi, W. Langbein, and P. G. Lagoudakis, Nature Materials 16 , 1120 (2017).
- [20] K. P. Kalinin and N. G. Berloff, New Journal of Physics 20 , 113023 (2018).
- [21] T. Leleu, Y. Yamamoto, S. Utsunomiya, and K. Aihara, Phys. Rev. E 95 , 022118 (2017).
- [22] Y. Yamamoto, K. Aihara, T. Leleu, K. Kawarabayashi, S. Kako, M. Fejer, K. Inoue, and H. Takesue, npj Quantum Inf. 3 , 49 (2017).
- [23] A. Yamamura, K. Aihara, and Y. Yamamoto, Phys. Rev. A 96 , 053834 (2017).
- [24] H. Goto, K. Tatsumura, and A. R. Dixon, Science advances 5 , eaav2372 (2019).
- [25] L. K. Grover, Phys. Rev. Lett. 79 , 325 (1997).
- [26] T. Leleu, Y. Yamamoto, P. L. McMahon, and K. Aihara, Phys. Rev. Lett. 122 , 040607 (2019).
- [27] K. Takata, A. Marandi, and Y. Yamamoto, Phys. Rev. A 92 , 043821 (2015).
- [28] D. Maruo, S. Utsunomiya, and Y. Yamamoto, Phys. Scr. 91 , 083010 (2016).
- [29] Y. Inui and Y. Yamamoto, arXiv:1905.12348 (2019).
- [30] R. Hamerly, T. Inagaki, P. L. McMahon, D. Venturelli, A. Marandi, T. Onodera, E. Ng, C. Langrock, K. Inaba, T. Honjo, et al., Science Advances 5 , eaau0823 (2019).
- [31] N. Tezak and H. Mabuchi, EPJ Quantum Technology 2 , 10 (2015).
- [32] H. Goto, J. Phys. Soc. Jpn. 88 , 061015 (2019).
- [33] M. Stern, H. Sompolinsky, and L. F. Abbott, Phys. Rev. E 90 , 062710 (2014).
- [34] Y. N. Dauphin, R. Pascanu, C. Gulcehre, K. Cho, S. Ganguli, and Y. Bengio, in Advances in Neural Information Processing Systems 27 (2014).
- [35] D. Sherrington and S. Kirkpatrick, Phys. Rev. Lett. 35 , 1792 (1975).
- [36] A. Montanari, Symposium on Foundations of Computer Science (FOCS), 1417 (2019).
- [37] A. Yamamura and H. Mabuchi, in preparation.
- [38] D. Wennberg, S. Ganguli and H. Mabuchi, in preparation.
- [39] A. J. Bray and D. S. Dean, Phys. Rev. Lett. 98 , 150201 (2007).
- [40] M. Di Ventra, F. L. Traversa, and I. V. Ovchinnikov, Ann. Phys. (Berlin) 529 , 1700123 (2017).
- [41] J. Kempe, Contemporary Physics 44 , 307 (2003).
- [42] S. Kako, T. Leleu, Y. Inui, F. Khoyratee, and Y. Yamamoto, Advanced Quantum Technologies (2020).
- [43] L.-M. Duan, G. Giedke, J. I. Cirac, and P. Zoller, Phys. Rev. Lett. 84 , 2722 (2000).
- [44] G. Adesso and A. Datta, Phys. Rev. Lett. 105 , 030501 (2010).
- [45] V. B. Braginsky, F. Y. Khalili, and K. S. Thorne, Quantum Measurement (Cambridge University Press, 1992).
- [46] Y. Inui and Y. Yamamoto, in preparation.
- [47] H. Carmichael, S. Singh, R. Vyas, and P. Rice, Phys. Rev. A 39 , 1200 (1989).
- [48] H. M. Wiseman and G. J. Milburn, Phys. Rev. A 47 , 642 (1993).
- [49] P. Drummond, K. McNeil, and D. Walls, Journal of Modern Optics 28 , 211 (1981).
- [50] S. D. Bartlett, B. C. Sanders, S. L. Braunstein, and K. Nemoto, Phys. Rev. Lett. 88 , 097904 (2002).
- [51] J. J. Hopfield and D. W. Tank, Biol. Cybern. 52 , 141 (1985).
- [52] H. Sompolinsky and A. Zippelius, Phys. Rev. Lett. 47 , 359 (1981).
- [53] D. J. Thouless, P. W. Anderson, and R. G. Palmer, Philosophical Magazine 35 , 593 (1977).
- [54] F. Tanaka and S. Edwards, Journal of Physics F: Metal Physics 10 , 2769 (1980).
- [55] G. Bilbro, R. Mann, T. K. Miller, W. E. Snyder, D. E. van den Bout, and M. White, in Advances in Neural Information Processing Systems 1 (1988), pp. 91-98.
- [56] E. Farhi, J. Goldstone, S. Gutmann, and M. Sipser, arXiv preprint quant-ph/0001106 (2000).
- [57] V. Granville, M. Krivanek, and J.-P. Rasson, IEEE Transactions on Pattern Analysis and Machine Intelligence 16 , 652 (1994).
- [58] D. Pierangeli, G. Marcucci, D. Brunner, and C. Conti, arXiv:2004.02208 (2020).
- [59] S. Geman, and C. R. Hwang, SIAM Journal on Control and Optimization 24 , 1031 (1986).
- [60] I. Gershenzon, G. Arwas, S. Gadasi, C. Tradonsky, A. Friesem, O. Raz, and N. Davidson, arXiv:2003.14312 (2020).
- [61] K. P. Kalinin and N. G. Berloff, Scientific Reports 8 , 17791 (2018).
- [62] S. Kumar, J. P. Strachan, and R. S. Williams, Nature 548 , 318 (2017).
- [63] U. Benlic, and J.-K. Hao, Eng. Appl. Artif. Intell. 26 , 1162 (2013).
- [64] J. Pearl, 'Probabilistic reasoning in intelligent systems: networks of plausible inference,' Morgan Kaufmann Pub. (1988).
- [65] A. Braunstein, M. Mézard, and R. Zecchina, Random Struct. Algorithms 27 , 201 (2005).
- [66] M. Mézard and A. Montanari, 'Information, physics, and computation,' Oxford University Press, USA (2009).
- [67] M. Advani, S. Lahiri, and S. Ganguli, J. Stat. Mech: Theory Exp. 2013 , 030134 (2013).
- [68] L. ZdeborovΓ‘ and F. Krzakala, Advances in Physics 65 , 453 (2016).
- [69] M. Advani and S. Ganguli, Phys. Rev. X 6 , 031034 (2016).
- [70] Y. Bahri, J. Kadmon, J. Pennington, S. S. Schoenholz, J. Sohl-Dickstein, and S. Ganguli, Annual Review of Condensed Matter Physics 11 , 501 (2020).
- [71] J. S. Yedidia, W. T. Freeman, and Y. Weiss, IEEE Trans. Inf. Theory 51 , 2282 (2005).
- [72] D. J. Thouless, P. W. Anderson, and R. G. Palmer, Philosophical Magazine 35 , 593 (1977).
- [73] E. Bolthausen, Communications in Mathematical Physics 325 , 333 (2014).
- [74] R. Yanagimoto, P. L. McMahon, E. Ng, T. Onodera, and H. Mabuchi, arXiv:1906.04902.
- [75] T. Onodera, E. Ng, N. Lorch, A. Yamamura, R. Hamerly, P. L. McMahon, A. Marandi, and H. Mabuchi, arXiv:1811.10583.
- [76] G. Biroli, Journal of Physics A: Mathematical and General 32 , 8365 (1999).
- [77] M. Ercsey-Ravasz, and Z. Toroczkai, Nature Physics, 7 , 966 (2011).
- [78] M. Ercsey-Ravasz and Z. Toroczkai, Sci. Rep. 2 , 725 (2012).
- [79] B. MolnΓ‘r, F. MolnΓ‘r, M. Varga, Z. Toroczkai, and M. Ercsey-Ravasz, Nature Commun. 9 , 4864 (2018).
- [80] A. Levina, J. M. Herrman, and T. Geisel, Nature Physics 3 , 857 (2007).
- [81] D. R. Chialvo, Nature Physics 6 , 744 (2010).
- [82] T. Mora and W. Bialek, J. Statistical Physics 144 , 268 (2011).
- [83] J. Beggs, Phys. Rev. Lett. (Editorial) 114 , 220001 (2015).
- [84] M. Babaeian, D. T. Nguyen, V. Demir, M. Akbulut, P-A Blanche, Y. Kaneda, S. Guha, M. A. Neifeld, and N. Peyghambarian, Nat. Commun. 10, 3516 (2019).
- [85] D. Pierangeli, G. Marcucci, and C. Conti, Phys. Rev. Lett. 122, 213902 (2019).
- [86] F. BΓΆhm, G. Verschaffelt, and G. Van der Sande, Nat. Commun. 10, 3538 (2019).
- [87] Y. Okawachi, M. Yu, J. K. Jang, X. Ji, Y. Zhao, B. Y. Kim, M. Lipson, and A. L. Gaeta, arXiv:2003.11583 (2020).
- [88] F. Cai, S. Kumar, T. Van Vaerenbergh, X. Sheng, R. Liu, C. Li, Z. Liu, M. Foltin, S. Yu, Q. Xia, et al., Nat. Electron. 3, 409 (2020).
- [89] C. Roques-Carmes, Y. Shen, C. Zanoci, M. Prabhu, F. Atieh, L. Jing, T. Dubček, C. Mao, M. R. Johnson, V. Čeperić, et al., Nat. Commun. 11, 249 (2020).
- [90] M. Prabhu, C. Roques-Carmes, Y. Shen, N. Harris, L. Jing, J. Carolan, R. Hamerly, T. Baehr-Jones, M. Hochberg, V. Čeperić, et al., Optica 7, 551 (2020).