2508.17403
# Mutual Information Surprise: Rethinking Unexpectedness in Autonomous Systems
**Authors**: Yinsong Wang, Quan Zeng, Xiao Liu, Yu Ding, H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
Abstract
Recent breakthroughs in autonomous experimentation have demonstrated remarkable physical capabilities, yet their cognitive control remains limited, often relying on static heuristics or classical optimization. A core limitation is the absence of a principled mechanism to detect and adapt to unexpectedness. While traditional surprise measures, such as Shannon or Bayesian Surprise, offer momentary detection of deviation, they fail to capture whether a system is truly learning and adapting. In this work, we introduce Mutual Information Surprise (MIS), a new framework that redefines surprise not as anomaly detection, but as a signal of epistemic growth. MIS quantifies the impact of new observations on mutual information, enabling autonomous systems to reflect on their learning progression. We develop a statistical test sequence to detect meaningful shifts in estimated mutual information and propose a mutual information surprise reaction policy (MISRP) that dynamically governs system behavior through sampling adjustment and process forking. Empirical evaluations, on both synthetic domains and a dynamic pollution map estimation task, show that MISRP-governed strategies significantly outperform classical surprise-based approaches in stability, responsiveness, and predictive accuracy. By shifting surprise from reactive to reflective, MIS offers a path toward more self-aware and adaptive autonomous systems.
1 Introduction
In July 2020, Nature published a cover story (?) about an autonomous robotic chemist, locked in a lab for a week with no external communication, independently conducting experiments to search for improved photocatalysts for hydrogen production from water. In the years that followed, Nature featured three more articles (?, ?, ?) highlighting the transformative role of autonomous systems in materials discovery, experimentation, and even manufacturing, each reporting orders-of-magnitude improvements in efficiency. These reports spotlighted the intensifying global race to advance autonomous technologies beyond the already well-established domain of self-driving cars (?, ?, ?, ?). Nature was not alone; numerous other outlets have documented the surge in autonomous research and innovation (?, ?, ?). This rapid expansion is a natural consequence of recent advances in robotics and artificial intelligence, which continue to push the boundaries of what autonomous systems can accomplish.
The systems featured in the Nature publications demonstrate highly capable bodies that can perform complex tasks. Recall that an autonomous system comprises two fundamental components: a brain and a body, colloquial terms for its control mechanism and its sensing-action capabilities, respectively. Unlike traditional automation systems, which follow predefined instructions to execute simple, repetitive tasks, true autonomy requires a higher level of cognitive capacity: an autonomous system is supposedly capable of making decisions with minimal human intervention. However, their brain function, while more sophisticated than rigid pre-programmed instructions, remains relatively limited.
Surveying the literature over the past decade, we found that (?), (?), and (?) rely on classical Bayesian optimization to guide system decisions, a technique that, although effective, does not constitute full autonomy, i.e., completely eliminating human involvement. More recent works in Nature (?, ?) continue in a similar vein, adopting active learning frameworks akin to Bayesian optimization, without fundamentally enhancing the cognitive capabilities of these systems. The conceptual limitations of their decision-making mechanisms continue to impede progress toward genuine autonomy. (?) argue that a core deficiency of current autonomous systems is the absence of a "surprise" mechanism: the capacity to detect and adapt to unforeseen situations. Without this capability, true autonomy remains out of reach.
What is a "surprise," and how does it differ from existing measures governing automation? Surprise is a fundamental psychological trigger that enables humans to react to unexpected events. Intuitively, it arises when observations deviate from expectations. Traditionally, unexpectedness has been loosely equated with anomalies: quantifying inconsistencies between new observations and historical data. Common approaches to anomaly detection include statistical methods such as z-scores (?) and hypothesis testing (?, ?); distance-based techniques (?), including Euclidean (?) and Mahalanobis distances (?, ?); and machine learning-based models (?, ?), which learn patterns to identify and filter out anomalous data. However, researchers increasingly recognize that simply detecting and discarding unexpected events is insufficient for achieving higher levels of autonomy. In human cognition, unexpectedness is not inherently undesirable; in fact, surprise often signals opportunities for discovery rather than error. Although mathematically similar to anomaly measures, surprise is conceptually distinct: it is not merely a deviation to be rejected, but a valuable learning signal that can enhance adaptation and decision-making.
This shift in perspective aligns with formal definitions of surprise in information theory and computational psychology, such as Shannon surprise (?), Bayesian surprise (?), Bayes Factor surprise (?), and Confidence-Corrected surprise (?). These surprise definitions quantify unexpectedness by modeling deviations from prior beliefs or probability distributions. In the following section, we delve deeper into these existing measures and evaluate whether they truly serve the intended role of identifying opportunities, as human surprise does, rather than merely flagging anomalies. Using current surprise definitions, (?) demonstrated that treating surprising events not as noise to be removed but as catalysts for learning can significantly enhance a system's learning speed. Additional empirical evidence shows that incorporating surprise as a learning mechanism can improve autonomy in domains such as autonomous driving (?, ?, ?) and manufacturing (?, ?).
In our research, we find that existing definitions of surprise require significant improvement. Their close resemblance to anomaly detection measures suggests that they may not effectively support higher levels of autonomy. Specifically, a robust surprise measure should emphasize knowledge acquisition and adaptability, rather than treating unexpectedness merely as a deviation from the norm, an approach that current surprise definitions tend to adopt. We therefore argue that it is essential to develop a novel surprise metric that inherently fosters learning and deepens an autonomous system's understanding of the underlying processes it encounters. To capture this dynamic capability, we introduce the Mutual Information Surprise (MIS), a new framework that redefines how autonomous systems interpret and respond to unexpected events. MIS quantifies the degree of both frustration and enlightenment associated with new observations, measuring their impact on refining the system's internal understanding of its environment. We also demonstrate the differences that arise when applying mutual information surprise, as opposed to relying solely on classical surprise definitions, highlighting MIS's potential to meaningfully enhance autonomous learning and decision-making.
The paper is organized as follows. In Section 2, we revisit the concept of surprise by presenting a taxonomy of existing surprise measures and introducing the intuition, mathematical formulation, and limitations of classical definitions. In Section 3, we formally define the Mutual Information Surprise (MIS) and derive a testing sequence for detecting multiple types of system changes in autonomous systems. We also design an MIS reaction policy (MISRP) that provides high-level guidance to complement existing exploration-exploitation active learning strategies. In Section 4, we compare MIS with classical surprise measures to illustrate its numerical stability and enhanced cognitive capability. We further demonstrate the effectiveness of the MIS reaction policy through a pollution map estimation simulation. In Section 5, we conclude the paper.
2 Current Surprise Definitions and Their Limitations
Classical definitions of surprise, such as Shannon and Bayesian Surprise, provide elegant mathematical frameworks for quantifying unexpectedness. However, these approaches often fall short in capturing the core mechanisms driving adaptive behavior: continuous learning and flexible model updating. This section revisits and analyzes existing formulations, elaborating on their conceptual foundations and outlining both their strengths and limitations.
Before proceeding with our discussion, we introduce the notation used throughout this paper. Scalars are denoted by lowercase letters (e.g., $x$ ), vectors by bold lowercase letters (e.g., $\mathbf{x}$ ), and matrices by bold uppercase letters (e.g., $\mathbf{X}$ ). Distributions in the data space are represented by uppercase letters (e.g., $P$ ), probabilities by lowercase letters (e.g., $p$ ), and distributions in the parameter space by the symbol $\pi$ . The $L_{2}$ norm is denoted by $\|·\|_{2}$ , and the absolute value or $L_{1}$ norm is denoted by $|·|$ . We use $\mathbb{E}[·]$ to denote the expectation operator and $\text{sgn}(·)$ for the sign operator. Estimators are denoted with a hat, as in $\hat{·}$ .
The Family of Shannon Surprises
The family of Shannon Surprise metrics emphasizes the improbability of observed data, typically independent of explicit model parameters. This class broadly aligns with "observation" and "probabilistic-mismatch" surprises as categorized in (?). The central question that the Shannon family of surprises tries to answer is straightforward: how unlikely is the observation?
The most widely recognized measure is Shannon Surprise (?), formally defined as:
$$
S_{\text{Shannon}}(\mathbf{x})=-\log p(\mathbf{x}), \tag{1}
$$
interpreting surprise directly through event rarity. Although conceptually clear and mathematically elegant, this definition has a significant limitation: encountering a Shannon Surprise does not inherently imply knowledge acquisition. Consider, for instance, a uniform dartboard: a stochastic yet entirely understood system. Each outcome has an equally low probability and thus appears "surprising" under Shannon's definition, even though humans neither genuinely find these outcomes surprising nor gain any additional knowledge by observing them. In other words, the focus of Shannon Surprise is statistical rarity rather than genuine knowledge gain.
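As a concrete illustration, Shannon Surprise in Eq. (1) can be evaluated directly on the dartboard example; the sketch below is ours (the 100-sector dartboard is an illustrative choice):

```python
import math

def shannon_surprise(p_x: float) -> float:
    """Shannon Surprise of an event with probability p_x, as in Eq. (1)."""
    return -math.log(p_x)

# Uniform dartboard with 100 equally likely sectors: every outcome scores
# the same high surprise, although nothing new is learned from any of them.
print(shannon_surprise(1 / 100))  # log(100) ≈ 4.61 for every sector alike
```

Since all sectors share the same probability, the measure cannot distinguish a well-understood stochastic system from a genuinely novel event.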
To address this limitation, particularly in highly stochastic scenarios, Residual Information Surprise (?) has been introduced, which measures surprise by quantifying the gap between the minimally achievable and observed Shannon Surprises:
$$
S_{\text{Residual}}(\mathbf{x})=|\underset{\mathbf{x}^{\prime}}{\min}\{-\log p(\mathbf{x}^{\prime})\}-(-\log p(\mathbf{x}))|=\underset{\mathbf{x}^{\prime}}{\max}\log p(\mathbf{x}^{\prime})-\log p(\mathbf{x}).
$$
In the dartboard example, Residual Information Surprise becomes zero for all outcomes, as $p(\mathbf{x}^{\prime})$ remains constant for every $\mathbf{x}^{\prime}$ , accurately reflecting an absence of genuine surprise. However, this formulation introduces a conceptual challenge, as determining $\underset{\mathbf{x}^{\prime}}{\max}\log p(\mathbf{x}^{\prime})$ implicitly presumes an omniscient oracle, an assumption typically infeasible in practice.
Interestingly, Shannon Surprise serves as a foundation for various anomaly measures. For example, under Gaussian assumptions, Shannon Surprise becomes proportional to squared error:
$$
S_{\text{Shannon}}(\mathbf{x})\propto\|\mathbf{x}-\mu_{\mathbf{x}}\|_{2}^{2},
$$
thus linking surprise with deviation from the mean. Similarly, assuming a Laplace distribution recovers an absolute error interpretation, termed Absolute Error Surprise in (?):
$$
S_{\text{Shannon}}(\mathbf{x})\propto|\mathbf{x}-\mu_{\mathbf{x}}|.
$$
We note that both Squared Error Surprise and Absolute Error Surprise are commonly utilized metrics in anomaly detection literature (?, ?, ?).
The Family of Bayesian Surprises
Bayesian Surprises, by contrast, explicitly model belief updates. These measures quantify the degree to which a new observation alters the internal model, shifting the focus from event rarity to epistemic impact. This concept parallels the "belief-mismatch" surprise in the taxonomy by (?).
The canonical formulation, introduced in (?), defines Bayesian Surprise as the KullbackâLeibler divergence between the prior and posterior distributions over parameters:
$$
S_{\text{Bayes}}(\mathbf{x})=D_{\text{KL}}\left(\pi(\boldsymbol{\theta}\mid\mathbf{x})\,\|\,\pi(\boldsymbol{\theta})\right).
$$
This measure offers a principled approach to belief revision and naturally aligns with learning mechanisms. In theory, it encourages agents to reduce surprise through model updates, providing a pathway toward adaptive autonomy.
However, Bayesian Surprise is not without limitations. As data accumulates, new observations exert diminishing influence on the posterior, rendering the agent increasingly "stubborn." This behavior can cause Bayesian Surprise to overlook rare but meaningful anomalies. For example, consider S. S. Ting's discovery of the $J$ particle, characterized by an unusually long lifespan compared to other particles in its class. Under standard Bayesian updating, scientists' beliefs about particle lifespans would barely shift in response to this single observation. Consequently, Bayesian Surprise would classify such an event as merely an anomaly, potentially disregarding it.
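This "stubbornness" can be made concrete in a conjugate Beta-Bernoulli setting (our illustrative choice, not an example from the text); the KL divergence between posterior and prior is computed here by simple numerical integration:

```python
import math
import numpy as np

def beta_pdf(theta: np.ndarray, a: float, b: float) -> np.ndarray:
    """Beta(a, b) density evaluated on an array theta in (0, 1)."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return np.exp(log_norm + (a - 1) * np.log(theta) + (b - 1) * np.log(1 - theta))

def bayesian_surprise(a: float, b: float, x: int) -> float:
    """KL divergence between the Beta posterior after one Bernoulli draw x
    and the Beta(a, b) prior, by numerical integration on a fine grid."""
    a_post, b_post = a + x, b + (1 - x)
    theta = np.linspace(1e-6, 1 - 1e-6, 100_000)
    post, prior = beta_pdf(theta, a_post, b_post), beta_pdf(theta, a, b)
    ratio = np.where((post > 0) & (prior > 0), post / prior, 1.0)
    return float(np.sum(post * np.log(ratio)) * (theta[1] - theta[0]))

# A single observation x = 1 moves a naive belief a lot, but barely moves
# a belief already trained on ~1000 prior observations:
print(bayesian_surprise(1, 1, 1))     # flat prior: about 0.193 nats
print(bayesian_surprise(10, 990, 1))  # informed prior: much smaller
```

The second call illustrates the limitation discussed above: the same rare observation produces a far smaller Bayesian Surprise once the posterior has hardened.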
To mitigate this posterior overconfidence, Confidence-Corrected (CC) Surprise (?) compares the current informed belief against that of a naïve learner with a flat prior:
$$
S_{\text{CC}}(\mathbf{x})=D_{\text{KL}}\left(\pi(\boldsymbol{\theta})\,\|\,\pi^{\prime}(\boldsymbol{\theta}\mid\mathbf{x})\right),
$$
where $\pi^{\prime}(\boldsymbol{\theta}\mid\mathbf{x})$ represents the updated belief assuming a uniform prior. This confidence-corrected formulation remains sensitive to new data irrespective of prior history. In the $J$ particle example, employing Confidence-Corrected Surprise would trigger a genuine surprise, as the posterior remains responsive to the novel observation without the inertia introduced by extensive historical data.
A related idea emerges with Bayes Factor (BF) Surprise (?), which compares likelihoods under naïve and informed beliefs:
$$
S_{\text{BF}}(\mathbf{x})=\frac{p(\mathbf{x}\mid\pi^{0}(\boldsymbol{\theta}))}{p(\mathbf{x}\mid\pi^{t}(\boldsymbol{\theta}))},
$$
where $\pi^{0}(\boldsymbol{\theta})$ represents the naïve (untrained) prior and $\pi^{t}(\boldsymbol{\theta})$ the informed belief based on all prior observations up to time $t$ (before observing $\mathbf{x}$ ). This ratio quantifies how strongly the current observation supports the naïve prior over the informed prior. In practice, the effectiveness of both Confidence-Corrected and Bayes Factor Surprises depends heavily on constructing appropriate priors, a task that is often challenging and subjective.
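In the same conjugate Beta-Bernoulli setting used for illustration above (our choice of example, not the paper's), both predictive probabilities in the Bayes Factor Surprise are available in closed form, since the posterior predictive of a Beta belief for a Bernoulli outcome is its mean:

```python
def bf_surprise(x: int, a0: float, b0: float, at: float, bt: float) -> float:
    """Bayes Factor Surprise for one Bernoulli observation x: the ratio of
    the predictive probability under the naive prior Beta(a0, b0) to that
    under the informed belief Beta(at, bt)."""
    p_naive = a0 / (a0 + b0) if x == 1 else b0 / (a0 + b0)
    p_informed = at / (at + bt) if x == 1 else bt / (at + bt)
    return p_naive / p_informed

# Informed belief built from ~1000 mostly-zero observations; a rare x = 1
# arrives. The naive prior predicts it with 0.5, the informed one with 0.01:
print(bf_surprise(1, a0=1, b0=1, at=10, bt=990))  # 0.5 / 0.01 = 50
```

A ratio far above 1 indicates the observation is much better explained by the naïve prior, i.e., a genuine surprise for the informed agent.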
Another variant within the Bayesian Surprise family is Postdictive Surprise (?), which operates in the output space rather than parameter space as in the original Bayesian Surprise:
$$
S_{\text{Postdictive}}(\mathbf{x})=D_{\text{KL}}\left(P(\mathbf{y}\mid\boldsymbol{\theta}^{\prime},\mathbf{x})\,\|\,P(\mathbf{y}\mid\boldsymbol{\theta},\mathbf{x})\right), \tag{2}
$$
where $\boldsymbol{\theta}$ and $\boldsymbol{\theta}^{\prime}$ denote parameters before and after the update, respectively. (?) argue that computing KL divergence in the output space is more computationally tractable for variational models but potentially less expressive when output variance depends on the input (e.g., under heteroskedastic conditions).
Reflection
We acknowledge the presence of alternative categorizations of surprise definitions, notably the taxonomy in (?), which classifies surprise measures into three groups: observation surprises, probabilistic-mismatch surprises, and belief-mismatch surprises. As discussed previously, the Shannon Surprise family aligns closely with the first two categories, whereas the Bayesian Surprise family corresponds to the last.
These categorizations are not strictly delineated. For instance, Residual Information Surprise incorporates a conceptual element common to the Bayesian Surprise family: providing a baseline against which the observed data are contrasted. On the other hand, Bayes Factor Surprise, despite being explicitly Bayesian in its formulation, closely resembles a Shannon Surprise conditioned on alternative priors. Furthermore, notwithstanding their philosophical distinctions, Bayesian and Shannon Surprises often behave similarly in practice; we provide further details on this observation in Section 4.
It is understandable that researchers initially explored these two foundational surprise definitions, each possessing inherent limitations: Shannon Surprise conflates probability with knowledge gain, while Bayesian Surprise suffers from increasing posterior stubbornness. Subsequent refinements emerged to address these shortcomings, primarily through adjusting the choice of prior to create more meaningful contrasts. The Residual Information Surprise assumes an oracle-like prior, whereas Confidence-Corrected and Bayes Factor Surprises rely on a non-informative prior. Regardless of the priors chosen, defining a suitable prior remains a challenging and unresolved issue in the research community.
Both surprise families share two further critical limitations: they are single-instance measures and inherently one-sided measures. Being single-instance means that they assess surprise based solely on the marginal impact of individual observations, without explicitly modeling cumulative learning dynamics over time; being one-sided means that they place a decision threshold on a single side, offering limited expressiveness, since human perceptions of surprise range from positive to negative.
3 Mutual Information Surprise
In this section, we introduce the concept of Mutual Information Surprise (MIS). We first explore the intuition and motivation underlying this concept, followed by the development of a novel, theoretically grounded testing sequence. We then discuss the implications when this test sequence is violated and propose a reaction policy contingent on different types of violations. Table 1 summarizes the differences in perspective between Mutual Information Surprise and the Shannon and Bayesian families of surprises.
Table 1: The perspective differences among Shannon family surprises, Bayesian family surprises, and Mutual Information Surprise.
| Surprise | Single Instance Focused | Capture Transient Changes | Aware of Learning Progression | Parametric Predictive Modeling |
| --- | --- | --- | --- | --- |
| Shannon Family | ✓ | ✓ | ✗ | ✗ |
| Bayesian Family | ✓ | ✗ | ✗ | ✓ |
| MIS | ✗ | ✓ | ✓ | ✗ |
3.1 What Do We Expect from a Surprise?
In human cognition, surprise often triggers reflection and adaptation. A computational analog should similarly prompt deeper examination and enhanced understanding, transcending mere statistical rarity and indicating an opportunity for learning.
To formalize this perspective, consider a system governed by a functional mapping $f:\mathbf{x}\to\mathbf{y}$ , with observations drawn from a joint distribution $P(\mathbf{x},\mathbf{y})$ . This system is well-regulated, meaning the input distribution $P(\mathbf{x})$ , output distribution $P(\mathbf{y})$ , and joint distribution $P(\mathbf{x},\mathbf{y})$ are time-invariant. This definition expands the traditional notion of time-invariance by explicitly including consistent exposure $P(\mathbf{x})$ , aligning closely with human trust in persistent patterns across rules and experiences.
To quantify system understanding, we use mutual information (MI) (?), defined as
$$
I(\mathbf{x},\mathbf{y})=\mathbb{E}_{\mathbf{x},\mathbf{y}}\left[\log\frac{p(\mathbf{y}\mid\mathbf{x})}{p(\mathbf{y})}\right]=H(\mathbf{x})+H(\mathbf{y})-H(\mathbf{x},\mathbf{y})=H(\mathbf{y})-H(\mathbf{y}\mid\mathbf{x}), \tag{3}
$$
where $H(·)$ denotes entropy, measuring uncertainty or chaos of a random variable. Mutual information quantifies the reduction in uncertainty about $\mathbf{y}$ given knowledge of $\mathbf{x}$ . A high $I(\mathbf{x},\mathbf{y})$ indicates strong comprehension of $f$ , whereas stagnation or a decrease in $I(\mathbf{x},\mathbf{y})$ signals stalled learning. For the aforementioned well-regulated system, $I(\mathbf{x},\mathbf{y})$ remains constant.
Typically, mutual information $I(\mathbf{x},\mathbf{y})$ is estimated via maximum likelihood estimation (MLE) (?); details of the MLE estimator are provided in the Appendix. Empirical estimation of $I(\mathbf{x},\mathbf{y})$ is, however, downward biased for clean data with a low noise level (?):
$$
\mathbb{E}[\hat{I}(\mathbf{x},\mathbf{y})]\leq I(\mathbf{x},\mathbf{y}).
$$
Interestingly, this bias can serve as an informative feature: as experience accumulates, $\mathbb{E}[\hat{I}(\mathbf{x},\mathbf{y})]$ should increase and approach the true value $I(\mathbf{x},\mathbf{y})$ , which is determined by $p(\mathbf{x})$ and the function $f$ . Thus, monotonic growth in the mutual information estimate signals learning.
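This behavior is easy to observe numerically. The sketch below (our illustrative setup, assuming the MLE estimator is the standard discrete plug-in) estimates MI on a noiseless mapping and shows the estimate approaching the true value from below as the sample grows:

```python
import numpy as np

def mi_plugin(x, y) -> float:
    """Plug-in (MLE) estimate of I(x, y) for discrete samples, in nats."""
    n = len(x)
    joint, px, py = {}, {}, {}
    for xi, yi in zip(x, y):
        joint[(xi, yi)] = joint.get((xi, yi), 0) + 1
        px[xi] = px.get(xi, 0) + 1
        py[yi] = py.get(yi, 0) + 1
    mi = 0.0
    for (xi, yi), c in joint.items():
        mi += (c / n) * np.log(c * n / (px[xi] * py[yi]))
    return float(mi)

# On the noiseless mapping y = x mod 4, the true MI is H(y) = log 4 ≈ 1.386;
# the plug-in estimate equals the (downward-biased) empirical H(y), so it
# approaches the true value from below as n grows:
rng = np.random.default_rng(0)
for n in (20, 200, 2000):
    x = rng.integers(0, 16, size=n)
    y = x % 4
    print(n, round(mi_plugin(x, y), 3))
```

For a deterministic mapping the conditional entropy term vanishes, so the plug-in MI reduces to the empirical output entropy, whose well-known downward bias matches the inequality above.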
Returning to our core questionâwhat do we expect from a surprise? Unlike classical surprise measures (Shannon or Bayesian), which focus narrowly on conditional distributions and rarity, we posit that a surprise measure should reflect whether learning occurred. Noticing the connection between mutual information growth and learning, we define surprise as a deviation from expected mutual information growth. Specifically, we define Mutual Information Surprise (MIS) as the difference in mutual information estimates after incorporating new observations:
$$
\text{MIS}\triangleq\hat{I}_{n+m}-\hat{I}_{n}, \tag{4}
$$
where $\hat{I}_{n}$ is the estimate of the mutual information $I_{n}$ at the time of the first $n$ observations, and $\hat{I}_{n+m}$ the estimate of $I_{n+m}$ after observing $m$ additional points. From here on, we omit the variables $\mathbf{x}$ and $\mathbf{y}$ in the notation for mutual information and its estimate for the sake of simplicity. A large (relative to the sample sizes $m$ and $n$ ) positive MIS signals enlightenment, indicating significant learning, whereas a near-zero or negative MIS indicates frustration, suggesting stalled progress. Hence, MIS provides operational insight into whether a system evolves as expected, turning it into a practical autonomy test. Significant deviations from the expected MIS trajectory indicate meaningful changes or system stagnation.
3.2 Bounding MIS
Mutual information estimation is inherently challenging: it is high-dimensional, nonlinear, and exhibits complex variance. The standard method, though principled, is a computationally expensive permutation test (?, ?), involving repeatedly shuffling $m+n$ observations into two groups, calculating MI differences, and evaluating rejection probabilities:
$$
p=\frac{1}{B}\sum_{i=1}^{B}\mathbf{1}(|\Delta\hat{I}|>|\Delta\hat{I}_{i}|),
$$
where $\Delta\hat{I}=\hat{I}_{n}-\hat{I}_{m}$ is the actual difference between the two mutual information estimates, $\Delta\hat{I}_{i}$ is the $i$ th permuted difference, and $\mathbf{1}(·)$ is the indicator function. In real-time streaming scenarios, however, permutation tests become impractical due to their computational load. Moreover, when $m\ll n$ , permutation tests lose effectiveness, yielding noisy outcomes.
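The permutation procedure can be sketched as follows; this is a minimal illustration (both function names are ours, and a simple plug-in MI estimator is assumed):

```python
import numpy as np
from collections import Counter

def mi_hat(x, y) -> float:
    """Plug-in mutual information estimate (nats) for discrete samples."""
    n = len(x)
    cxy, cx, cy = Counter(zip(x, y)), Counter(x), Counter(y)
    return float(sum((c / n) * np.log(c * n / (cx[a] * cy[b]))
                     for (a, b), c in cxy.items()))

def permutation_test(x, y, n, B=200, seed=0):
    """Rejection probability of the permutation test: the fraction of B
    shuffles whose MI gap is exceeded by the observed gap |I_hat_n - I_hat_m|."""
    rng = np.random.default_rng(seed)
    d_obs = abs(mi_hat(x[:n], y[:n]) - mi_hat(x[n:], y[n:]))
    idx = np.arange(len(x))
    hits = 0
    for _ in range(B):
        rng.shuffle(idx)  # reshuffle all m + n observations into two groups
        xp, yp = x[idx], y[idx]
        if d_obs > abs(mi_hat(xp[:n], yp[:n]) - mi_hat(xp[n:], yp[n:])):
            hits += 1
    return hits / B

# First 100 points follow y = x exactly; the next 100 are independent noise,
# so the two groups have very different mutual information:
rng = np.random.default_rng(1)
x = rng.integers(0, 4, size=200)
y = np.where(np.arange(200) < 100, x, rng.integers(0, 4, size=200))
print(permutation_test(x, y, n=100))  # close to 1: a clear MI shift
```

The loop over B shuffles is what makes the test expensive in streaming use: every iteration requires two fresh MI estimates.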
An alternative is standard-deviation-based testing. For the MLE mutual information estimator $\hat{I}_{n}$ , the estimation standard deviation satisfies (?):
$$
\sigma\lesssim\frac{\log n}{\sqrt{n}}, \tag{5}
$$
where $\lesssim$ denotes less than or equal to up to a constant factor (i.e., in order), which yields an analytical test on the mutual information change when the bias term is omitted (a brief derivation is provided in the Appendix),
$$
\hat{I}_{m+n}-\hat{I}_{n}\in\pm\sqrt{\frac{\log^{2}(m+n)}{m+n}+\frac{\log^{2}n}{n}}\cdot z_{\alpha}\asymp\mathcal{O}\left(\frac{\log n}{\sqrt{n}}\right), \tag{6}
$$
where $z_{\alpha}$ is the standard normal quantile at confidence level $\alpha$ and $\asymp$ denotes equality in order. But this test, too, is unsatisfying: the bound is so loose that it is rarely violated. The root cause is the loose upper bound in Eq. (5); empirical evidence, provided in the Appendix, suggests that the true estimation standard deviation is usually much smaller than the theoretical bound.
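To see how loose this is, the half-width of the interval in Eq. (6) can be evaluated directly; a small sketch (the function name and sample sizes are our illustrative choices):

```python
import math

def sd_bound(n: int, m: int, z_alpha: float = 1.96) -> float:
    """Half-width of the standard-deviation-based interval in Eq. (6),
    at the 95% level by default (z_alpha is the normal quantile)."""
    return z_alpha * math.sqrt(
        math.log(m + n) ** 2 / (m + n) + math.log(n) ** 2 / n
    )

# With n = 1000 existing and m = 50 new observations, the half-width is
# about 0.6 nats: enormous relative to typical changes in an MI estimate,
# which is why this test almost never fires.
print(sd_bound(1000, 50))
```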
We therefore take a different path to bounding MIS. First, we impose several mild assumptions on the observations and the physical process.
**Assumption 1**
*We impose the following assumptions on the sampling process and physical system.
1. The existing observations are typical in the sense of the Asymptotic Equipartition Property (?), meaning that empirical statistics computed from the data are representative of their corresponding expected values under the experimental design's intended distribution, i.e., $\hat{I}_{n}\approx\mathbb{E}[\hat{I}_{n}]$. This holds when we regard the initial observations as true system information.
2. The number of existing observations $n$ is much smaller than the cardinalities of the spaces $\mathcal{X}$ and $\mathcal{Y}$, i.e., $n\ll|\mathcal{X}|,|\mathcal{Y}|$.
3. The number of new observations $m$ is much smaller than the number of existing observations, i.e., $m\ll n$.*
**Theorem 1**
*Consider a well-regulated autonomous system as defined in Section 3.1, satisfying the conditions in Assumption 1. With probability at least $1-\rho$, the change in MLE-based mutual information estimates satisfies:
$$
\hat{I}_{n+m}-\hat{I}_{n}\in\left(\log(m+n)-\log n\right)\pm\frac{\sqrt{2m\log\frac{2}{\rho}}\log(m+n)}{m+n}\triangleq \text{MIS}_{\pm}.
$$
$\text{MIS}_{\pm}$ denotes the upper and lower bounds for the test sequence.*
The proof of Theorem 1 is given in the Appendix. These bounds are both tighter ( $\mathcal{O}(\frac{\log n}{n})$ instead of $\mathcal{O}(\frac{\log n}{\sqrt{n}})$ ) and more efficient (an analytical test sequence) than the previous methods. They offer theoretically grounded thresholds within which we expect the MI estimate to evolve. When the bounds $\text{MIS}_{\pm}$ are breached, from below or from above, we know the system has encountered a meaningful change.
Some may argue that for an oversampled system, condition 2 of Assumption 1 does not hold. That is true, and as a result, the expectation term in Theorem 1, $\log(m+n)-\log n$ , needs to be adjusted. For a noise-free system with a limited set of outcomes and a large number of existing observations, one replaces the expectation term with $(|\mathcal{Y}|-1)(\frac{1}{n}-\frac{1}{m+n})$ ; the deviation bounds in Theorem 1 still hold.
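Theorem 1 yields a closed-form test that is cheap to evaluate online, in contrast to the permutation test. A minimal sketch (function names are our own choosing):

```python
import math

def mis_bounds(n: int, m: int, rho: float = 0.05):
    """MIS_- and MIS_+ from Theorem 1: the expected drift log(m+n) - log(n)
    plus/minus the high-probability deviation term."""
    center = math.log(m + n) - math.log(n)
    half = math.sqrt(2 * m * math.log(2 / rho)) * math.log(m + n) / (m + n)
    return center - half, center + half

def mis_triggered(i_hat_n: float, i_hat_nm: float,
                  n: int, m: int, rho: float = 0.05) -> bool:
    """True when MIS = I_hat_{n+m} - I_hat_n escapes [MIS_-, MIS_+]."""
    lo, hi = mis_bounds(n, m, rho)
    return not (lo <= i_hat_nm - i_hat_n <= hi)

lo, hi = mis_bounds(n=1000, m=50)
print(lo, hi)  # a change in the MI estimate outside this interval is a surprise
```

For n = 1000 and m = 50 this interval is several times narrower than the standard-deviation-based interval of Eq. (6), reflecting the tighter $\mathcal{O}(\frac{\log n}{n})$ rate.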
3.3 What Does MIS Actually Tell Us?
When the quantity $\text{MIS}=\hat{I}_{n+m}-\hat{I}_{n}$ falls outside the established bounds $\text{MIS}_{\pm}$ , either exceeding the upper bound or falling below the lower bound, the system is considered surprised, thereby triggering a Mutual Information Surprise (MIS). Essentially, Theorem 1 functions as a statistical hypothesis test: the null hypothesis posits that the underlying system remains well-regulated, implying $\Delta I=I_{n+m}-I_{n}=0$ , where $I_{n}$ denotes the true mutual information at the time of $n$ observations. Any violation indicates a significant shift, with negative deviations ( $\Delta I<0$ ) and positive deviations ( $\Delta I>0$ ) each carrying distinct implications.
Recall that mutual information can be expressed in terms of entropy, as shown in Eq. (3), so changes in $\Delta I$ may result from variations in $H(\mathbf{x})$ , $H(\mathbf{y})$ , and $H(\mathbf{y}\mid\mathbf{x})$ . In this subsection, we examine the implications of MIS under different driving forces.
Violation from Below: Learning Has Stalled or Regressed
If
$$
\text{MIS}<\text{MIS}_{-},
$$
this implies $\Delta I(\mathbf{x},\mathbf{y})<0$ , signifying a downward shift in mutual information. A negative surprise indicates diminished or stalled learning, potentially due to:
1. Stagnation in Exploration: A downward shift driven by a decrease in input entropy $\Delta H(\mathbf{x})<0$ suggests the system repeatedly samples in a limited region, thus gathering redundant data with minimal new information.
2. Increased Noise or Process Drift: A downward shift could also result from increased conditional entropy $\Delta H(\mathbf{y}\mid\mathbf{x})>0$ , indicating greater uncertainty in predicting $\mathbf{y}$ given $\mathbf{x}$ . Practically, this often signifies increased external noise or a fundamental change in the underlying process.
Violation from Above: Sudden Growth in Understanding
If
$$
\text{MIS}>\text{MIS}_{+},
$$
this implies $\Delta I(\mathbf{x},\mathbf{y})>0$ , indicating an upward shift in mutual information. This positive surprise can result from:
1. Aggressive Exploration: If the increase is driven by higher input entropy $\Delta H(\mathbf{x})>0$ , the system is likely exploring previously unvisited regions aggressively, potentially inflating knowledge gains without sufficient validation.
2. Reduction in Noise: An increase due to reduced conditional entropy $\Delta H(\mathbf{y}\mid\mathbf{x})<0$ signals a desirable decrease in uncertainty, thus generally representing a beneficial development.
3. Novel Discovery: An increase in output entropy $\Delta H(\mathbf{y})>0$ suggests discovery of novel and previously rare outputsâparticularly valuable in exploratory or scientific contexts.
Summary Table
| Violation | Cause | Entropy mechanism |
| --- | --- | --- |
| From below | Stagnation in exploration | $\downarrow H(\mathbf{x})\Rightarrow\downarrow I(\mathbf{x},\mathbf{y})$ |
| From below | Increased noise / process drift | $\uparrow H(\mathbf{y}\mid\mathbf{x})\Rightarrow\downarrow I(\mathbf{x},\mathbf{y})$ |
| From above | Aggressive exploration | $\uparrow H(\mathbf{x})\Rightarrow\uparrow I(\mathbf{x},\mathbf{y})$ |
| From above | Noise reduction | $\downarrow H(\mathbf{y}\mid\mathbf{x})\Rightarrow\uparrow I(\mathbf{x},\mathbf{y})$ |
| From above | Novel discovery | $\uparrow H(\mathbf{y})\Rightarrow\uparrow I(\mathbf{x},\mathbf{y})$ |
The table above summarizes potential causes of MIS violations and their implications. These patterns help the system differentiate between meaningful learning and misleading deviations, extending beyond the capacity of classical surprise measures and providing a road map of corrective or adaptive responses for higher-level autonomy. We purposely omit the case where a decrease in $H(\mathbf{y})$ causes a violation from below, as this scenario typically lacks independent significance: it generally arises from changes in sampling strategy or the underlying process, which we have already discussed.
3.4 Reaction Policy: A Three-Pronged Approach
Following the identification of potential causes behind MIS triggers (Section 3.3), the next question is how the system should respond. Naturally, the systemâs reaction should align with the dominant entropy component contributing to the change. In practice, we identify the dominant entropy change by computing and ranking the ratios
$$
\frac{\text{sgn}(\text{MIS})\Delta\hat{H}(\mathbf{x})}{|\text{MIS}|},\quad\frac{\text{sgn}(\text{MIS})\Delta\hat{H}(\mathbf{y})}{|\text{MIS}|},\quad\text{and}\quad\frac{\text{sgn}(\text{MIS})\Delta\hat{H}(\mathbf{y}\mid\mathbf{x})}{|\text{MIS}|},
$$
where $\Delta\hat{H}(·)=\hat{H}_{m+n}(·)-\hat{H}_{n}(·)$ denotes the estimated entropy change.
We do not prescribe a specific reaction when $\Delta\hat{H}(\mathbf{y})$ dominates the MIS, as an increase in $H(\mathbf{y})$ is typically a passive consequence of changes in $H(\mathbf{x})$ and $H(\mathbf{y}\mid\mathbf{x})$ . When both $H(\mathbf{x})$ and $H(\mathbf{y}\mid\mathbf{x})$ remain relatively stable, a rise in $H(\mathbf{y})$ indicates that the current sampling strategy is effectively uncovering novel information; thus, no change in action is required.
For $\Delta\hat{H}(\mathbf{x})$ and $\Delta\hat{H}(\mathbf{y}\mid\mathbf{x})$ , situations may arise where their contributions are similar, i.e., no clear dominant entropy component exists and we need a resolution mechanism to break the tie. To address all these scenarios, we propose a three-pronged reaction policy that serves as a supervisory layer, compatible with existing exploration-exploitation sampling strategies:
1. Sampling Adjustment. The first policy addresses variations in input entropy $H(\mathbf{x})$ . If $\Delta\hat{H}(\mathbf{x})>0$ dominates MIS, indicating overly aggressive exploration, the system should moderate exploration and emphasize exploitation to prevent fitting to noise. Conversely, if $\Delta\hat{H}(\mathbf{x})<0$ , suggesting redundant sampling, the system should enhance exploration to restore sample diversity.
2. Process Forking. The second policy responds to variations in conditional entropy $H(\mathbf{y}\mid\mathbf{x})$ , i.e., changes in the function mapping. Upon a surprise triggered by $\Delta\hat{H}(\mathbf{y}\mid\mathbf{x})$ , the system forks into two subprocesses, one consisting of the $n$ existing observations and the other of the $m$ new observations, divided at the surprise moment (Theorem 1). The two subprocesses represent the prior process (existing observations) and the likely altered process (new observations), and continue their sampling separately. The subprocess that first encounters a $\Delta\hat{H}(\mathbf{y}\mid\mathbf{x})$ -triggered surprise is discarded, and the remaining subprocess continues as the main process. In the extremely rare case when both subprocesses trigger a $\Delta\hat{H}(\mathbf{y}\mid\mathbf{x})$ -dominated MIS surprise at the same time, we discard the subprocess with fewer observations and continue with the one with more observations.
3. Coin Toss Resolution. There are occasions where the changes $\Delta\hat{H}(\mathbf{x})$ and $\Delta\hat{H}(\mathbf{y}\mid\mathbf{x})$ are comparable, making the choice of reaction policy challenging. Instead of arbitrarily favoring the slightly larger change, we use a biased coin toss, stochastically selecting which entropy to address based on the magnitudes of the changes:
$$
p_{\text{adjust}}=\frac{|\Delta\hat{H}(\mathbf{x})|}{|\Delta\hat{H}(\mathbf{x})|+|\Delta\hat{H}(\mathbf{y}\mid\mathbf{x})|},\quad p_{\text{fork}}=1-p_{\text{adjust}}.
$$
The decision variable $z$ is sampled as $z\sim\text{Bernoulli}(p_{\text{adjust}})$ , with $z=1$ indicating sampling adjustment and $z=0$ indicating process forking. This mechanism ensures balanced reactions and robustness, and prevents overreaction to marginal signals.
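The coin-toss resolution follows directly from the formula above; `resolve_tie` is a hypothetical helper name of ours, not from the paper.

```python
import numpy as np

def resolve_tie(d_hx, d_hygx, rng=None):
    """Biased coin toss between sampling adjustment and process forking,
    weighted by the magnitudes of the competing entropy changes."""
    rng = rng or np.random.default_rng()
    p_adjust = abs(d_hx) / (abs(d_hx) + abs(d_hygx))
    z = rng.random() < p_adjust          # z ~ Bernoulli(p_adjust)
    return "sampling_adjustment" if z else "process_forking"
```

At the extremes the toss is deterministic: if $\Delta\hat{H}(\mathbf{y}\mid\mathbf{x})=0$ the policy always adjusts sampling, and if $\Delta\hat{H}(\mathbf{x})=0$ it always forks.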
The description above summarizes the MIS reaction policy. To state it precisely, we first define a sampling process formally and then present the detailed algorithmic implementation of the reaction policy in Algorithm 1.
**Definition 1**
*A sampling process $\mathcal{P}(\mathbf{X},g(\cdot))$ consists of two components: existing observations $\mathbf{X}$ and a sampling function $g(\cdot)$ , where the next sample location is determined by
$$
\mathbf{x}_{\text{next}}\sim g(\mathbf{X}),
$$
with $\mathbf{x}_{\text{next}}$ drawn from the stochastic oracle $g(\mathbf{X})$ . If $g(\cdot)$ is deterministic, $\sim$ is replaced by equality ( $=$ ). For clarity, a sampling process with $n$ existing observations is denoted $\mathcal{P}_{n}$ .*
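Definition 1 maps naturally onto a small container class. The following sketch pairs the observation set $\mathbf{X}$ with the sampling oracle $g(\cdot)$; the class and method names are ours, chosen for illustration.

```python
class SamplingProcess:
    """Minimal sketch of Definition 1: existing observations X plus a
    sampling function g that proposes the next input location."""

    def __init__(self, X, g):
        self.X = list(X)
        self.g = g                       # stochastic oracle: x_next ~ g(X)

    def next_location(self):
        """Query the oracle for the next sample location."""
        return self.g(self.X)

    def observe(self, x):
        """Record a newly sampled observation."""
        self.X.append(x)
        return self

    @property
    def n(self):
        return len(self.X)               # P_n: process with n observations
```

A deterministic $g(\cdot)$ is covered by the same interface, since a function that ignores randomness is a degenerate oracle.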
Algorithm 1 Mutual Information Surprise Reaction Policy (MISRP)
1: A sampling process $\mathcal{P}(\mathbf{Z},g(\cdot))$ , where $\mathbf{Z}$ consists of $k$ pairs of input $\mathbf{X}$ and output $\mathbf{Y}$ ; A maximum reflection threshold $T$ ; Reflection period $m=2$
2: while $m\le\min(T,\frac{k}{2})$ do
3: Set $n=k-m$ ; Compute $MIS=\hat{I}_{m+n}-\hat{I}_{n}$ ; Record $\Delta\hat{H}(\mathbf{x})$ , $\Delta\hat{H}(\mathbf{y})$ , and $\Delta\hat{H}(\mathbf{y}\mid\mathbf{x})$
4: if $MIS\notin[MIS_{-},MIS_{+}]$ and $\frac{\text{sgn}(\text{MIS})\Delta\hat{H}(\mathbf{y})}{|\text{MIS}|}\neq\max\big{\{}\frac{\text{sgn}(\text{MIS})\Delta\hat{H}(\mathbf{x})}{|\text{MIS}|},\frac{\text{sgn}(\text{MIS})\Delta\hat{H}(\mathbf{y})}{|\text{MIS}|},\frac{\text{sgn}(\text{MIS})\Delta\hat{H}(\mathbf{y}\mid\mathbf{x})}{|\text{MIS}|}\big{\}}$ then
5: Compute bias: $p\leftarrow\frac{|\Delta\hat{H}(\mathbf{x})|}{|\Delta\hat{H}(\mathbf{x})|+|\Delta\hat{H}(\mathbf{y}\mid\mathbf{x})|}$
6: Sample $z\sim\text{Bernoulli}(p)$
7: if $z=1$ then $\triangleright$ Sampling Adjustment
8: if $MIS>MIS_{+}$ then
9: Modify $g$ to reduce exploration and increase exploitation
10: else
11: Modify $g$ to increase exploration and reduce redundancy
12: end if
13: break while
14: else $\triangleright$ Process Forking
15: if $\mathcal{P}$ is forked and the other process is not requesting Process Forking then
16: Delete $\mathcal{P}$ ; Merge the other process as the main process
17: break while
18: end if
19: if $\mathcal{P}$ is forked and the other process is requesting Process Forking then
20: Delete the $\mathcal{P}$ with fewer data; Merge the other one as the main process
21: break while
22: end if
23: Fork process into two branches: $\mathcal{P}_{n}$ and $\mathcal{P}_{m}$
24: Call $\text{MISRP}(\mathcal{P}_{n},t)$ and $\text{MISRP}(\mathcal{P}_{m},t)$
25: break while
26: end if
27: else
28: No action required (surprise within expected bounds)
29: end if
30: $m=m+1$
31: end while
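The control flow of Algorithm 1 can be sketched as follows. This is a minimal skeleton under strong assumptions: the MIS statistical test and the reaction step are stubbed out as callables (`mis_test` and `react` are hypothetical names of ours), and process forking is not implemented.

```python
def misrp(pairs, T, mis_test, react):
    """Control-flow sketch of Algorithm 1 (MISRP).

    pairs    : list of (x, y) observations, k in total
    T        : maximum reflection threshold
    mis_test : assumed callable (n, m) -> (surprised, y_dominates), standing
               in for the MIS statistic, its bounds, and the ratio ranking
    react    : assumed callable (n, m) applying sampling adjustment or
               process forking (with the coin-toss tie-break)
    """
    k = len(pairs)
    m = 2                                # no reaction to single-instance surprise
    while m <= min(T, k // 2):           # reflection period capped at min(T, k/2)
        n = k - m
        surprised, y_dominates = mis_test(n, m)
        if surprised and not y_dominates:
            react(n, m)                  # lines 5-25 of Algorithm 1
            break                        # handle one surprise per reflection pass
        m += 1                           # otherwise widen the reflection period
```

The skeleton makes the retroactive nature of MIS explicit: the partition point between the $n$ older and $m$ recent observations slides as $m$ increments.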
We offer several remarks on the MIS reaction policy $\text{MISRP}(\mathcal{P},t)$ :
- In the pseudocode, we introduce two additional notations: the maximum reflection threshold $T$ and the total number of observations $k$ . In practice, MIS is computed retroactively; that is, given a sequence of $k$ observations, we partition them into $m$ recent observations and $n=k-m$ older observations to compute the MIS. We term the $m$ recent observations the reflection period, and we increment $m$ to iterate over different partition points. The reflection period $m$ is constrained to be no greater than $\min(T,\frac{k}{2})$ . This constraint is motivated by the comparative behavior of test statistics derived from Theorem 1 and the variance-based test in Eq. (6). Specifically, when $m=n$ , both our proposed test and the variance-based test yield statistics of order $\mathcal{O}\left(\frac{\log n}{\sqrt{n}}\right)$ . As discussed in Section 3.2, such statistics are typically too loose to be violated in practice, thereby diminishing the sensitivity advantage of our method. Consequently, evaluating MIS beyond $m=\frac{k}{2}$ is unnecessary and computationally inefficient. The reflection threshold $T$ is introduced to ensure computational feasibility, and we recommend selecting $T$ as large as computational resources permit.
- Note that the reflection period $m$ starts at $2$ . This implies that the reaction policy does not respond to a single-instance surprise. Mathematically, this is because the derivation of the bound in Theorem 1 is ill-defined for $m=1$ . Intuitively, MIS measures the progression of learning in a sampling process, and it is impossible to determine whether a single observation is informative or erroneous without additional verification. Therefore, the MIS policy always takes at least two additional samples before reacting. One may argue that this requirement for extra samples imposes additional cost in conducting experiments. That is true. But recall that one insight from the study in (?) is the long-run benefit of the extra resources spent on deciding the nature of an observation.
- It is important to emphasize that both the sampling adjustment and process forking approaches are rooted in active learning literature and practice. Balancing exploration and exploitation, i.e., sampling adjustment, has long been a key topic in Bayesian optimization and active learning (?), whereas discarding irrelevant observations, as we do in process forking, is a common practice in the dataset drift literature (?, ?, ?, ?, ?). Our Mutual Information Surprise reaction framework provides a principled mechanism for autonomous systems to determine how to balance exploration versus exploitation and when or what to discard (i.e., forget).
4 Numerical Analysis
In this section, we illustrate the merits of Mutual Information Surprise (MIS). Section 4.1 demonstrates the strength of MIS compared to classical surprise measures. Section 4.2 showcases the advantages of the MIS reaction policy in the context of dynamically estimating a pollution map using data generated from a physics-based simulator.
4.1 Putting Surprise to the Test
To compare MIS with classical surprise measures, principally Shannon and Bayesian Surprises, we conduct a series of controlled simulations using a simple yet interpretable system, designed to reveal how each measure behaves under varying conditions. The system is governed by the mapping
$$
y=x\mod 10, \tag{7}
$$
chosen for its simplicity, modifiability, and clarity of interpretation. The first four scenarios are fully deterministic, while the final two introduce noise and perturbations, enabling an assessment of whether each surprise measure responds meaningfully to new observations, structural changes, or stochastic disturbances. Each simulation begins with $100$ samples drawn uniformly from $x\in[0,30]$ to establish the system's initial knowledge. We then progressively introduce new data under different conditions, recording the response of each surprise measure. As the magnitudes of MIS, Shannon Surprise, and Bayesian Surprise differ in scale, our analysis focuses on behavioral trends, that is, how each measure changes, spikes, or saturates, rather than on their absolute values.
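The simulation setup above can be sketched as a small data generator; `scenario_sampler` is a hypothetical hook we introduce so that each scenario can supply its own rule for proposing new inputs.

```python
import numpy as np

def simulate(scenario_sampler, n_init=100, n_new=100, seed=0):
    """Toy data stream: n_init initial samples uniform on [0, 30] through
    y = x mod 10 (Eq. 7), followed by scenario-specific new samples.

    scenario_sampler(rng) is an assumed hook returning one new input x."""
    rng = np.random.default_rng(seed)
    x = list(rng.uniform(0, 30, n_init))     # initial knowledge
    for _ in range(n_new):
        x.append(scenario_sampler(rng))      # scenario-driven exploration
    x = np.asarray(x)
    return x, x % 10                         # Eq. (7): y = x mod 10
```

Scenario 1 (standard exploration), for instance, corresponds to passing `lambda rng: rng.uniform(30, 100)` as the sampler; the noisy and perturbed scenarios would additionally corrupt the outputs.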
The surprise measures are computed as follows. Shannon Surprise is calculated using its classical definition in Eq. (1), as the negative log-likelihood of the true label under a Gaussian Process predictive model. Bayesian Surprise is computed as Postdictive Surprise, defined in Eq. (2), using the KL divergence between the prior and posterior predictive distributions of $y$ at each input $x$ . The same Gaussian Process predictive model is used for both, using a Matérn $\nu=2.5$ kernel with a constant noise level set to $0.1$ . After each surprise computation, the model is re-trained with all currently available data.
For MIS, we treat the initial $100$ observations as the initial sample size $n=100$ , as defined in Section 3.1. As sampling continues, the number of new observations $m$ increases (shown on the x-axis of the figures). The output space has cardinality $|\mathcal{Y}|=10$ , corresponding to the ten possible outcomes of the modulus function, except in Scenario 6 where $|\mathcal{Y}|=20$ . MIS is calculated as defined in Eq. (4). When the theoretical bound in Theorem 1 is used, the probability level is set to $\rho=0.1$ . The bias term is adjusted as discussed in Section 3.2, since $n\gg|\mathcal{Y}|$ in this setting.
Scenario 1: Standard Exploration.
New data is randomly sampled from $x\in[30,100]$ , expanding the domain without altering the underlying function or aggressively exploring unfamiliar regions. This represents a system exploring new yet consistent areas of its environment.
Expected behavior: A well-calibrated surprise measure should indicate ongoing learning without abrupt fluctuations. We do not expect MIS to be violated.
As shown in Figure 1, MIS progresses steadily within its expected bounds, reflecting a stable and well-regulated learning process. In contrast, Shannon and Bayesian Surprises fluctuate erratically, often spiking without clear justification.
(Figure panels: Shannon and Bayesian Surprises, and Mutual Information Surprise with its MIS bound, each plotted against the number of explorations $m$.)
Figure 1: Surprise measures during standard exploration.
Scenario 2: Over-Exploitation.
In this scenario, the system repeatedly samples a previously seen point from $x\in[0,30]$ , specifically observing the pair $(x,y)=(7,7)$ one hundred times. This simulates stagnation.
Expected behavior: Surprise should diminish as no new information is gained. This mirrors the stagnation case in Section 3.3, and we expect MIS to violate its lower bound.
Figure 2 shows that MIS falls below its lower bound, signaling a lack of knowledge gain. While Shannon and Bayesian Surprises also trend downward, they lack a defined lower threshold, limiting their reliability for flagging such behavior. Recall that both Shannon and Bayesian Surprises are inherently one-sided, as noted in (?) and (?).
(Figure panels: Shannon and Bayesian Surprise, and Mutual Information Surprise with its MIS bound, each plotted against the number of exploitations $m$.)
Figure 2: Surprise measures under over-exploitation.
Scenario 3: Noisy Exploration.
We perform standard exploration over $x\in[30,100]$ but apply random corruption to the outputs $\mathbf{y}$ , replacing each with a uniformly random digit between $0$ and $9$ . This simulates exploration without informative feedback.
Expected behavior: Despite novel inputs, the system should register confusion if understanding fails to improve. This mirrors the noise-increase case in Section 3.3, and we expect MIS to violate its lower bound.
Figure 3 confirms this expectation: MIS drops below its expected range, accurately signaling a lack of learning. In contrast, Shannon and Bayesian Surprises again display erratic behavior without consistent trends.
(Figure panels: Shannon and Bayesian Surprises, and Mutual Information Surprise with its MIS bound, each plotted against the number of explorations $m$.)
Figure 3: Surprise measures under noisy exploration.
Scenario 4: Aggressive Exploration.
This scenario enforces strict exploration over $x\in[30,500]$ , where each new sample is far from all observed points (i.e., outside the $\pm 1$ neighborhood range).
Expected behavior: Aggressive exploration without verification can lead to overconfidence. This mirrors the aggressive exploration case in Section 3.3, and we expect MIS to exceed its upper bound.
Figure 4 shows MIS surpassing its upper bound, consistent with this expectation. Shannon and Bayesian Surprises again fluctuate unpredictably.
<details>
<summary>x7.png Details</summary>

Dual-axis line graph titled "Shannon and Bayesian Surprises": Shannon Surprise (blue dashed, left axis 0 to 8) and Bayesian Surprise (red solid, right axis 0 to 20) over 0 to 100 explorations. Both metrics spike erratically; Bayesian peaks (up to roughly 17.5 near m=60) run two to three times higher than Shannon's, with no shared or consistent pattern across the two.
</details>
<details>
<summary>x8.png Details</summary>

Line graph titled "Mutual Information Surprise": green line versus Number of Explorations (m), x-axis 0 to 100, y-axis -0.2 to 0.6, with a gray shaded MIS Bound region. The green line rises monotonically from 0 to about 0.6, climbing above the bound, consistent with aggressive exploration pushing MIS past its upper limit.
</details>
Figure 4: Surprise measures during aggressive exploration.
Scenario 5: Noise Decrease.
To simulate noise reduction, we begin with $100$ initial observations from $x \in [0,30]$, each paired with a randomly assigned output $y \in [0,9]$. New samples are drawn from the same $x$ range, but the new $y$ values are produced by the deterministic modulus function in Eq. (7).
Expected behavior: Reduced noise implies a stronger input-output dependency, and we thus expect MIS to exceed its upper bound.
Figure 5 confirms this: MIS grows beyond its bound. Shannon and Bayesian Surprises continue to spike erratically.
<details>
<summary>x9.png Details</summary>

Dual-axis line chart titled "Shannon and Bayesian Surprises": Shannon Surprise (blue dashed, left axis 0 to 8) and Bayesian Surprise (red solid, right axis 0 to 20) over 0 to 100 samplings. Both metrics continue to spike erratically, with the Bayesian spikes markedly larger and more frequent than the Shannon ones.
</details>
<details>
<summary>x10.png Details</summary>

Line graph titled "Mutual Information Surprise": green line versus Number of Samplings (m), x-axis 0 to 100, y-axis -0.2 to 0.4, with a gray shaded MIS Bound region. The green line rises steadily, crossing above the bound near m=60 and reaching roughly 0.4 by m=100, consistent with reduced noise strengthening the input-output dependency.
</details>
Figure 5: Surprise measures during noise decrease.
Scenario 6: Discovery of New Output Values.
We modify the function in the unexplored region ($x>30$) to $y = x \bmod 10 - 10$, introducing different behavior while keeping the original function unchanged on $[0,30]$.
Expected behavior: A competent surprise measure should register this new structure as a meaningful discovery. This mirrors the novel discovery case in Section 3.3, and we expect MIS to exceed its upper bound.
Figure 6 shows MIS sharply exceeding its expected trajectory, signaling successful identification of a structural shift. Shannon and Bayesian Surprises again fail to provide consistent or interpretable responses.
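As a minimal sketch of this scenario's data generator (our own illustration, assuming Eq. (7) is the modulus function $y = x \bmod 10$), the two-regime output function can be written as:

```python
def scenario6_output(x: float) -> float:
    """Two-regime generator: original modulus behavior on [0, 30],
    novel outputs shifted by -10 in the unexplored region x > 30."""
    if x <= 30:
        return x % 10          # original function, outputs in [0, 9]
    return x % 10 - 10         # modified function, outputs in [-10, -1]
```

A competent surprise measure should register the sign flip in outputs beyond $x=30$ as a structural discovery rather than noise.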
<details>
<summary>x11.png Details</summary>

Dual-axis line graph titled "Shannon and Bayesian Surprises": Shannon Surprise (blue dashed, left axis 0 to 8) and Bayesian Surprise (red solid, right axis 0 to 14) over 0 to 100 explorations. Both metrics show irregular peaks near m=20, 60, and 100 with near-zero troughs in between, offering no interpretable response to the structural shift.
</details>
<details>
<summary>x12.png Details</summary>

Line graph titled "Mutual Information Surprise": green line versus Number of Explorations (m), x-axis 0 to 100, y-axis -0.2 to 0.6, with a gray shaded MIS Bound band spanning roughly -0.2 to 0.2. The green line rises sharply, exceeding the bound's upper edge after about 40 explorations and reaching roughly 0.65 by m=100, signaling discovery of the novel output structure.
</details>
Figure 6: Surprise measures when exploring a new region with novel outputs.
Summary
Across all scenarios, MIS reliably indicates whether the system is genuinely learning, stagnating, or encountering degradation. It responds to the structure and value of observations rather than mere novelty. In contrast, Shannon and Bayesian Surprises often react to superficial fluctuations and display numerical instability. Furthermore, the MIS progression bound remains consistent and interpretable across all scenarios, while Shannon and Bayesian Surprises lack a universal scale or threshold, as reflected by their inconsistent magnitudes across Figures 1 through 6. This inconsistency limits their effectiveness as a reliable trigger. Overall, this simulation study demonstrates MIS not only as a novel metric for quantifying surprise, but also as a more trustworthy indicator of learning dynamics, making it a promising tool for autonomous system monitoring.
4.2 Pollution Estimation: A Case Study
To demonstrate the practical utility of our proposed MIS reaction policy, we apply it to a real-time pollution map estimation scenario. We evaluate the impact of integrating the MIS reaction policy on system performance in a dynamic, non-stationary environment. Specifically, we compare two approaches: a selection of baseline sampling strategies and the same strategies governed by our MIS reaction policy.
Dataset: Dynamic Pollution Maps
We utilize a synthetic pollution simulation dataset comprising $450$ time frames, each representing a $50 \times 50$ pollution grid. Initially, the environment contains $3$ pollution sources, each emitting at a fixed high level. The rest of the field exhibits moderate, random pollution values. Over time, the pollution levels across the entire field evolve due to natural diffusion, decay, and wind effects. Moreover, every $50$ frames a new pollution source is added to the field at a random location. These new sources elevate the overall pollution levels and alter the input-output relationship between the spatial coordinates and the pollution intensity. Figure 7 displays snapshots of the pollution map at two intermediate time points. The simulation details for the dynamic pollution map generation are provided in the Appendix.
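A minimal sketch of such a generator (our own illustration, not the Appendix's code; the diffusion, decay, and wind parameters here are hypothetical) could look like:

```python
import numpy as np

def simulate_pollution(frames=450, size=50, n_init_sources=3,
                       new_source_every=50, diffuse=0.2, decay=0.01,
                       wind=(0, 1), rng=None):
    """Hypothetical generator mirroring the described dynamics:
    fixed-level sources, 4-neighbor diffusion, exponential decay,
    wind drift, and a new random source every `new_source_every` frames."""
    rng = np.random.default_rng(rng)
    field = 1.0 + rng.random((size, size))           # moderate random background
    sources = [tuple(rng.integers(0, size, 2)) for _ in range(n_init_sources)]
    maps = []
    for t in range(frames):
        if t > 0 and t % new_source_every == 0:      # new source appears
            sources.append(tuple(rng.integers(0, size, 2)))
        for (i, j) in sources:
            field[i, j] = 10.0                        # fixed high emission level
        field = np.roll(field, wind, axis=(0, 1))     # wind advection
        blurred = sum(np.roll(field, s, axis=a)       # 4-neighbor diffusion
                      for a in (0, 1) for s in (-1, 1)) / 4
        field = (1 - diffuse) * field + diffuse * blurred
        field *= (1 - decay)                          # natural decay
        maps.append(field.copy())
    return maps
```

Each returned frame serves as the ground-truth map against which estimates are scored.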
<details>
<summary>x13.png Details</summary>

Two side-by-side heatmaps of the pollution field at time 150 and time 350 (spatial axes with ticks 0 to 40; red = high pollution, blue/purple = low). Time 150 shows three hotspots with values around 5.2 to 6.4; time 350 shows four hotspots with values around 6.2 to 7.4, reflecting the overall rise in pollution levels and the addition of new sources over time.
</details>
Figure 7: Pollution maps at time $150$ and time $350$ .
Sampling Strategies
As discussed in Section 3.4, the MIS reaction policy is designed to complement existing exploration-exploitation strategies. To demonstrate the effectiveness of the Mutual Information Surprise Reaction Policy (MISRP), we integrate it with three well-established sampling strategies. These are: the surprise-reactive (SR) sampling method proposed by (?) using either Shannon or Bayesian surprises, the subtractive clustering/entropy (SC/E) active learning strategy proposed by (?), and the greedy search/query by committee (GS/QBC) active learning strategy used in (?).
1. SR: The surprise-reactive sampling method (?) switches between exploration and exploitation modes based on observed Shannon or Bayesian Surprise. By default, SR operates in an exploration mode guided by the widely used space-filling principle (?), selecting new sampling locations via the min-max objective:
$$
\mathbf{x}^{*}=\underset{\mathbf{x}}{\operatorname{argmax}}\>\underset{\mathbf{x}_{i}\in\mathbf{X}}{\min}\>\|\mathbf{x}-\mathbf{x}_{i}\|_{2},
$$
where $\mathbf{X}$ denotes the set of existing observations. Upon encountering a surprising event (in terms of either Shannon or Bayesian Surprise), SR switches to exploitation mode, performing localized verification sampling within the neighborhood of the surprise-triggering location. This continues either for a fixed number of steps defined by an exploitation limit $t$ , or until an unsurprising event occurs. If exploitation confirms that the surprise is consistent (i.e., persistent surprise until reaching the exploitation threshold), all corresponding observations are accepted and incorporated into the pollution map estimation. Conversely, if an unsurprising event arises before the threshold is reached, the surprising observations are deemed anomalous and discarded. For Shannon Surprise, we set the triggering threshold at $1.3$ , corresponding to a likelihood of $5\%$ . For Bayesian Surprise, we use the Postdictive Surprise and adopt the threshold of $0.5$ , following (?).
MISRP: The MISRP modifies SR by dynamically adjusting the exploitation limit $t$ . When increased exploitation is needed, $t$ is incremented by $1$ . For increased exploration, $t$ is decremented by $1$ , with a lower bound of $t=1$ .
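The min-max space-filling objective and the MISRP adjustment of the exploitation limit $t$ can be sketched as follows (our own illustration under the definitions above):

```python
import numpy as np

def space_filling_next(candidates, observed):
    """Min-max objective: return the candidate whose nearest observed
    point is farthest away (argmax over min Euclidean distances)."""
    candidates = np.asarray(candidates, dtype=float)
    observed = np.asarray(observed, dtype=float)
    # pairwise distances, shape (n_candidates, n_observed)
    d = np.linalg.norm(candidates[:, None, :] - observed[None, :, :], axis=-1)
    return candidates[np.argmax(d.min(axis=1))]

def misrp_adjust_t(t, need_exploitation):
    """MISRP rule for SR: raise or lower the exploitation limit by 1,
    never below 1."""
    return t + 1 if need_exploitation else max(1, t - 1)
```

In exploration mode, SR repeatedly calls the space-filling selector; the MISRP-governed variant additionally updates $t$ after each MIS test.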
1. SC/E: The subtractive clustering/entropy active learning strategy (?) selects the next sampling location by maximizing a custom acquisition function. For an unseen region $\mathcal{X}$ and a probabilistic predictive function $\hat{f}(\mathbf{x})$ trained on the observed data, the acquisition function is defined as:
$$
a(\mathbf{x})=(1-\eta)\mathbb{E}_{\mathbf{x}^{\prime}\in\mathcal{X}}[e^{-\|\mathbf{x}-\mathbf{x}^{\prime}\|_{2}}]+\eta H(\hat{f}(\mathbf{x})),
$$
where $\eta$ is the exploitation parameter, with a default value of $0.5$ , and $H(\hat{f}(\mathbf{x}))$ denotes the entropy of the predictive distribution at $\mathbf{x}$ . A larger value of $\eta$ emphasizes sampling at locations with high predictive uncertainty near previously seen points, promoting exploitation. A smaller value favors sampling at representative locations in the unseen region, promoting exploration (?).
MISRP: The MISRP modifies SC/E by adjusting the exploitation parameter $\eta$ . For increased exploitation, $\eta$ is increased by $0.1$ , up to a maximum of $1$ . For increased exploration, $\eta$ is decreased by $0.1$ , with a minimum of $0$ .
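A sketch of this acquisition function and the MISRP step on $\eta$ (our own illustration; `pred_probs`, the predictive pmf at $\mathbf{x}$, is a hypothetical interface):

```python
import numpy as np

def sce_acquisition(x, unseen, pred_probs, eta=0.5):
    """SC/E acquisition: (1-eta) * representativeness of x within the
    unseen region + eta * entropy of the predictive distribution at x."""
    x = np.asarray(x, float)
    unseen = np.asarray(unseen, float)
    p = np.asarray(pred_probs, float)
    rep = np.mean(np.exp(-np.linalg.norm(x - unseen, axis=-1)))
    logp = np.log(p, where=p > 0, out=np.zeros_like(p))  # 0*log 0 := 0
    return (1 - eta) * rep + eta * (-np.sum(p * logp))

def misrp_adjust_eta(eta, need_exploitation):
    """MISRP rule for SC/E and GS/QBC: step eta by 0.1 within [0, 1]."""
    return min(1.0, eta + 0.1) if need_exploitation else max(0.0, eta - 0.1)
```

The next sampling location is then the candidate maximizing `sce_acquisition`.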
1. GS/QBC: The greedy search/query by committee active learning strategy (?) uses a different acquisition function. Given the set of seen observations $\{\mathbf{X},\mathbf{Y}\}$ and a model committee $\mathcal{F}$ composed of multiple predictive models trained on this data, the acquisition function is defined as:
$$
a(\mathbf{x})=(1-\eta)\underset{\mathbf{x}^{\prime},\mathbf{y}^{\prime}\in\mathbf{X},\mathbf{y}}{\min}\|\mathbf{x}-\mathbf{x}^{\prime}\|_{2}\|\hat{f}(\mathbf{x})-\mathbf{y}^{\prime}\|_{2}+\eta\underset{\hat{f}(\cdot),\hat{f}^{\prime}(\cdot)\in\mathcal{F}}{\max}\|\hat{f}(\mathbf{x})-\hat{f}^{\prime}(\mathbf{x})\|_{2}, \tag{8}
$$
where the first term encourages exploration by selecting points that are distant from existing observations in both input and output space. The second term promotes exploitation by targeting locations with high disagreement among models in the committee.
MISRP: The MISRP regulates the balance between exploration and exploitation in GS/QBC in the same manner as in SC/E, by adjusting the parameter $\eta$ .
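Eq. (8) can be sketched as below (our own illustration; `committee_preds` holds each committee model's prediction at $\mathbf{x}$, with the first entry taken as the main model's $\hat{f}(\mathbf{x})$):

```python
import numpy as np

def gsqbc_acquisition(x, X, Y, committee_preds, eta=0.5):
    """GS/QBC acquisition, Eq. (8): exploration term is the min over
    observed pairs of the product of input-space and output-space
    distances; exploitation term is the max pairwise disagreement
    among committee predictions at x."""
    x = np.asarray(x, float)
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    preds = [np.asarray(p, float) for p in committee_preds]
    fhat = preds[0]                        # main model's prediction at x
    explore = min(np.linalg.norm(x - xi) * np.linalg.norm(fhat - yi)
                  for xi, yi in zip(X, Y))
    exploit = max(np.linalg.norm(p - q)    # committee disagreement
                  for p in preds for q in preds)
    return (1 - eta) * explore + eta * exploit
```

As with SC/E, the MISRP variant simply steps $\eta$ up or down by $0.1$ within $[0,1]$.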
Experimental Setup
The estimation process is initialized with $10$ observed locations uniformly sampled across the pollution field. Each time frame collects $10$ new samples according to the chosen sampling strategy, representing the operation of $10$ mobile pollution sensors. The pollution field is estimated using a Gaussian Process Regressor with a Matérn kernel ( $\nu=2.5$ ) and a noise prior of $10^{-2}$ , consistently applied across all strategies. The model predicts pollution levels at specified spatial locations and is updated using both current and historical data, with a maximum of $200$ observations retained to reduce computational cost.
For the GS/QBC strategy, the model committee additionally includes regressors with a Matérn $\nu=1.5$ kernel and a Gaussian kernel with bandwidth $0.1$ , both using a noise prior of $10^{-2}$ . These two additional models are used solely for calculating disagreement in Eq. (8) and are not employed in pollution map estimation.
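A sketch of this estimator configuration using scikit-learn (an assumption on our part; we map the noise prior to the regressor's `alpha` and read the "Gaussian kernel with bandwidth 0.1" as an RBF kernel with that length scale):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, RBF

# Main estimator: Matern(nu=2.5) kernel, noise level 1e-2 via `alpha`.
gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-2)

# Extra committee members, used only for GS/QBC disagreement in Eq. (8).
committee = [
    gpr,
    GaussianProcessRegressor(kernel=Matern(nu=1.5), alpha=1e-2),
    GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-2),
]

# Fit on a few (location, pollution) pairs and predict at a grid point.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 2.0, 3.0])
for m in committee:
    m.fit(X, y)
pred = gpr.predict(np.array([[0.5, 0.5]]))
```

Only `gpr` feeds the pollution map estimate; the other two models supply predictions for the disagreement term.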
Shannon and Bayesian Surprise are computed following the procedure described in Section 4.1. For MIS calculations, we discretize the range of pollution values observed in the data into $100$ bins to estimate entropy.
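The binned plug-in entropy underlying these MIS calculations can be sketched as follows (our reading of the described procedure, not the authors' code):

```python
import numpy as np

def binned_entropy(values, n_bins=100, value_range=None):
    """Plug-in (histogram) entropy in nats over a fixed discretization
    of the observed pollution value range."""
    counts, _ = np.histogram(values, bins=n_bins, range=value_range)
    p = counts / counts.sum()
    p = p[p > 0]               # drop empty bins; 0*log 0 := 0
    return float(-np.sum(p * np.log(p)))
```

Mutual information estimates then combine such entropies over the discretized outputs before and after incorporating new observations.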
In process forking scenarios, two separate pollution map estimates, $\hat{f}_{m}$ and $\hat{f}_{n}$ , are produced for subprocesses $\mathcal{P}_{m}$ and $\mathcal{P}_{n}$ , respectively. The final pollution map estimate is formed as a weighted combination:
$$
\hat{f}=\frac{\sqrt{m}}{\sqrt{m}+\sqrt{n}}\hat{f}_{m}+\frac{\sqrt{n}}{\sqrt{m}+\sqrt{n}}\hat{f}_{n},
$$
accounting for generalization errors that scale as $\mathcal{O}(\frac{1}{\sqrt{m}})$ and $\mathcal{O}(\frac{1}{\sqrt{n}})$ , respectively (?).
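This weighted combination is a one-liner in practice; a sketch on gridded map estimates:

```python
import numpy as np

def fork_combine(f_m, f_n, m, n):
    """Combine two subprocess map estimates, weighting each by the
    square root of its sample count to match the O(1/sqrt(m)) and
    O(1/sqrt(n)) generalization-error scaling."""
    w = np.sqrt(m) / (np.sqrt(m) + np.sqrt(n))
    return w * np.asarray(f_m) + (1 - w) * np.asarray(f_n)
```

With equal sample counts the two estimates are simply averaged; otherwise the better-sampled subprocess dominates.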
Simulation Results
We assess performance using the mean squared error (MSE) between predicted and true pollution maps at each time step. Due to the dynamic nature of the pollution field, estimation errors exhibit substantial fluctuation. To smooth these variations, we compute a 20-frame moving average of the MSE for both vanilla and MISRP-governed strategies. The results are shown in Figure 8.
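The smoothing step is a standard trailing moving average; a minimal sketch:

```python
import numpy as np

def moving_average(errors, window=20):
    """Trailing moving average used to smooth the per-frame MSE curve;
    output has len(errors) - window + 1 points."""
    errors = np.asarray(errors, float)
    kernel = np.ones(window) / window
    return np.convolve(errors, kernel, mode='valid')
```

Each point of the smoothed curve is the mean MSE over the preceding 20 frames.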
<details>
<summary>x14.png Details</summary>

Line chart titled "Mean Error over Time": SR with Shannon (blue dashed) versus the MISRP-governed variant (red solid), time steps 0 to 400, mean error 0 to 10. The Shannon-based SR shows sharp periodic error spikes approaching 10, while the MISRP curve stays smooth, mostly below 3 with a peak near 4.5; both share troughs around 0.5 to 1.
</details>
<details>
<summary>x15.png Details</summary>

Mean error over time (0–400 steps) for SR with Bayesian (blue dashed) versus MISRP (solid red). The baseline oscillates between ~0.5 and ~10, with spikes near time steps 150 (~7), 250 (~6), and 400 (~10), while MISRP stays below ~2 for most of the run. The largest gap occurs at the final time step, where the baseline spikes to ~10 and MISRP remains near ~1.8.
</details>
<details>
<summary>x16.png Details</summary>

Mean error over time (0–400 steps) for SC/E (blue dashed) versus MISRP (solid red). SC/E is more volatile, with peaks near ~4 (step 100) and ~3 (step 200) and a final spike to ~8 at step 400. MISRP stays lower and more stable, peaking near ~2–2.5 mid-run and ending around ~4, well below the SC/E spike.
</details>
<details>
<summary>x17.png Details</summary>

Mean error over time (0–400 steps) for GS/QBC (blue dashed) versus MISRP (solid red). GS/QBC shows pronounced peaks near ~6 (step 100), ~4.5 (step 200), and ~7 (step 400). MISRP remains consistently lower (~1–2 between peaks, rising to ~4.5 at step 400), reducing peak error by roughly 30–50% relative to GS/QBC.
</details>
Figure 8: Moving average estimation error over time. Top-Left: SR with Shannon Surprise. Top-Right: SR with Bayesian Surprise. Bottom-Left: SC/E. Bottom-Right: GS/QBC.
Across all comparisons, the baseline strategies display considerable volatility. In contrast, their MISRP-governed counterparts produce smoother and consistently lower error curves, underscoring the stabilizing effect of MIS-driven adaptive responses in dynamic environments.
Table 2 presents the average estimation errors and their corresponding standard errors. The standard error is measured across $10$ Monte Carlo simulations and $450$ frames. Across all sampling strategies, incorporating the MIS reaction policy yields a substantial reduction in both mean estimation error and variability. Improvements in estimation error range from $24\%$ to $76\%$ , while reductions in standard error range from $36\%$ to $90\%$ .
To further illustrate the advantage of MISRP, we increase the per-frame sampling budget and the initial number of observed locations of the baseline strategies from $10$ to $25$, and expand the total memory buffer from $200$ to $500$, in order to assess whether the baseline strategies can match the performance of MISRP-governed approaches. Table 3 compares the estimation error of MISRP-governed strategies (maintaining the original sampling budget of $10$) against the enhanced baseline strategies. Even with a $2.5\times$ increase in sampling budget, the baseline strategies remain significantly outperformed by their MISRP-governed counterparts.
Table 2: Comparison of pollution map estimation errors: baseline sampling strategies versus MISRP-governed strategies.
| Sampling Strategy | Baseline Error | MISRP-Governed Error | Error Reduction | Std. Error Reduction |
| --- | --- | --- | --- | --- |
| SR with Shannon | $6.64 \pm 0.436$ | $\mathbf{1.60 \pm 0.043}$ | $76\%$ | $90\%$ |
| SR with Bayesian | $2.79 \pm 0.096$ | $\mathbf{0.87 \pm 0.016}$ | $69\%$ | $83\%$ |
| SC/E | $2.02 \pm 0.071$ | $\mathbf{1.53 \pm 0.045}$ | $24\%$ | $36\%$ |
| GS/QBC | $2.07 \pm 0.071$ | $\mathbf{1.49 \pm 0.039}$ | $28\%$ | $45\%$ |
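As a quick sanity check, the reported improvement percentages can be recomputed directly from the mean errors in Table 2 (a minimal sketch; the values are transcribed from the table):

```python
# Mean estimation errors (baseline, MISRP-governed) transcribed from Table 2.
table2 = {
    "SR with Shannon":  (6.64, 1.60),
    "SR with Bayesian": (2.79, 0.87),
    "SC/E":             (2.02, 1.53),
    "GS/QBC":           (2.07, 1.49),
}

# Relative reduction: (baseline - governed) / baseline, as a whole percent.
reductions = {k: round(100 * (b - g) / b) for k, (b, g) in table2.items()}
print(reductions)
```

These recover the $76\%$, $69\%$, $24\%$, and $28\%$ reductions in mean estimation error reported in the text.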
Table 3: Error Comparison under Extended Sampling for Baseline Strategies.
| Sampling Strategy | Estimation Error (MISRP-Governed, Budget 10) | Estimation Error (Baseline, Budget 25) |
| --- | --- | --- |
| SR with Shannon | $\mathbf{1.60}$ | $6.23$ |
| SR with Bayesian | $\mathbf{0.87}$ | $2.72$ |
| SC/E | $\mathbf{1.53}$ | $1.89$ |
| GS/QBC | $\mathbf{1.49}$ | $2.00$ |
So far, we have demonstrated that governing basic sampling strategies with MISRP can substantially enhance learning performance in dynamic environments. To provide a clearer view of how MISRP operates over time, we conduct an additional simulation examining its actions throughout the process.
In this experiment, we simulate a two-phase pollution map evolution governed by the same PDE used in the earlier simulations. During the first phase (time $0$–$250$), three pollution sources emit high levels of pollutants, and the map evolves under diffusion, decay, and wind effects. At time step $250$, the emission sources are removed, and the decay factor is reduced to one-twentieth of its original value. The system then continues evolving for an additional $50$ steps.
While the pollution sources exist and are emitting (the dynamic phase, time $0$–$250$), the underlying process is non-stationary, and we expect frequent MIS triggering. Once the pollution sources are gone (the stationary phase, time $251$–$300$), the pollutants gradually diffuse toward a stationary state, during which MIS is expected to stop being triggered.
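The two-phase schedule described above amounts to a simple parameter switch at the phase boundary (a sketch; the function name and return convention are illustrative):

```python
def phase_params(t):
    """Return (sources_active, decay_factor) for the two-phase simulation.

    Dynamic phase (t = 0-250): sources emit, full decay factor (zeta = 2).
    Stationary phase (t = 251-300): sources removed, decay reduced 20-fold.
    """
    if t <= 250:
        return True, 2.0
    return False, 2.0 / 20  # one-twentieth of the original decay factor
```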
<details>
<summary>x18.png Details</summary>

Estimation error over time (0–300 steps) for MISRP (solid blue) versus SR with Shannon (dashed red), with MISRP actions overlaid: green vertical bars mark fork-merge spans and red vertical lines mark sampling-adjustment events. The Shannon baseline spikes to roughly 40–50 near time steps 100 and 200, while the MISRP curve trends downward with only localized fluctuations; two circled points on the MISRP curve mark error spikes that are promptly followed by corrective actions.
</details>
Figure 9: A visualization of estimation error progression with MISRP action overlays.
Figure 9 shows the estimation error progression with action overlays under surprise-reactive sampling based on Shannon surprise. Recall from Section 3.4 that there are two actions employed in MISRP governance: sampling adjustments and process forking. These two actions are marked as red vertical lines and green shaded regions in the plot, respectively. For clarity, we present the $20$ -frame moving average of estimation error, whereas the unsmoothed version is provided in the Appendix. Actions are displayed $20$ steps in advance, corresponding to their first observable effect on the smoothed error trajectory.
Several key observations emerge from the figure. First, both sampling adjustments and process forking occur frequently during the dynamic phase, as expected, highlighting the effectiveness of MISRP's action design in maintaining low estimation error. Second, sudden spikes in estimation error (circled) under MISRP governance are almost always followed by corrective actions that prevent further error growth, producing kinked error trajectories after intervention; by contrast, the baseline sampling strategy allows estimation error to rise unchecked. Third, once the system enters the stationary phase, MISRP ceases intervention, aligning with the intuition that a balanced sampling strategy in a well-regulated system should not trigger Mutual Information Surprise.
5 Conclusion
In this work, we reimagined the concept of surprise as a mechanism for fostering understanding, rather than merely detecting anomalies. Traditional definitionsâsuch as Shannon and Bayesian Surprisesâfocus on single-instance deviations and belief updates, yet fail to capture whether a system is truly growing in its understanding over time. By introducing Mutual Information Surprise (MIS), we proposed a new framework that reframes surprise as a reflection of learning progression, grounded in mutual information growth.
We developed a formal test sequence to monitor deviations in estimated mutual information, and introduced a reaction policy, MISRP, that transforms surprise into actionable system behavior. Through a synthetic case study and a real-time pollution map estimation task, we demonstrated that MIS governance offers clear advantages over conventional sampling strategies. Our results show improved stability, better responsiveness to environmental drift, and significant reductions in estimation error. These findings affirm MIS as a robust and adaptive supervisory signal for autonomous systems.
Looking forward, this work opens several promising directions for future research. A natural next step is the development of a continuous-space formulation of mutual information surprise, enabling its application to large, complex systems. Another direction involves designing a specialized reaction policy, one that incorporates a sampling strategy tailored directly to the structure and signals of MIS rather than relying on existing sampling strategies; this could enhance efficiency and responsiveness in highly dynamic or resource-constrained settings. Moreover, pairing MIS with physical probing capabilities for specific physical systems could unlock its full potential, as MIS provides new perspectives on system characterization compared to traditional measures.
Appendix
The appendix is organized as follows. In the first section, we present empirical evidence supporting our claim in Section 3.2 that standard deviation-based tests are overly permissive. In the second section, we provide the derivation of the standard deviation-based test for mutual information. In the third section, we provide the proof of Theorem 1. The fourth section details the simulation setup for dynamic pollution map generation. In the fifth section, we provide the pseudocode for the surprise-reactive (SR) sampling strategy (?) to facilitate reproducibility.
MLE Mutual Information Estimator Standard Deviation
In Section 3.2, we discussed the limitations of standard deviation-based tests. Specifically, the tightest currently known distribution-agnostic bound on the standard deviation of the maximum likelihood estimator (MLE) of mutual information with $n$ observations is given by (?)
$$
\sigma\lesssim\frac{\log n}{\sqrt{n}}.
$$
Although this is the best available result, the bound is still quite loose in practice.
To empirically verify this statement, we perform a simple simulation as follows. We construct variable pairs $(x,y)$ where $y=x\;\text{mod}\;10$ , in the same manner as the simulation in Section 4.1. The variable $x$ is generated as random integers sampled from randomly generated probability mass functions over the domain $[0,100]$ . We generate $100$ such probability mass functions. For each probability mass function, we generate $3,000$ pairs of $(x,y)$ , repeat the process using $10$ Monte Carlo simulations, and compute the standard deviation of the MLE mutual information estimates over the $10$ simulations for varying numbers of $(x,y)$ pairs $n$ . We then plot the average standard deviation across the $100$ different probability mass functions as a function of $n$ versus the estimation bound shown in Eq. (5). The results are shown in Figure 10.
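A minimal version of this experiment can be sketched as follows, assuming the plug-in (MLE) estimator with natural logarithms; the single pmf, sample size, and seed here are illustrative rather than the paper's full $100$-pmf setup:

```python
import numpy as np

def plugin_entropy(counts):
    """Plug-in (MLE) entropy in nats from category counts."""
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def plugin_mi(x, y):
    """Plug-in mutual information: H(x) + H(y) - H(x, y)."""
    hx = plugin_entropy(np.unique(x, return_counts=True)[1])
    hy = plugin_entropy(np.unique(y, return_counts=True)[1])
    hxy = plugin_entropy(np.unique(x * 1000 + y, return_counts=True)[1])
    return hx + hy - hxy

rng = np.random.default_rng(0)
pmf = rng.dirichlet(np.ones(101))      # one random pmf over {0, ..., 100}
n = 1000
estimates = []
for _ in range(10):                    # 10 Monte Carlo replications
    x = rng.choice(101, size=n, p=pmf)
    estimates.append(plugin_mi(x, x % 10))

empirical_std = float(np.std(estimates))
bound = np.log(n) / np.sqrt(n)         # the bound in Eq. (5)
print(empirical_std, bound)
```

Consistent with Figure 10, the empirical standard deviation sits far below the $\log n/\sqrt{n}$ bound.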
<details>
<summary>x19.png Details</summary>

Standard deviation of the MLE mutual information estimate versus sample size $n$ (0–3000). The empirical estimation standard deviation (blue) starts near ~0.12 and drops to nearly zero by $n=500$, while the theoretical upper bound (orange) starts near ~0.5 and declines only gradually, remaining well above the empirical curve over the entire range.
</details>
Figure 10: Empirical standard deviation of MLE mutual information estimates vs. the current tightest bound.
We observe that the current bound for the standard deviation of the mutual information estimate, computed using Eq. (5), is significantly larger than the empirical average standard deviation. This empirical observation supports our claim in Section 3.2 that the test in Eq. (6) is rarely violated in practice.
Standard Deviation Test Derivation
First, recall that the estimation standard deviation satisfies
$$
\sigma\lesssim\frac{\log n}{\sqrt{n}}.
$$
Therefore, we treat this worst-case scenario as the baseline when deriving the test of difference between two maximum likelihood estimators (MLE) of mutual information.
Let:
- $\hat{I}_{n}$ be the MLE estimate from a sample of size $n$ ,
- $\hat{I}_{m+n}$ be the MLE estimate from a larger sample of size $m+n$.
Assume the standard deviation of the MLE estimator is approximately:
$$
\sigma_{n}=\frac{\log n}{\sqrt{n}},\quad\sigma_{m+n}=\frac{\log(m+n)}{\sqrt{m+n}}
$$
We want to test the hypothesis:
$$
H_{0}:\mathbb{E}[\hat{I}_{n}]=\mathbb{E}[\hat{I}_{m+n}]\quad\text{vs.}\quad H_{1}:\mathbb{E}[\hat{I}_{n}]\neq\mathbb{E}[\hat{I}_{m+n}]
$$
Note that we are omitting the estimation bias of MLE mutual information estimators for simplicity.
Under the null hypothesis and assuming the two estimates are independent, the test statistic is:
$$
z_{\alpha}=\frac{\hat{I}_{n}-\hat{I}_{m+n}}{\sqrt{\sigma_{n}^{2}+\sigma_{m+n}^{2}}}=\frac{\hat{I}_{n}-\hat{I}_{m+n}}{\sqrt{\left(\frac{\log n}{\sqrt{n}}\right)^{2}+\left(\frac{\log(m+n)}{\sqrt{m+n}}\right)^{2}}}
$$
Moving the denominator to the left-hand side yields the form presented in Eq. (6).
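For concreteness, the resulting two-sample test can be written out directly (a sketch; the function name, critical value, and inputs below are illustrative):

```python
import math

def mi_shift_test(I_n, I_mn, n, m, z_crit=1.96):
    """Two-sided z-test for a shift between two MLE mutual information
    estimates, using the worst-case std sigma_n = log(n) / sqrt(n)."""
    sigma_n = math.log(n) / math.sqrt(n)
    sigma_mn = math.log(m + n) / math.sqrt(m + n)
    z = (I_n - I_mn) / math.hypot(sigma_n, sigma_mn)
    return abs(z) > z_crit, z

# A small shift in the estimate is not flagged under this loose std bound.
reject, z = mi_shift_test(I_n=1.0, I_mn=1.1, n=100, m=50)
print(reject)  # False
```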
Proof of Theorem 1
First, we formally introduce the maximum likelihood entropy estimator $\hat{H}$ (?) for a random variable $\mathbf{x}\in\mathcal{X}$ as follows:
$$
\hat{H}(\mathbf{x})=-\sum_{i=1}^{|\mathcal{X}|}\hat{p}_{i}\log\hat{p}_{i},
$$
where $\hat{p}_{i}$ is the empirical probability mass of random variable $\mathbf{x}$ at category $i$ . The MLE mutual information estimator is then defined based on the MLE entropy estimator
$$
\hat{I}(\mathbf{x},\mathbf{y})=\hat{H}(\mathbf{x})+\hat{H}(\mathbf{y})-\hat{H}(\mathbf{x},\mathbf{y}).
$$
MIS test bound (Expectation):
Here, we derive the first part of the MIS test bound, representing the expectation of the MIS statistics, i.e., $\mathbb{E}[\text{MIS}]$ . The derivation involves two cases, $n\ll|\mathcal{X}|,|\mathcal{Y}|$ and $n\gg|\mathcal{X}|,|\mathcal{Y}|$ .
When $n\ll|\mathcal{X}|,|\mathcal{Y}|$, an MLE entropy estimator $\hat{H}$ with $n$ observations behaves simply as $\log n$ (?), provided the $n$ observations are selected using a space-filling design, a common practice for choosing the initial set of experimentation locations in the design-of-experiments literature (?). We therefore have $\mathbb{E}[\hat{H}_{n}(\mathbf{x})]=\log n$. Hence, the mutual information estimator with $n$ observations admits
$$
\mathbb{E}[\hat{I}_{n}(\mathbf{x},\mathbf{y})]=\mathbb{E}[\hat{H}_{n}(\mathbf{x})+\hat{H}_{n}(\mathbf{y})-\hat{H}_{n}(\mathbf{x},\mathbf{y})]=\log n.
$$
Then for MIS, we have
$$
\mathbb{E}[\text{MIS}]=\mathbb{E}[\hat{I}_{m+n}]-\mathbb{E}[\hat{I}_{n}]=\log(m+n)-\log n.
$$
When $n\gg|\mathcal{X}|,|\mathcal{Y}|$, we are in an oversampled scenario where the samples have most likely exhausted the input and output spaces. In this case, we first introduce the following lemma.
**Lemma 1**
*(?) For a random variable $\mathbf{x}\in\mathcal{X}$, the bias of an oversampled ($n\gg|\mathcal{X}|$) MLE entropy estimator $\hat{H}_{n}(\mathbf{x})$ is
$$
\mathbb{E}[\hat{H}_{n}(\mathbf{x})]-H(\mathbf{x})=-\frac{|\mathcal{X}|-1}{n}+o(\frac{1}{n}). \tag{9}
$$*
With the above lemma, we can derive the following Corollary.
**Corollary 1**
*For random variables $\mathbf{x}\in\mathcal{X}$ and $\mathbf{y}\in\mathcal{Y}$, when the mapping $\mathbf{y}=f(\mathbf{x})$ is noise-free, the MLE mutual information estimator $\hat{I}_{n}$ asymptotically satisfies
$$
\mathbb{E}[\hat{I}_{n}]=I-\frac{|\mathcal{Y}|-1}{n}.
$$*
The proof of the above Corollary follows immediately by observing that $|\mathcal{X},\mathcal{Y}|=|\mathcal{X}|$ for a noise-free mapping and invoking Lemma 1.
Therefore, for MIS under the case of oversampling, we have
$$
\mathbb{E}[\text{MIS}]=\mathbb{E}[\hat{I}_{m+n}]-\mathbb{E}[\hat{I}_{n}]=\left(I-\frac{|\mathcal{Y}|-1}{m+n}\right)-\left(I-\frac{|\mathcal{Y}|-1}{n}\right)=\frac{m(|\mathcal{Y}|-1)}{n(m+n)},
$$
where the second equality applies Corollary 1.
MIS test bound (Variation):
In this part, we derive the second term of the MIS test bound, accounting for the variation of the MIS statistics. We first investigate the maximum change in mutual information estimation $\hat{I}$ when changing one observation. Here, we derive the following Lemma.
**Lemma 2**
*Let $\mathcal{S}=\{(x_{i},y_{i})\}_{i=1}^{n}$ be an i.i.d. sample from an unknown joint distribution on finite alphabets and denote by
$$
\hat{I}_{n}(\mathbf{x},\mathbf{y})\;=\;\hat{H}_{n}(\mathbf{x})+\hat{H}_{n}(\mathbf{y})-\hat{H}_{n}(\mathbf{x},\mathbf{y})
$$
the MLE estimator, where $\hat{H}_{n}$ is the empirical Shannon entropy (in nats). If $\mathcal{S}^{\prime}$ differs from $\mathcal{S}$ in exactly one observation, then with a mild abuse of notation (denoting mutual information estimator on sample set $\mathcal{S}$ with $\hat{I}_{n}(\mathcal{S})$ ),
$$
\bigl{|}\hat{I}_{n}(\mathcal{S})-\hat{I}_{n}(\mathcal{S}^{\prime})\bigr{|}\;\leq\;\frac{2\,\log n}{n}.
$$*
*Proof of Lemma 2.* We omit the hat $\hat{\cdot}$ on estimators throughout this proof for simplicity. Write $H=-\sum_{i}p_{i}\log p_{i}$ for the Shannon entropy estimator with natural logarithms. Replacing a single observation does two things:
1. in one $X$-category and one $Y$-category the counts change by $\pm 1$ (all other marginal counts are unchanged);
2. in one joint cell the count changes by $-1$ and in another joint cell the count changes by $+1$.
Step 1. How much can one empirical Shannon entropy change?
Assume a single observation is moved from category $A$ to category $B$ . Let the counts before the move be $A=a$ (with $aâ„ 1$ ) and $B=b$ (with $bâ„ 0$ ). After the move the counts become $a-1$ and $b+1$ . Only these two probabilities change; every other probability is fixed.
The change in entropy is therefore
$$
\Delta H=\left(\frac{a}{n}\log\frac{a}{n}-\frac{a-1}{n}\log\frac{a-1}{n}\right)-\left(\frac{b+1}{n}\log\frac{b+1}{n}-\frac{b}{n}\log\frac{b}{n}\right).
$$
We can see that the maximum difference is largest when $a=n$ and $b=0$ , i.e. when all $n$ observations initially occupy a single category and we create a brand-new one. In that worst case
$$
\Delta H=\frac{n-1}{n}\log\frac{n-1}{n}+\frac{1}{n}\log n\leq\frac{n-1}{n}\log\frac{n}{n}+\frac{1}{n}\log n=\frac{\log n}{n}. \tag{10}
$$
The inequality follows from the monotonicity of the logarithm. Conversely, one can see that $-\frac{\log n}{n}\leq\Delta H$ also holds. Therefore, the maximum absolute change in the entropy estimate under the shift of one observation is upper bounded by $\frac{\log n}{n}$.
Step 2. Sign coupling between the three entropies.
Assume the moved observation leaves joint cell $(i,j)$ and enters cell $(k,\ell)$. Because $(i,j)$ lies in row $i$ and column $j$ only, we have the key fact (denoting the sign operator by $\text{sgn}(\cdot)$):
$$
\text{sgn}\bigl{(}\Delta H(\mathbf{x},\mathbf{y})\bigr{)}\in\bigl{\{}\text{sgn}\bigl{(}\Delta H(\mathbf{x})\bigr{)},\text{sgn}\bigl{(}\Delta H(\mathbf{y})\bigr{)}\bigr{\}}.
$$
Hence $-\text{sgn}\bigl{(}\Delta H(\mathbf{x},\mathbf{y})\bigr{)}=\text{sgn}\bigl{(}\Delta H(\mathbf{x})\bigr{)}=\text{sgn}\bigl{(}\Delta H(\mathbf{y})\bigr{)}$ is impossible.
Then, with $\Delta I=\Delta H(\mathbf{x})+\Delta H(\mathbf{y})-\Delta H(\mathbf{x},\mathbf{y})$ , we can see the following fact
$$
|\Delta I|=\bigl{|}\Delta H(\mathbf{x})+\Delta H(\mathbf{y})-\Delta H(\mathbf{x},\mathbf{y})\bigr{|}\leq 2\max\{|\Delta H(\mathbf{x})|,|\Delta H(\mathbf{y})|,|\Delta H(\mathbf{x},\mathbf{y})|\}.
$$
Applying the one-entropy bound (10) to the two marginals,
$$
|\Delta I|\leq\frac{2\log n}{n},
$$
which is the desired inequality. ∎
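The bounded-difference property in Lemma 2 can also be checked numerically by perturbing a single observation (a sketch with an illustrative deterministic sample; `plugin_mi` is a standard plug-in estimator written for this check, not code from the paper):

```python
import numpy as np

def plugin_entropy(counts):
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def plugin_mi(x, y):
    # Plug-in estimate I = H(x) + H(y) - H(x, y) on finite alphabets.
    hx = plugin_entropy(np.unique(x, return_counts=True)[1])
    hy = plugin_entropy(np.unique(y, return_counts=True)[1])
    hxy = plugin_entropy(np.unique(x * 1000 + y, return_counts=True)[1])
    return hx + hy - hxy

n = 1000
x = np.arange(n) % 101       # evenly spread sample over {0, ..., 100}
y = x % 10

x_prime = x.copy()
x_prime[0] = 55              # change exactly one observation
y_prime = x_prime % 10

delta = abs(plugin_mi(x, y) - plugin_mi(x_prime, y_prime))
print(delta <= 2 * np.log(n) / n)  # the change respects the Lemma 2 bound
```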
Establishing Lemma 2 allows us to apply McDiarmid's inequality (?), a concentration inequality for functions with bounded differences.
**Lemma 3 (McDiarmid's Inequality)**
*If $\{\mathbf{x}_{i}\in\mathcal{X}_{i}\}_{i=1}^{n}$ are independent random variables (not necessarily identically distributed), and a function $f:\mathcal{X}_{1}\times\mathcal{X}_{2}\times\cdots\times\mathcal{X}_{n}\to\mathbb{R}$ satisfies the coordinate-wise bounded difference condition
$$
\sup_{\mathbf{x}^{\prime}_{j}\in\mathcal{X}_{j}}|f(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{j},\ldots,\mathbf{x}_{n})-f(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}^{\prime}_{j},\ldots,\mathbf{x}_{n})|<c_{j},
$$
for $1\leq j\leq n$, then for any $\epsilon\geq 0$,
$$
P(|f(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})-\mathbb{E}[f]|>\epsilon)\leq 2e^{-2\epsilon^{2}/\sum c_{j}^{2}}. \tag{11}
$$*
To apply McDiarmid's inequality, we view the mutual information estimator with $n$ old observations and $m$ new observations, denoted $\hat{I}_{m+n}$, as a function of the $m$ new observations $\{\mathbf{x}_{i}\in\mathcal{X}\}_{i=1}^{m}$. Moreover, we have already bounded the maximum difference of the mutual information estimator through Lemma 2, meaning
$$
\sup_{\mathbf{x}^{\prime}_{j}\in\mathcal{X}}|\hat{I}_{m+n}(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{j},\ldots,\mathbf{x}_{m})-\hat{I}_{m+n}(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}^{\prime}_{j},\ldots,\mathbf{x}_{m})|<\frac{2\log(m+n)}{m+n}.
$$
Plugging this upper bound into Eq. (11), we have
$$
P(|\hat{I}_{m+n}-\mathbb{E}[\hat{I}_{m+n}]|>\epsilon)\leq 2e^{-2\epsilon^{2}/\sum(\frac{2\log(m+n)}{m+n})^{2}}=2e^{-(m+n)^{2}\epsilon^{2}/2m\log^{2}(m+n)}.
$$
By setting the RHS of the above equation to $\rho$, we obtain the following statement, which holds with probability at least $1-\rho$:
$$
|\hat{I}_{m+n}-\mathbb{E}[\hat{I}_{m+n}]|\leq\frac{\sqrt{2m\log(2/\rho)}\log(m+n)}{m+n}. \tag{12}
$$
Finally, combining the derivations in the two parts, when $n\ll|\mathcal{X}|,|\mathcal{Y}|$, we have the following with probability at least $1-\rho$:
$$
\text{MIS}=\hat{I}_{m+n}-\hat{I}_{n}\leq\mathbb{E}[\hat{I}_{m+n}]+\frac{\sqrt{2m\log(2/\rho)}\log(m+n)}{m+n}-\hat{I}_{n}=\log(m+n)-\log n+\frac{\sqrt{2m\log(2/\rho)}\log(m+n)}{m+n}.
$$
The inequality applies the concentration bound in Eq. (12), and the final equality replaces $\hat{I}_{n}$ with its expectation $\log n$ under the typical sample assumption in Assumption 1. The proof of Theorem 1 is now complete.
Pollution Map Dataset
The dynamic pollution map is modeled as $u(\mathbf{x},t)$, a function of time $t$ and spatial location $\mathbf{x}=(x_{1},x_{2})\in[0,1]^{2}$. The governing partial differential equation (PDE) for the pollution map is
$$
\frac{\partial u}{\partial t}=-\mathbf{v}\cdot\nabla u+\nabla\cdot(\mathbf{D}\nabla u)-\zeta u+S(\mathbf{x}), \tag{13}
$$
where $\mathbf{v}=[1,0]$ is the advection velocity, representing wind that transports pollution horizontally to the right. The matrix $\mathbf{D}=\text{diag}(0.01,2)$ is the diagonal diffusion matrix, indicating that pollution diffuses much more rapidly in the $x_{2}$ direction than in the $x_{1}$ direction. The parameter $\zeta=2$ is the exponential decay factor, modeling the natural decay of pollution levels over time. The term $S(\mathbf{x})$ models the spatially dependent but temporally constant pollution source at location $\mathbf{x}$. Additionally, a base level of random pollution with mean $2$ and standard deviation $0.25$ is added to the pollution field. The evolution of the pollution map is computed in the Fourier domain by applying a discrete Fourier transform to the PDE in Eq. (13).
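To make the dynamics concrete, here is a minimal sketch of one explicit Euler time step of Eq. (13) on a periodic grid, using central finite differences rather than the Fourier-domain scheme used in the paper; the grid size, step sizes, and point-release initial condition are illustrative assumptions:

```python
import numpy as np

def pollution_step(u, dt, dx, v=(1.0, 0.0), D=(0.01, 2.0), zeta=2.0, S=None):
    """One explicit Euler step of du/dt = -v.grad(u) + div(D grad(u)) - zeta*u + S
    on a periodic grid (finite-difference sketch of Eq. (13))."""
    # Central differences with periodic wrap-around via np.roll.
    dudx1 = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * dx)
    dudx2 = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * dx)
    lap1 = (np.roll(u, -1, axis=0) - 2 * u + np.roll(u, 1, axis=0)) / dx ** 2
    lap2 = (np.roll(u, -1, axis=1) - 2 * u + np.roll(u, 1, axis=1)) / dx ** 2
    src = 0.0 if S is None else S
    dudt = -(v[0] * dudx1 + v[1] * dudx2) + D[0] * lap1 + D[1] * lap2 - zeta * u + src
    return u + dt * dudt

grid = np.zeros((64, 64))
grid[32, 32] = 1.0  # point release of pollution
u1 = pollution_step(grid, dt=1e-4, dx=1.0 / 64)
assert u1.shape == grid.shape and np.isfinite(u1).all()
```

On a periodic grid the advection and diffusion terms conserve total mass, so after one step only the decay term $-\zeta u$ reduces the total pollution.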
In the last simulation experiment with the pollution map, we use the same PDE with modified parameters. Specifically, the pollution source $S(\mathbf{x})$ is removed, and the decay parameter $\zeta$ is reduced to $0.1$ in the second phase.
Surprise Reactive Sampling Strategy Pseudo Code
In this section, we present the pseudocode for the SR sampling strategy of (?) in Algorithm 2 for reproducibility purposes.
Algorithm 2 Surprise Reactive (SR) Sampling Strategy
1: Observation set $\mathbf{X}:\{\mathbf{x}_{i}\in\mathcal{X}\}_{i=1}^{n}$ ; Total sampling budget $k$ ; Exploitation limit $t$ ; A surprise measure $S(\cdot)$ ; A surprise triggering threshold $s$ ; Exploration mode indicator $\xi=\text{True}$ ; Surprising location $\mathbf{x}_{s}=\text{None}$ ; Surprising location set $\mathbf{X}_{s}=\text{None}$ ; Neighborhood radius $\epsilon$ .
2: while $i<k$ ( $i$ starts from $0$ ) do
3: if $\xi$ then
4: Sample $\mathbf{x}^{*}$ as
$$
\mathbf{x}^{*}=\underset{\mathbf{x}}{\operatorname{argmax}}\>\underset{\mathbf{x}_{i}\in\mathbf{X}}{\min}\>\|\mathbf{x}-\mathbf{x}_{i}\|_{2}.
$$
5: $i=i+1$
6: Compute $S(\mathbf{x}^{*})$
7: if $S(\mathbf{x}^{*})\leq s$ then
8: $\mathbf{X}=[\mathbf{X},\mathbf{x}^{*}]$
9: else
10: $\xi=\text{False}$ , $\mathbf{x}_{s}=\mathbf{x}^{*}$ , $\mathbf{X}_{s}=[\mathbf{x}^{*}]$
11: end if
12: else
13: while $j\leq t$ ( $j$ starts from $0$ ) do
14: Sample $\mathbf{x}^{*}$ randomly in the $\epsilon$ ball centered at $\mathbf{x}_{s}$ .
15: $j=j+1$ , $i=i+1$
16: Compute $S(\mathbf{x}^{*})$
17: if $S(\mathbf{x}^{*})\leq s$ then
18: $\mathbf{X}=[\mathbf{X},\mathbf{x}^{*}]$ , $\xi=\text{True}$ , $\mathbf{X}_{s}=\text{None}$
19: Break While
20: else
21: $\mathbf{X}_{s}=[\mathbf{X}_{s},\mathbf{x}^{*}]$
22: end if
23: if $i\geq k$ then
24: Break While
25: end if
26: end while
27: if $\mathbf{X}_{s}$ is not None then
28: $\mathbf{X}=[\mathbf{X},\mathbf{X}_{s}]$ , $\xi=\text{True}$
29: end if
30: end if
31: end while
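For reference, a compact Python sketch of Algorithm 2 in one dimension, using a finite candidate pool in place of the continuous domain; the pool, the toy surprise measure, and all parameter values are illustrative assumptions:

```python
import random

def sr_sampling(surprise, candidates, k, t, s, eps, x0):
    """Sketch of the Surprise Reactive (SR) strategy of Algorithm 2 in 1-D.
    `surprise` maps a location to its surprise value S(x)."""
    X = [x0]                      # observation set
    explore, x_s, X_s = True, None, None
    i = 0
    while i < k:
        if explore:
            # Space-filling step: candidate farthest from the current set.
            x = max(candidates, key=lambda c: min(abs(c - xi) for xi in X))
            i += 1
            if surprise(x) <= s:
                X.append(x)
            else:                 # surprising: switch to exploitation mode
                explore, x_s, X_s = False, x, [x]
        else:
            j = 0
            while j <= t:
                x = x_s + random.uniform(-eps, eps)  # sample in the eps-ball
                j += 1
                i += 1
                if surprise(x) <= s:                 # surprise dissipated:
                    X.append(x)                      # keep x, discard X_s
                    explore, X_s = True, None
                    break
                X_s.append(x)
                if i >= k:
                    break
            if X_s is not None:   # surprise persisted: absorb X_s
                X.extend(X_s)
                explore, X_s = True, None
    return X

random.seed(1)
samples = sr_sampling(surprise=lambda x: abs(x - 0.7),
                      candidates=[i / 10 for i in range(11)],
                      k=20, t=3, s=0.5, eps=0.05, x0=0.0)
assert 1 < len(samples) <= 21  # initial point plus at most k kept samples
```

Note that when the surprise dissipates before the exploitation limit, the accumulated surprising set $\mathbf{X}_{s}$ is discarded as an outlier burst; it is only absorbed into $\mathbf{X}$ when the surprise persists for the full exploitation window.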
Non-smoothed Error Progression with Action Overlays
Here we present the non-smoothed estimation error progression figure with action overlays.
Figure 11: A non-smoothed visualization of estimation error progression with MISRP action overlays.
References and Notes
- 1. B. Burger, P. M. Maffettone, V. V. Gusev, C. M. Aitchison, Y. Bai, X. Wang, X. Li, B. M. Alston, B. Li, R. Clowes, N. Rankin, B. Harris, R. S. Sprick, and A. I. Cooper, "A mobile robotic chemist," Nature, vol. 583, pp. 237–241, 2020.
- 2. A. Merchant, S. Batzner, S. S. Schoenholz, M. Aykol, G. Cheon, and E. D. Cubuk, "Scaling deep learning for materials discovery," Nature, vol. 624, pp. 80–85, 2023.
- 3. N. J. Szymanski, B. Rendy, Y. Fei, R. E. Kumar, T. He, D. Milsted, M. J. McDermott, M. Gallant, E. D. Cubuk, A. Merchant, H. Kim, A. Jain, C. J. Bartel, K. Persson, Y. Zeng, and G. Ceder, "An autonomous laboratory for the accelerated synthesis of novel materials," Nature, vol. 624, pp. 86–91, 2023.
- 4. T. Dai, S. Vijayakrishnan, F. T. Szczypiński, J.-F. Ayme, E. Simaei, T. Fellowes, R. Clowes, L. Kotopanov, C. E. Shields, Z. Zhou, J. W. Ward, and A. I. Cooper, "Autonomous mobile robots for exploratory synthetic chemistry," Nature, vol. 635, pp. 890–897, 2024.
- 5. J. Levinson, J. Askeland, J. Becker, J. Dolson, D. Held, S. Kammel, J. Z. Kolter, D. Langer, O. Pink, V. Pratt, M. Sokolsky, G. Stanek, D. Stavens, A. Teichman, M. Werling, and S. Thrun, "Towards fully autonomous driving: Systems and algorithms," in Proceedings of the 2011 IEEE Intelligent Vehicles Symposium, (Baden-Baden, Germany), June 2011.
- 6. B. P. MacLeod, F. G. Parlane, T. D. Morrissey, F. Häse, L. M. Roch, K. E. Dettelbach, R. Moreira, L. P. Yunker, M. B. Rooney, and J. R. Deeth, "Self-driving laboratory for accelerated discovery of thin-film materials," Science Advances, vol. 6, no. 20, p. eaaz8867, 2020.
- 7. E. Yurtsever, J. Lambert, A. Carballo, and K. Takeda, "A survey of autonomous driving: Common practices and emerging technologies," IEEE Access, vol. 8, pp. 58443–58469, 2020.
- 8. D. Bogdoll, M. Nitsche, and J. M. Zöllner, "Anomaly detection in autonomous driving: A survey," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (New Orleans, USA), June 2022.
- 9. H.-S. Park and N.-H. Tran, "An autonomous manufacturing system based on swarm of cognitive agents," Journal of Manufacturing Systems, vol. 31, no. 3, pp. 337–348, 2012.
- 10. J. Leng, Y. Zhong, Z. Lin, K. Xu, D. Mourtzis, X. Zhou, P. Zheng, Q. Liu, J. L. Zhao, and W. Shen, "Towards resilience in industry 5.0: A decentralized autonomous manufacturing paradigm," Journal of Manufacturing Systems, vol. 71, pp. 95–114, 2023.
- 11. J. Reis, Y. Cohen, N. Melão, J. Costa, and D. Jorge, "High-tech defense industries: Developing autonomous intelligent systems," Applied Sciences, vol. 11, no. 11, p. 4920, 2021.
- 12. P. Nikolaev, D. Hooper, F. Webber, R. Rao, K. Decker, M. Krein, J. Poleski, R. Barto, and B. Maruyama, "Autonomy in materials research: A case study in carbon nanotube growth," NPJ Computational Materials, vol. 2, p. 16031, 2016.
- 13. J. Chang, P. Nikolaev, J. Carpena-Núñez, R. Rao, K. Decker, A. E. Islam, J. Kim, M. A. Pitt, J. I. Myung, and B. Maruyama, "Efficient closed-loop maximization of carbon nanotube growth rate using Bayesian optimization," Scientific Reports, vol. 10, p. 9040, 2020.
- 14. I. Ahmed, S. T. Bukkapatnam, B. Botcha, and Y. Ding, "Toward futuristic autonomous experimentation–a surprise-reacting sequential experiment policy," IEEE Transactions on Automation Science and Engineering, vol. 22, pp. 7912–7926, 2025.
- 15. Z.-G. Zhou and P. Tang, "Continuous anomaly detection in satellite image time series based on z-scores of season-trend model residuals," in Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium, (Beijing, China), July 2016.
- 16. K. Cohen and Q. Zhao, "Active hypothesis testing for anomaly detection," IEEE Transactions on Information Theory, vol. 61, no. 3, pp. 1432–1450, 2015.
- 17. J. F. Kamenik and M. Szewc, "Null hypothesis test for anomaly detection," Physics Letters B, vol. 840, p. 137836, 2023.
- 18. D. J. Weller-Fahy, B. J. Borghetti, and A. A. Sodemann, "A survey of distance and similarity measures used within network intrusion anomaly detection," IEEE Communications Surveys & Tutorials, vol. 17, no. 1, pp. 70–91, 2014.
- 19. L. Montechiesi, M. Cocconcelli, and R. Rubini, "Artificial immune system via Euclidean distance minimization for anomaly detection in bearings," Mechanical Systems and Signal Processing, vol. 76, pp. 380–393, 2016.
- 20. Y. Wang, Q. Miao, E. W. Ma, K.-L. Tsui, and M. G. Pecht, "Online anomaly detection for hard disk drives based on Mahalanobis distance," IEEE Transactions on Reliability, vol. 62, no. 1, pp. 136–145, 2013.
- 21. Y. Hou, Z. Chen, M. Wu, C.-S. Foo, X. Li, and R. M. Shubair, "Mahalanobis distance based adversarial network for anomaly detection," in Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing, (Virtual), May 2020.
- 22. T. Schlegl, P. Seeböck, S. M. Waldstein, G. Langs, and U. Schmidt-Erfurth, "F-anoGAN: Fast unsupervised anomaly detection with generative adversarial networks," Medical Image Analysis, vol. 54, pp. 30–44, 2019.
- 23. B. Lian, Y. Kartal, F. L. Lewis, D. G. Mikulski, G. R. Hudas, Y. Wan, and A. Davoudi, "Anomaly detection and correction of optimizing autonomous systems with inverse reinforcement learning," IEEE Transactions on Cybernetics, vol. 53, no. 7, pp. 4555–4566, 2022.
- 24. A. Barto, M. Mirolli, and G. Baldassarre, "Novelty or surprise?," Frontiers in Psychology, vol. 4, p. 907, 2013.
- 25. L. Itti and P. Baldi, "Bayesian surprise attracts human attention," Vision Research, vol. 49, no. 10, pp. 1295–1306, 2009.
- 26. V. Liakoni, A. Modirshanechi, W. Gerstner, and J. Brea, "Learning in volatile environments with the Bayes factor surprise," Neural Computation, vol. 33, no. 2, pp. 269–340, 2021.
- 27. M. Faraji, K. Preuschoff, and W. Gerstner, "Balancing new against old information: The role of puzzlement surprise in learning," Neural Computation, vol. 30, no. 1, pp. 34–83, 2018.
- 28. O. Çatal, S. Leroux, C. De Boom, T. Verbelen, and B. Dhoedt, "Anomaly detection for autonomous guided vehicles using Bayesian surprise," in Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, (Las Vegas, USA), October 2020.
- 29. Y. Zamiri-Jafarian and K. N. Plataniotis, "A Bayesian surprise approach in designing cognitive radar for autonomous driving," Entropy, vol. 24, no. 5, p. 672, 2022.
- 30. A. Dinparastdjadid, I. Supeene, and J. Engstrom, "Measuring surprise in the wild," arXiv preprint arXiv:2305.07733, 2023.
- 31. A. S. Raihan, H. Khosravi, T. H. Bhuiyan, and I. Ahmed, "An augmented surprise-guided sequential learning framework for predicting the melt pool geometry," Journal of Manufacturing Systems, vol. 75, pp. 56–77, 2024.
- 32. S. Jin, J. R. Deneault, B. Maruyama, and Y. Ding, "Autonomous experimentation systems and benefit of surprise-based Bayesian optimization," in Proceedings of the 2022 International Symposium on Flexible Automation, (Yokohama, Japan), July 2022.
- 33. A. Modirshanechi, J. Brea, and W. Gerstner, "A taxonomy of surprise definitions," Journal of Mathematical Psychology, vol. 110, p. 102712, 2022.
- 34. P. Baldi, "A computational theory of surprise," in Information, Coding and Mathematics: Proceedings of Workshop Honoring Prof. Bob McEliece on his 60th Birthday, pp. 1–25, 2002.
- 35. A. Prat-Carrabin, R. C. Wilson, J. D. Cohen, and R. Azeredo da Silveira, "Human inference in changing environments with temporal structure," Psychological Review, vol. 128, no. 5, pp. 879–912, 2021.
- 36. P. J. Rousseeuw and C. Croux, "Alternatives to the median absolute deviation," Journal of the American Statistical Association, vol. 88, no. 424, pp. 1273–1283, 1993.
- 37. C. Aytekin, X. Ni, F. Cricri, and E. Aksu, "Clustering and unsupervised anomaly detection with l-2 normalized deep auto-encoder representations," in Proceedings of the 2018 International Joint Conference on Neural Networks, (Rio de Janeiro, Brazil), October 2018.
- 38. D. T. Nguyen, Z. Lou, M. Klar, and T. Brox, "Anomaly detection with multiple-hypotheses predictions," in Proceedings of the 36th International Conference on Machine Learning, (Long Beach, USA), June 2019.
- 39. A. Kolossa, B. Kopp, and T. Fingscheidt, "A computational analysis of the neural bases of Bayesian inference," Neuroimage, vol. 106, pp. 222–237, 2015.
- 40. C. E. Shannon, "A mathematical theory of communication," The Bell System Technical Journal, vol. 27, no. 3, pp. 379–423, 1948.
- 41. L. Paninski, "Estimation of entropy and mutual information," Neural Computation, vol. 15, no. 6, pp. 1191–1253, 2003.
- 42. D. François, V. Wertz, and M. Verleysen, "The permutation test for feature selection by mutual information," in Proceedings of the 14th European Symposium on Artificial Neural Networks, (Bruges, Belgium), April 2006.
- 43. G. Doquire and M. Verleysen, "Mutual information-based feature selection for multilabel classification," Neurocomputing, vol. 122, pp. 148–155, 2013.
- 44. T. M. Cover, Elements of Information Theory. John Wiley & Sons, 1999.
- 45. A. Bondu, V. Lemaire, and M. Boullé, "Exploration vs. exploitation in active learning: A Bayesian approach," in Proceedings of the 2010 International Joint Conference on Neural Networks, (Barcelona, Spain), July 2010.
- 46. J. G. Moreno-Torres, T. Raeder, R. Alaiz-Rodríguez, N. V. Chawla, and F. Herrera, "A unifying view on dataset shift in classification," Pattern Recognition, vol. 45, no. 1, pp. 521–530, 2012.
- 47. M. Sugiyama, M. Krauledat, and K.-R. Müller, "Covariate shift adaptation by importance weighted cross validation," Journal of Machine Learning Research, vol. 8, no. 5, pp. 985–1005, 2007.
- 48. S. Bickel, M. Brückner, and T. Scheffer, "Discriminative learning under covariate shift," Journal of Machine Learning Research, vol. 10, no. 9, pp. 2137–2155, 2009.
- 49. I. Žliobaitė, M. Pechenizkiy, and J. Gama, "An overview of concept drift applications," Big Data Analysis: New Algorithms for a New Society, vol. 16, pp. 91–114, 2016.
- 50. K. Zhang, A. T. Bui, and D. W. Apley, "Concept drift monitoring and diagnostics of supervised learning models via score vectors," Technometrics, vol. 65, no. 2, pp. 137–149, 2023.
- 51. N. Cebron and M. R. Berthold, "Active learning for object classification: From exploration to exploitation," Data Mining and Knowledge Discovery, vol. 18, pp. 283–299, 2009.
- 52. U. J. Islam, K. Paynabar, G. Runger, and A. S. Iquebal, "Dynamic exploration–exploitation trade-off in active learning regression with Bayesian hierarchical modeling," IISE Transactions, vol. 57, no. 4, pp. 393–407, 2025.
- 53. V. R. Joseph, "Space-filling designs for computer experiments: A review," Quality Engineering, vol. 28, no. 1, pp. 28–35, 2016.
- 54. K. Chai, "Generalization errors and learning curves for regression with multi-task Gaussian processes," in Proceedings of the 23rd Advances in Neural Information Processing Systems, (Vancouver, Canada), December 2009.
- 55. S. P. Strong, R. Koberle, R. R. D. R. Van Steveninck, and W. Bialek, "Entropy and information in neural spike trains," Physical Review Letters, vol. 80, p. 197, 1998.
- 56. C. McDiarmid, "On the method of bounded differences," Surveys in Combinatorics, vol. 141, no. 1, pp. 148–188, 1989.