# Diffusion Causal Models for Counterfactual Estimation
Pedro Sanchez (PEDRO.SANCHEZ@ED.AC.UK)
Sotirios A. Tsaftaris
The University of Edinburgh

Editors: Bernhard Schölkopf, Caroline Uhler and Kun Zhang
Figure 1: Counterfactuals on ImageNet 256×256 generated by Diff-SCM. From left to right: a random image sampled from the data distribution and its counterfactuals do(class), corresponding to 'how should the image change in order to be classified as another class?'.
## Abstract
We consider the task of counterfactual estimation from observational imaging data given a known causal structure. In particular, quantifying the causal effect of interventions for high-dimensional data with neural networks remains an open challenge. Herein we propose Diff-SCM, a deep structural causal model that builds on recent advances in generative energy-based models. In our setting, inference is performed by iteratively sampling gradients of the marginal and conditional distributions entailed by the causal model. Counterfactual estimation is achieved by first inferring the latent variables with deterministic forward diffusion, then intervening on the reverse diffusion process using the gradients of an anti-causal predictor w.r.t. the input. Furthermore, we propose a metric for evaluating the generated counterfactuals. We find that Diff-SCM produces more realistic and minimal counterfactuals than baselines on MNIST data, and can also be applied to ImageNet data. Code is available at https://github.com/vios-s/Diff-SCM .
## 1. Introduction
The notion of applying interventions in learned systems has been gaining significant attention in causal representation learning (Schölkopf et al., 2021). In causal inference, relationships between variables are directed: an intervention on the cause will change the effect, but not the other way around. This notion goes beyond learning conditional distributions p(x^(k) | x^(j)) from the data alone, as in the classical statistical learning framework (Vapnik, 1999). Building causal models implies capturing into a model the underlying physical mechanism that generated the data (Pearl, 2009). As a result, one should be able to quantify the causal effect of a given action. In particular, when an intervention is applied for a given instance, the model should be able to generate hypothetical scenarios. These are the so-called *counterfactuals*.
Building causal models that quantify the effect of a given action for a given causal structure and available data is referred to as *causal estimation*. However, estimating the effect of interventions for high-dimensional data remains an open problem (Pawlowski et al., 2020; Yang et al., 2021). While machine learning is a powerful tool for learning relationships between high-dimensional variables, most causal estimation methods using neural networks (Johansson et al., 2016; Louizos et al., 2017; Shi et al., 2019; Du et al., 2021) are only applied to semi-synthetic low-dimensional datasets (Hill, 2012; Shimoni et al., 2018). Causal estimation with deep neural networks over high-dimensional variables therefore remains an open goal. We show that we can estimate the effect of interventions by generating counterfactuals on imaging datasets, as illustrated in Fig. 1.
Herein, we leverage recent advances in generative energy-based models (EBMs) (Song et al., 2021b; Ho et al., 2020) to devise approaches for causal estimation. This formulation has two key advantages: (i) the stochasticity of the diffusion process relates to uncertainty-aware causal models; and (ii) the iterative sampling can be naturally extended for applying interventions. Additionally, we propose an algorithm for counterfactual inference and a metric for evaluating the results. In particular, we use neural networks that learn to reverse a diffusion process (Ho et al., 2020) via denoising. These models are trained to approximate the gradient of a log-likelihood of a distribution w.r.t. the input. We also employ neural networks that are learned in the anti-causal direction (Schölkopf et al., 2012; Kilbertus et al., 2018) to sample via the causal mechanisms. We use the gradients of these anti-causal predictors to apply interventions on specific variables during sampling. Counterfactual estimation is made possible by a deterministic version of diffusion models (Song et al., 2021a) which recovers manipulable latent spaces from observations. Finally, the counterfactuals are generated iteratively using Markov chain Monte Carlo (MCMC) algorithms.
In summary, we devise a framework for causal effect estimation with high-dimensional variables based on diffusion models, entitled Diff-SCM. Diff-SCM behaves as a structured generative model from which one can sample from the interventional distribution as well as estimate counterfactuals. Our contributions: (i) We propose a theoretical framework for causal modeling using generative diffusion models and anti-causal predictors (Sec. 3.2). (ii) We investigate how anti-causal predictors can be used for applying interventions in the causal direction (Sec. 3.3). (iii) We propose an algorithm for counterfactual estimation using Diff-SCM (Sec. 3.4). (iv) We propose a metric, termed *counterfactual latent divergence* (CLD), for evaluating the minimality of the generated counterfactuals (Sec. 5.2), and use it to compare our method with the selected baselines and to perform hyperparameter search (Sec. 5.3).
## 2. Background
## 2.1. Generative Energy-Based Models
A family of generative models based on diffusion processes (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021b) has recently gained attention, even achieving state-of-the-art image-generation quality (Dhariwal and Nichol, 2021).
In particular, Denoising Diffusion Probabilistic Models (DDPMs) (Ho et al., 2020) consist of learning to denoise images that were corrupted with Gaussian noise at different scales. DDPMs are defined in terms of a forward Markovian diffusion process. This process gradually adds Gaussian noise, with a time-dependent variance β_t ∈ [0, 1], to a data point x_0 ∼ p_data(x). Thus, the latent variable x_t, with t ∈ [0, T], corresponds to a version of x_0 perturbed by Gaussian noise following p(x_t | x_0) = N(x_t; √(α_t) x_0, (1 − α_t) I), where α_t := ∏_{j=0}^{t} (1 − β_j) and I is the identity matrix. As such, p(x_t) = ∫ p_data(x) p(x_t | x) dx approximates the data distribution p(x_0) ≈ p_data at time t = 0 and a zero-centered Gaussian distribution at time t = T. Generative modelling is achieved by learning to reverse this process using a neural network ε_θ trained to denoise images at the noise scales β_t. The denoising model effectively learns the gradient of a log-likelihood w.r.t. the observed variable, ∇_x log p(x) (Hyvärinen, 2005).
Training. With sufficient data and model capacity, the following training procedure ensures that ε_θ recovers ∇_x log p_t(x) when trained to approximate ∇_{x_t} log p(x_t | x_0). The training procedure can be formalised as

$$\theta^{*} = \arg\min_{\theta}\; \mathbb{E}_{x_0 \sim p_{\text{data}},\, \epsilon \sim \mathcal{N}(0, I),\, t \sim \mathcal{U}(0, T)} \left\| \epsilon - \epsilon_{\theta}\!\left( \sqrt{\alpha_t}\, x_0 + \sqrt{1 - \alpha_t}\, \epsilon,\; t \right) \right\|_{2}^{2}. \quad (1)$$
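As a concrete sketch, the noising distribution and the Monte-Carlo objective of Eq. 1 take only a few lines of NumPy. The linear β_t schedule and the zero-output placeholder model are illustrative assumptions, not the paper's trained network ε_θ:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # hypothetical beta_t schedule
alphas = np.cumprod(1.0 - betas)     # alpha_t := prod_{j<=t} (1 - beta_j)

def q_sample(x0, t, eps):
    """Sample x_t ~ p(x_t | x_0) = N(sqrt(alpha_t) x_0, (1 - alpha_t) I)."""
    return np.sqrt(alphas[t]) * x0 + np.sqrt(1.0 - alphas[t]) * eps

def training_loss(eps_model, x0):
    """One Monte-Carlo term of the denoising objective in Eq. 1."""
    t = rng.integers(0, T)
    eps = rng.standard_normal(x0.shape)
    x_t = q_sample(x0, t, eps)
    return np.mean((eps_model(x_t, t) - eps) ** 2)

x0 = rng.standard_normal(16)
```

A model that always predicts zero noise yields a loss close to E‖ε‖² per dimension; the trained ε_θ drives this towards zero.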
Inference. Once the model ε_θ is learned using Eq. 1, generating samples consists in starting from x_T ∼ N(0, I) and iteratively sampling from the reverse Markov chain following:

$$x_{t-1} = \frac{1}{\sqrt{1 - \beta_t}} \left( x_t - \frac{\beta_t}{\sqrt{1 - \alpha_t}}\, \epsilon_{\theta}(x_t, t) \right) + \sqrt{\beta_t}\, z, \quad z \sim \mathcal{N}(0, I). \quad (2)$$
We note that, in the DDPM setting, z is re-sampled at each iteration. Diffusion models are Markovian and stochastic by nature. As such, they can be defined as a stochastic differential equation (SDE) (Song et al., 2021b). We adopt the time-dependent notation from Song et al. (2021b) as it will be useful for the connection with causal models in Sec. 3.2.
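The ancestral sampling loop of Eq. 2 can be sketched as follows; `eps_model` is a placeholder for the trained network and the β_t schedule is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
betas = np.linspace(1e-4, 0.02, T)
alphas = np.cumprod(1.0 - betas)

def ddpm_step(x_t, t, eps_hat, z):
    """One reverse-Markov update (Eq. 2); z is re-sampled every iteration."""
    mean = (x_t - betas[t] / np.sqrt(1.0 - alphas[t]) * eps_hat) / np.sqrt(1.0 - betas[t])
    return mean + np.sqrt(betas[t]) * z

def sample(eps_model, shape):
    x = rng.standard_normal(shape)                       # x_T ~ N(0, I)
    for t in range(T - 1, -1, -1):
        z = rng.standard_normal(shape) if t > 0 else np.zeros(shape)
        x = ddpm_step(x, t, eps_model(x, t), z)
    return x

x = sample(lambda x, t: np.zeros_like(x), (8,))
```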
## 2.2. Causal Models
Counterfactuals can be understood from a formal perspective using the causal inference formalism (Pearl, 2009; Peters et al., 2017; Schölkopf et al., 2021). Structural Causal Models (SCMs) G := (S, p(U)) consist of a collection S = (f^(1), f^(2), …, f^(K)) of structural assignments (so-called *mechanisms*), defined as

$$x^{(k)} := f^{(k)}\big(\mathrm{pa}^{(k)}, u^{(k)}\big), \quad k = 1, \dots, K, \quad (3)$$

where X = {x^(1), x^(2), …, x^(K)} are the known endogenous random variables, pa^(k) is the set of parents of x^(k) (its direct causes) and U = {u^(1), u^(2), …, u^(K)} are the exogenous variables. The distribution p(U) of the exogenous variables represents the uncertainty associated with variables that were not taken into account by the causal model. Moreover, the variables in U are mutually independent, so their joint distribution factorises as

$$p(U) = \prod_{k=1}^{K} p\big(u^{(k)}\big). \quad (4)$$
These structural equations can be represented graphically as a directed acyclic graph: vertices are the endogenous variables and edges represent (directional) causal relationships between them. In particular, there is a joint distribution p_G(X) = ∏_{k=1}^{K} p(x^(k) | pa^(k)) which is Markov with respect to G. In other words, the SCM G represents a joint distribution over the endogenous variables. A graphical example of an SCM is depicted on the left part of Fig. 2. Finally, SCMs should comply with what is known as Pearl's Causal Hierarchy (see Appendix B for more details).
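To make the factorisation concrete, a minimal SCM with the graph of Fig. 2 (x^(1) → x^(3) ← x^(2)) can be sampled by executing its structural assignments in topological order. The linear mechanisms and their coefficients below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scm(n):
    """Ancestral sampling of a 3-node SCM: x1 -> x3 <- x2."""
    # Independent exogenous noise, p(U) = prod_k p(u_k) as in Eq. 4.
    u1, u2, u3 = (rng.standard_normal(n) for _ in range(3))
    x1 = u1                                # f1: no parents
    x2 = u2                                # f2: no parents
    x3 = 0.8 * x1 - 0.5 * x2 + 0.3 * u3    # f3(pa_3, u_3), coefficients hypothetical
    return x1, x2, x3

x1, x2, x3 = sample_scm(50_000)
```

The root variables are (empirically) uncorrelated, while each is correlated with its common effect x^(3), as the graph dictates.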
## 3. Causal Modeling with Diffusion Processes
## 3.1. Problem Statement
In this work, we build a causal model capable of estimating counterfactuals of high-dimensional variables. We will base our work on three assumptions: (i) The SCM is known and the intervention is identifiable. (ii) The variables over which the counterfactuals will be estimated need to contain enough information to recover their causes; i.e. an anti-causal predictor can be trained. (iii) All endogenous variables in the training set are annotated.
Notation. We use x^(k)_t to denote the k-th endogenous random variable in a causal graph G at diffusion time t, and x^(k)_{t,i} for a sample i ∈ {F, CF} (F and CF denoting factual and counterfactual, respectively) from x^(k)_t. Whenever t is omitted, it should be considered zero, i.e. the sample is not corrupted with Gaussian noise. We write an^(k) for the ancestors, with pa^(k) ⊂ an^(k), and de^(k) for the descendants of x^(k) in G.
## 3.2. Diff-SCM: Unifying Diffusion Processes and Causal Models
Figure 2: Illustration of a diffusion process as a weakening of causal relationships. Left: example of an SCM with endogenous variables x^(k) and respective exogenous variables u^(k). Right: the diffusion process weakens the relationship between endogenous variables until they become completely independent at t = T. Arrows with solid lines indicate the causal relationship between variables and its direction, while the thickness of an arrow indicates the strength of the relation. Note that time t is a fiction used as a reference for the diffusion process and is not a causal variable.
SCMs have been associated with ordinary (Mooij et al., 2013; Rubenstein et al., 2018) and stochastic (Sokol and Hansen, 2014; Bongers and Mooij, 2018) differential equations as well as other types of dynamical systems (Blom et al., 2020). In these cases, differential equations are useful for modeling time-dependent problems such as chemical kinetics or mass-spring systems. From the energy-based models perspective, Song et al. (2021b) unify denoising diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020) and denoising score models (Song and Ermon, 2019) into a framework based on SDEs. In Song et al. (2021b), SDEs are used for formalising a diffusion process in a continuous manner where a model is learned to reverse the SDE in order to generate images.
Here, we unify the SDE framework with causal models. Diff-SCM models the dynamics of causal variables as an Itô process x^(k)_t, ∀ t ∈ [0, T] (Øksendal, 2003; Särkkä and Solin, 2019), going from an observed endogenous variable x^(k)_0 = x^(k) to its respective exogenous noise x^(k)_T = u^(k) and back. In other words, we formulate the forward diffusion as a gradual weakening of the causal relations between the variables of an SCM, as illustrated in Fig. 2.
The diffusion forces the exogenous noise u^(j) corresponding to a variable x^(j) of interest to be independent of the other u^(i), ∀ i ≠ j, following the constraints of Eq. 4. The Brownian motion (diffusion) leads to a Gaussian distribution, which can be seen as a prior. Analogously, the original joint distribution entailed by the SCM, p_G(X), diffuses to independent Gaussian distributions equivalent to p(U). As such, the time-dependent joint distribution p(X_t), ∀ t ∈ [0, T], has as its bounds p(X_T) = p(U) and p(X_0) = p_G(X). Note that p(X_t) refers to a time-dependent distribution over all causal variables x^(k).
We follow Song et al. (2021b) in defining the diffusion process from Sec. 2.1 in terms of an SDE. Since SDEs are stochastic processes, their solution follows a certain probability distribution instead of a deterministic value. By constraining this distribution to be the same as the distribution p G ( X ) entailed by an SCM G , we can define a deep structural causal model (DSCM) as a set of SDEs (one for each node k ):
$$dx^{(k)} = -\frac{1}{2} \beta_t\, x^{(k)}\, dt + \sqrt{\beta_t}\, dw, \quad \forall k \in [1, K]. \quad (5)$$
Here, w denotes the Wiener process (or Brownian motion). The first part of the SDE, the term −(1/2) β_t x^(k), is known as the drift function (Särkkä and Solin, 2019)¹.
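As a quick numerical sanity check, under an assumed linear β_t schedule, simulating Eq. 5 with Euler-Maruyama steps drives any observation towards the standard Gaussian prior p(u^(k)), as claimed above:

```python
import numpy as np

rng = np.random.default_rng(0)

T, n = 1000, 20_000
betas = np.linspace(1e-4, 0.02, T)   # hypothetical beta_t schedule
dt = 1.0                             # one discrete step per unit time

x0 = 3.0                             # a far-from-zero "observation"
x = np.full(n, x0)
for t in range(T):
    # Euler-Maruyama discretisation of dx = -1/2 beta_t x dt + sqrt(beta_t) dw
    dw = rng.standard_normal(n) * np.sqrt(dt)
    x = x - 0.5 * betas[t] * x * dt + np.sqrt(betas[t]) * dw

alpha_T = np.prod(1.0 - betas)       # alpha_T ~ 0, so p(x_T) ~ N(0, I)
```

By t = T the ensemble mean has decayed by a factor of roughly √α_T and the variance has relaxed to 1, i.e. the causal information in x_0 has been diffused away.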
The generative process is the solution of the reverse-time SDE in Eq. 6 backwards in time. It proceeds by iteratively updating the exogenous noise x^(k)_T = u^(k) with the gradient of the data distribution w.r.t. the input variable, ∇_{x^(k)_t} log p(x^(k)_t), until it becomes x^(k)_0 = x^(k):

$$dx^{(k)} = \left[ -\frac{1}{2} \beta_t\, x^{(k)}_t - \beta_t\, \nabla_{x^{(k)}_t} \log p\big(x^{(k)}_t\big) \right] dt + \sqrt{\beta_t}\, d\bar{w}, \quad (6)$$

where \bar{w} denotes a reverse-time Wiener process.
The reverse SDE can, therefore, be considered as the process of strengthening causal relations between variables. More importantly, the iterative fashion of the generative process (reverse SDE) is ideal in a causal framework due to the flexibility of applying interventions. We refer the reader to Song et al. (2021b) for a detailed description and proofs of SDE formulation for score-based diffusion models.
## 3.3. How to Apply Interventions with Anti-Causal Predictors?
An interesting result of Eq. 6 is that one only needs the gradients of the distribution entailed by the SCM, p_G, for sampling. This allows learning the required conditional distributions in the anti-causal direction while applying interventions with the causal mechanism, which can be useful when anti-causal learning is more straightforward (Schölkopf et al., 2012). In these cases, one would train classifiers in the anti-causal direction for each edge, and diffusion models for each node (over which one wants to
1. The drift function can potentially be used to define temporal relations between variables as in Rubenstein et al. (2018) and Blom et al. (2020).
measure the effect of interventions) in the graph. Then, one might use the gradients of the classifiers and diffusion models to propagate the intervention in the causal direction over the nodes. Following this idea, proposition 1 arises as a result of Eq. 6.
Proposition 1 (Interventions as anti-causal gradient updates) Consider the SCM G and a variable x^(j) ∈ an^(k). The effect observed on x^(k) caused by an intervention on x^(j), p_G(x^(k) | do(x^(j) = x^(j))), is obtained by solving a reverse-diffusion process for x^(k)_t. Since the sampling process must take into account the distribution entailed by G, it is guided by the gradient of an anti-causal predictor w.r.t. the effect when the cause is assigned a specific value:

$$\nabla_{x^{(k)}_t} \log p_{G}\big(x^{(k)}_t \mid do(x^{(j)})\big) = \nabla_{x^{(k)}_t} \log p\big(x^{(k)}_t\big) + \nabla_{x^{(k)}_t} \log p\big(x^{(j)} \mid x^{(k)}_t\big). \quad (7)$$
Proposition 1 respects the principle of independent causal mechanisms (ICM)² (Peters et al., 2017; Schölkopf et al., 2012), which implies independence between the cause distribution and the mechanism producing the effect distribution. As shown in Eq. 7, sampling with the causal mechanism does not require the distribution of the cause p(x^(j)) (Schölkopf et al., 2021).
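The decomposition in Eq. 7 can be checked in closed form on a toy linear-Gaussian two-node SCM (cause x^(j), effect x^(k); the coefficients are hypothetical). Since x^(j) is a root node, intervening on it equals conditioning, and the score of the intervened distribution splits exactly into the marginal score plus the anti-causal gradient:

```python
import numpy as np

a, sig = 0.7, 0.5           # toy SCM: x_k = a * x_j + sig * u_k, with x_j ~ N(0, 1)
v = 1.3                     # intervened value do(x_j = v)
xk = np.linspace(-3, 3, 7)  # points at which to compare gradients

# Left-hand side: score of p(x_k | do(x_j = v)) = N(a v, sig^2).
lhs = -(xk - a * v) / sig**2

# Right-hand side: marginal score plus the anti-causal term.
marg_var = a**2 + sig**2              # p(x_k) = N(0, a^2 + sig^2)
score_marginal = -xk / marg_var
post_mean = a * xk / marg_var         # p(x_j | x_k) is Gaussian (anti-causal direction)
post_var = sig**2 / marg_var
score_anticausal = (v - post_mean) * (a / marg_var) / post_var
rhs = score_marginal + score_anticausal
```

The two sides agree at every point, and neither side ever evaluates the cause distribution p(x^(j)), matching the ICM discussion above.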
## 3.4. Counterfactual Estimation with Diff-SCM
A powerful consequence of building causal models, following Pearl's Causal Hierarchy, is the estimation of counterfactuals. Counterfactuals are hypothetical scenarios for a given factual observation under a local intervention. Estimation of counterfactuals differs from sampling from an interventional distribution because the changes are applied to a given observation. As detailed in Pearl (2016, Sec. 4.2.4), counterfactual estimation requires three steps: (i) *abduction* of exogenous noise - forward diffusion with the DDIM algorithm (Song et al., 2021a) following Alg. 3 in Appendix D; (ii) *action* - graph mutilation by erasing the edges between the intervened variable and its parents; (iii) *prediction* - reverse diffusion controlled by the gradients of an anti-causal classifier.
Here, we are interested in estimating x^(k)_CF based on the observed (factual) x^(k)_F for the random variable x^(k) after assigning a value x^(j)_CF to x^(j) ∈ an^(k), i.e. applying an intervention do(x^(j) = x^(j)_CF). This is equivalent to sampling from the counterfactual distribution p_G(x^(k) | do(x^(j) = x^(j)_CF); x^(k) = x^(k)_F). As a simplifying assumption for Alg. 1, we consider a setting where only x^(j) and x^(k) are present in the graph. Considering only two variables removes the need for the graph mutilation explained above; it is also the setting used in our experiments. We leave an extension to more complex SCMs for future work. We detail in Alg. 1 how abduction of exogenous noise and prediction are done.
Abduction of Exogenous Noise. The first step for estimating a counterfactual is the abduction of exogenous noise. Note from Eq. 3 that the value of a causal variable depends both on its parents and on its respective exogenous noise. From a deep learning perspective (Pawlowski et al., 2020), one might consider the exogenous u^(k) an inferred latent variable. The prior p(u^(k)) of u^(k) in Diff-SCM is a Gaussian, as detailed in Sec. 3.2.
With diffusion models, abduction can be performed using a derivation by Song et al. (2021a) and Song et al. (2021b). Both works make a connection between diffusion models and neural ODEs (Chen et al., 2018). They show that one can obtain a deterministic inference procedure while training
2. The principle states that 'The causal generative process of a system's variables is composed of autonomous modules that do not inform or influence each other.'
with a diffusion process, which is stochastic by nature. This formulation makes the process invertible: a latent code u^(k) is recovered from an observation by running the forward diffusion with the learned model. The algorithm for recovering u^(k) is highlighted as the first box in Alg. 1.
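A minimal sketch of this invertibility, assuming a constant placeholder in place of the learned ε_θ (for which the deterministic DDIM map is exactly invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas = np.cumprod(1.0 - betas)     # alpha_t as defined in Sec. 2.1

def ddim_step(x, a_from, a_to, eps_hat):
    """Deterministic DDIM map between noise levels a_from -> a_to."""
    x0_pred = (x - np.sqrt(1.0 - a_from) * eps_hat) / np.sqrt(a_from)
    return np.sqrt(a_to) * x0_pred + np.sqrt(1.0 - a_to) * eps_hat

def abduct(x0, eps_model):
    """Forward diffusion with the model: observation x_0 -> latent u."""
    x = x0
    for t in range(T - 1):
        x = ddim_step(x, alphas[t], alphas[t + 1], eps_model(x, t))
    return x

def generate(u, eps_model):
    """Reverse diffusion: latent u -> x_0."""
    x = u
    for t in range(T - 1, 0, -1):
        x = ddim_step(x, alphas[t], alphas[t - 1], eps_model(x, t))
    return x

x0 = rng.standard_normal(8)
eps0 = lambda x, t: np.zeros_like(x)  # placeholder "model"
u = abduct(x0, eps0)
x0_rec = generate(u, eps0)            # round trip recovers x0 exactly
```

With a trained network the round trip is approximate rather than exact, but the same forward/reverse structure applies.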
Prediction under Intervention. Once the abduction of exogenous noise u ( k ) is done for a given factual observation x ( k ) F , counterfactual estimation consists in applying an intervention in the reverse diffusion process with the gradients of an anti-causal predictor. In particular, we use the formulation of guided DDIM from Dhariwal and Nichol (2021) which forms the second part of Alg. 1.
Controlling the Intervention. There are three main factors contributing to the counterfactual estimation in Alg. 1: (i) the inferred u^(k) keeps information about the factual observation; (ii) ∇_{x^(k)_t} log p_φ(x^(j)_CF | x^(k)_t) guides the intervention towards the desired counterfactual class; and (iii) ε_θ(x^(k)_t, t) forces the estimate to belong to the data distribution. We follow Dhariwal and Nichol (2021) in adding a hyperparameter s which controls the scale of ∇_{x^(k)_t} log p_φ(x^(j)_CF | x^(k)_t). High values of s might result in counterfactuals that are too different from the factual data. We show this empirically and discuss the effects of this hyperparameter in Sec. 5.3.
Algorithm 1 Inference of the counterfactual for a variable x^(k) from an intervention on x^(j) ∈ an^(k)

Models: trained diffusion model ε_θ and anti-causal predictor p_φ(x^(j) | x^(k)_t)

Input: factual variable x^(k)_{0,F}, target intervention x^(j)_{0,CF}, scale s

Output: counterfactual x^(k)_{0,CF}

Abduction of Exogenous Noise - Recovering u^(k) from x^(k)_{0,F}
$$\begin{aligned}
&\textbf{for } t = 0 \to T-1 \textbf{ do} \\
&\quad x^{(k)}_{t+1} = \sqrt{\alpha_{t+1}} \left( \frac{ x^{(k)}_{t} - \sqrt{1 - \alpha_{t}}\, \epsilon_{\theta}\big(x^{(k)}_{t}, t\big) }{ \sqrt{\alpha_{t}} } \right) + \sqrt{1 - \alpha_{t+1}}\, \epsilon_{\theta}\big(x^{(k)}_{t}, t\big) \\
&\textbf{end} \\
&u^{(k)} = x^{(k)}_{T}
\end{aligned}$$

## Generation under Intervention

$$\begin{aligned}
&\textbf{for } t = T \to 1 \textbf{ do} \\
&\quad \epsilon \leftarrow \epsilon_{\theta}\big(x^{(k)}_{t}, t\big) - s\, \sqrt{1 - \alpha_{t}}\; \nabla_{x^{(k)}_{t}} \log p_{\phi}\big(x^{(j)}_{CF} \mid x^{(k)}_{t}\big) \\
&\quad x^{(k)}_{t-1} = \sqrt{\alpha_{t-1}} \left( \frac{ x^{(k)}_{t} - \sqrt{1 - \alpha_{t}}\, \epsilon }{ \sqrt{\alpha_{t}} } \right) + \sqrt{1 - \alpha_{t-1}}\, \epsilon \\
&\textbf{end} \\
&x^{(k)}_{0,CF} = x^{(k)}_{0}
\end{aligned}$$
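The two boxes of Alg. 1 can be sketched end-to-end in NumPy. The zero-output diffusion "model" and the Gaussian "anti-causal classifier" below are stand-ins chosen so the behaviour is easy to verify: with s = 0 the factual is reconstructed exactly, and with s > 0 the output moves towards the target class mean:

```python
import numpy as np

T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas = np.cumprod(1.0 - betas)

# Hypothetical stand-ins for the trained networks:
eps_model = lambda x, t: np.zeros_like(x)   # diffusion model predicting no noise
mu_cf = 2.0                                 # "class mean" of the target intervention
grad_log_clf = lambda x, t: mu_cf - x       # grad of log p_phi ~ -(x - mu_cf)^2 / 2

def ddim_step(x, a_from, a_to, eps_hat):
    x0_pred = (x - np.sqrt(1.0 - a_from) * eps_hat) / np.sqrt(a_from)
    return np.sqrt(a_to) * x0_pred + np.sqrt(1.0 - a_to) * eps_hat

def counterfactual(x0_f, s):
    # (i) Abduction: deterministic forward diffusion x_F -> u.
    x = x0_f
    for t in range(T - 1):
        x = ddim_step(x, alphas[t], alphas[t + 1], eps_model(x, t))
    # (iii) Prediction: reverse diffusion guided by the anti-causal gradient.
    for t in range(T - 1, 0, -1):
        eps = eps_model(x, t) - s * np.sqrt(1.0 - alphas[t]) * grad_log_clf(x, t)
        x = ddim_step(x, alphas[t], alphas[t - 1], eps)
    return x

x_f = np.array([0.0])                       # factual "observation"
x_cf = counterfactual(x_f, s=1.0)
```

Increasing s strengthens the pull towards mu_cf, mirroring the trade-off between intervention strength and faithfulness to the factual discussed in Sec. 5.3.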
## 4. Related Work
Generative EBMs. Our generative framework is inspired by the energy-based models literature (Ho et al., 2020; Song et al., 2021b; Du and Mordatch, 2019; Grathwohl et al., 2020). In particular, we leverage the theory around denoising diffusion models (Sohl-Dickstein et al., 2015; Ho et al.,
2020; Nichol and Dhariwal, 2021). We take advantage of a non-Markovian formulation, DDIM (Song et al., 2021a), which allows faster sampling and recovery of latent spaces from observations. Our theory connecting diffusion models and SDEs follows Song et al. (2021b), but from a different perspective. Even though Du et al. (2020) are not concerned with causal modeling, they also use the idea of guiding generation with the gradient of a conditional energy model. Recently, Sinha et al. (2021) proposed a version of diffusion models for manipulable generation based on contrastive learning. Finally, Dhariwal and Nichol (2021) derive a conditional sampling process for DDIM that is used in this paper, as detailed in Sec. 3.3. Here, we re-interpret their generation algorithm from a causal perspective and add deterministic latent inference for counterfactual estimation. The main, and key, difference is that we add the *abduction of exogenous noise*. Without this abduction, we cannot ensure that the resulting image will match other aspects of the original image whilst altering only the intended aspect (i.e. where we want to intervene). With it, we can sample from a counterfactual distribution instead of the interventional distribution.
Counterfactuals. Designing causal models with deep learning components has enabled causal inference with high-dimensional variables (Pawlowski et al., 2020; Shen et al., 2020; Dash et al., 2020; Xia et al., 2021; Zečević et al., 2021). Given a factual observation, counterfactuals are obtained by measuring the effect of an intervention on one of the ancestral attributes. They have been used in a range of applications, such as (i) explaining predictions (Verma et al., 2020; Goyal et al., 2019; Looveren and Klaise, 2021; Hvilshøj et al., 2021); (ii) defining fairness (Kusner et al., 2017); (iii) mitigating data biases (Denton et al., 2019); (iv) improving reinforcement learning (Lu et al., 2020); (v) predicting accuracy (Kaushik et al., 2020); and (vi) increasing robustness against spurious correlations (Sauer and Geiger, 2021). Most similar to our work, Schut et al. (2021) estimate counterfactuals via iterative updates using the gradients of a classifier. However, their method is based on adversarial updates computed via epistemic uncertainty, not diffusion processes.
## 5. Experiments
Ground-truth counterfactuals are, by definition, impossible to acquire: counterfactuals are hypothetical predictions. In an ideal scenario, the SCM of the problem is fully specified, and one would be able to verify whether unrelated causal variables kept their values³. However, a complete causal graph is rarely known in practice. In this section, we (i) present ideas on how to evaluate counterfactuals without access to the complete causal graph or semi-synthetic data; (ii) show with quantitative and qualitative experiments that our method is appropriate for counterfactual estimation; (iii) propose CLD, a metric for quantitative evaluation of counterfactuals; and (iv) use CLD for fine-tuning an important hyperparameter of our framework.
Causal Setup. We consider a causal model G_image with two variables x^(1) ← x^(2), following the example in Sec. 3.3. Here, x^(1) represents an image and x^(2) a class. In practice, the gradient of the marginal distribution of x^(1) is learned with a diffusion model, which we refer to as ε_θ, as in Sec. 2.1. The anti-causal conditional distribution is also learned with a neural network p_φ(x^(2) | x^(1)). Our experiments aim at sampling from the counterfactual distribution p_G(x^(1) | do(x^(2) = x^(2)_CF); x^(1)_F). Extra experiments on sampling from the interventional distribution are in Appendix F.
Implementation. ε_θ is implemented as an encoder-decoder architecture with skip connections, i.e. a U-Net-like network (Ronneberger et al., 2015). For the anti-causal classification task, we use the
3. Remember that interventions only change descendants in a causal graph.
encoder of ε_θ with a pooling layer followed by a linear classifier. Both ε_θ and p_φ(x^(2) | x^(1)) depend on the diffusion time. The diffusion model and anti-causal predictor are trained separately. Implementation details are in Appendix E.
Baselines. We consider Schut et al. (2021) and Looveren and Klaise (2021) because they (i) generate counterfactuals based on classifier decisions; and (ii) evaluate results with metrics tailored to counterfactual estimation on images.
Datasets. Considering the causal model G image described above, we compare our method quantitatively and qualitatively with baselines on MNIST data (Lecun et al., 1998). Furthermore, we show empirically that our approach works with more complex, higher-resolution images from the ImageNet dataset (Deng et al., 2009). We only perform qualitative evaluations on ImageNet since the baseline methods cannot generate counterfactuals for this dataset.
## 5.1. Evaluating Counterfactuals: Realism and Closeness to Data Manifold
Taking into account the causal model G_image, we now employ the strategies for counterfactual estimation from Sec. 3.4. In particular, given an image x^(1)_F ~ x^(1) and a target intervention x^(2)_CF on the class variable, we wish to estimate the counterfactual x^(1)_CF for the image x^(1)_F. We use two metrics proposed by Looveren and Klaise (2021), IM1 and IM2, which measure realism, interpretability and closeness to the data manifold based on the reconstruction loss of autoencoders trained on specific classes. See details in Appendix G.
Experimental Setup. We run Alg. 1 over the test set with randomly sampled target counterfactual classes x^(2)_CF ~ x^(2), with x^(2)_CF ≠ x^(2)_F. For example, we generate counterfactuals of all MNIST classes for a given factual image, as illustrated in Appendix H. We evaluate the realism of Diff-SCM, Schut et al. and Looveren and Klaise using the IM1 and IM2 metrics. Diff-SCM achieves better results (lower is better) on both metrics⁴, as shown in Tab. 1. We show qualitative results on ImageNet in Fig. 1 and on MNIST in Appendix H. A qualitative comparison between methods is depicted in Fig. 3(b).
Table 1: Quantitative comparison between Diff-SCM and baselines. Lower is better for all metrics. Results are presented as mean ± standard deviation over the test set.

| Method              | IM1 ↓       | IM2 ↓       | CLD ↓       |
|---------------------|-------------|-------------|-------------|
| Diff-SCM (ours)     | 0.94 ± 0.02 | 0.04 ± 0.00 | 1.08 ± 0.03 |
| Looveren and Klaise | 1.10 ± 0.03 | 0.05 ± 0.00 | 1.25 ± 0.03 |
| Schut et al.        | 1.05 ± 0.01 | 0.10 ± 0.00 | 1.19 ± 0.01 |
## 5.2. Counterfactual Latent Divergence (CLD)
Since one cannot measure changes in all variables of a real SCM, we leverage the sparse mechanism shift (SMS) hypothesis⁵ (Schölkopf et al., 2021) to justify a minimality property of counterfactuals. In our setting, SMS translates to: an intervention should not change many elements of the observed data. Therefore, an important property of counterfactuals is minimality, or proximity to the factual observation. We propose a new metric, entitled counterfactual latent divergence (CLD) and illustrated in Fig. 3(a), that estimates minimality.

4. We highlight that our setting is slightly different from the baseline works, where the target counterfactual classes were similar to the factual classes, e.g. transforming MNIST digits 2 → [3, 7] or 4 → [1, 9]. Since we sample target classes randomly, the metric values will look lower than in their respective papers.

5. SMS states that 'small distribution changes tend to manifest themselves in a sparse or local way in the causal factorization, that is, they should usually not affect all factors simultaneously.'

## SANCHEZ TSAFTARIS

Figure 3: (a) A t-SNE visualization of the 20-dimensional latent vector of a variational autoencoder (VAE) over all MNIST samples. Each point represents an MNIST image and colors represent the ground-truth label of each sample. CLD's goal is to estimate a relative similarity between the factual data and the counterfactual. The distance between the generated counterfactual do(0) and the factual observation is compared to the distances between the factual observation and all other data points from the factual and counterfactual classes. (b) Qualitative comparison with baseline approaches for counterfactual estimation. Each column represents one method and each row a different intervention on digit class. The train column shows training samples belonging to the target intervention class.
Note that the metrics IM1 and IM2 from Sec. 5.1 do not take minimality into account. In addition, previous work (Wachter et al., 2018; Schut et al., 2021) only used the mean absolute error, or ℓ1 distance, in data space for measuring minimality. However, measuring similarity at pixel level can be challenging: an intervention might change the structure of the image whilst keeping other factors unchanged, in which case a pixel-level comparison is not informative about those other factors of variation.
Latent Similarity. We therefore measure similarity between latent representations. In addition, we want a representation that captures all factors of variation in the input data. In particular, we train a variational autoencoder (VAE) (Kingma and Welling, 2014) to recover probabilistic latent representations that capture all factors of variation in the data. The latents computed with the VAE's encoder E_φ are denoted as µ_i, σ_i = E_φ(x^(1)_i), where the subscript i indexes different samples from x^(1)(t = 0). We use the Kullback-Leibler (KL) divergence to measure distances between latents. The divergence for a given counterfactual estimation and factual observation pair (x^(1)_CF, x^(1)_F) can, therefore, be denoted as

$$div(x^{(1)}_i, x^{(1)}_j) = D_{KL}\left(\mathcal{N}(\mu_i, \sigma_i) \,\|\, \mathcal{N}(\mu_j, \sigma_j)\right).$$
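Assuming the VAE latents are diagonal Gaussians, the KL divergence above has a closed form; a minimal sketch (function name ours):

```python
import numpy as np

# Closed-form KL divergence between two diagonal Gaussians
# N(mu_i, diag(sigma_i^2)) and N(mu_j, diag(sigma_j^2)), as used to
# compare VAE latents of two samples.

def gaussian_kl(mu_i, sigma_i, mu_j, sigma_j):
    mu_i, sigma_i = np.asarray(mu_i, float), np.asarray(sigma_i, float)
    mu_j, sigma_j = np.asarray(mu_j, float), np.asarray(sigma_j, float)
    var_i, var_j = sigma_i ** 2, sigma_j ** 2
    # Sum over latent dimensions of the per-dimension KL terms.
    return 0.5 * np.sum(
        np.log(var_j / var_i) + (var_i + (mu_i - mu_j) ** 2) / var_j - 1.0
    )
```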
Relative Measure. However, absolute similarity measures give limited information. Therefore, we leverage class information to measure minimality whilst making sure that the counterfactual is far enough from the factual class. A relative measure is obtained by estimating the probability that divergence measures between the factual observation and other data points in the dataset (formalized in Eq. 9) are less than or greater than div. In particular, we compare div with the set S_class of divergences between the factual observation x^(1)_F and all data points x^(1) in a dataset D whose class x^(2) equals x^(2)_class, denoted in set-builder notation⁶ as

$$S_{class} = \left\{ div(x^{(1)}_F, x^{(1)}) \;\middle|\; (x^{(1)}, x^{(2)}) \in \mathcal{D},\; x^{(2)} = x^{(2)}_{class} \right\}.$$
The sets S_CF and S_F are obtained by replacing 'class' in S_class with the counterfactual target class and the factual observation class, respectively.
The relative measures are: (i) P(S_CF ≤ div), comparing div with the distances between the factual image and all data points of the counterfactual class; and (ii) P(S_F ≥ div), comparing div with the distances between the factual image and all other data points of the factual class. We aim for counterfactuals with low P(S_CF ≤ div), enforcing minimality, and low P(S_F ≥ div), enforcing larger distances from the factual class.
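The two relative measures can be estimated empirically as the fractions of divergences in S_CF and S_F that fall below and above div, respectively; a sketch (function name ours):

```python
import numpy as np

# Empirical estimates of P(S_CF <= div) and P(S_F >= div), treating the
# divergence sets as samples of the respective distributions.

def relative_measures(div, s_cf, s_f):
    s_cf, s_f = np.asarray(s_cf, float), np.asarray(s_f, float)
    p_cf = np.mean(s_cf <= div)  # fraction of counterfactual-class divergences below div
    p_f = np.mean(s_f >= div)    # fraction of factual-class divergences above div
    return p_cf, p_f
```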
CLD. We highlight the competing nature of the two measures P(S_CF ≤ div) and P(S_F ≥ div) in the counterfactual setting. For example, if the intervention is too minimal, i.e. low P(S_CF ≤ div), the counterfactual will still resemble observations from the factual class, i.e. high P(S_F ≥ div). Therefore, the goal is to find the best balance between the two measures. Finally, we define the counterfactual latent divergence (CLD) metric as the LogSumExp of the two probability measures. The LogSumExp operation acts as a smooth approximation of the maximum function; compared to a simple summation, it also penalizes a relative peak in either measure. We denote CLD as

$$CLD = \log\left( \exp\left(P(S_{CF} \leq div)\right) + \exp\left(P(S_{F} \geq div)\right) \right).$$
Using the same experimental setup as in Sec. 5.1, we show that Diff-SCM also improves counterfactual estimation over the baseline methods when measured with CLD, as reported in Tab. 1.
## 5.3. Tuning the Hyperparameter s with CLD
We now utilize CLD, the proposed metric, for fine-tuning s, the scale hyperparameter of our framework detailed in Sec. 3.4. Incidentally, the model with the hyperparameter achieving the best CLD also outperforms previous methods on the other metrics (see Tab. 1) and produces the best qualitative results (see Fig. 3(b)). This result further validates that our metric is suited for counterfactual evaluation.
6. We use the following set-builder notation: MY SET = { function(input) | input ∈ domain }.
Figure 4: Scale hyperparameter search using CLD (lower is better). The line plot shows the mean and 95% confidence interval. We found that s = 0 . 7 is the best value.
Experimental Setup. We run Alg. 1 while varying the scale hyperparameter s over the interval [0.0, 3.0] on MNIST data, as depicted in Fig. 4. When s = 0, the classifier does not influence the generation; the counterfactuals are therefore reconstructions of the factual data, resulting in a high CLD.
When s = 3 (too high), the diffusion model contributes much less than the classifier; the counterfactuals are driven towards the desired class while ignoring the exogenous noise of the given observation. High values of s correspond to strong interventions which do not satisfy the minimality property, also resulting in a high CLD. The optimum for s is therefore an intermediate value where CLD is minimal. All MNIST experiments were performed with s = 0.7, following this hyperparameter search. See Appendix I for qualitative results.
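The search itself is a one-dimensional grid minimization of mean CLD over s. The sketch below uses a stand-in evaluation function (a bowl with its minimum near s = 0.7, echoing Fig. 4); in practice `mean_cld` would run Alg. 1 over a validation set and score the resulting counterfactuals with CLD.

```python
import numpy as np

# Grid search for the guidance scale s by minimizing mean CLD.
# `mean_cld` is a placeholder for the real (expensive) evaluation.

def mean_cld(s):
    return 1.0 + (s - 0.7) ** 2  # stand-in for running Alg. 1 + scoring

grid = np.arange(0.0, 3.01, 0.1)     # the interval swept in Fig. 4
best_s = min(grid, key=mean_cld)     # s with the lowest mean CLD
```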
## 6. Conclusions
We propose a theoretical framework for causal estimation using generative diffusion models, entitled Diff-SCM. Diff-SCM unifies recent advances in generative energy-based models and structural causal models. Our key idea is to use gradients of the marginal and conditional distributions entailed by an SCM for causal estimation. The main benefit of using only the distributions' gradients is that one can learn an anti-causal mechanism and use its gradients as a causal mechanism for generation. We show empirically how the framework can be applied to a two-variable causal model and leave the extension to more complex causal models to future work.
Furthermore, we present an algorithm for performing interventions and estimating counterfactuals with Diff-SCM. We acknowledge the difficulty of evaluating counterfactuals and propose a metric entitled counterfactual latent divergence (CLD). CLD measures the distance, in a latent space, between the observation and the generated counterfactual by comparison with other distances between samples in the dataset. We use CLD for comparison with baseline methods and for hyperparameter search. Finally, we show that the proposed Diff-SCM achieves better quantitative and qualitative results compared to state-of-the-art methods for counterfactual generation on MNIST.
Limitations and future work. Our empirical setting specifies only two variables; therefore, applying an intervention on x^(2) means changing all variables correlated with it within this dataset. Applying Diff-SCM to more complex causal models would require additional techniques. For instance, consider the SCM depicted in Fig. 2: a classifier naively trained to predict x^(2) (class) from x^(1) (image) would be biased towards the confounder x^(3). Therefore, the gradient of the classifier w.r.t. the image would also be biased, making the intervention do(x^(2)) incorrect. In this case, graph mutilation (removing edges from the parents of the intervened node) would not happen, because the gradients from the classifier would pass information about x^(3). We leave this extension for future work.
## 7. Acknowledgement
We thank Spyridon Thermos, Xiao Liu, Jeremy Voisey, Grzegorz Jacenkow and Alison O'Neil for their input on the manuscript and research support. This work was supported by the University of Edinburgh, the Royal Academy of Engineering and Canon Medical Research Europe via Pedro Sanchez's PhD studentship. This work was partially supported by the Alan Turing Institute under the EPSRC grant EP/N510129/1. We thank Nvidia for donating a TitanX GPU. S.A. Tsaftaris acknowledges the support of Canon Medical and the Royal Academy of Engineering via the Research Chairs and Senior Research Fellowships scheme (grant RCSRF1819\825).
## References
- E Bareinboim, J Correa, D Ibeling, and T Icard. On Pearl's Hierarchy and the Foundations of Causal Inference, 2020.
- Tineke Blom, Stephan Bongers, and Joris M Mooij. Beyond Structural Causal Models: Causal Constraints Models. In Proc. 35th Uncertainty in Artificial Intelligence Conference , pages 585-594, 2020.
- Stephan Bongers and Joris M Mooij. From Random Differential Equations to Structural Causal Models: the stochastic case. arxiv pre-print , 2018.
- Ricky T Q Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural Ordinary Differential Equations. In Advances in Neural Information Processing Systems , 2018.
- Saloni Dash, Vineeth N Balasubramanian, and Amit Sharma. Evaluating and Mitigating Bias in Image Classifiers: A Causal Perspective Using Counterfactuals. arxiv pre-print , 2020.
- Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Proc. of Conference on Computer Vision and Pattern Recognition , pages 248-255. IEEE, 2009.
- Emily Denton, Ben Hutchinson, Margaret Mitchell, Timnit Gebru, and Andrew Zaldivar. Image Counterfactual Sensitivity Analysis for Detecting Unintended Bias. arxiv pre-print , 12 2019.
- Prafulla Dhariwal and Alex Nichol. Diffusion Models Beat GANs on Image Synthesis. In Advances in Neural Information Processing Systems , 2021.
- Xin Du, Lei Sun, Wouter Duivesteijn, Alexander Nikolaev, and Mykola Pechenizkiy. Adversarial balancing-based representation learning for causal effect inference with observational data. Data Mining and Knowledge Discovery , 35(4):1713-1738, 12 2021.
- Yilun Du and Igor Mordatch. Implicit Generation and Generalization in Energy-Based Models. In Advances in Neural Information Processing Systems , 12 2019.
- Yilun Du, Shuang Li, Igor Mordatch, and Google Brain. Compositional Visual Generation with Energy Based Models. In Advances in Neural Information Processing Systems , 2020.
- Yash Goyal, Ziyan Wu, Jan Ernst, Dhruv Batra, Devi Parikh, and Stefan Lee. Counterfactual Visual Explanations. Proc. of 36th International Conference on Machine Learning , pages 4254-4262, 12 2019.
- Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, Kevin Swersky, and Mohammad Norouzi. Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One. In Proc. of International Conference on Learning Representations , 2020.
- Jennifer Hill. Bayesian Nonparametric Modeling for Causal Inference. Journal of Computational and Graphical Statistics , 20(1):217-240, 12 2012.
- Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising Diffusion Probabilistic Models. In Advances on Neural Information Processing Systems , 2020.
- Frederik Hvilshøj, Alexandros Iosifidis, and Ira Assent. ECINN: Efficient Counterfactuals from Invertible Neural Networks. arxiv pre-print , 2021.
- Aapo Hyvärinen. Estimation of Non-Normalized Statistical Models by Score Matching. Journal of Machine Learning Research , 6:695-709, 2005.
- Fredrik D Johansson, Uri Shalit, and David Sontag. Learning Representations for Counterfactual Inference. In Proc. of International Conference on Machine Learning , 2016.
- Divyansh Kaushik, Amrith Setlur, Eduard Hovy, and Zachary C Lipton. Explaining the Efficacy of Counterfactually Augmented Data, 2020.
- N Kilbertus, G Parascandolo, and B Scholkopf. Generalization in anti-causal learning. In NeurIPS Workshop on Critiquing and Correcting Trends in Machine Learning , 2018.
- Diederik P Kingma and Max Welling. Auto-Encoding Variational Bayes. 2nd International Conference on Learning Representations , 2014.
- Matt Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. Counterfactual Fairness. In Advances on Neural Information Processing Systems , 2017.
- Y Lecun, L Bottou, Y Bengio, and P Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE , 86(11):2278-2324, 1998.
- Arnaud Van Looveren and Janis Klaise. Interpretable Counterfactual Explanations Guided by Prototypes. In European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases , 2021.
- Christos Louizos, Uri Shalit, Joris Mooij, David Sontag, Richard Zemel, and Max Welling. Causal Effect Inference with Deep Latent-Variable Models. In Advances on Neural Information Processing Systems , 2017.
- Chaochao Lu, Biwei Huang, Ke Wang, José Miguel Hernández-Lobato, Kun Zhang, and Bernhard Schölkopf. Sample-Efficient Reinforcement Learning via Counterfactual-Based Data Augmentation. arxiv pre-print , 2020.
- Joris M Mooij, Dominik Janzing, and Bernhard Schölkopf. From Ordinary Differential Equations to Structural Causal Models: The Deterministic Case. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence , pages 440-448, 2013.
- Alex Nichol and Prafulla Dhariwal. Improved Denoising Diffusion Probabilistic Models. arxiv pre-print , 12 2021.
- Bernt Øksendal. Stochastic Differential Equations: An Introduction with Applications . Springer, fifth edition edition, 2003. ISBN 978-3-642-14394-6.
- Nick Pawlowski, Daniel C Castro, and Ben Glocker. Deep Structural Causal Models for Tractable Counterfactual Inference. In Advances in Neural Information Processing Systems , 2020.
- Judea Pearl. Causality . Cambridge University Press, 2009. doi: 10.1017/CBO9780511803161.
- Judea Pearl. Causal inference in statistics : a primer . John Wiley & Sons Ltd, Chichester, West Sussex, UK, 2016. ISBN 978-1-119-18684-7.
- Judea Pearl and Dana Mackenzie. The Book of Why: The New Science of Cause and Effect . Basic books, 2018.
- Jonas Peters, Dominik Janzing, and Bernhard Schölkopf. Elements of causal inference . MIT Press, 2017.
- O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proc. of Medical Image Computing and Computer-Assisted Intervention , volume 9351, pages 234-241. Springer, 2015.
- Paul K Rubenstein, Stephan Bongers, Bernhard Schölkopf, and Joris M Mooij. From Deterministic ODEs to Dynamic Structural Causal Models. In Proceedings of the 34th Annual Conference on Uncertainty in Artificial Intelligence (UAI-18) , 2018.
- Simo Särkkä and Arno Solin. Applied Stochastic Differential Equations , volume 10. Cambridge University Press, 2019.
- Axel Sauer and Andreas Geiger. Counterfactual Generative Networks. In Proc. of International Conference on Learning Representations , 12 2021.
- Bernhard Schölkopf, Dominik Janzing, Jonas Peters, Eleni Sgouritsa, Kun Zhang, and Joris Mooij. On Causal and Anticausal Learning. In Proc. of the International Conference on Machine Learning , 2012.
- Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, and Yoshua Bengio. Toward Causal Representation Learning. Proceedings of the IEEE , 2021.
- Lisa Schut, Oscar Key, Rory McGrath, Luca Costabello, Bogdan Sacaleanu, Medb Corcoran, and Yarin Gal. Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties. In Proc. of The 24th International Conference on Artificial Intelligence and Statistics , pages 1756-1764, 2021.
- Xinwei Shen, Furui Liu, Hanze Dong, Qing Lian, Zhitang Chen, and Tong Zhang. Disentangled Generative Causal Representation Learning. arxiv pre-print , 2020.
- Claudia Shi, David M Blei, and Victor Veitch. Adapting Neural Networks for the Estimation of Treatment Effects. In Proc. of Neural Information Processing Systems , 2019.
- Yishai Shimoni, Chen Yanover, Ehud Karavani, and Yaara Goldschmidt. Benchmarking Framework for Performance-Evaluation of Causal Inference Analysis. arxiv pre-print , 2018.
- Abhishek Sinha, Jiaming Song, Chenlin Meng, and Stefano Ermon. D2C: Diffusion-Decoding Models for Few-Shot Conditional Generation. arXiv pre-print , 2021.
- Jascha Sohl-Dickstein, Eric A Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep Unsupervised Learning using Nonequilibrium Thermodynamics. Proc. of 32nd International Conference on Machine Learning , 3:2246-2255, 12 2015.
- Alexander Sokol and Niels Richard Hansen. Causal interpretation of stochastic differential equations. Electronic Journal of Probability , 19:1-24, 2014.
- Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising Diffusion Implicit Models. In Proc. of International Conference on Learning Representations , 2021a.
- Yang Song and Stefano Ermon. Generative Modeling by Estimating Gradients of the Data Distribution. Advances in Neural Information Processing Systems , 32, 2019.
- Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-Based Generative Modeling Through Stochastic Differential Equations. In ICLR , 2021b.
- Vladimir N Vapnik. An overview of statistical learning theory. IEEE Transactions on Neural Networks , 10(5):988-999, 1999.
- Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention Is All You Need. In Advances in neural information processing systems , pages 5998-6008, 2017.
- Sahil Verma, John Dickerson, and Keegan Hines. Counterfactual Explanations for Machine Learning: A Review. arxiv pre-print , 12 2020.
- Sandra Wachter, Brent Mittelstadt, and Chris Russell. Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. Harvard Journal of Law & Technology , 31 (2), 2018.
- Kevin Xia, Kai-Zhan Lee, Yoshua Bengio, and Elias Bareinboim. The Causal-Neural Connection: Expressiveness, Learnability, and Inference. In Advances in Neural Information Processing Systems , 2021.
- Mengyue Yang, Furui Liu, Zhitang Chen, Xinwei Shen, Jianye Hao, and Jun Wang. CausalVAE: Disentangled Representation Learning via Neural Structural Causal Models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 9593-9602, 2021.
- Matej Zečević, Devendra Singh Dhami, Petar Veličković, and Kristian Kersting. Relating Graph Neural Networks to Structural Causal Models. arxiv pre-print , 2021.
## Appendix A. Theory for Training Diffusion Models
We now review in more detail the formulation of Denoising Diffusion Probabilistic Models (DDPM) (Ho et al., 2020). In DDPM, samples are generated from a Gaussian prior distribution by reversing a diffusion process with a neural network. We begin by defining our data distribution x_0 ~ p(x_0) and a Markovian noising process which gradually adds noise to the data, producing noised samples x_1 up to x_T. In particular, each step of the noising process adds Gaussian noise according to some variance schedule given by β_t:

$$p(x_t \mid x_{t-1}) = \mathcal{N}\left(x_t;\, \sqrt{1-\beta_t}\, x_{t-1},\, \beta_t \mathbf{I}\right)$$
In addition, it is possible to sample x_t directly from x_0 without repeatedly sampling from x_t ~ p(x_t | x_{t-1}). Instead, p(x_t | x_0) can be expressed as a Gaussian distribution by defining the variance of the noise for an arbitrary timestep as α_t := ∏_{j=0}^{t}(1 - β_j). We therefore have

$$p(x_t \mid x_0) = \mathcal{N}\left(x_t;\, \sqrt{\alpha_t}\, x_0,\, (1-\alpha_t)\mathbf{I}\right)$$

$$x_t = \sqrt{\alpha_t}\, x_0 + \epsilon\sqrt{1-\alpha_t}, \quad \epsilon \sim \mathcal{N}(0, \mathbf{I})$$
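The closed form above can be checked numerically. The sketch below assumes a linear β schedule for illustration (the text does not prescribe one here):

```python
import numpy as np

# Sampling x_t directly from x_0 using the closed form, with
# alpha_t = prod_{j<=t}(1 - beta_j). A linear beta schedule is assumed
# purely for illustration.

T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = np.cumprod(1.0 - betas)  # alpha_t for every timestep

def q_sample(x0, t, rng):
    """Draw x_t ~ p(x_t | x_0) in one shot."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas[t]) * x0 + np.sqrt(1.0 - alphas[t]) * eps

rng = np.random.default_rng(0)
x0 = np.ones(4)
x_large_t = q_sample(x0, T - 1, rng)  # near t = T, alpha_t ~ 0: almost pure noise
```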
However, we are interested in a generative process that performs the reverse diffusion, going from noise x_T to data x_0. As such, a model with parameters θ should learn the conditional distribution p_θ(x_{t-1} | x_t).
Using Bayes' theorem, one finds that the posterior p ( x t -1 | x t , x 0 ) is also a Gaussian with mean ˜ µ t ( x t , x 0 ) and variance ˜ β t defined as follows:
$$\tilde{\mu}_t(x_t, x_0) := \frac{\sqrt{\alpha_{t-1}}\,\beta_t}{1-\alpha_t}\, x_0 + \frac{\sqrt{1-\beta_t}\,(1-\alpha_{t-1})}{1-\alpha_t}\, x_t, \qquad \tilde{\beta}_t := \frac{1-\alpha_{t-1}}{1-\alpha_t}\,\beta_t$$

$$p(x_{t-1} \mid x_t, x_0) = \mathcal{N}\left(x_{t-1};\ \tilde{\mu}_t(x_t, x_0),\ \tilde{\beta}_t I\right)$$
To train p θ ( x t -1 | x t ) such that p θ ( x 0 ) matches the true data distribution, the following variational lower bound L vlb for p θ ( x 0 ) can be optimized:
$$L_{vlb} := \mathbb{E}_q\left[ -\log p_\theta(x_0 \mid x_1) + \sum_{t>1} D_{KL}\!\left( p(x_{t-1} \mid x_t, x_0) \,\|\, p_\theta(x_{t-1} \mid x_t) \right) + D_{KL}\!\left( p(x_T \mid x_0) \,\|\, p(x_T) \right) \right]$$
Ho et al. (2020) considered a variational approximation of Eq. 15 for training p θ ( x t -1 | x t ) efficiently. Instead of directly parameterizing µ θ ( x t , t ) as a neural network, a model ε θ ( x t , t ) is trained to predict the noise ε from Equation 13. This simplified objective is defined as follows:
$$L_{simple} = \mathbb{E}_{t \sim [1,T],\, x_0 \sim p_{data},\, \epsilon \sim \mathcal{N}(0,I)}\left[ \left\| \epsilon - \epsilon_\theta(x_t, t) \right\|^2 \right]$$
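A single Monte Carlo evaluation of this objective can be sketched as follows; the zero-predicting `eps_theta` is a placeholder for the time-conditioned U-Net, and the schedule is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = np.cumprod(1.0 - betas)

def eps_theta(xt, t):
    # Placeholder for the trained noise-prediction network.
    return np.zeros_like(xt)

def l_simple(x0):
    """One Monte Carlo draw of L_simple = E[ ||eps - eps_theta(x_t, t)||^2 ]."""
    t = int(rng.integers(0, T))
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alphas[t]) * x0 + np.sqrt(1.0 - alphas[t]) * eps
    return float(np.mean((eps - eps_theta(xt, t)) ** 2))

loss = l_simple(rng.standard_normal(16))
```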
## Appendix B. Pearl's Causal Hierarchy
Bareinboim et al. (2020) use the Pearl's Causal Hierarchy (PCH) nomenclature after Pearl's seminal work on causality, which is well illustrated in Pearl and Mackenzie (2018) as the Ladder of Causation . PCH states that structural causal models should be able to sample from a collection of three distributions (Peters et al. (2017), Ch. 6), which are related to cognitive capabilities:
1. The observational ('seeing') distribution p G ( x ( k ) ) .
2. The do-calculus (Pearl, 2009) formalizes sampling from the interventional ('doing') distribution p G ( x ( k ) | do ( x ( j ) = x ( j ) )) . The do () operator means that an intervention on a specific variable is propagated only through its descendants in the SCM G : only the descendants of the variable intervened upon are modified by a given action.
3. Sampling from a counterfactual ('imagining') distribution p G ( x ( k ) | do ( x ( j ) = x ( j ) ); x ( k ) ) involves applying an intervention do ( x ( j ) = x ( j ) ) to a given instance x ( k ) . Contrary to the factual observation, a counterfactual corresponds to a hypothetical scenario.
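The three levels can be made concrete with a toy linear SCM x (1) ← x (2) ; the mechanisms below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy SCM: x2 := u2,  x1 := 2 * x2 + u1,  with u1, u2 ~ N(0, 1).

def sample_observational(n):
    # Level 1 ("seeing"): sample the joint distribution as-is.
    u2 = rng.standard_normal(n)
    u1 = rng.standard_normal(n)
    return u2, 2.0 * u2 + u1

def sample_interventional(n, value):
    # Level 2 ("doing"): do(x2 = value) replaces x2's mechanism; the
    # intervention propagates only to x2's descendant x1.
    u1 = rng.standard_normal(n)
    x2 = np.full(n, value)
    return x2, 2.0 * x2 + u1

def counterfactual(x2_obs, x1_obs, x2_new):
    # Level 3 ("imagining"): abduct u1 from the factual instance,
    # intervene on x2, then re-run x1's mechanism with the same u1.
    u1 = x1_obs - 2.0 * x2_obs
    return 2.0 * x2_new + u1

cf = counterfactual(x2_obs=1.0, x1_obs=2.5, x2_new=0.0)
```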
## Appendix C. Example of Anti-causal Intervention
We illustrate Prop. 1 in a case with two variables, which is also used in the experiments. Consider a variable x (1) caused by x (2) , i.e. x (1) ← x (2) . Following the causal direction, the joint distribution can be factorised as p ( x (1) , x (2) ) = p ( x (1) | x (2) ) p ( x (2) ) . Applying an intervention with the SDE framework, however, one would only need ∇ x (1) log p t ( x (1) | x (2) = x (2) ) , as in Eq. 6. By applying Bayes' rule, one can derive p ( x (1) | x (2) ) = p ( x (2) | x (1) ) p ( x (1) ) /p ( x (2) ) . Therefore, the sampling process would be done with
$$\nabla_{x^{(1)}} \log p_t(x^{(1)} \mid x^{(2)}) = \nabla_{x^{(1)}} \log p_t(x^{(2)} \mid x^{(1)}) + \nabla_{x^{(1)}} \log p_t(x^{(1)})$$

where the term corresponding to p ( x (2) ) vanishes because it does not depend on x (1) .
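This identity can be checked numerically in a linear-Gaussian instance of the two-variable SCM (the coefficients below are illustrative assumptions):

```python
import numpy as np

a, sigma = 1.5, 0.8   # x2 ~ N(0,1);  x1 := a * x2 + sigma * u,  u ~ N(0,1)

def gauss_logpdf(x, mean, std):
    return -0.5 * ((x - mean) / std) ** 2 - np.log(std) - 0.5 * np.log(2 * np.pi)

def logp_x1_given_x2(x1, x2):
    # Causal-direction conditional p(x1 | x2).
    return gauss_logpdf(x1, a * x2, sigma)

def logp_x1(x1):
    # Marginal of x1 under the SCM: N(0, a^2 + sigma^2).
    return gauss_logpdf(x1, 0.0, np.sqrt(a**2 + sigma**2))

def logp_x2_given_x1(x2, x1):
    # Anti-causal conditional obtained from Bayes' rule (Gaussian posterior).
    s = a**2 + sigma**2
    return gauss_logpdf(x2, a * x1 / s, np.sqrt(sigma**2 / s))

def ddx(f, x, h=1e-5):
    # Central finite difference with respect to x1.
    return (f(x + h) - f(x - h)) / (2.0 * h)

x1, x2 = 0.3, -1.1
lhs = ddx(lambda v: logp_x1_given_x2(v, x2), x1)                      # conditional score
rhs = ddx(lambda v: logp_x2_given_x1(x2, v), x1) + ddx(logp_x1, x1)   # Bayes decomposition
```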
## Appendix D. DDIM sampling procedure
A variation of the DDPM (Ho et al., 2020) sampling procedure is given by Denoising Diffusion Implicit Models (DDIM, Song et al. (2021a)). DDIM formulates an alternative non-Markovian noising process that allows a deterministic mapping between latents and images. The deterministic mapping means that the noise term in Eq. 2 is no longer necessary for sampling. This sampling approach has the same forward marginals as DDPM and can therefore be trained in the same manner. This approach was used for sampling throughout the paper, as explained in Sec. 3.4.
Alg. 2 describes DDIM's deterministic sampling procedure from x T ∼ N (0 , I ) (exogenous noise distribution) to x 0 (data distribution). This formulation has two main advantages: (i) it allows a near-invertible mapping between x T and x 0 , as shown in Alg. 3; and (ii) it allows efficient sampling with fewer iterations even when trained with the same diffusion discretization, achieved by undersampling timesteps t in the [0 , T ] interval.
## Algorithm 2 Sampling with DDIM - Image Generation
Models:
trained diffusion model ε θ .
Input :
x T ∼ N (0 , I)
Output:
x 0 - Image
for t ← T to 0 do
$$x_{t-1} \leftarrow \sqrt{\alpha_{t-1}} \left( \frac{x_t - \sqrt{1-\alpha_t}\, \epsilon_\theta(x_t, t)}{\sqrt{\alpha_t}} \right) + \sqrt{1-\alpha_{t-1}}\, \epsilon_\theta(x_t, t)$$
end
## Algorithm 3 Reverse-Sampling with DDIM - Inferring the Noisy Latent
Models:
trained diffusion model ε θ .
Input :
x 0 - Image
Output:
x T - Latent Space
for t ← 0 to T do
$$x_{t+1} \leftarrow \sqrt{\alpha_{t+1}} \left( \frac{x_t - \sqrt{1-\alpha_t}\, \epsilon_\theta(x_t, t)}{\sqrt{\alpha_t}} \right) + \sqrt{1-\alpha_{t+1}}\, \epsilon_\theta(x_t, t)$$
end
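The two algorithms can be checked against each other: with the degenerate choice ε θ ≡ 0 (a stand-in for a trained model), Alg. 3 followed by Alg. 2 must recover x 0 exactly, which validates the update algebra. Schedule and dimensions below are illustrative:

```python
import numpy as np

T = 50
betas = np.linspace(1e-4, 0.02, T + 1)
alphas = np.cumprod(1.0 - betas)           # alpha_t for t = 0 .. T

def eps_theta(x, t):
    # Degenerate stand-in for the trained noise model; with eps_theta == 0
    # the two DDIM loops below are exact inverses of each other.
    return np.zeros_like(x)

def ddim_reverse(x0):                      # Alg. 3: image -> latent
    x = x0
    for t in range(T):                     # t = 0 .. T-1, producing x_{t+1}
        e = eps_theta(x, t)
        x = (np.sqrt(alphas[t + 1]) * (x - np.sqrt(1 - alphas[t]) * e)
             / np.sqrt(alphas[t]) + np.sqrt(1 - alphas[t + 1]) * e)
    return x

def ddim_sample(xT):                       # Alg. 2: latent -> image
    x = xT
    for t in range(T, 0, -1):              # t = T .. 1, producing x_{t-1}
        e = eps_theta(x, t)
        x = (np.sqrt(alphas[t - 1]) * (x - np.sqrt(1 - alphas[t]) * e)
             / np.sqrt(alphas[t]) + np.sqrt(1 - alphas[t - 1]) * e)
    return x

x0 = np.random.default_rng(3).standard_normal(8)
x_rec = ddim_sample(ddim_reverse(x0))
```

With a real trained ε θ the round trip is only near-invertible, since each step evaluates the model at a slightly different input in the two directions.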
## Appendix E. Implementation Details
For each dataset, we train two models separately: (i) ε θ , implemented as an encoder-decoder architecture with skip connections, i.e. a U-Net-like network (Ronneberger et al., 2015); and (ii) an anti-causal classifier that uses the encoder of ε θ with a pooling layer followed by a linear classifier. All models are time-conditioned. Time, a scalar, is embedded using the transformer's sinusoidal position embedding (Vaswani et al., 2017). The embedding is incorporated into the convolutional models with an Adaptive Group Normalization layer in each residual block (Nichol and Dhariwal, 2021). Our architectures and training procedure follow Dhariwal and Nichol (2021), who performed an extensive ablation study of important components of DDPM (Ho et al., 2020) and improved overall image quality and log-likelihoods on many image benchmarks. We use the same hyperparameters as Dhariwal and Nichol (2021) for ImageNet and define our own for MNIST. The specific hyperparameters for the diffusion and classification models are given in Tab. 2. We train all of our models using Adam with β 1 = 0.9 and β 2 = 0.999. We train in 16-bit precision using loss scaling, but maintain 32-bit weights, EMA, and optimizer state. We use an EMA rate of 0.9999 for all experiments.
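The time embedding mentioned above can be sketched as follows; the half-sin/half-cos layout and frequency convention are assumptions in the spirit of Vaswani et al. (2017), not the paper's exact code:

```python
import numpy as np

def timestep_embedding(t, dim, max_period=10000.0):
    """Sinusoidal embedding of a scalar timestep t into a `dim`-vector."""
    half = dim // 2
    # Geometrically spaced frequencies from 1 down to ~1/max_period.
    freqs = np.exp(-np.log(max_period) * np.arange(half) / half)
    args = t * freqs
    return np.concatenate([np.sin(args), np.cos(args)])

emb = timestep_embedding(500, 128)
```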
We use DDIM sampling with 1000 timesteps for all experiments, with the same noise schedule used during training. Even though DDIM allows faster sampling with fewer timesteps, we found that undersampling does not work well for counterfactuals.
Table 2: Hyperparameters for models.
| dataset | ImageNet 256 | ImageNet 256 | MNIST | MNIST |
|----------------------|----------------|----------------|-----------|------------|
| model | diffusion | classifier | diffusion | classifier |
| Diffusion steps | 1000 | 1000 | 1000 | 1000 |
| Model size | 554M | 54M | 2M | 500K |
| Channels | 256 | 128 | 64 | 32 |
| Depth | 2 | 2 | 1 | 1 |
| Channels multiple | 1,1,2,2,4,4 | 1,1,2,2,4,4 | 1,2,4 | 1,2,4,4 |
| Attention resolution | 32,16,8 | 32,16,8 | - | - |
| Batch size | 256 | 256 | 256 | 256 |
| Iterations | ≈ 2 M | ≈ 500 K | 30K | 3K |
| Learning Rate | 1e-4 | 3e-4 | 1e-4 | 1e-4 |
## Appendix F. Sampling from The Interventional Distribution
In this section, we verify that our method complies with the second level of Pearl's Causal Hierarchy (details in Appendix B). Diff-SCM can be used for efficiently sampling from the interventional distribution p G image ( x (1) | do ( x (2) = x (2) )) . This can be done by using the second part ('Generation with Intervention') of Alg. 1, but sampling u ( k ) from a Gaussian prior instead of inferring the latent space (using 'Abduction of Exogenous Noise'). This formulation is identical to guided DDIM (Song et al., 2021a) as used by Dhariwal and Nichol (2021) (details in Appendix D), which achieves state-of-the-art image quality while providing faster sampling than DDPM. Since its image synthesis capabilities compared to other generative models are shown in Dhariwal and Nichol (2021), we restrict ourselves to presenting qualitative results on ImageNet 256x256.
Experimental Setup. Our experiment, depicted in Fig. 5, consists of sampling a single latent u (1) from a Gaussian distribution and generating samples for different classes. Since all images are generated from the same latent, this allows visualization of the effect of the classifier guidance for different classes. This setup differs from the experiments in Dhariwal and Nichol (2021), where each image presented came from a different sample of u (1) . Here, by sampling u (1) only once, we isolate the contribution of the causal mechanism from the sampling of the exogenous noise u (1) . We use the scale hyperparameter s = 5 for these experiments.
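The guidance underlying these interventions can be sketched as shifting the noise prediction by the scaled classifier score, ε̂ = ε θ ( x t , t ) − s √(1 − α t ) ∇ x t log p ( y | x t ) (Dhariwal and Nichol, 2021); the diffusion model and classifier below are toy analytic stand-ins, not the trained networks:

```python
import numpy as np

s = 5.0  # guidance scale used in this experiment

def eps_theta(x, t):
    # Illustrative stand-in for the unconditional diffusion model.
    return 0.1 * x

def grad_log_p_y_given_x(x, y):
    # Toy anti-causal classifier score: pulls x toward a class centroid mu_y.
    # (The paper uses a time-conditioned neural classifier instead.)
    mu = {0: -1.0, 1: +1.0}[y]
    return -(x - mu)

def guided_eps(x, t, y, alpha_t):
    # Guided DDIM: shift the noise prediction with the scaled classifier score.
    return eps_theta(x, t) - s * np.sqrt(1.0 - alpha_t) * grad_log_p_y_given_x(x, y)

x = np.zeros(4)
e0 = guided_eps(x, 100, y=0, alpha_t=0.5)
e1 = guided_eps(x, 100, y=1, alpha_t=0.5)
```

Starting two samplings from the same x, different interventions do(y) push the prediction in opposite directions, which is what Fig. 5 visualizes.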
Figure 5: Sampling ImageNet images from the interventional distribution. All images originate from the same initial noise u ( k ) but different interventions are applied at inference time.
## Appendix G. IM1 and IM2
Looveren and Klaise (2021) propose IM1 and IM2 for measuring the realism and closeness to the data manifold. These metrics are based on the reconstruction losses of auto-encoders trained on specific classes:
$$IM1(x_{CF}) := \frac{\left\| x_{CF} - AE_{x^{(2)}_{CF}}(x_{CF}) \right\|_2^2}{\left\| x_{CF} - AE_{x^{(2)}_{F}}(x_{CF}) \right\|_2^2 + \epsilon}$$

$$IM2(x_{CF}) := \frac{\left\| AE_{x^{(2)}_{CF}}(x_{CF}) - AE(x_{CF}) \right\|_2^2}{\left\| x_{CF} \right\|_1 + \epsilon}$$

where AE x (2) denotes an autoencoder trained only on instances from class x (2) , and AE is an autoencoder trained on data from all classes. IM1 is the reconstruction loss of the counterfactual under an autoencoder trained on the counterfactual class divided by its loss under an autoencoder trained on the factual class; a low value indicates the counterfactual lies close to the data manifold of the counterfactual class. IM2 is the difference between the reconstructions of the counterfactual under an autoencoder trained on the counterfactual class and one trained on all classes, normalized by the L1 norm of the counterfactual.
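Given precomputed autoencoder reconstructions, the two metrics can be sketched as follows (array names are illustrative; the definitions follow Looveren and Klaise (2021)):

```python
import numpy as np

def im1(x_cf, recon_cf_class, recon_f_class, eps=1e-8):
    # Reconstruction loss under the counterfactual-class AE divided by the
    # loss under the factual-class AE; lower means the counterfactual lies
    # closer to the counterfactual-class data manifold.
    num = np.sum((x_cf - recon_cf_class) ** 2)
    den = np.sum((x_cf - recon_f_class) ** 2) + eps
    return num / den

def im2(x_cf, recon_cf_class, recon_all, eps=1e-8):
    # Difference between CF-class AE and all-class AE reconstructions,
    # normalized by the L1 norm of the counterfactual; lower means the
    # counterfactual looks in-distribution regardless of training class.
    return np.sum((recon_cf_class - recon_all) ** 2) / (np.sum(np.abs(x_cf)) + eps)

x = np.ones(4)
perfect_im1 = im1(x, x, x + 1.0)   # perfect CF-class reconstruction -> near 0
zero_im2 = im2(x, x, x)            # identical reconstructions -> 0
```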
## Appendix H. More MNIST Counterfactuals
Here, we show in Fig. 6 that we can generate counterfactuals for all MNIST classes, given a factual image. We use the scale hyperparameter s = 0.7 for these experiments.
Figure 6: MNIST counterfactuals. From left to right, one can observe the original image ( orig. ), the reconstruction ( rec. , which entails running Alg. 1 without the anti-causal predictor), and the resulting counterfactuals for each of the digit classes in the dataset.
## Appendix I. Qualitative influence of classifier scale
Here, we show in Fig. 7 the qualitative influence of changing the classifier's scale s . If s is too low, the intervention has only a mild effect. On the other hand, if s is too high, the intervention neglects the information present in the exogenous noise, and the counterfactual therefore maintains fewer factors of the original image.
Figure 7: MNIST counterfactuals. From top to bottom, one can observe the original image ( orig. ), the reconstruction ( rec. ), and the resulting counterfactuals for the intervention do (5) over three scales. As shown in Fig. 4, s = 0.7 is the optimal scale for MNIST data.