# The synthetic instrument: From sparse association to sparse causation
Abstract
In many observational studies, researchers are often interested in the effects of multiple exposures on a single outcome. Standard approaches for high-dimensional data, such as the Lasso, assume that the associations between the exposures and the outcome are sparse. However, these methods do not estimate causal effects in the presence of unmeasured confounding. In this paper, we consider an alternative approach that assumes the causal effects under consideration are sparse. We show that under sparse causation, causal effects are identifiable even with unmeasured confounding. Our proposal is built around a novel device called the synthetic instrument, which, in contrast to standard instrumental variables, can be constructed directly from the observed exposures. We demonstrate that, under the assumption of sparse causation, the problem of causal effect estimation can be formulated as an $\ell_{0}$ -penalization problem and solved efficiently using off-the-shelf software. Simulations show that our approach outperforms state-of-the-art methods in both low- and high-dimensional settings. We further illustrate our method using a mouse obesity dataset.
Dingke Tang 1, Dehan Kong 2, and Linbo Wang 2

Address for correspondence: Linbo Wang, Department of Statistical Sciences, University of Toronto, 700 University Avenue, 9th Floor, Toronto, ON, Canada, M5G 1Z5. Email: linbo.wang@utoronto.ca

1 Department of Mathematics and Statistics, University of Ottawa, Ottawa, Ontario, Canada
2 Department of Statistical Sciences, University of Toronto, Toronto, Ontario, Canada
Keywords: Causal inference; Multivariate analysis; Unmeasured confounding.
1 Introduction
Sparsity is a common assumption in the modern statistical learning literature, as it facilitates variable selection in models and enhances the interpretability of parameter estimates. For example, the Lasso (Tibshirani, 1996) assumes sparse associations between a single outcome and potentially high-dimensional predictors; in other words, only a small subset of predictors have nonzero associations with the outcome. Hastie et al. (2009) summarizes the philosophy behind such methods as the “bet on sparsity” principle: use a procedure that performs well in sparse settings, since no procedure performs well in dense ones. Methods like the Lasso perform well under sparse associations and, as a result, have gained significant popularity in recent decades.
Importantly, the “bet on sparsity” principle does not restrict the types of problems to which sparsity may apply. Beyond sparse associations, a growing body of literature emphasizes sparse causation, where only a fraction of exposures exert nonzero causal effects on the outcome (e.g., Spirtes and Glymour, 1991; Claassen et al., 2013; Wang et al., 2017; Miao et al., 2023a; Zhou et al., 2024). This assumption is often more interpretable and plausible in real data applications. For example, suppose we are interested in the relationship between gene expression and a phenotype such as lung cancer. Biological evidence suggests that only a small proportion of genes may influence the risk of lung cancer (e.g., Kanwal et al., 2017). However, this does not imply sparse association, since unmeasured confounding may induce spurious correlations between many genes and the phenotype.
To illustrate, consider a linear structural model (Pearl, 2013) with a $p$ -dimensional exposure vector $X=(X_{1},...,X_{p})^{\mathrm{\scriptscriptstyle T}}$ , an outcome $Y$ , and a $q$ -dimensional latent variable $U$ :
$$
\begin{split}X&=\Lambda U+\epsilon_{x},\\ Y&=X^{\mathrm{\scriptscriptstyle T}}\beta+U^{\mathrm{\scriptscriptstyle T}}\gamma+\epsilon_{y},\end{split} \tag{1}
$$
where $\Lambda∈\mathbb{R}^{p× q}$ is a coefficient matrix, $\beta∈\mathbb{R}^{p}$ and $\gamma∈\mathbb{R}^{q}$ are coefficient vectors, and $\epsilon_{x}=(\epsilon_{1},...,\epsilon_{p})^{\mathrm{\scriptscriptstyle T}}$ , $\epsilon_{y}$ , and $U$ are mutually uncorrelated. Under this model, the spurious associations induced by unmeasured confounding, given by $\text{Cov}(X)^{-1}\Lambda\gamma$ , are typically dense. Consequently, the overall association between $X$ and $Y$ is dense, even when the causal effect $\beta$ itself is sparse.
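To make the contrast concrete, the population regression (association) coefficient under model (1) equals $\beta+\text{Cov}(X)^{-1}\Lambda\gamma$. The sketch below evaluates this quantity for illustrative parameter values (the choices of $\Lambda$, $D$, $\beta$, and $\gamma$ are ours, not from the paper):

```python
import numpy as np

# Population-level illustration of model (1): sparse causation (beta has one
# nonzero entry) but dense association, due to confounding by the latent U.
# All parameter values are arbitrary illustrative choices.
p, q = 6, 1
Lam = np.ones((p, q))                   # loading matrix Lambda (p x q)
D = np.eye(p)                           # Cov(eps_x)
beta = np.zeros(p); beta[2] = 1.0       # sparse causal effect: only X_3 matters
gamma = np.array([1.0])                 # effect of U on Y

cov_X = Lam @ Lam.T + D                 # Cov(X) = Lambda Lambda' + D, since Cov(U) = I_q
cov_XY = cov_X @ beta + Lam @ gamma     # Cov(X, Y) under model (1)
assoc = np.linalg.solve(cov_X, cov_XY)  # population regression coefficients
# assoc = beta + Cov(X)^{-1} Lambda gamma: dense although beta is sparse
print(np.count_nonzero(beta))                    # 1
print(np.count_nonzero(np.abs(assoc) > 1e-10))   # 6: every exposure is associated with Y
```

Only one causal coefficient is nonzero, yet all six population regression coefficients are nonzero: the confounding term $\text{Cov}(X)^{-1}\Lambda\gamma$ spreads spurious association across every exposure.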
Identification and estimation of the causal parameter $\beta$ are nontrivial due to the presence of unmeasured confounding by $U$ . The contributions of this paper are twofold. First, under an additional plurality condition, we establish that the parameter $\beta$ in model (1) is identifiable if and only if $\|\beta\|_{0}<p-q$ . This sparsity assumption is both necessary and sufficient, representing a significant improvement over conditions previously introduced in the literature; see Section 1.1 for details. Remarkably, in contrast to many other identification assumptions in causal inference, this assumption can be consistently tested from data.
Second, we develop a two-stage synthetic regularized regression approach for estimating $\beta$ , with a first stage based on ordinary least squares and a second stage using $\ell_{0}$ -penalized regression. The key technique behind our results is a novel device, which we term the synthetic instrument. Unlike standard instrumental variables, the synthetic instrument is constructed from a subset of exposures, enabling identification of causal effects without requiring exogenous variables. Our procedure enjoys Lasso-type theoretical guarantees in both low- and high-dimensional settings.
1.1 Related works
Our proposal is related to recent work on multivariate hidden confounding. Ćevid et al. (2020a) and Guo et al. (2022a) propose a spectral deconfounding method for estimating $\beta$ in a high-dimensional model. Their method assumes a dense confounding structure, which is feasible only in a high-dimensional regime where $p$ tends to infinity with the sample size and the magnitudes of spurious associations tend to zero. Bing et al. (2022) consider a more general setup than the one we study, in which they also allow the outcome to be multivariate. However, they aim to identify the projection of $\beta$ onto a related space rather than the causal parameter $\beta$ itself. Chandrasekaran et al. (2010) study a related problem under the assumption that $(X,Y)$ are normally distributed. Under this assumption, they not only identify the effect of $X$ on $Y$ but also recover the covariance among components of $X$ conditional on $U$ .
The estimation problem for $\beta$ can be framed within the context of causal inference with unmeasured confounding. Currently, the most popular approach in practice is the instrumental variable (IV) framework, which uses information from an exogenous variable known as an IV to identify causal effects (e.g., Angrist et al., 1996; Wang and Tchetgen Tchetgen, 2018; Pfister and Peters, 2022). Another approach that has gained attention recently is the proximal causal inference framework (Tchetgen Tchetgen et al., 2024), which uses information from ancillary variables, known as negative control exposures and outcomes, to remove bias due to unmeasured confounding. Compared with these frameworks, our approach does not rely on the collection of additional ancillary variables, which can be challenging in many practical settings. Instead, we rely on the availability of multiple exposures and the sparsity assumption for identification and estimation.
Recently, a strand of literature has sought to identify the causal effects of multiple exposures. Wang and Blei (2019) popularized this setting by proposing the so-called deconfounder method, which first obtains an estimate $\widehat{U}$ of the unmeasured confounder and then adjusts for $\widehat{U}$ using standard regression methods. However, it has been pointed out that in this setting, without further assumptions, the causal effect $\beta$ is not identifiable (D’Amour, 2019; Ogburn et al., 2020). Kong et al. (2022) show that under model (1) and a binary choice model for the outcome with a non-probit link, the causal effects are identifiable. Their identification results, however, apply only to binary outcomes and do not lead to straightforward estimation procedures. Miao et al. (2023a) consider a similar setting to (1) and (3), showing that the causal effect is identifiable if $\|\beta\|_{0}≤(p-q)/2$ . Their sparsity constraint is significantly stronger than ours, especially when the number of exposures is large relative to the number of latent confounders. Miao et al. (2023a) also develop a robust linear regression-based estimator for $\beta$ . In contrast to our estimator, their estimator is consistent only in the low-dimensional regime where $p$ is fixed and $\|\beta\|_{0}≤ p/2-q+1$ . Furthermore, their estimator for $\beta$ is not sparse and therefore cannot be used for selecting treatments with nonzero effects.
Our results also connect to recent literature on multiply robust causal identification (e.g., Sun et al., 2023), as we show identification in the union of many causal models. This contrasts with the extensive literature on multiply robust estimators under the same causal model (e.g., Wang and Tchetgen Tchetgen, 2018) and on improved doubly robust estimators that are consistent under multiple working models for two components of the likelihood (e.g., Han and Wang, 2013).
1.2 Outline of this paper
The rest of this article is organized as follows. In Section 2, we introduce the setup and background. In Section 3, we describe our identification strategy using the synthetic instrument method. In Section 4, we present our estimation procedure and provide theoretical justifications. We also discuss extensions to nonlinear outcome models. Simulation studies in Section 5 compare our proposal with several state-of-the-art methods in finite-sample performance. In Section 6, we apply our method to mouse obesity data. We conclude with a brief discussion in Section 7.
The proposed method is implemented in an R package, available at https://github.com/dingketang/syntheticIV.
2 Framework, notation, and identifiability
2.1 The model
We assume that we observe $n$ independent samples from the joint distribution of $(X,Y)$ . Consider structural model (1) and
$$
Y=X^{\mathrm{\scriptscriptstyle T}}\beta+g(U)+\epsilon_{y}. \tag{3}
$$
Here, $g(U):\mathbb{R}^{q}→\mathbb{R}$ is a measurable function encoding the effects of unmeasured confounders $U$ on the outcome $Y$ . We do not assume knowledge of the functional form of $g(U)$ because $U$ is unmeasured, making it implausible to specify the exact form of $g(·)$ .
We start with a linear outcome model where the treatment effect is linear in $X$ . In Section 4.3, we will consider a nonlinear treatment effect model, where the relationship between treatment $X$ and outcome $Y$ is represented by a potentially nonlinear function $f(X;\beta)$ .
We consider both low- and high-dimensional settings, where $p$ may be smaller or larger than the sample size $n$ . Let $\dot{\beta}$ denote the true value of $\beta$ in model (3). Without loss of generality, we assume all the variables in (1) and (3) are centered, $\text{Cov}(U)=I_{q}$ , and $\mathbb{E}(g(U))=0$ .
We maintain the following conditions throughout the article.
A1. (Invertibility) Any $q× q$ submatrix of $\text{Cov}^{-1}(X)\Lambda$ is invertible.

A2. $\Lambda$ is identifiable up to a rotation.
Condition A1 is a regularity condition commonly assumed in the literature (e.g., Miao et al., 2023a, Theorem 3). However, it may be relaxed in our setting. For example, if certain treatment effects are unconfounded after normalization, so that specific rows of $\text{Cov}^{-1}(X)\Lambda$ are zero, then even though Condition A1 is violated, our proposed method can still be used to identify and estimate the treatment effects. See Sections S.1.1 and S.1.3 of the supplementary material for further details on how the algorithm identifies treatment effects under relaxed versions of A1.
Condition A2 has been discussed extensively in the factor model literature. One classical result is Proposition 1, which is a direct corollary of Anderson and Rubin (1956, Theorem 5.1).
**Proposition 1**
*Under models (1), (3), and Condition A1, if $p≥ 2q+1$ and $D=\text{Cov}(\epsilon_{x})$ is a diagonal matrix, then $\Lambda$ is identifiable up to a rotation.*
We note that the condition that $D$ be a diagonal matrix is a classical assumption in the factor analysis literature. However, Condition A2, and hence our algorithm, may still hold even if $D$ is not diagonal. For example, under the assumption that $D$ is sparse, the covariance structure $\text{Cov}(X)=D+\Lambda\Lambda^{\mathrm{\scriptscriptstyle T}}$ implies a sparse plus low-rank decomposition. This allows for the identification of the low-rank component $\Lambda\Lambda^{\mathrm{\scriptscriptstyle T}}$ , as established in Chandrasekaran et al. (2011, Corollary 3), which leads to identifying $\Lambda$ up to a rotation. As another example, in the high-dimensional setting where $p→∞$ , it is possible to identify $\Lambda^{*}∈\mathbb{R}^{p× q}$ , whose columns correspond to the top $q$ eigenvalues of $\text{Cov}(X)$ . Under additional boundedness assumptions on the covariance matrix $D$ and the coefficient matrix $\Lambda$ , one can show that there exists a matrix $O∈\mathbb{R}^{q× q}$ such that the $\ell_{2}$ -norm between each column of $\Lambda O$ and $\Lambda^{*}$ converges to zero as $p$ tends to infinity. See Fan et al. (2013a, Proposition 2.2, Theorem 3.3), Bai (2003, Theorem 2), and Shen et al. (2016, Theorem 1) for more details.
2.2 Identifiability of the causal effect $\beta$
In this section, we discuss the identifiability of the causal parameter $\beta$ in (3). We illustrate the key ideas using the specific example where $p=3$ and $q=1$ in models (1) and (3). Figure 1 provides graphical illustrations.
First, note that without additional assumptions, $\beta$ is generally not identifiable due to unmeasured confounding by $U$ . To see this, observe that under models (1) and (3), we have
$$
\text{Cov}(X_{j},Y)=\beta_{1}\text{Cov}(X_{j},X_{1})+\beta_{2}\text{Cov}(X_{j},X_{2})+\beta_{3}\text{Cov}(X_{j},X_{3})+\gamma\Lambda_{j},\quad j=1,2,3, \tag{4}
$$
where $\Lambda_{j}$ is the $j$ th element of $\Lambda∈\mathbb{R}^{p× 1}$ and $\gamma=\mathbb{E}(Ug(U))$ . Since there are three equations in (4) but four unknown parameters, $\beta_{1},\beta_{2},\beta_{3}$ , and $\gamma$ , the causal parameter $\beta$ is not identifiable from these equations.
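The rank deficiency behind this argument can be checked numerically: stacking the three equations in (4) gives a $3\times 4$ linear system in $(\beta_{1},\beta_{2},\beta_{3},\gamma)$ with coefficient matrix $[\text{Cov}(X)\;\Lambda]$. A minimal sketch with an illustrative $\Lambda$:

```python
import numpy as np

# System (4) for p = 3, q = 1: three equations, four unknowns (beta_1, beta_2,
# beta_3, gamma). The value of Lambda is an arbitrary illustration.
Lam = np.array([1.0, 2.0, 1.0])
cov_X = np.outer(Lam, Lam) + np.eye(3)   # Cov(X) under model (1) with Cov(eps_x) = I
A = np.column_stack([cov_X, Lam])        # coefficient matrix of system (4)
print(np.linalg.matrix_rank(A))          # 3 < 4 unknowns: the system is underdetermined
```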
One possible approach to identifying $\beta$ is to assume prior knowledge about certain elements of $\beta$ . For instance, in Figure 1(b), it is assumed that $\beta_{2}=0$ , meaning that $X_{2}$ has no causal effect on the outcome $Y$ . In this scenario, it is straightforward to see from (4) that under Conditions A1 and A2, $\beta_{1}$ , $\beta_{3}$ , and $|\gamma|$ are identifiable.
Figure 1: Causal diagrams corresponding to models (1) and (3), $p=3$ , $q=1$ . (a) No additional assumptions. (b) Assume $\beta_{2}=0$ . (c) Assume $\|\beta\|_{0}≤ 1$ .
In practice, however, it is often difficult to know a priori which exposures have zero causal effects. In this paper, we instead consider the following sparsity assumption; see Figure 1(c) for an illustration.
A3. (Sparsity) $\|\dot{\beta}\|_{0}≤ p-q-1$ , where $\dot{\beta}$ denotes the true value of $\beta$ .
**Remark 1**
*Condition A3 is significantly less restrictive than similar assumptions in the existing literature used for causal effect identification in this context. For example, Miao et al. (2023a) assumed $\|\dot{\beta}\|_{0}≤(p-q)/2$ .*
2.3 Instrumental variable
The method of instrumental variables is a widely used approach for estimating causal relationships when unmeasured confounders exist between the exposure $X$ and the outcome $Y$ . Suppose we have an exogenous variable $Z$ . For simplicity, assume that the relationships among the random variables are linear and follow the structural equation models:
$$
\begin{split}Y&=\beta X+\gamma U+\pi Z+\epsilon_{y},\\
X&=\alpha_{z}Z+\Lambda U+\epsilon_{x}.\end{split}
$$
For $Z$ to be a valid instrumental variable, the following assumptions are commonly made (e.g., Wang and Tchetgen Tchetgen, 2018): $\pi=0$ (exclusion restriction), $\alpha_{z}≠ 0$ (instrumental relevance), and $\text{Cov}(U,Z)=0$ (unconfoundedness). Under these assumptions, one can consistently estimate $\beta$ via a two-stage least squares estimator: first, obtain the predicted exposure $\widehat{\mathbb{E}}(X\mid Z)$ by linearly regressing $X$ on $Z$ , and then regress $Y$ on $\widehat{\mathbb{E}}(X\mid Z)$ to obtain an estimate of $\beta$ . Here, $\mathbb{E}(X\mid Z)$ refers to the conditional expectation of $X$ given $Z$ , and $\widehat{\mathbb{E}}(X\mid Z)$ refers to its estimator obtained through linear regression.
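The two-stage least squares procedure just described can be sketched in a few lines of simulation; all parameter values below are illustrative assumptions, not from the paper:

```python
import numpy as np

# Minimal two-stage least squares (2SLS) for the scalar model of Section 2.3:
# Y = beta*X + gamma*U + eps_y, X = alpha_z*Z + Lambda*U + eps_x.
rng = np.random.default_rng(0)
n = 200_000
beta, gamma, alpha_z, Lam = 2.0, 3.0, 1.5, 1.0

Z = rng.normal(size=n)                  # observed instrument
U = rng.normal(size=n)                  # unmeasured confounder
X = alpha_z * Z + Lam * U + rng.normal(size=n)
Y = beta * X + gamma * U + rng.normal(size=n)

# Stage 1: predicted exposure E-hat(X | Z) from the regression of X on Z
X_hat = Z * (Z @ X) / (Z @ Z)
# Stage 2: regress Y on the predicted exposure
beta_2sls = (X_hat @ Y) / (X_hat @ X_hat)
beta_ols = (X @ Y) / (X @ X)            # naive OLS, biased by confounding through U

print(round(beta_2sls, 2), round(beta_ols, 2))
```

The naive OLS coefficient absorbs the confounding path through $U$, while the two-stage estimate recovers $\beta$ because the first stage retains only the variation in $X$ driven by $Z$.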
3 Identifying causal effects via the synthetic instrument
3.1 A new identification approach via voting
We now present a new identification strategy for $\beta$ under the sparsity condition A3. Consider the scenario depicted in Figure 1(c), where $\dot{\beta}_{1}=\dot{\beta}_{2}=0$ but $\dot{\beta}_{3}≠ 0$ . We assume that this information is unavailable to the analyst. Instead, the analyst relies on Condition A3, assuming that $\|\dot{\beta}\|_{0}≤ 1$ .
To explain the identification strategy, it is helpful to consider a voting analogy; see also Zhou et al. (2014) and Guo et al. (2018) for similar approaches in different contexts. Suppose the analyst consults three experts, and expert $j$ hypothesizes that $\beta_{j}=0$ . Based on this hypothesis, one can identify the other elements in $\beta$ using the approach described in Section 2.2. Specifically, for $j=1,2,3$ , let $\widetilde{\beta}^{(j)}$ (and $|\widetilde{\gamma}^{(j)}|$ ) solve (4) assuming $\beta_{j}=0$ . Table 1 summarizes these solutions. Note that the hypotheses by experts 1 and 2 are both correct, so we have $\widetilde{\beta}^{(1)}=\widetilde{\beta}^{(2)}=\beta$ under Conditions A1 – A2. On the other hand, the hypothesis postulated by expert 3 is incorrect. Therefore, in general, $\widetilde{\beta}^{(3)}≠\beta$ . To decide among these three experts, we compare the solutions $\widetilde{\beta}^{(j)}$ and find their mode, defined as $\beta_{\text{mode}}=\mathop{\arg\max}\limits_{\beta∈\mathbb{R}^{3}}|\{j:\widetilde{\beta}^{(j)}=\beta\}|,$ where $|\mathcal{S}|$ denotes the cardinality of a set $\mathcal{S}$ . One can easily see from Table 1 that $\beta_{\text{mode}}=\dot{\beta}$ .
Table 1: A voting analogy of our identification approach for $\beta$ . Note $\widetilde{\beta}^{(j)}=\left(\widetilde{\beta}^{(j)}_{1},\widetilde{\beta}^{(j)}_{2},\widetilde{\beta}^{(j)}_{3}\right)$ denotes the solution to equation (4) under the hypothesis that $\beta_{j}=0$
| $j$ | $\widetilde{\beta}_{1}^{(j)}$ | $\widetilde{\beta}_{2}^{(j)}$ | $\widetilde{\beta}_{3}^{(j)}$ |
| --- | --- | --- | --- |
| $j=1$ | $\beta_{1}=0$ | $0$ | $\dot{\beta}_{3}$ |
| $j=2$ | $0$ | $\beta_{2}=0$ | $\dot{\beta}_{3}$ |
| $j=3$ | $\widetilde{\beta}_{1}^{(3)}$ | $\widetilde{\beta}_{2}^{(3)}$ | $\beta_{3}=0$ |
3.2 The synthetic instrument
On the surface, one may follow the identification strategy described in Section 3.1 to estimate $\beta$ . However, in the general case where $q>1$ , each expert would hypothesize that exactly $q$ elements of $\beta$ are zero. In total, there are $C_{p}^{q}$ different hypotheses. Several challenges arise when the data are moderate to high-dimensional, so that $p$ and $q$ are not small.
1. One needs to solve the empirical version of equation (4) $C_{p}^{q}$ times. This could be computationally expensive.
1. Finding the mode of $C_{p}^{q}$ $p$ -dimensional estimates is a non-trivial statistical problem.
To overcome these challenges, we introduce a new device, called the synthetic instrumental variable (SIV) method. As we shall see later, the SIV method has significant advantages in terms of both computational efficiency and identifiability for $\beta$ .
**Remark 2**
*Other approaches that use the voting analogy for identification (e.g., Zhou et al., 2014; Guo et al., 2018) face the same challenges we present here. It is only due to the special structure of our problem that we are able to develop a method that bypasses the model selection step and addresses these challenges.*
In the following, we first introduce the SIV in the context of Figure 1(b), where it is assumed that $\beta_{2}=0$ . Note from Figure 1(b) that the error term $\epsilon_{1}$ serves as an instrumental variable for estimating the effect parameter $\beta_{1}$ . However, $\epsilon_{1}$ is not observable. Instead, note that (1) implies
$$
\begin{split}X_{1}&=\Lambda_{1}U+\epsilon_{1},\\
X_{2}&=\Lambda_{2}U+\epsilon_{2},\end{split} \tag{5}
$$
where $\Lambda_{1}$ and $\Lambda_{2}$ are identified up to the same sign flip, so that $\Lambda_{1}/\Lambda_{2}$ is identifiable. Eliminating $U$ from (5), we obtain $X_{1}-{\Lambda_{1}}X_{2}/{\Lambda_{2}}=\epsilon_{1}-{\Lambda_{1}}\epsilon_{2}/{\Lambda_{2}},$ which depends only on the error terms $\epsilon_{1}$ and $\epsilon_{2}$ . Since $\epsilon_{2}$ is also uncorrelated with $U$ , it is not difficult to see from Figure 1(b) that $SIV_{1}^{(2)}=X_{1}-{\Lambda_{1}}X_{2}/{\Lambda_{2}}$ satisfies the conditions for an instrumental variable for identifying $\beta_{1}$ described in Section 2.3, hence the name synthetic instrument. In contrast to a standard instrumental variable, the synthetic instrument is directly constructed as a linear combination of the exposures, so there is no need to measure additional exogenous variables.
To identify $\beta_{3}$ , one can similarly define $SIV_{3}^{(2)}=X_{3}-{\Lambda_{3}}X_{2}/{\Lambda_{2}}$ . Let $SIV^{(2)}=\left(SIV_{1}^{(2)},SIV_{3}^{(2)}\right)$ . One can then obtain $(\beta_{1},\beta_{3})$ using the so-called synthetic two-stage least squares:
1. Fit a linear regression of $X=(X_{1},X_{2},X_{3})$ on $SIV^{(2)}=(SIV_{1}^{(2)},SIV_{3}^{(2)})$ and obtain $\widetilde{X}=\widehat{\mathbb{E}}[X\mid SIV^{(2)}]$ through the fitted values of the linear regression.
1. Fit a linear regression of $Y$ on $\widetilde{X}$ , fixing $\beta_{2}=0$ , and obtain the coefficients $\widetilde{\beta}_{1}$ and $\widetilde{\beta}_{3}$ .
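The synthetic two-stage least squares steps above can be sketched numerically for the $p=3$, $q=1$ example, treating $\Lambda$ as known for simplicity (in practice it must be estimated up to rotation); the simulation settings are illustrative:

```python
import numpy as np

# Synthetic two-stage least squares for the p = 3, q = 1 case of Figure 1(b).
# Lambda is treated as known; all parameter values are illustrative.
rng = np.random.default_rng(1)
n = 200_000
Lam = np.array([1.0, 2.0, 1.0])         # loadings Lambda_1, Lambda_2, Lambda_3
beta = np.array([0.5, 0.0, 1.5])        # beta_2 = 0, as assumed in Figure 1(b)
gamma = 2.0

U = rng.normal(size=n)
X = np.outer(U, Lam) + rng.normal(size=(n, 3))   # model (1)
Y = X @ beta + gamma * U + rng.normal(size=n)    # model (3) with g(U) = gamma*U

# Synthetic instruments: SIV_1^(2) = X1 - (Lam1/Lam2) X2, SIV_3^(2) = X3 - (Lam3/Lam2) X2
S = np.column_stack([X[:, 0] - (Lam[0] / Lam[1]) * X[:, 1],
                     X[:, 2] - (Lam[2] / Lam[1]) * X[:, 1]])

# Stage 1: fitted values from regressing X on the synthetic instruments
X_tilde = S @ np.linalg.lstsq(S, X, rcond=None)[0]
# Stage 2: regress Y on (X_tilde_1, X_tilde_3), fixing beta_2 = 0
coef, *_ = np.linalg.lstsq(X_tilde[:, [0, 2]], Y, rcond=None)
print(np.round(coef, 2))                # approximately [0.5, 1.5]
```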
3.3 Voting with the synthetic instrument
Now consider applying the synthetic instrument to the case in Figure 1(c), where the analyst does not have prior information on which exposure has zero effect on the outcome. Instead, we assume the sparsity condition that $\|\beta\|_{0}≤ 1$ .
By combining the voting procedure in Section 3.1 with the synthetic two-stage least squares method in Section 3.2, we arrive at Algorithm 1 for the estimation of $\beta$ .
Algorithm 1 A naive voting procedure with synthetic two-stage least squares
1. For $j=1,2,3$ , fit a linear regression of $X$ on $SIV^{(j)}$ and obtain $\widetilde{X}^{(j)}=\widehat{\mathbb{E}}\left[X\mid SIV^{(j)}\right]$ ;
1. Fit a linear regression of $Y$ on $\widetilde{X}^{(j)}$ , fixing $\beta_{j}=0$ , and obtain the coefficients $\widetilde{\beta}^{(j)}$ ;
1. Find the mode among $\widetilde{\beta}^{(j)},\,j=1,2,3.$
On the surface, similar to the problems described at the beginning of Section 3.2, voting with the synthetic instrument still involves fitting three different regressions and comparing three vectors $\widetilde{\beta}^{(j)}$ . We now make two key observations regarding the properties of the synthetic instrument, which allow us to simplify Algorithm 1 into a two-stage regression procedure.
**Observation 1**
*Let $\Lambda=(\Lambda_{1},\Lambda_{2},\Lambda_{3})^{\mathrm{\scriptscriptstyle T}}$ . For $j=1,2,3$ , the components of $SIV^{(j)}∈\mathbb{R}^{2}$ span the same linear space $\{\lambda^{\mathrm{\scriptscriptstyle T}}X:\lambda∈\Lambda^{\perp}\}$ . As a result, $\widetilde{X}^{(j)}=\mathbb{E}\left[X\mid SIV^{(j)}\right]$ does not depend on the choice of $j$ , so that one only needs to run Step 1 of Algorithm 1 once.*
**Observation 2**
*From Table 1, we observe that $\|\widetilde{\beta}^{(1)}\|_{0}=\|\widetilde{\beta}^{(2)}\|_{0}=1$ , while $\|\widetilde{\beta}^{(3)}\|_{0}=2$ . Recall that the true value is $\dot{\beta}=\widetilde{\beta}^{(1)}=\widetilde{\beta}^{(2)}$ . Instead of calculating $\widetilde{\beta}^{(j)},\,j=1,2,3,$ separately for each $j$ , Steps 2 and 3 in Algorithm 1 can be replaced with the following penalized regression:
$$
\beta^{SIV}=\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{3}}\|Y-\widetilde{X}^{\mathrm{\scriptscriptstyle T}}\beta\|_{2}^{2}\quad\text{subject to }\|\beta\|_{0}\leq 1,
$$
where, due to Observation 1, $\widetilde{X}^{(1)}=\widetilde{X}^{(2)}=\widetilde{X}^{(3)}\equiv\widetilde{X}$ .*
With these observations, Algorithm 1 simplifies to a two-step regularized regression procedure.
3.4 Synthetic two-stage regularized regression
We now formally introduce the synthetic two-stage regularized regression for the general case. Motivated by Observation 1, we provide the following definition of the synthetic instrument.
**Definition 1 (Synthetic Instrument)**
*Define
$$
SIV=B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}X\in\mathbb{R}^{p-q},
$$
where $B_{\Lambda^{\perp}}∈\mathbb{R}^{p×(p-q)}$ is a semi-orthogonal matrix whose column space is orthogonal to the column space of $\Lambda∈\mathbb{R}^{p× q}$ .*
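Definition 1 can be implemented with any null-space routine; the sketch below uses `scipy.linalg.null_space` (the Python analogue of the `Null` function from the R package MASS used in Algorithm 2) with an arbitrary illustrative $\Lambda$:

```python
import numpy as np
from scipy.linalg import null_space

# Construct B orthogonal to col(Lambda) and form SIV = B'X, as in Definition 1.
# The values of p, q, and Lambda are arbitrary illustrations.
rng = np.random.default_rng(2)
p, q = 5, 2
Lam = rng.normal(size=(p, q))

B = null_space(Lam.T)                   # p x (p - q), orthonormal columns
assert np.allclose(B.T @ B, np.eye(p - q))   # semi-orthogonal
assert np.allclose(B.T @ Lam, 0)             # orthogonal to col(Lambda)

# Since X = Lambda U + eps_x, we get SIV = B'X = B'eps_x, so
# Cov(SIV, U) = B'Lambda = 0: the SIVs are uncorrelated with U.
U = rng.normal(size=(1000, q))
X = U @ Lam.T + rng.normal(size=(1000, p))
SIV = X @ B                             # n x (p - q) synthetic instruments
print(SIV.shape)                        # (1000, 3)
```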
The following proposition confirms that the $SIV$ are valid instruments.
**Proposition 2**
*Under models (1), (3), and Condition A2, the $SIV$ given by Definition 1 serve as valid instrumental variables for estimating the treatment effects of $X$ on $Y$ .*
To identify the causal parameter $\beta$ in the general case, we introduce the following plurality condition A4.
A4. (Plurality rule) Let $C^{*}$ be a subset of $\{1,2,...,p\}$ with cardinality $q$ , and suppose that $\dot{\beta}_{C^{*}}≠ 0$ , so that the hypothesis $\beta_{C^{*}}=0$ is incorrect. The synthetic two-stage least squares coefficient obtained by assuming $\beta_{C^{*}}=0$ is given by $\widetilde{\beta}^{C^{*}}=\underset{\beta∈\mathbb{R}^{p}:\beta_{C^{*}}=0}{\operatorname*{arg\,min}}\,\mathbb{E}\bigl(Y-\widetilde{X}^{\mathrm{\scriptscriptstyle T}}\beta\bigr)^{2},$ where $\widetilde{X}=\mathbb{E}(X\mid SIV)$ . The plurality rule assumes that $\max\limits_{\beta∈\mathbb{R}^{p}}\,|\{C^{*}:\widetilde{\beta}^{C^{*}}=\beta\}|≤ q.$
In Condition A4, each $C^{*}$ corresponds to an expert who makes the incorrect hypothesis that $\beta_{C^{*}}=0$ . Let $s=\|\dot{\beta}\|_{0}$ . In general, there are $C_{p-s}^{q}$ experts making correct hypotheses. If $s<p-q$ , then there are at least $q+1$ experts making correct hypotheses. The plurality rule assumes that no more than $q$ incorrect hypotheses lead to the same synthetic two-stage least squares coefficient. This assumption is similar in spirit to the plurality assumption used in the invalid IV literature (e.g., Guo et al., 2018). We further discuss Assumption A4 in Section S.1.2 of the supplementary material, where we argue that its violation is unlikely.
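The counting above is easy to verify: if $s<p-q$, then $\binom{p-s}{q}\geq\binom{q+1}{q}=q+1$. A one-line numerical check with illustrative values of $p$ and $q$:

```python
from math import comb

# Check: with s = ||beta||_0 < p - q, the number of correct hypotheses
# C(p - s, q) is at least q + 1. The values of p and q are arbitrary.
p, q = 10, 3
for s in range(p - q):                  # s = 0, 1, ..., p - q - 1
    assert comb(p - s, q) >= q + 1
print(comb(p - (p - q - 1), q))         # boundary case s = p - q - 1: exactly q + 1 = 4
```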
In parallel to Observation 2, we have the following theorem.
**Theorem 1 (Synthetic two-stage regularized regression)**
*Suppose that models (1), (3), and Conditions A1, A2, and A4 hold.
1. If A3 holds, then $\dot{\beta}$ is identifiable via $\dot{\beta}=\underset{{\beta}∈\mathbb{R}^{p}}{\operatorname*{arg\,min}}\;\mathbb{E}(Y-\widetilde{X}^{\mathrm{\scriptscriptstyle T}}\beta)^{2},$ subject to $\|\beta\|_{0}<p-q$ .
1. If A3 fails, then $\dot{\beta}$ is not identifiable, and for all $\widetilde{\beta}∈\underset{{\beta}∈\mathbb{R}^{p}}{\operatorname*{arg\,min}}\;\mathbb{E}(Y-\widetilde{X}^{\mathrm{\scriptscriptstyle T}}\beta)^{2}$ , we have $\|\widetilde{\beta}\|_{0}≥ p-q$ .*
An important feature of Theorem 1 is that, given $q$ , it is possible to test the sparsity condition A3 from the observed data. In particular, it shows that under models (1), (3), Conditions A1, A2, and the plurality rule A4, the following three statements are equivalent:
1. $\beta$ is identifiable;
1. Condition A3 holds;
1. The most sparse least-squares solution to the second-stage regression has an $\ell_{0}$ -norm smaller than $p-q$ , i.e.,
$$
\min\limits_{\widetilde{\beta}\in\underset{{\beta}\in\mathbb{R}^{p}}{\operatorname*{arg\,min}}\;\mathbb{E}(Y-\widetilde{X}^{\mathrm{\scriptscriptstyle T}}\beta)^{2}}\|\widetilde{\beta}\|_{0}<p-q. \tag{6}
$$
It is worth noting that (6) can be checked from the observed data distribution, so that one may develop a consistent test for Condition A3 and the identifiability of $\beta$ under models (1), (3), and Conditions A1, A2, and A4. See Algorithm 2 below for more details.
4 Estimation via the synthetic two-stage regularized regression
4.1 Estimation
Let ${\bf X}∈\mathbb{R}^{n× p}$ be the design matrix and ${\bf Y}∈\mathbb{R}^{n× 1}$ denote the observed outcome. Theorem 1 suggests the following synthetic two-stage regularized regression for estimating $\beta$ :
$$
\begin{split}\widehat{\bf X}&=\widehat{\mathbb{E}}({\bf X}\mid\widehat{SIV}),\\
\widehat{\beta}&=\underset{{\beta}\in\mathbb{R}^{p}}{\operatorname*{arg\,min}}{\|{\bf Y}-\widehat{\bf X}\beta\|_{2}^{2}}\quad\text{subject to }\|\beta\|_{0}\leq k,\end{split} \tag{7}
$$
where $k$ is a tuning parameter, $\widehat{\Lambda}$ is an estimator of the loading matrix, and $\widehat{SIV}={\bf X}B_{\widehat{\Lambda}^{\perp}}.$
Several estimators have been proposed to determine the number of latent factors in a factor model. In our simulations and data analysis, we use the estimator developed by Onatski (2010a) to obtain $\widehat{q}$ , as it is applicable in both low- and high-dimensional settings. Likewise, various methods exist for estimating the loading matrix $\Lambda$ . In low-dimensional settings where $p$ is fixed, we recommend the maximum likelihood estimator of $\Lambda$ , obtained by maximizing the log-likelihood under multivariate normality. In high-dimensional settings, we estimate $\Lambda$ using principal component analysis (PCA) (Bai, 2003), which yields a row-consistent estimator for $\Lambda$ , that is, each estimated row $\widehat{\Lambda}_{i·}$ consistently estimates its true counterpart ${\Lambda}_{i·}$ . This approach does not require the covariance matrix $\text{Cov}(\epsilon_{x})$ to be diagonal.
Finally, we use cross-validation to select the tuning parameter $k$ . Algorithm 2 summarizes our estimation procedure.
Algorithm 2 The synthetic two-stage regularized regression
Input: ${\bf X}∈\mathbb{R}^{n× p}$ (centered), ${\bf Y}∈\mathbb{R}^{n× 1}$
1: Obtain $\widehat{q}$ from ${\bf X}$ (e.g., Onatski, 2010a).
2: if $n>p$ then obtain $\widehat{\Lambda}∈\mathbb{R}^{p× \widehat{q}}$ via maximum likelihood estimation, assuming multivariate normality;
3: else let $\widehat{\lambda}_{1}≥\widehat{\lambda}_{2}≥...≥\widehat{\lambda}_{p}$ be the eigenvalues of ${\bf X}^{\mathrm{\scriptscriptstyle T}}{\bf X}/(n-1)$ , and let $\widehat{\xi}_{1},\widehat{\xi}_{2},...,\widehat{\xi}_{p}$ be the corresponding eigenvectors. Define $\widehat{\Lambda}=(\sqrt{\widehat{\lambda}_{1}}\widehat{\xi}_{1}\;...\;\sqrt{\widehat{\lambda}_{\widehat{q}}}\widehat{\xi}_{\widehat{q}})$ .
4: Let $B_{\widehat{\Lambda}^{\perp}}$ be a semi-orthogonal matrix whose columns are orthogonal to the columns of $\widehat{\Lambda}$ . This can be obtained, for example, using the Null function from the MASS package in R.
5: Obtain $\widehat{SIV}={\bf X}B_{\widehat{\Lambda}^{\perp}}$ .
6: Obtain $\widehat{\bf X}=\widehat{\mathbb{E}}({\bf X}\mid\widehat{SIV})$ via the fitted values from ordinary least squares.
7: Obtain $\widehat{\beta}$ via (7), where the tuning parameter $k$ is selected via 10-fold cross-validation.
8: if $\;\widehat{q}+\widehat{k}<p$ then output $\widehat{\beta}$ ;
9: else $\;\beta$ is not identifiable.
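A compact end-to-end sketch of Algorithm 2 on a small simulated problem, under simplifying assumptions: $\widehat{q}$ is taken as known rather than estimated, $\Lambda$ is estimated by PCA in both regimes, and the $\ell_{0}$ step enumerates all size-$k$ subsets instead of selecting $k$ by cross-validation:

```python
import itertools
import numpy as np
from scipy.linalg import null_space

# Simplified sketch of Algorithm 2: q assumed known, Lambda estimated by PCA,
# l0 step by best-subset enumeration (feasible only for small p and k).
def synthetic_two_stage(X, Y, q, k):
    X = X - X.mean(axis=0)                       # variables are centered (Section 2.1)
    Y = Y - Y.mean()
    n, p = X.shape
    # Steps 2-3: PCA estimate of the loading matrix
    eigval, eigvec = np.linalg.eigh(X.T @ X / (n - 1))
    top = np.argsort(eigval)[::-1][:q]
    Lam_hat = eigvec[:, top] * np.sqrt(eigval[top])
    # Steps 4-6: synthetic instruments and first-stage fitted values
    B = null_space(Lam_hat.T)                    # basis orthogonal to col(Lam_hat)
    SIV = X @ B
    X_hat = SIV @ np.linalg.lstsq(SIV, X, rcond=None)[0]
    # Step 7: l0-constrained second stage, ||beta||_0 <= k
    best_beta, best_rss = None, np.inf
    for S in itertools.combinations(range(p), k):
        coef, *_ = np.linalg.lstsq(X_hat[:, list(S)], Y, rcond=None)
        rss = np.sum((Y - X_hat[:, list(S)] @ coef) ** 2)
        if rss < best_rss:
            best_beta = np.zeros(p)
            best_beta[list(S)] = coef
            best_rss = rss
    return best_beta

# Illustrative simulation from models (1) and (3) with g(U) = 2U.
rng = np.random.default_rng(3)
n, p, q = 100_000, 6, 1
Lam = np.ones((p, q))
beta_true = np.zeros(p)
beta_true[3] = 1.0                               # sparse causation: one active exposure
U = rng.normal(size=(n, q))
X = U @ Lam.T + rng.normal(size=(n, p))
Y = X @ beta_true + 2.0 * U[:, 0] + rng.normal(size=n)
beta_hat = synthetic_two_stage(X, Y, q, k=1)
print(np.flatnonzero(beta_hat))                  # the active exposure (index 3) is selected
```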
4.2 Theoretical properties
In this section, we study the theoretical properties of the estimator $\widehat{\beta}$ in Algorithm 2. We consider two paradigms: (1) low-dimensional settings, where the dimension of exposure $p$ is fixed; and (2) high-dimensional settings, where $p$ grows with the sample size $n$ . For the former, we show that under mild regularity conditions, $\widehat{\beta}$ is $\sqrt{n}$ -consistent. For the latter, we show that under mild regularity conditions, $\widehat{\beta}$ achieves a Lasso-type error bound. We also demonstrate variable selection consistency in both scenarios. In our theoretical results, we do not require $\widehat{\Lambda}=\Lambda$ ; we only need certain norms of $\widehat{\Lambda}$ to be consistent with those of $\Lambda$ , which can be achieved by classical estimators. We first introduce assumptions for the low-dimensional case.
**Assumption 1**
*(Assumptions for fixed $p$ )
1. All coefficients ${\Lambda}$ , $\beta$ , and the function $g(·)$ in models (1) and (3) are fixed and do not change as $n→∞$ .
1. $U_{i}$ , $\epsilon_{x,i}$ , and $\epsilon_{y,i}$ are independent random draws from the joint distribution of $(U,\epsilon_{x},\epsilon_{y})$ such that $E(\epsilon_{x})=\bm{0}$ , $E(U)=\bm{0}$ , $\text{Cov}(\epsilon_{x})=D$ , $\text{Cov}(U)=I_{q}$ , and $(U,\epsilon_{x},\epsilon_{y})$ are mutually independent. Furthermore, assume that $\text{Var}(\epsilon_{y})=\sigma^{2}$ and $\max_{1≤ j≤ p}\text{Var}(X_{j})=\sigma_{x}^{2}$ ; these parameters are fixed and do not change as $n→∞$ .
1. For the maximum likelihood estimator $\widehat{\Lambda}$ , there exists an orthogonal matrix $O∈\mathbb{R}^{q× q}$ such that $\|\widehat{\Lambda}-\Lambda O\|_{2}=O_{p}(1/\sqrt{n})$ .
1. Let $\Sigma_{\widetilde{X}}=\text{Cov}(\widetilde{X})$ . We assume $\min_{\theta∈\mathbb{R}^{p},\,0<\|\theta\|_{0}≤ 2s}\frac{\theta^{\mathrm{\scriptscriptstyle T}}\Sigma_{\widetilde{X}}\theta}{\|\theta\|_{2}^{2}}>c$ for some positive constant $c$ .*
Conditions B1 – B2 are standard assumptions for the low-dimensional setting. Given Condition A2, Condition B3 requires that the estimator for factor loadings is root- $n$ consistent. Condition B4 is the population version of the sparse eigenvalue condition (Raskutti et al., 2011, Assumption 3(b)).
Under these conditions, $\widehat{\beta}$ is root- $n$ consistent and achieves consistency in variable selection.
**Theorem 2**
*Under Conditions A1 – A4 and B1 – B4, if the tuning parameter satisfies $\widehat{k}=s$ , the following holds:
1. ( $\ell_{1}$ -error rate) $\|\widehat{\beta}-\dot{\beta}\|_{1}=O_{p}(n^{-1/2}).$
1. (Variable selection consistency) Let $\mathcal{A}=\{j:\dot{\beta}_{j}≠ 0\}$ and $\widehat{\mathcal{A}}=\{j:\widehat{\beta}_{j}≠ 0\}$ . Then $\mathbb{P}(\widehat{\mathcal{A}}=\mathcal{A})→ 1$ as $n→∞.$*
In Theorem 2, it is assumed that $\widehat{k}=s$ . This is a standard condition in the $\ell_{0}$ -optimization literature (e.g., Raskutti et al., 2011; Shen et al., 2013).
Next, we consider the high-dimensional case and demonstrate that our estimator exhibits properties similar to those of standard regularized estimators in the high-dimensional statistics literature, including a Lasso-type error bound and consistency in variable selection. We impose the following regularity conditions.
**Assumption 2**
*(Assumptions for diverging $p$ )
1. $sq^{2}\log(p)\log(n)/n→ 0$ , $n=O(p)$ , and $q+\log(p)\lesssim\sqrt{n}$ , where $x\lesssim y$ means there exists a constant $C$ such that $x≤ Cy$ .
1. The expectation $\gamma:=\mathbb{E}(Ug(U))∈\mathbb{R}^{q}$ , the variance $\sigma_{g}^{2}=\text{Var}(g(U))$ , and the covariance $\Gamma:=\text{Var}(Ug(U))∈\mathbb{R}^{q× q}$ exist. For a matrix $M$ , let $\lambda_{\max}(M)$ and $\lambda_{\min}(M)$ denote the maximum and minimum eigenvalues of $M$ . There exist positive constants $C_{1}$ , $C_{2}$ , and $C_{3}$ such that $0<C_{1}≤\min\{\lambda_{\min}(D),\lambda_{\min}(\Lambda^{\mathrm{\scriptscriptstyle T}}\Lambda/p)\}≤\max\{\lambda_{\max}(D),\lambda_{\max}(\Lambda^{\mathrm{\scriptscriptstyle T}}\Lambda/p)\}≤ C_{2}<∞,$ and $\max\{\|\gamma\|_{2},\;\sigma_{g}^{2},\;\text{Trace}(\Gamma)\}≤ C_{3}.$
1. Assume the random variables in models (1) and (3) satisfy $E(X)=\bm{0}$ , $E(U)=\bm{0}$ , and $\text{Cov}(U)=I_{q}$ . We also assume $\epsilon_{y}$ is independent of $(X,U)$ and $\epsilon_{x}$ is independent of $U$ . Furthermore, assume $\epsilon_{y}$ , $\epsilon_{x,j}$ , and $X_{j}$ are sub-Gaussian random variables with sub-Gaussian parameters $\sigma^{2}$ , $\widetilde{\sigma}_{j}^{2}$ , and $\sigma_{j}^{2}$ , respectively. The parameters satisfy $\sigma^{2}≤ C_{4}$ , and $C_{5}≤\widetilde{\sigma}_{j}^{2},\sigma_{j}^{2}≤ C_{6}$ for some constants $C_{4},C_{5},C_{6}>0$ .
1. There exist positive constants $C_{7}$ and $C_{8}$ such that $\underset{i∈\mathcal{A}}{\min}|\dot{\beta}_{i}|≥ n^{C_{7}-1/2}$ and $s^{2}(q+1)^{2}\log{p}≤ n^{2C_{7}-C_{8}}$ .*
Condition C1 allows the number of exposures $p$ to grow exponentially with the sample size, while the number of latent confounders $q$ grows at a slower polynomial rate. Condition C2 is a standard assumption in high-dimensional factor analysis (Fan et al., 2013a; Shen et al., 2016) for loading identification. Condition C3 assumes that the exposures $X_{j}$ are sub-Gaussian and that the noise level is bounded. Condition C4 is a standard assumption on minimum signal strength.
**Theorem 3**
*Assume that Conditions A1 – A4 and C1 – C3 hold, and that the tuning parameter satisfies $\widehat{k}=s$ . Then:
1. ( $\ell_{1}$ -error rate) $\|\widehat{\beta}-\dot{\beta}\|_{1}=O_{p}\left(s(q+1)\sqrt{\frac{\log(p)}{n}}\right).$
1. (Variable selection consistency) Under Condition C4, $\mathbb{P}(\widehat{\mathcal{A}}=\mathcal{A})→ 1$ as $n→∞.$*
**Remark 3**
*The first part of Theorem 3 differs from Theorem 1 in Ćevid et al. (2020a). Their theoretical result relies on the following linear model and decomposition:
$$
Y=X^{\mathrm{\scriptscriptstyle T}}\beta+U^{\mathrm{\scriptscriptstyle T}}\gamma+\epsilon_{y}=X^{\mathrm{\scriptscriptstyle T}}\beta+X^{\mathrm{\scriptscriptstyle T}}b+(U^{\mathrm{\scriptscriptstyle T}}\gamma-X^{\mathrm{\scriptscriptstyle T}}b)+\epsilon_{y},
$$
where $X^{\mathrm{\scriptscriptstyle T}}b$ is the best linear approximation of $U^{\mathrm{\scriptscriptstyle T}}\gamma$ based on $X$ . Their result depends on (i) $\|b\|_{2}=O\left({1}/{\sqrt{p}}\right)$ and (ii) the term $(U^{\mathrm{\scriptscriptstyle T}}\gamma-X^{\mathrm{\scriptscriptstyle T}}b)+\epsilon_{y}$ being independent of $X$ under their joint Gaussian assumption. In general, these conditions fail to hold under model (3), where $g(U)$ is an unknown function.*
4.3 Extension to nonlinear settings
In this section, we extend the SIV method to address scenarios where the treatment $X$ has nonlinear effects on the outcome $Y$ . Revisiting model (3), we remove the assumption of linearity, allowing both the treatment $X$ and the unmeasured confounder $U$ to influence the outcome $Y$ through nonlinear relationships:
$$
Y=f(X;\beta)+g(U)+\epsilon_{y} \tag{8}
$$
In this model, the treatment influences the outcome through the nonlinear causal function $f(·;\beta)$ , with $\beta∈\mathbb{R}^{p}$ as the parameter of interest. Our focus is on estimating the parameter $\beta$ .
The key observation is that, under model (1), the synthetic instruments $\text{SIV}=B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}\epsilon_{x}∈\mathbb{R}^{p-q}$ are linear combinations of $\epsilon_{x}$ , which are independent of both $g(U)$ , for any measurable $g$ , and $\epsilon_{y}$ . Consequently, we have the following vector equation:
$$
\mathbb{E}\{SIV(Y-f(X;\beta))\}=\mathbb{E}\{SIV(g(U)+\epsilon_{y})\}=0,
$$
when $\beta$ is set to its true value. Following this, we define the population GMM loss:
$$
G(\beta)=\left\|\mathbb{E}\{SIV(Y-f(X;\beta))\}\right\|_{2}^{2}. \tag{9}
$$
For the function $f(X;\beta)$ , let $∂ f(X;\beta)/∂\beta$ denote the $p× 1$ column vector $({∂ f(X;\beta)}/{∂\beta_{1}}$ $,...,{∂ f(X;\beta)}/{∂\beta_{p}})^{\mathrm{\scriptscriptstyle T}}$ , and let $∂ f(X;\beta)/∂\beta^{\mathrm{\scriptscriptstyle T}}$ denote the $1× p$ row vector $({∂ f(X;\beta)}/{∂\beta_{1}},...,$ ${∂ f(X;\beta)}/{∂\beta_{p}})$ . We make the following assumptions, denoted as D1 and D2, which are generalizations of Conditions A1 and A4 in the nonlinear setting.
1. (Invertibility) The matrix $\mathbb{E}(X{∂ f(X;\beta)}/{∂\beta^{\mathrm{\scriptscriptstyle T}}})∈\mathbb{R}^{p× p}$ is invertible, and any $q× q$ submatrix of $\mathbb{E}^{-1}(X{∂ f(X;\beta)}/{∂\beta^{\mathrm{\scriptscriptstyle T}}})\Lambda$ is also invertible for any $\beta$ .
1. (Nonlinear plurality rule) Let $C^{*}$ be a subset of $\{1,2,...,p\}$ with cardinality $q$ and $\dot{\beta}_{C^{*}}≠ 0$ , and let the synthetic GMM estimator obtained by assuming $\beta_{C^{*}}=0$ be $\widetilde{\beta}^{C^{*}}=\underset{\beta∈\mathbb{R}^{p}:\beta_{C^{*}}=0}{\operatorname*{arg\,min}}G(\beta).$ The plurality rule assumes that $\widetilde{\beta}^{C^{*}}$ is uniquely defined and that $\max\limits_{\beta∈\mathbb{R}^{p}}|\{C^{*}:\widetilde{\beta}^{C^{*}}=\beta\}|≤ q.$
The following theorem states that $\beta$ is identifiable using synthetic GMM in a manner parallel to Theorem 1, but within a nonlinear setting.
**Theorem 4 (Synthetic Generalized Method of Moments)**
*Suppose that models (1), (3), and Conditions A2, D1, and D2 hold.
1. If A3 holds, then $\dot{\beta}$ is identifiable via $\dot{\beta}=\underset{{\beta}∈\mathbb{R}^{p}}{\operatorname*{arg\,min}}\;G(\beta)$ , subject to $\|\beta\|_{0}<p-q$ .
1. If A3 fails, then $\dot{\beta}$ is not identifiable, and for all $\widetilde{\beta}∈\underset{{\beta}∈\mathbb{R}^{p}}{\operatorname*{arg\,min}}\;G(\beta)$ , we have $\|\widetilde{\beta}\|_{0}≥ p-q$ .*
We now discuss the estimation of $\beta$ in finite samples. A natural approach is to replace the expectation in equation (9) with its empirical counterpart. Specifically, we consider the following loss function:
$$
G_{n}(\beta)=\left(\frac{1}{n}\sum_{i=1}^{n}[SIV_{i}\{Y_{i}-f(X_{i};\beta)\}]\right)^{\mathrm{\scriptscriptstyle T}}W\left(\frac{1}{n}\sum_{i=1}^{n}[SIV_{i}\{Y_{i}-f(X_{i};\beta)\}]\right),
$$
where $W∈\mathbb{R}^{(p-q)×(p-q)}$ is a weight matrix. The oracle weight matrix that achieves the highest efficiency in GMM is the inverse of $\text{Cov}(\text{SIV})$ (Burgess et al., 2017, Eq. 17). In practice, we estimate $W$ using the inverse of the empirical covariance, $\widehat{\text{Cov}}^{-1}(\widehat{\text{SIV}})$ .
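Concretely, the empirical loss $G_{n}(\beta)$ with the plug-in weight $\widehat{\text{Cov}}^{-1}(\widehat{\text{SIV}})$ can be written as a short NumPy function. This is an illustrative sketch only; the function name and the callable interface for $f$ are our own choices.

```python
import numpy as np

def gmm_loss(beta, X, Y, SIV, f):
    """Empirical GMM loss G_n(beta) with the estimated oracle weight matrix.

    f(X, beta) returns the (n,) vector of fitted values f(X_i; beta).
    """
    n = X.shape[0]
    resid = Y - f(X, beta)                         # (n,)
    m = SIV.T @ resid / n                          # sample moment vector, (p - q,)
    W = np.linalg.inv(np.cov(SIV, rowvar=False))   # inverse empirical Cov(SIV)
    return float(m @ W @ m)
```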
Finally, let $\bm{X}∈\mathbb{R}^{n× p}$ , $\bm{Y}∈\mathbb{R}^{n× 1}$ , $f(\bm{X};\beta)∈\mathbb{R}^{n× 1}$ , and $\bm{SIV}∈\mathbb{R}^{n×(p-q)}$ denote the relevant random matrices in the finite-sample setting. In the low-dimensional setting, where the number of instruments $p-q$ is fixed and smaller than $n$ , we consider the following optimization problem:
$$
\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{p}}\left\|\bm{SIV}({\bm{SIV}^{\mathrm{\scriptscriptstyle T}}\bm{SIV}})^{-1}{\bm{SIV}^{\mathrm{\scriptscriptstyle T}}(\bm{Y}-f(\bm{X};\beta))}\right\|^{2}_{2}\quad\text{subject to }\|\beta\|_{0}\leq k. \tag{10}
$$
The unconstrained optimization in (10) is also referred to as the nonlinear two-stage least squares estimator in the literature (Amemiya, 1974). Under the linear setting where $f(\bm{X};\beta)=\bm{X}\beta$ , equation (10) has a similar form to (7). Optimization (10) can be accelerated using a splicing approach, which is designed to efficiently solve the best subset selection problem (Zhu et al., 2020; Zhang et al., 2023). We establish the theoretical properties of (10) in Section S.5 of the supplementary material, showing that under mild regularity conditions, the proposed estimator achieves both consistency and variable selection consistency in the low-dimensional setting.
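In the linear setting $f(\bm{X};\beta)=\bm{X}\beta$ , the objective in (10) reduces to a best-subset problem on the SIV-projected response and design, which for small $p$ can be solved by brute-force enumeration rather than the splicing algorithm. The following NumPy sketch is our own illustration of this special case.

```python
import itertools
import numpy as np

def l0_2sls(X, Y, SIV, k):
    """Brute-force solution of (10) when f(X; beta) = X beta (small p only)."""
    n, p = X.shape
    # Projecting Y - X beta onto the column space of SIV is equivalent to
    # replacing Y and X by their SIV-fitted values and then minimizing in beta.
    coef_x, *_ = np.linalg.lstsq(SIV, X, rcond=None)
    X_hat = SIV @ coef_x
    coef_y, *_ = np.linalg.lstsq(SIV, Y, rcond=None)
    Y_hat = SIV @ coef_y
    best_beta, best_loss = np.zeros(p), float(np.sum(Y_hat ** 2))
    for size in range(1, k + 1):                       # all supports of size <= k
        for support in itertools.combinations(range(p), size):
            cols = list(support)
            c, *_ = np.linalg.lstsq(X_hat[:, cols], Y_hat, rcond=None)
            loss = float(np.sum((Y_hat - X_hat[:, cols] @ c) ** 2))
            if loss < best_loss:
                best_loss = loss
                best_beta = np.zeros(p)
                best_beta[cols] = c
    return best_beta
```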
**Remark 4**
*In high-dimensional settings where the dimension of $\operatorname{SIV}$ , namely $p-q$ , exceeds the sample size $n$ , the matrix $\operatorname{SIV}^{\mathrm{\scriptscriptstyle T}}\operatorname{SIV}$ is not invertible. In this case, one may follow the approach of Belloni et al. (2018, p. 35) and choose the weight matrix $W$ as
$$
W=\operatorname{diag}\!\left(\operatorname{Var}\!\left(\mathbb{E}_{n}\!\left[\operatorname{SIV}(Y-f(X;\widetilde{\beta}))\right]\right)\right)^{-1},
$$
where $\widetilde{\beta}$ denotes an initial estimator of $\beta$ .*
5 Simulation studies
In this section, we evaluate the numerical performance of the proposed SIV method and compare it with other methods across various scenarios. First, we assess the performance of the SIV method under a linear outcome model in Section 5.1. Next, we evaluate the algorithm in the context of a nonlinear outcome model in Section 5.2.
We provide additional simulation results in the supplementary material. In Section S.7.2, we present simulation results for various estimators under dense confounding with many weak effects. In Section S.7.3, we explore an alternative moment selection estimator, inspired by the work of Andrews (1999a), and compare it with our proposed estimator. Section S.7.6 discusses the construction of confidence intervals for selected causal variables using the ivreg function. Finally, in Section S.7.7, we extend the SIV algorithm to confounded count data, where the effect of the unmeasured confounder on the outcome cannot be additively separated from the effect of the treatment.
5.1 Simulation studies with a linear outcome model
We begin by evaluating the SIV estimator within a linear outcome model. The model is defined by $f(X;\beta)=X^{\mathrm{\scriptscriptstyle T}}\beta$ and $g(U)=U^{\mathrm{\scriptscriptstyle T}}\gamma$ . In our simulations, we let $q=3$ , $s=5$ , and $\beta=(1,1,1,1,1,0,...,0)^{\mathrm{\scriptscriptstyle T}}∈\mathbb{R}^{p}$ . Each element $\Lambda_{j,k}$ and $\gamma_{k}$ is independently generated from $\text{Uniform}(-1,1)$ for $j=1,...,p$ and $k=1,...,q$ . The hidden variables $U_{i,k}$ follow i.i.d. standard normal distributions for $i=1,...,n$ and $k=1,...,q$ . The random errors are generated as $\epsilon_{x}\sim\mathcal{N}(0,\sigma_{x}^{2}I_{p})$ and $\epsilon_{y}\sim\mathcal{N}(0,\sigma^{2})$ , where $\sigma_{x}=2$ and $\sigma=5$ . We evaluate the performance of our method under the following two settings: (i) low-dimensional cases: $p=100$ and $n∈\{200,600,1000,...,5000\}$ ; (ii) high-dimensional cases: $n=500$ and $p∈\{500,750,1000,...,3000\}$ . All simulation results are based on $1000$ Monte Carlo runs. The data-generating mechanism is designed to mimic key features of the real application in Section 6; see Section S.7.1 of the supplementary material for a comparison between the real and synthetic data.
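For concreteness, one draw from this data-generating mechanism can be produced as follows (a NumPy sketch; the function signature and seeding are our own choices):

```python
import numpy as np

def simulate(n, p, q=3, s=5, sigma_x=2.0, sigma=5.0, seed=0):
    """Generate one dataset from the linear simulation design of Section 5.1."""
    rng = np.random.default_rng(seed)
    beta = np.zeros(p)
    beta[:s] = 1.0                                # beta = (1,...,1,0,...,0)
    Lam = rng.uniform(-1.0, 1.0, size=(p, q))     # factor loadings
    gam = rng.uniform(-1.0, 1.0, size=q)          # confounding effects
    U = rng.normal(size=(n, q))                   # hidden confounders
    eps_x = sigma_x * rng.normal(size=(n, p))
    eps_y = sigma * rng.normal(size=n)
    X = U @ Lam.T + eps_x                         # exposure model (1)
    Y = X @ beta + U @ gam + eps_y                # outcome model (3), g(U) = U'gamma
    return X, Y, beta
```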
We compare the following methods in our simulations.
1. (SIV) We implement Algorithm 2 and determine $\widehat{q}$ using the method proposed by Onatski (2010a). A detailed discussion of Onatski (2010a)'s method is provided in Section S.1.4 of the supplementary material. For cases where $p≤ 30$ , we employ a full best subset selection routine to solve the $\ell_{0}$ -optimization problem. When $p>30$ , we utilize the adaptive best subset selection method implemented in the abess function in R.
1. (Lasso, Tibshirani, 1996): We implement the Lasso using the glmnet function in R, with the tuning parameter selected via 10-fold cross-validation.
1. (Trim, Ćevid et al., 2020a): We implement Ćevid et al. (2020a)'s method using the code available from https://github.com/zijguo/Doubly-Debiased-Lasso, with an update detailed in Section S.7.8.1 of the supplementary material.
1. (Null, Miao et al., 2023a): For the low-dimensional settings, we implement Miao et al. (2023a)'s method using the code available from https://www.tandfonline.com/doi/suppl/10.1080/01621459.2021.2023551. For the high-dimensional settings, their method cannot be applied directly because $\xi$ in their procedure cannot be estimated by ordinary least squares. Therefore, we replace ordinary least squares with the Lasso, with the tuning parameter selected by 10-fold cross-validation.
1. (IV-Lasso) Motivated by a reviewer’s suggestion, we consider the following two-step “IV-Lasso” procedure. First, we apply the Lasso as a screening step to identify candidate causal predictors of $Y$ . Specifically, we solve
$$
\widetilde{\beta}=\underset{\beta\in\mathbb{R}^{p}}{\arg\min}\left\{\|{\bf Y}-\widehat{{\bf X}}\beta\|_{2}^{2}+\lambda\|\beta\|_{1}\right\},
$$
where $\widehat{\bf X}=\widehat{\mathbb{E}}({\bf X}\mid\widehat{\text{SIV}})$ . Let $\widehat{\mathcal{A}}=\{j:\widetilde{\beta}_{j}≠ 0\}$ denote the set of variables selected by the Lasso. In the second step, we fit a linear model of $Y$ on $\widehat{X}_{\widehat{\mathcal{A}}}$ using ordinary least squares, yielding estimates $\widehat{\beta}^{\;\text{IV-Lasso}}_{\widehat{\mathcal{A}}}$ . We set $\widehat{\beta}^{\;\text{IV-Lasso}}_{\widehat{\mathcal{A}}^{c}}=0$ .
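A minimal sketch of this two-step procedure, with a plain proximal-gradient (ISTA) lasso and a fixed penalty `lam` standing in for glmnet's cross-validated fit; the function name and interface are our own.

```python
import numpy as np

def iv_lasso(X_hat, Y, lam, n_iter=2000):
    """Two-step IV-Lasso sketch: lasso screening on X_hat, then an OLS refit."""
    n, p = X_hat.shape
    # Step 1: lasso on the instrumented design via proximal gradient (ISTA),
    # minimizing (1/2n)||Y - X_hat beta||^2 + lam * ||beta||_1.
    step = 1.0 / (np.linalg.norm(X_hat, 2) ** 2 / n)   # 1 / Lipschitz constant
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X_hat.T @ (X_hat @ beta - Y) / n
        z = beta - step * grad
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    A = np.flatnonzero(beta)                           # selected support
    # Step 2: OLS refit of Y on the selected columns; zeros elsewhere.
    out = np.zeros(p)
    if A.size:
        coef, *_ = np.linalg.lstsq(X_hat[:, A], Y, rcond=None)
        out[A] = coef
    return out
```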
[Line chart: $\ell_{1}$ estimation error versus $n$ ; panel title "Low-d case: p = 100, q = 3, s = 5".]
(a) $p=100$ , and $n$ varies from $200$ to $5000$ .
[Line chart: $\ell_{1}$ estimation error versus $p$ ; panel title "High-d case: n = 500, q = 3, s = 5".]
(b) $n=500$ , and $p$ varies from $500$ to $3000$ .
Figure 2: Estimation errors $\|\widehat{\beta}-\beta\|_{1}$ for SIV ( $\blacksquare$ , blue), Lasso ( $\CIRCLE$ , red), Trim ( $\blacktriangle$ , purple), Null ( $\square$ , green), and IV-Lasso ( $×$ , grey), based on $1000$ Monte Carlo runs.
[Line chart: false discovery rate versus $n$ ; panel title "Low-d case: p = 100, q = 3, s = 5".]
(a) $p=100$ , and $n$ varies from $200$ to $5000$ .
[Line chart: false discovery rate, high-dimensional case ( $q = 3$ , $s = 5$ ).]
(b) $n=500$ , and $p$ varies from $500$ to $3000$ .
Figure 3: False discovery rate (FDR) for SIV ( $\blacksquare$ , blue), Lasso ( $\CIRCLE$ , red), Trim ( $\blacktriangle$ , purple), Null ( $\square$ , green), and IV-Lasso ( $×$ , grey), based on $1000$ Monte Carlo runs.
We present the $\ell_{1}$ -estimation errors of the five methods in Figure 2. In low-dimensional cases, as illustrated in Figure 2(a), the bias of the Lasso estimator stabilizes as the sample size grows. This is expected: the Lasso estimator does not account for unmeasured confounding. The Lasso, Trim, and Null methods all exhibit considerable $\ell_{1}$ -estimation bias relative to the other methods. Compared to IV-Lasso, our SIV estimator demonstrates superior performance. This advantage arises from the $\ell_{0}$ -optimization employed in the SIV method, which enables more accurate variable selection, as illustrated later in Figure 3(a).
To ensure a fair comparison across all methods, we use cross-validation to select $\lambda$ . However, it is well known that cross-validation can lead to overfitting and an increased false discovery rate. In practice, researchers may adopt the “one-standard-error” rule, which selects the largest $\lambda$ such that the cross-validation error remains within one standard error of its minimum (Hastie et al., 2009; Kang et al., 2016). In the simulation settings we have considered, we find that the “one-standard-error rule” allows the IV-Lasso method to perform as well as the SIV method. See Section S.7.5 of the supplementary material for details on how the “one-standard-error” rule improves the empirical performance of IV-Lasso.
For the high-dimensional settings, we can see from Figure 2(b) that our SIV method consistently outperforms the comparison methods. As discussed in Section 1, the true correlations between $X$ and $Y$ are non-sparse. This explains the large estimation errors of the Lasso method. The Null method exhibits even larger biases than the Lasso method. The Trim method, designed specifically for high-dimensional settings, outperforms both the Lasso and Null methods. However, our estimator still shows a much smaller bias than the Trim estimator. In Section S.7.8 of the supplementary material, we provide additional discussion, simulations, and comments on why the Trim estimator performs less favorably compared to our estimator.
We also observe from Figure 2(b) that the IV-Lasso estimator underperforms the naive Lasso estimator in high-dimensional settings. This occurs because the naive Lasso applies $\ell_{1}$ -penalization, which shrinks the incorrectly selected coefficients $\widehat{\beta}_{i}$ toward zero. In contrast, the second step of IV-Lasso undoes this shrinkage, allowing the incorrectly selected coefficients to move away from zero and thereby increasing the estimation bias.
Since the underlying $\beta$ is sparse, we also report the variable selection performance of all the methods in Figure 3. All these methods correctly classify the true causes of the outcome as active exposures, that is, $\widehat{\mathcal{A}}\supseteq\mathcal{A}$ . Thus, we only report the average false discovery rates, defined as $\#\{\widehat{\mathcal{A}}\setminus\mathcal{A}\}/\#\widehat{\mathcal{A}}$ , across 1000 Monte Carlo runs. It is evident that our proposal achieves the lowest false discovery rate among all the methods in both the low- and high-dimensional settings.
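Given an estimate and the true coefficient vector, the reported false discovery rate is simple to compute; a small helper (the 0/0 convention below is ours):

```python
import numpy as np

def false_discovery_rate(beta_hat, beta_true):
    """FDR = #(selected minus true) / #selected, with 0/0 treated as 0."""
    selected = np.flatnonzero(beta_hat)
    true_support = np.flatnonzero(beta_true)
    if selected.size == 0:
        return 0.0
    return np.setdiff1d(selected, true_support).size / selected.size
```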
We further evaluate the performance of our proposed algorithm in settings with non-diagonal $\text{Cov}(\epsilon_{x})$ . We generate $\epsilon_{x,i}\sim\mathcal{N}(0,D)$ . For the low-dimensional setting, we randomly select 20 pairs of $i,j∈\{1,2,...,p\}$ and set $D_{i,j}=D_{j,i}=1$ . The list of pairs is provided in Section S.7.4 of the supplementary material. In high-dimensional settings, we set $D_{i,j}=4× 0.3^{|i-j|}$ . All other aspects of the data-generating mechanism remain unchanged. In the low-dimensional scenario, we use the Robust Principal Component Analysis method (Candès et al., 2011) to estimate the low-dimensional structure of the covariance matrix and factor loadings. For the high-dimensional scenario, we directly use the Principal Component Analysis method to estimate factor loadings. These are implemented using the rpca package and the prcomp function in R, respectively.
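The high-dimensional covariance $D_{i,j}=4× 0.3^{|i-j|}$ is a scaled AR(1)-type Toeplitz matrix and can be built in a vectorized way; a short sketch (the function name is ours):

```python
import numpy as np

def ar_like_cov(p, rho=0.3, scale=4.0):
    """Covariance matrix with entries D_ij = scale * rho^|i - j|."""
    idx = np.arange(p)
    return scale * rho ** np.abs(np.subtract.outer(idx, idx))
```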
The simulation results, presented in Figures 4 and 5, illustrate the $\ell_{1}$ -estimation errors and false discovery rates for various methods. These methods exhibit similar trends in $\ell_{1}$ -error performance as observed in the previous setting with uncorrelated errors. The Trim method shows a slight improvement, achieving lower $\ell_{1}$ -errors and false discovery rates compared to the previous simulation results. The performance of the SIV method remains consistent with its performance when $D$ is a diagonal matrix. Additionally, our method continues to outperform the other comparison methods.
[Line chart: $\ell_{1}$ estimation error versus $n$ ; panel title "Low-d case: p = 100, q = 3, s = 5" (correlated errors).]
(a) $p=100$ , and $n$ varies from $200$ to $5000$ .
(Figure panel: High-d case, $n = 500$ , $q = 3$ , $s = 5$ ; $\ell_{1}$ estimation error versus $p$ .)
(b) $n=500$ , and $p$ varies from $500$ to $3000$ .
Figure 4: Estimation errors $\|\widehat{\beta}-\beta\|_{1}$ with non-diagonal $D$ for SIV ($\blacksquare$, blue), Lasso ($\CIRCLE$, red), Trim ($\blacktriangle$, purple), Null ($\square$, green), and IV-Lasso ($×$, grey), based on $1000$ Monte Carlo runs.
(Figure panel: Low-d case, $p = 100$ , $q = 3$ , $s = 5$ ; FDR versus $n$ .)
(a) $p=100$ , and $n$ varies from $200$ to $5000$ .
(Figure panel: High-d case, $n = 500$ , $q = 3$ , $s = 5$ ; FDR versus $p$ .)
(b) $n=500$ , and $p$ varies from $500$ to $3000$ .
Figure 5: False discovery rate (FDR) with non-diagonal $D$ for SIV ($\blacksquare$, blue), Lasso ($\CIRCLE$, red), Trim ($\blacktriangle$, purple), Null ($\square$, green), and IV-Lasso ($×$, grey), based on $1000$ Monte Carlo runs.
5.2 Simulation studies with non-linear outcome models
We then evaluate the performance of our proposed estimator (10) with nonlinear outcome models. We set $q=2$ , $s=2$ , and $p=10$ . We consider two different settings for $f(X)$ . In the first setting, $f(X;\beta)=\sum^{10}_{j=1}X_{j}^{3}\beta_{j}$ with $\beta=(0.3,0.3,0,0,...,0)^{\mathrm{\scriptscriptstyle T}}∈\mathbb{R}^{10}$ and $g(U)=U_{1}^{3}\gamma_{1}+U_{2}^{3}\gamma_{2}$ . In the second setting, $f(X;\beta)=\exp(X^{\mathrm{\scriptscriptstyle T}}\beta)$ and $g(U)=(U^{3})^{{\mathrm{\scriptscriptstyle T}}}\gamma$ . Each element in $\Lambda_{j,k}$ and $\gamma_{k}$ is independently generated from $\mathbb{N}(0,1)$ for $j=1,...,p$ and $k=1,...,q$ . The hidden variables $U_{i,k}$ follow i.i.d. standard normal distributions for $i=1,...,n$ and $k=1,...,q$ . The random errors are generated as $\epsilon_{x}\sim\mathbb{N}(0,\sigma^{2}_{x}I_{p})$ and $\epsilon_{y}\sim\mathbb{N}(0,\sigma^{2})$ , where $\sigma_{x}=2$ and $\sigma=1$ . We evaluate the performance of our estimators with $n∈\{1000,...,5000\}$ . All simulation results are based on 1000 Monte Carlo runs.
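The first nonlinear data-generating mechanism can be sketched in a few lines of numpy; the seed is illustrative, and $n$ is fixed at a single value for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q, s = 1000, 10, 2, 2
sigma_x, sigma = 2.0, 1.0

beta = np.zeros(p)
beta[:s] = 0.3                                 # beta = (0.3, 0.3, 0, ..., 0)
Lam = rng.standard_normal((p, q))              # factor loadings Lambda_{j,k}
gamma = rng.standard_normal(q)                 # confounder effects gamma_k

U = rng.standard_normal((n, q))                # latent confounders
X = U @ Lam.T + sigma_x * rng.standard_normal((n, p))

# Setting 1: f(X; beta) = sum_j X_j^3 beta_j and g(U) = U_1^3 g_1 + U_2^3 g_2
Y = (X ** 3) @ beta + (U ** 3) @ gamma + sigma * rng.standard_normal(n)
```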
For comparison, we did not implement the other methods from Section 5.1, as they are not designed for nonlinear outcome models. Instead, we consider a popular approach for addressing endogeneity in high-dimensional settings (e.g., Wang and Blei, 2019; Ouyang et al., 2023; Fan et al., 2024), which first estimates the unmeasured confounders $U$ and then directly adjusts for the estimate $\widehat{U}$ in the regression model. Specifically, we compare the proposed SIV method with the following two variations of this so-called U-hat method, with the tuning parameter $k$ selected using 10-fold cross-validation in all cases:
1. (SIV): We obtain $\widehat{\beta}$ from (10). The $\ell_{0}$ -optimization is accelerated using the splicing technique (Zhang et al., 2023).
2. (U-hat1): First, we obtain $\widehat{\bm{U}}∈\mathbb{R}^{n× q}$ using the equation $\widehat{\bm{U}}=\bm{X}\widehat{\text{Cov}}(X)^{-1}\widehat{\Lambda}$ . Next, we obtain $\widehat{\beta}$ by solving the following optimization problem:
$$
(\widehat{\beta},\widehat{\gamma})=\underset{\beta\in\mathbb{R}^{p},\gamma\in\mathbb{R}^{q}}{\operatorname*{arg\,min}}\|\bm{Y}-f(\bm{X};\beta)-\widehat{\bm{U}}\gamma\|_{2}^{2}\quad\text{subject to}\quad\|\beta\|_{0}\leq k.
$$
3. (U-hat2): We first obtain $\widehat{\bm{U}}∈\mathbb{R}^{n× q}$ similarly. We then obtain $\widehat{\beta}$ by solving the following optimization problem:
$$
(\widehat{\beta},\widehat{\gamma})=\underset{\beta\in\mathbb{R}^{p},\gamma\in\mathbb{R}^{q}}{\operatorname*{arg\,min}}\|\bm{Y}-f(\bm{X};\beta)-\widehat{\bm{U}}^{3}\gamma\|_{2}^{2}\quad\text{subject to}\quad\|\beta\|_{0}\leq k,
$$
where $\widehat{\bm{U}}^{3}∈\mathbb{R}^{n× q}$ is defined as $\{\widehat{\bm{U}}^{3}\}_{i,j}=\{\widehat{\bm{U}}_{i,j}\}^{3}$ . Note that this method assumes knowledge of the specific form of $g(U)$ , which is generally not available in practice.
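To make the U-hat1 recipe concrete, here is a brute-force sketch for small $p$ , taking $f(X;\beta)=X\beta$ for simplicity. The function name `uhat1` and the exhaustive search over supports are illustrative; they stand in for the cross-validated $\ell_{0}$ -solvers used in the simulations.

```python
import numpy as np
from itertools import combinations

def uhat1(X, Y, Lam_hat, k):
    """U-hat1 sketch: plug in U_hat = X Cov(X)^{-1} Lam_hat, then solve the
    l0-constrained least squares by exhaustive search over supports of
    size <= k (linear f(X; beta) = X beta for clarity)."""
    n, p = X.shape
    U_hat = X @ np.linalg.inv(np.cov(X, rowvar=False)) @ Lam_hat
    best_rss, best_S, best_coef = np.inf, (), None
    for size in range(k + 1):
        for S in combinations(range(p), size):
            # Design matrix: selected columns of X plus the estimated confounders
            Z = np.column_stack([X[:, S], U_hat]) if S else U_hat
            coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
            rss = np.sum((Y - Z @ coef) ** 2)
            if rss < best_rss:
                best_rss, best_S, best_coef = rss, S, coef[:len(S)]
    beta_hat = np.zeros(p)
    if best_S:
        beta_hat[list(best_S)] = best_coef
    return beta_hat
```

The exhaustive search is exponential in $p$ and is only meant to expose the structure of the constrained least-squares problem.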
Figure 6 displays the $\ell_{1}$ -estimation errors for all estimators. The results indicate that U-hat1 and U-hat2 perform similarly, with their biases stabilizing as the sample size increases. In contrast, the bias of the proposed estimator decreases with larger sample sizes, demonstrating its consistency in estimating the causal parameter $\beta$ under nonlinear outcome models. In Section S.7.9 of the supplementary material, we further consider a setting with a nonlinear outcome model and a non-diagonal $\text{Cov}(\epsilon_{x})$ . The results suggest that the SIV method remains consistent in this complex setting.
**Remark 5**
*Under the linear models in Section 5.1, results from the U-hat methods (U-hat1 and U-hat2) and the SIV method coincide numerically. While the U-hat method has been proposed in the literature (e.g., Wang and Blei, 2019), its validity has been widely challenged (e.g., Ogburn et al., 2019; Grimmer et al., 2023). From this perspective, our results may be viewed as a justification for the U-hat method in the special and unrealistic case of the linear outcome model (2). In general, our approach differs fundamentally from the U-hat methods. The U-hat1 method yields consistent estimators of $\beta$ only when the treatment–outcome relationship is linear. As shown in Section S.8 of the supplementary material, U-hat1 fails under the more general model
$$
Y=f(X;\beta)+g(U)+\epsilon_{y},
$$
where $f(X;\beta)$ may be nonlinear. In particular, we derive a necessary condition (Equation (S65)) that must hold for U-hat1 to consistently estimate $\beta$ , but this condition is typically violated in nonlinear outcome models. The U-hat2 method, on the other hand, relies on modeling the latent confounder–outcome relationship, introducing additional assumptions that are implausible in practice and do not resolve the underlying identification challenge. In contrast, our approach constructs instrumental variables, which are agnostic to the form of unmeasured confounding, even if the causal relationship is nonlinear. Because of this, we believe it is a more robust approach in this setting. To our knowledge, we are the first to apply the IV framework to this problem, offering a method that accommodates nonlinear causal effects and arbitrary dependence on unmeasured confounders $U$ . See Section S.8 of the supplementary material for further discussion and a comparison between the proposed method and the U-hat methods.*
(Figure panel: $Y=(X^{3})^{\mathrm{\scriptscriptstyle T}}\beta+(U^{3})^{\mathrm{\scriptscriptstyle T}}\gamma+\epsilon_{y}$ ; $\ell_{1}$ estimation error versus $n$ .)
(a) Non-linear setting 1.
(Figure panel: $Y=\exp(X^{\mathrm{\scriptscriptstyle T}}\beta)+(U^{3})^{\mathrm{\scriptscriptstyle T}}\gamma+\epsilon_{y}$ ; $\ell_{1}$ estimation error versus $n$ .)
(b) Non-linear setting 2.
Figure 6: Simulation results for nonlinear models with $p=10$ and $n=1000,1500,...,5000$ . The methods compared are SIV ($\blacksquare$, blue), U-hat1 ($\CIRCLE$, black), and U-hat2 ($\blacktriangle$, brown).
6 Real data application
To further illustrate the proposed synthetic instrumental variable method, we reanalyzed a mouse obesity dataset described by Wang et al. (2006). The study involved 334 mice from a cross between the C3H strain and the susceptible C57BL/6J (B6) strain on an ApoE-null background, fed a Western diet for 16 weeks. The dataset includes genotype data on 1,327 SNPs, gene expression profiles of 23,388 liver tissue genes, and clinical information such as body weight. Lin et al. (2015a) previously analyzed this dataset using regularized methods for high-dimensional instrumental variable regression, treating the SNPs as potential instruments and the gene expressions as treatments, and identified 17 genes likely to affect mouse body weight. Gleason et al. (2021) discussed controversies surrounding the use of SNPs as instruments for estimating the effects of gene expression. Miao et al. (2023a) applied their method to estimate the causal effects associated with these 17 genes; however, their approach cannot be used to estimate effects for the full set of 23,388 genes, as it only accommodates low-dimensional exposures. In our analysis, we use the same dataset to identify the genes that influence mouse body weight and to estimate the magnitude of their effects. Notably, our method does not depend on genotype data or other instrumental variables, and it can handle high-dimensional exposures.
We followed the procedure described in Lin et al. (2015a) to preprocess the dataset. Genes with a missing rate greater than 0.1 were removed, and the remaining missing gene expression values were imputed using nearest neighbor averaging (Troyanskaya et al., 2001). We also removed genes that could not be mapped to the Mouse Genome Database (MGD) and those whose expression levels had a standard deviation below $0.1$ . To adjust for sex, a marginal linear regression of body weight on sex was fitted and the estimated sex effect was subtracted from body weight; the residuals were used as the outcome $Y$ . The gene expression levels were centered and standardized and served as the multiple treatments. After cleaning and merging the gene expression and clinical data, the final dataset comprised $p=2819$ genes from $n=306$ mice ( $154$ female and $152$ male).
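The sex adjustment and standardization steps can be sketched as follows; `residualize_and_standardize` is a hypothetical helper name, not code from our implementation.

```python
import numpy as np

def residualize_and_standardize(weight, sex, expr):
    """Subtract the fitted sex effect from body weight (OLS on intercept + sex),
    and center/standardize each gene-expression column."""
    Z = np.column_stack([np.ones_like(sex, dtype=float), sex.astype(float)])
    coef, *_ = np.linalg.lstsq(Z, weight, rcond=None)
    Y = weight - Z @ coef                         # residualized outcome
    Xc = (expr - expr.mean(axis=0)) / expr.std(axis=0)
    return Y, Xc
```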
The estimator proposed by Onatski (2010a) suggests that there are three unobserved latent factors, so we applied our method with $\widehat{q}=3$ , using a linear outcome model as the working model. Five genes were found to affect mouse body weight. Under the plurality rule A4, we conclude that the causal effects are identifiable as $\widehat{q}+\widehat{s}=8\ll p=2819$ . Specifically, our analysis suggests that increasing the gene expression levels of $Igfbp2$ , $Rab27a$ , $Dct$ , $Ankhd1$ , and $Gck$ by one standard deviation leads to changes of $-1.98([-2.69,-1.27])$ , $1.88([1.26,2.51])$ , $1.43([0.86,2.02])$ , $-1.33([-1.90,-0.77])$ , and $1.17([0.69,1.66])$ grams in mouse body weight, respectively. The empirical confidence intervals were constructed using the ivreg function; see Section S.7.6 for more details.
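For intuition, the number of latent factors can be gauged from the spectrum of the sample covariance matrix. The sketch below uses a simple eigenvalue-gap heuristic, which is in the spirit of, but not identical to, the Onatski (2010a) estimator used in our analysis.

```python
import numpy as np

def choose_q_eigengap(X, q_max=10):
    """Pick the number of latent factors as the position of the largest gap
    between consecutive eigenvalues of the sample covariance (descending).
    A heuristic sketch, not Onatski's (2010a) edge-distribution estimator."""
    evals = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]  # descending
    gaps = evals[:q_max] - evals[1:q_max + 1]
    return int(np.argmax(gaps)) + 1
```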
We compared our approach with six methods: (i) the Lasso method; (ii) the two-stage regularization (2SR) method (Lin et al., 2015a), which leverages SNPs as high-dimensional instrumental variables to estimate the causal effects; (iii) the auxiliary variable method (Miao et al., 2023a), which focuses on the 17 genes identified by Lin et al. (2015a) as having non-zero effects and uses the five SNPs selected by Lin et al. (2015a) as auxiliary variables; (iv) the null variable method (Miao et al., 2023a), which assumes that more than half of the 17 genes have zero effects on mouse body weight; (v) the Trim method (Ćevid et al., 2020a); and (vi) the IV-Lasso method, as detailed in Section 5. Detailed results for these comparison methods are included in Section S.6 of the supplementary material. The numbers of active genes found by these methods, defined as genes with non-zero effects on mouse body weight, are 87, 17, 4, 2, 4, and 14, respectively. Consistent with the simulation results, the Lasso method identifies many more active exposures than our method, and its selected set includes all genes identified by our approach. Unlike our method, both the 2SR and auxiliary variable methods rely on additional SNP information.
All methods except the null variable method identify the expression of the $Igfbp2$ gene (insulin-like growth factor binding protein 2) as a cause of changes in body weight; Igfbp2 is known to prevent obesity and protect against insulin resistance (Wheatcroft et al., 2007), consistent with its estimated negative effect. Additionally, we identify four other genes potentially linked to obesity. The $Rab27a$ gene (Ras-related protein Rab-27A) is involved in insulin granule docking in pancreatic $\beta$ cells, and Rab27a-mutated mice show glucose intolerance after a glucose load (Kasai et al., 2005), suggesting a positive effect on body weight. The $Dct$ gene (dopachrome tautomerase) has been associated with obesity and glucose intolerance (Kim et al., 2015), with overexpression observed in the visceral adipose tissue of morbidly obese patients (Randhawa et al., 2009). The $Gck$ gene (glucokinase) plays a key role in blood glucose recognition, and its overexpression is linked to insulin resistance (Randhawa et al., 2009), which may explain its impact on body weight.
7 Discussion
In this paper, we study how to identify and estimate causal effects with a multi-dimensional treatment in the presence of unmeasured confounding. Our key assumption is a sparse causation assumption, which in many contexts serves as an appealing alternative to the widely adopted sparse association assumption. We develop a synthetic instrument approach to identify and estimate causal effects without the need to collect additional exogenous information, such as instrumental variables. Our estimation procedure can be formulated as an $\ell_{0}$ -optimization problem and can therefore be solved efficiently using off-the-shelf packages.
A distinctive feature of our framework is that it allows the use of Algorithm 2 to consistently test the sparsity condition A3 under the other model assumptions. In practice, however, we observe that this test can be unstable in finite samples, particularly in boundary cases where $p≈ s+q$ . Developing more stable tests for the sparsity condition A3 is an interesting avenue for future research.
We have focused on a linear treatment model (1). In a general nonlinear treatment model, where the relationship between treatment $X$ and the unmeasured factor $U$ is nonlinear, nonlinear factor analysis could be used to fit $X=m(U;\Lambda)+\epsilon$ . However, identifying a function $h(·)$ such that $h(X)$ is independent of $U$ remains a significant challenge. Extending this framework to accommodate nonlinear treatment models is left for future research.
We have also focused on the identification and estimation problems. Assuming that the $\ell_{0}$ -penalization procedure (7) accurately selects the true non-zero causal effects, standard M-estimation theory can be used to construct pointwise confidence intervals. However, constructing uniformly valid confidence intervals for the causal parameters remains a challenge, as statistical inference after model selection is typically not uniform (Leeb and Pötscher, 2005). One promising approach is to build on a uniformly valid inference method for the standard $\ell_{0}$ -penalization procedure, which, to the best of our knowledge, remains an open problem in the statistical literature.
Acknowledgements
We thank Xin Bing, Zhichao Jiang, Wang Miao, Thomas Richardson, James Robins, Dominik Rothenhäusler, and Xiaochuan Shi for their helpful discussions and constructive comments. We also extend our gratitude to the Editor, the Associate Editor, and the anonymous referees for their valuable and thoughtful input, which has significantly improved the quality of this manuscript.
Supplementary material
The supplementary material contains further discussions of the assumptions and additional simulation results. It also includes more examples and proofs of all the theorems and lemmas.
References
- Amemiya (1974) Amemiya, T. (1974), “The nonlinear two-stage least-squares estimator,” Journal of Econometrics, 2, 105–110.
- Anderson and Rubin (1956) Anderson, T. and Rubin, H. (1956), “Statistical inference in factor analysis,” in Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability: Held at the Statistical Laboratory, University of California, December, 1954, July and August, 1955, Univ of California Press, vol. 1, p. 111.
- Andrews (1999a) Andrews, D. W. (1999a), “Consistent moment selection procedures for generalized method of moments estimation,” Econometrica, 67, 543–563.
- Andrews (1999b) — (1999b), “Consistent moment selection procedures for generalized method of moments estimation,” Econometrica, 67, 543–563.
- Angrist et al. (1996) Angrist, J. D., Imbens, G. W., and Rubin, D. B. (1996), “Identification of causal effects using instrumental variables,” Journal of the American Statistical Association, 91, 444–455.
- Bai (2003) Bai, J. (2003), “Inferential theory for factor models of large dimensions,” Econometrica, 71, 135–171.
- Belloni et al. (2018) Belloni, A., Chernozhukov, V., Chetverikov, D., Hansen, C., and Kato, K. (2018), “High-dimensional econometrics and regularized GMM,” arXiv preprint arXiv:1806.01888.
- Bing et al. (2022) Bing, X., Ning, Y., and Xu, Y. (2022), “Adaptive estimation in multivariate response regression with hidden variables,” The Annals of Statistics, 50, 640–672.
- Burgess et al. (2017) Burgess, S., Small, D. S., and Thompson, S. G. (2017), “A review of instrumental variable estimators for Mendelian randomization,” Statistical Methods in Medical Research, 26, 2333–2355.
- Candès et al. (2011) Candès, E. J., Li, X., Ma, Y., and Wright, J. (2011), “Robust principal component analysis?” Journal of the ACM, 58, 1–37.
- Ćevid et al. (2020a) Ćevid, D., Bühlmann, P., and Meinshausen, N. (2020a), “Spectral deconfounding via perturbed sparse linear models,” Journal of Machine Learning Research, 21, 1–41.
- Ćevid et al. (2020b) — (2020b), “Spectral deconfounding via perturbed sparse linear models,” Journal of Machine Learning Research, 21, 1–41.
- Chandrasekaran et al. (2010) Chandrasekaran, V., Parrilo, P. A., and Willsky, A. S. (2010), “Latent variable graphical model selection via convex optimization,” in 2010 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton), IEEE, pp. 1610–1613.
- Chandrasekaran et al. (2011) Chandrasekaran, V., Sanghavi, S., Parrilo, P. A., and Willsky, A. S. (2011), “Rank-sparsity incoherence for matrix decomposition,” SIAM Journal on Optimization, 21, 572–596.
- Claassen et al. (2013) Claassen, T., Mooij, J., and Heskes, T. (2013), “Learning sparse causal models is not NP-hard,” arXiv preprint arXiv:1309.6824.
- D’Amour (2019) D’Amour, A. (2019), “On multi-cause approaches to causal inference with unobserved counfounding: Two cautionary failure cases and a promising alternative,” in The 22nd International Conference on Artificial Intelligence and Statistics, PMLR, pp. 3478–3486.
- Fan et al. (2013a) Fan, J., Liao, Y., and Mincheva, M. (2013a), “Large covariance estimation by thresholding principal orthogonal complements,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75, 603–680.
- Fan et al. (2013b) — (2013b), “Large covariance estimation by thresholding principal orthogonal complements,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75, 603–680.
- Fan et al. (2024) Fan, J., Lou, Z., and Yu, M. (2024), “Are latent factor regression and sparse regression adequate?” Journal of the American Statistical Association, 119, 1076–1088.
- Gleason et al. (2021) Gleason, K. J., Yang, F., and Chen, L. S. (2021), “A robust two-sample transcriptome-wide Mendelian randomization method integrating GWAS with multi-tissue eQTL summary statistics,” Genetic Epidemiology, 45, 353–371.
- Grimmer et al. (2023) Grimmer, J., Knox, D., and Stewart, B. (2023), “Naive regression requires weaker assumptions than factor models to adjust for multiple cause confounding,” Journal of Machine Learning Research, 24, 1–70.
- Guo et al. (2022a) Guo, Z., Ćevid, D., and Bühlmann, P. (2022a), “Doubly debiased lasso: High-dimensional inference under hidden confounding,” Annals of Statistics, 50, 1320–1347.
- Guo et al. (2022b) — (2022b), “Doubly debiased lasso: High-dimensional inference under hidden confounding,” Annals of Statistics, 50, 1320–1347.
- Guo et al. (2018) Guo, Z., Kang, H., Tony Cai, T., and Small, D. S. (2018), “Confidence intervals for causal effects with invalid instruments by using two-stage hard thresholding with voting,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80, 793–815.
- Han and Wang (2013) Han, P. and Wang, L. (2013), “Estimation with missing data: beyond double robustness,” Biometrika, 100, 417–430.
- Hastie et al. (2009) Hastie, T., Tibshirani, R., Friedman, J. H., and Friedman, J. H. (2009), The Elements of Statistical Learning: Data Mining, Inference, and Prediction, vol. 2, Springer.
- Kang et al. (2016) Kang, H., Zhang, A., Cai, T. T., and Small, D. S. (2016), “Instrumental variables estimation with some invalid instruments and its application to Mendelian randomization,” Journal of the American Statistical Association, 111, 132–144.
- Kanwal et al. (2017) Kanwal, M., Ding, X.-J., and Cao, Y. (2017), “Familial risk for lung cancer,” Oncology Letters, 13, 535–542.
- Kasai et al. (2005) Kasai, K., Ohara-Imaizumi, M., Takahashi, N., Mizutani, S., Zhao, S., Kikuta, T., Kasai, H., Nagamatsu, S., Gomi, H., Izumi, T., et al. (2005), “Rab27a mediates the tight docking of insulin granules onto the plasma membrane during glucose stimulation,” The Journal of Clinical Investigation, 115, 388–396.
- Kim et al. (2015) Kim, B.-S., Pallua, N., Bernhagen, J., and Bucala, R. (2015), “The macrophage migration inhibitory factor protein superfamily in obesity and wound repair,” Experimental & Molecular Medicine, 47, e161–e161.
- Kong et al. (2022) Kong, D., Yang, S., and Wang, L. (2022), “Identifiability of causal effects with multiple causes and a binary outcome,” Biometrika, 109, 265–272.
- Leeb and Pötscher (2005) Leeb, H. and Pötscher, B. M. (2005), “Model selection and inference: Facts and fiction,” Econometric Theory, 21, 21–59.
- Lin et al. (2015a) Lin, W., Feng, R., and Li, H. (2015a), “Regularization methods for high-dimensional instrumental variables regression with an application to genetical genomics,” Journal of the American Statistical Association, 110, 270–288.
- Lin et al. (2015b) — (2015b), “Regularization methods for high-dimensional instrumental variables regression with an application to genetical genomics,” Journal of the American Statistical Association, 110, 270–288.
- Miao et al. (2023a) Miao, W., Hu, W., Ogburn, E. L., and Zhou, X.-H. (2023a), “Identifying effects of multiple treatments in the presence of unmeasured confounding,” Journal of the American Statistical Association, 118, 1953–1967.
- Miao et al. (2023b) — (2023b), “Identifying effects of multiple treatments in the presence of unmeasured confounding,” Journal of the American Statistical Association, 118, 1953–1967.
- Mullahy (1997) Mullahy, J. (1997), “Instrumental-variable estimation of count data models: Applications to models of cigarette smoking behavior,” Review of Economics and Statistics, 79, 586–593.
- Ogburn et al. (2019) Ogburn, E. L., Shpitser, I., and Tchetgen, E. J. T. (2019), “Comment on “Blessings of multiple causes”,” Journal of the American Statistical Association, 114, 1611–1615.
- Ogburn et al. (2020) — (2020), “Counterexamples to ‘The Blessings of Multiple Causes’ by Wang and Blei,” arXiv preprint arXiv:2001.06555.
- Onatski (2010a) Onatski, A. (2010a), “Determining the number of factors from empirical distribution of eigenvalues,” The Review of Economics and Statistics, 92, 1004–1016.
- Onatski (2010b) — (2010b), “Determining the number of factors from empirical distribution of eigenvalues,” The Review of Economics and Statistics, 92, 1004–1016.
- Ouyang et al. (2023) Ouyang, J., Tan, K. M., and Xu, G. (2023), “High-dimensional inference for generalized linear models with hidden confounding,” The Journal of Machine Learning Research, 24, 14030–14090.
- Pearl (2009) Pearl, J. (2009), Causality, Cambridge University Press.
- Pearl (2013) — (2013), “Linear models: A useful ‘microscope’ for causal analysis,” Journal of Causal Inference, 1, 155–170.
- Pfister and Peters (2022) Pfister, N. and Peters, J. (2022), “Identifiability of sparse causal effects using instrumental variables,” in Uncertainty in Artificial Intelligence, PMLR, pp. 1613–1622.
- Randhawa et al. (2009) Randhawa, M., Huff, T., Valencia, J. C., Younossi, Z., Chandhoke, V., Hearing, V. J., and Baranova, A. (2009), “Evidence for the ectopic synthesis of melanin in human adipose tissue,” The FASEB Journal, 23, 835–843.
- Raskutti et al. (2011) Raskutti, G., Wainwright, M. J., and Yu, B. (2011), “Minimax rates of estimation for high-dimensional linear regression over $\ell_{q}$ -balls,” IEEE Transactions on Information Theory, 57, 6976–6994.
- Shen et al. (2016) Shen, D., Shen, H., and Marron, J. (2016), “A general framework for consistency of principal component analysis,” The Journal of Machine Learning Research, 17, 5218–5251.
- Shen et al. (2013) Shen, X., Pan, W., Zhu, Y., and Zhou, H. (2013), “On constrained and regularized high-dimensional regression,” Annals of the Institute of Statistical Mathematics, 65, 807–832.
- Spirtes and Glymour (1991) Spirtes, P. and Glymour, C. (1991), “An algorithm for fast recovery of sparse causal graphs,” Social Science Computer Review, 9, 62–72.
- Sun et al. (2023) Sun, B., Liu, Z., and Tchetgen Tchetgen, E. (2023), “Semiparametric efficient G-estimation with invalid instrumental variables,” Biometrika, 110, 953–971.
- Tchetgen Tchetgen et al. (2024) Tchetgen Tchetgen, E. J., Ying, A., Cui, Y., Shi, X., and Miao, W. (2024), “An introduction to proximal causal inference,” Statistical Science, 39, 375–390.
- Tibshirani (1996) Tibshirani, R. (1996), “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 58, 267–288.
- Troyanskaya et al. (2001) Troyanskaya, O., Cantor, M., Sherlock, G., Brown, P., Hastie, T., Tibshirani, R., Botstein, D., and Altman, R. B. (2001), “Missing value estimation methods for DNA microarrays,” Bioinformatics, 17, 520–525.
- Uhler et al. (2013) Uhler, C., Raskutti, G., Bühlmann, P., and Yu, B. (2013), “Geometry of the faithfulness assumption in causal inference,” The Annals of Statistics, 41, 436–463.
- Vershynin (2018) Vershynin, R. (2018), High-Dimensional Probability: An Introduction with Applications in Data Science, vol. 47, Cambridge University Press.
- Wang et al. (2017) Wang, J., Zhao, Q., Hastie, T., and Owen, A. B. (2017), “Confounder adjustment in multiple hypothesis testing,” Annals of Statistics, 45, 1863–1894.
- Wang and Tchetgen Tchetgen (2018) Wang, L. and Tchetgen Tchetgen, E. (2018), “Bounded, efficient and multiply robust estimation of average treatment effects using instrumental variables,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80, 531–550.
- Wang et al. (2006) Wang, S., Yehya, N., Schadt, E. E., Wang, H., Drake, T. A., and Lusis, A. J. (2006), “Genetic and genomic analysis of a fat mass trait with complex inheritance reveals marked sex specificity,” PLOS Genetics, 2, e15.
- Wang and Blei (2019) Wang, Y. and Blei, D. M. (2019), “The blessings of multiple causes,” Journal of the American Statistical Association, 114, 1574–1596.
- Wheatcroft et al. (2007) Wheatcroft, S. B., Kearney, M. T., Shah, A. M., Ezzat, V. A., Miell, J. R., Modo, M., Williams, S. C., Cawthorn, W. P., Medina-Gomez, G., Vidal-Puig, A., et al. (2007), “IGF-binding protein-2 protects against the development of obesity and insulin resistance,” Diabetes, 56, 285–294.
- Zhang et al. (2023) Zhang, Y., Zhu, J., Zhu, J., and Wang, X. (2023), “A splicing approach to best subset of groups selection,” INFORMS Journal on Computing, 35, 104–119.
- Zhou (2009) Zhou, S. (2009), “Restricted eigenvalue conditions on subgaussian random matrices,” arXiv preprint arXiv:0912.4045.
- Zhou et al. (2014) Zhou, X.-H., Obuchowski, N. A., and McClish, D. K. (2014), Statistical Methods in Diagnostic Medicine, John Wiley & Sons.
- Zhou et al. (2024) Zhou, Y., Tang, D., Kong, D., and Wang, L. (2024), “Promises of parallel outcomes,” Biometrika, 111, 537–550.
- Zhou et al. (2010) Zhou, Z., Li, X., Wright, J., Candes, E., and Ma, Y. (2010), “Stable principal component pursuit,” in 2010 IEEE international symposium on information theory, IEEE, pp. 1518–1522.
- Zhu et al. (2020) Zhu, J., Wen, C., Zhu, J., Zhang, H., and Wang, X. (2020), “A polynomial algorithm for best-subset selection problem,” Proceedings of the National Academy of Sciences, 117, 33117–33123.
Web-based supporting materials for “The synthetic instrument: From sparse association to sparse causation”
The supplementary material is organized as follows. Section S.1 presents additional discussion of our results. Section S.2 states the lemmas used to prove the theorems, and Section S.3 gives the proofs of these lemmas. Section S.4 proves the main theorems stated in the paper. Section S.6 presents the real-data results for all comparison methods, Section S.7 provides additional simulation results, and Section S.8 discusses the U-hat1 method.
Notation
We use $\widehat{\beta}$ to denote the solution to the $\ell_{0}$ optimization problem (7), and $\dot{\beta}$ to denote the true value of $\beta$ in model (3). Let $\mathcal{A}=\{j\mid\dot{\beta}_{j}≠ 0\}$ with $|\mathcal{A}|=s$ . Let ${\bf X}∈\mathbb{R}^{n× p}$ be the design matrix of multiple causes, and let $\widehat{\bf X}=\widehat{\mathbb{E}}({\bf X}\mid\widehat{\mathrm{SIV}})$ denote the projected design matrix.
Define $\Sigma_{X}=\mathbb{E}(XX^{\top})$ , $D=\mathrm{Cov}(\epsilon_{x})$ , $\widehat{\Sigma}_{X}={\bf X}^{\top}{\bf X}/(n-1)$ , and $\widehat{\Sigma}_{\widehat{X}}=\widehat{\bf X}^{\top}\widehat{\bf X}/(n-1)$ . For two positive sequences $a_{n}$ and $b_{n}$ , we write $a_{n}\lesssim b_{n}$ if there exists a constant $C>0$ such that $a_{n}≤ Cb_{n}$ for all $n$ ; $a_{n}\asymp b_{n}$ if both $a_{n}\lesssim b_{n}$ and $b_{n}\lesssim a_{n}$ ; and $a_{n}\ll b_{n}$ if $\limsup_{n→∞}a_{n}/b_{n}=0$ .
For a matrix $A$ , we use $A_{·,j}$ and $A_{i,·}$ to denote its $j$ th column and $i$ th row, respectively. For an index set $J$ , let $A_{J,·}$ and $A_{·,J}$ denote the submatrices of $A$ containing only rows or columns in $J$ , while $A_{-J,·}$ and $A_{·,-J}$ denote the submatrices obtained by deleting the corresponding rows or columns. We use $\|A\|_{F}$ , $\|A\|_{2}$ , and $\|A\|_{∞}$ to denote the Frobenius norm, spectral norm, and element-wise maximum norm of $A$ , respectively. We write $\mu_{i}(A)$ for the $i$ th singular value of $A$ , and $\lambda_{i}(A)$ for its $i$ th eigenvalue. The maximum and minimum eigenvalues of $A$ are denoted by $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ , respectively.
Throughout the appendix, we assume that $q$ is known; the estimation of $q$ is discussed in Section S.1.4.
In the high-dimensional setting where $p$ is allowed to diverge, consider the singular value decomposition (SVD) of ${\bf X}$ and the corresponding eigendecomposition of ${\bf X}^{\top}{\bf X}/(n-1)$ :
$$
\begin{split}{\bf X}&=\sqrt{n-1}\,(\widehat{\eta}_{1}\;\widehat{\eta}_{2}\;\ldots\;\widehat{\eta}_{p})\,\mathrm{diag}(\sqrt{\widehat{\lambda}_{1}},\ldots,\sqrt{\widehat{\lambda}_{p}})({\widehat{\xi}}_{1}\;\ldots\;{\widehat{\xi}}_{p})^{\top},\\
\frac{{\bf X}^{\top}{\bf X}}{n-1}&=({\widehat{\xi}}_{1}\;\ldots\;{\widehat{\xi}}_{p})\,\mathrm{diag}(\widehat{\lambda}_{1},\ldots,\widehat{\lambda}_{p})\,({\widehat{\xi}}_{1}\;\ldots\;{\widehat{\xi}}_{p})^{\top}.\end{split}
$$
Here $\widehat{\lambda}_{1}≥\widehat{\lambda}_{2}≥\cdots≥\widehat{\lambda}_{k}>0=\widehat{\lambda}_{k+1}=\cdots=\widehat{\lambda}_{p}$ , with $k=\mathrm{Rank}({\bf X})$ . Define $\widehat{\Lambda}=(\sqrt{\widehat{\lambda}_{1}}\,\widehat{\xi}_{1}\;\ldots\;\sqrt{\widehat{\lambda}_{q}}\,\widehat{\xi}_{q})$ , ${B}_{\widehat{\Lambda}^{\perp}}=(\widehat{\xi}_{q+1}\;\widehat{\xi}_{q+2}\;\ldots\;\widehat{\xi}_{p})$ , and $H=(\widehat{\eta}_{1}\;\widehat{\eta}_{2}\;\ldots\;\widehat{\eta}_{q})∈\mathbb{R}^{n× q}$ , so that $H^{\top}H=I_{q}$ .
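All of these quantities can be read off a single SVD of the centered design matrix. Below is a minimal numpy sketch; the data-generating step and the dimensions are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 500, 10, 2

# Simulated centered design matrix (purely illustrative).
X = rng.standard_normal((n, q)) @ rng.standard_normal((q, p)) \
    + 0.5 * rng.standard_normal((n, p))
X = X - X.mean(axis=0)

# SVD of X; the eigendecomposition of X^T X / (n - 1) follows from it.
U_svd, sing, Vt = np.linalg.svd(X, full_matrices=False)
lam = sing ** 2 / (n - 1)      # eigenvalues lambda_hat_1 >= ... >= lambda_hat_p
xi = Vt.T                      # eigenvectors xi_hat_1, ..., xi_hat_p

Lambda_hat = xi[:, :q] * np.sqrt(lam[:q])   # (sqrt(lam_1) xi_1, ..., sqrt(lam_q) xi_q)
B_perp = xi[:, q:]                          # basis of the orthogonal complement
H = U_svd[:, :q]                            # n x q, with H^T H = I_q

assert np.allclose(H.T @ H, np.eye(q))
assert np.allclose(B_perp.T @ Lambda_hat, np.zeros((p - q, q)), atol=1e-8)
```

The two assertions confirm the orthonormality of $H$ and the orthogonality between ${B}_{\widehat{\Lambda}^{\perp}}$ and $\widehat{\Lambda}$ stated above.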
Appendix S.1 Discussion on Assumptions
S.1.1 Discussion on condition A1
Implications of Assumption A1 when $q=1$
If $\mathrm{Cov}(\epsilon_{x})$ is diagonal, then Assumption A1 implies that any $q× q$ submatrix of $\Lambda$ is invertible. In particular, when $q=1$ , this means that no element of $\Lambda$ can be zero. However, this need not hold when $\mathrm{Cov}(\epsilon_{x})$ is not diagonal. To see this, let $D:=\mathrm{Cov}(\epsilon_{x})$ . By the Woodbury matrix identity, we have
$$
\begin{split}\mathrm{Cov}(X)^{-1}\Lambda&=(D+\Lambda\Lambda^{\top})^{-1}\Lambda\\
&=\{D^{-1}-D^{-1}\Lambda(I_{q}+\Lambda^{\top}D^{-1}\Lambda)^{-1}\Lambda^{\top}D^{-1}\}\Lambda\\
&=D^{-1}\Lambda-D^{-1}\Lambda(I_{q}+\Lambda^{\top}D^{-1}\Lambda)^{-1}\Lambda^{\top}D^{-1}\Lambda\\
&=D^{-1}\Lambda-D^{-1}\Lambda(I_{q}+\Lambda^{\top}D^{-1}\Lambda)^{-1}(\Lambda^{\top}D^{-1}\Lambda+I_{q}-I_{q})\\
&=D^{-1}\Lambda(I_{q}+\Lambda^{\top}D^{-1}\Lambda)^{-1}.\end{split}
$$
If $D$ is diagonal and Assumption A1 holds—so that any $q× q$ submatrix of $\mathrm{Cov}(X)^{-1}\Lambda∈\mathbb{R}^{p× q}$ is invertible—then any $q× q$ submatrix of $\Lambda$ must also be invertible. For $q=1$ , this implies that no element of $\Lambda$ is zero.
Nevertheless, even if some $\Lambda_{j}$ are zero, the causal parameter $\beta$ remains identifiable under analogous conditions, and the SIV method can still be used to identify and estimate the parameter of interest.
We now discuss how to identify $\beta$ under these weaker conditions. Since $\Lambda$ is identifiable up to a rotation, we can still identify the set $\{j:\Lambda_{j}=0\}$ . Define $\mathcal{A}=\{j:\;\beta_{j}≠ 0\}$ and $\mathcal{B}=\{j:\;\Lambda_{j}≠ 0\}$ . Let $p=\dim(X)$ , $p_{0}=\#\mathcal{B}$ , $s=\|\beta\|_{0}$ , and $s_{0}=\#(\mathcal{A}\cap\mathcal{B})$ . Because $\Lambda_{\mathcal{B}^{c}}=0$ , there is no confounder that can affect $X_{\mathcal{B}^{c}}$ and $Y$ , so $\beta_{\mathcal{B}^{c}}$ can be obtained directly via regression of $Y$ on $X_{\mathcal{B}^{c}}$ . Identification of $\beta_{\mathcal{B}}$ then follows from Theorem 1. Specifically, $\beta_{\mathcal{B}}$ is identifiable if $s_{0}≤(p_{0}-1)/2$ under the majority rule, or if $s_{0}<(p_{0}-1)$ under the plurality rule. These conditions parallel Assumptions A3 and A3’ in the main text.
Next, we illustrate how to use the SIV method to estimate $\beta$ when some $\Lambda_{j}=0$ . Consider a scenario suggested by a reviewer: we have five treatments $(X_{1},...,X_{5})$ . Among them, $X_{1}$ , $X_{4}$ , and $X_{5}$ affect $Y$ , while $X_{4}$ and $X_{5}$ are not confounded by $U$ . This setting is shown in Figure S1.
Figure S1: Graphical illustration for unconfounded treatments. $U$ affects $X_{1}$ , $X_{2}$ , $X_{3}$ (edge coefficients $\Lambda_{1}$ , $\Lambda_{2}$ , $\Lambda_{3}$ ) and $Y$ (coefficient $\gamma$ ), while $X_{1}$ , $X_{4}$ , $X_{5}$ affect $Y$ (coefficients $\dot{\beta}_{1}$ , $\dot{\beta}_{4}$ , $\dot{\beta}_{5}$ ).
Suppose $\Lambda_{1}=\Lambda_{2}=1$ , $\Lambda_{3}=2$ , and $\Lambda_{4}=\Lambda_{5}=0$ . Then
$$
\Lambda=\begin{pmatrix}1\\
1\\
2\\
0\\
0\end{pmatrix},\quad B_{\Lambda^{\perp}}=\begin{pmatrix}\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{3}}&0&0\\
-\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{3}}&0&0\\
0&-\frac{1}{\sqrt{3}}&0&0\\
0&0&1&0\\
0&0&0&1\end{pmatrix},\quad SIV=B_{\Lambda^{\perp}}^{\top}X=\begin{pmatrix}\frac{X_{1}-X_{2}}{\sqrt{2}}\\
\frac{X_{1}+X_{2}-X_{3}}{\sqrt{3}}\\
X_{4}\\
X_{5}\end{pmatrix}.
$$
In this setting, the variables $\{X_{i}\}$ with $i∈\{1,2,3\}$ are uncorrelated with the variables $\{X_{j}\}$ with $j∈\{4,5\}$ . Linear regression of $X$ on $SIV$ yields $\widehat{X}=(\widehat{X}_{1},...,\widehat{X}_{5})^{\top}$ , where $\widehat{X}_{i}=\mathbb{E}(X_{i}\mid(X_{1}-X_{2})/\sqrt{2},(X_{1}+X_{2}-X_{3})/\sqrt{3})$ for $i∈\{1,2,3\}$ , while $\widehat{X}_{4}=X_{4}$ and $\widehat{X}_{5}=X_{5}$ . Note that in this case, the two unconfounded treatments are themselves SIVs, and $(\widehat{X}_{1},\widehat{X}_{2},\widehat{X}_{3})$ remain uncorrelated with $(\widehat{X}_{4},\widehat{X}_{5})$ .
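As a sanity check, the matrix $B_{\Lambda^{\perp}}$ in the worked example can be verified numerically: its columns are orthonormal and orthogonal to $\Lambda$, so $SIV=B_{\Lambda^{\perp}}^{\top}X=B_{\Lambda^{\perp}}^{\top}\epsilon_{x}$ carries no signal from $U$. A short sketch:

```python
import numpy as np

# Loading vector and null-space basis from the worked example above.
Lam = np.array([1.0, 1.0, 2.0, 0.0, 0.0])
B = np.array([
    [ 1 / np.sqrt(2),  1 / np.sqrt(3), 0.0, 0.0],
    [-1 / np.sqrt(2),  1 / np.sqrt(3), 0.0, 0.0],
    [ 0.0,            -1 / np.sqrt(3), 0.0, 0.0],
    [ 0.0,             0.0,            1.0, 0.0],
    [ 0.0,             0.0,            0.0, 1.0],
])

# Columns are orthogonal to Lambda, so B^T X = B^T eps_x is unconfounded ...
assert np.allclose(B.T @ Lam, np.zeros(4))
# ... and they form an orthonormal set.
assert np.allclose(B.T @ B, np.eye(4))
```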
To show that regressing $Y$ on $\widehat{X}$ is equivalent to running two separate regressions—one of $Y$ on $(\widehat{X}_{1},\widehat{X}_{2},\widehat{X}_{3})$ with a sparsity constraint, and another of $Y$ on $(X_{4},X_{5})$ —consider
$$
\widehat{\beta}=\underset{\beta\in\mathbb{R}^{5},\|\beta\|_{0}\leq k}{\arg\min}\;\mathbb{E}\left(Y-\sum_{j=1}^{5}\widehat{X}_{j}\beta_{j}\right)^{2}.
$$
Let $\dot{\beta}=(\dot{\beta}_{1},0,0,\dot{\beta}_{4},\dot{\beta}_{5})^{\top}$ , $Y_{1}=\dot{\beta}_{1}X_{1}+U\gamma+\epsilon_{y}$ , and $Y_{2}=\dot{\beta}_{4}X_{4}+\dot{\beta}_{5}X_{5}$ , so $Y=Y_{1}+Y_{2}$ . Then
$$
\begin{split}\mathbb{E}\Big(Y-\sum_{j=1}^{5}\widehat{X}_{j}\beta_{j}\Big)^{2}&=\mathbb{E}\Big\{(Y_{1}-\sum_{j=1}^{3}\widehat{X}_{j}\beta_{j})+(Y_{2}-\sum_{j=4}^{5}\widehat{X}_{j}\beta_{j})\Big\}^{2}\\
&=\mathbb{E}(Y_{1}-\sum_{j=1}^{3}\widehat{X}_{j}\beta_{j})^{2}+\mathbb{E}(Y_{2}-\sum_{j=4}^{5}X_{j}\beta_{j})^{2},\end{split}
$$
where the last equality holds because the two terms are uncorrelated. Thus,
$$
\widehat{\beta}=\underset{\beta\in\mathbb{R}^{5},\|\beta\|_{0}\leq k}{\arg\min}\Big\{\mathbb{E}(Y_{1}-\sum_{j=1}^{3}\widehat{X}_{j}\beta_{j})^{2}+\mathbb{E}(Y_{2}-\sum_{j=4}^{5}\widehat{X}_{j}\beta_{j})^{2}\Big\}.
$$
Minimizing over $(\beta_{4},\beta_{5})∈\mathbb{R}^{2}$ gives $(\widehat{\beta}_{4},\widehat{\beta}_{5})=(\dot{\beta}_{4},\dot{\beta}_{5})$ , while minimizing over $(\beta_{1},\beta_{2},\beta_{3})∈\mathbb{R}^{3}$ with $\|(\beta_{1},\beta_{2},\beta_{3})\|_{0}≤ 1$ yields $(\widehat{\beta}_{1},\widehat{\beta}_{2},\widehat{\beta}_{3})=(\dot{\beta}_{1},0,0)$ by Theorem 1. Hence, our algorithm successfully identifies and estimates $\beta$ even when some $\Lambda_{j}=0$ . This argument extends directly to the more general case where $p>3$ , $q=1$ , and some $\Lambda_{j}=0$ .
Discussion of Assumption A1 when $q≥ 2$
If $q≥ 2$ and $\text{Cov}(\epsilon_{x})$ is a diagonal matrix, Assumption A1 implies that any $q× q$ submatrix of $\Lambda$ should be invertible. We now discuss cases where Assumption A1 is violated when $q≥ 2$ :
(i.) If all $q× q$ submatrices of $\Lambda$ are not invertible, we may find $\tilde{U}$ with $\text{dim}(\tilde{U})=q_{0}<q$ to quantify the unmeasured confounding, thus simplifying the analysis to the $q_{0}<q$ scenario. Here, $\text{Rank}(\Lambda)=q_{0}<q$ , and the matrix can be decomposed using a rank factorization as $\Lambda=\widetilde{\Lambda}· C$ , where $\widetilde{\Lambda}$ is a $p× q_{0}$ matrix with full column rank, and $C$ is a $q_{0}× q$ matrix with full row rank. We define $\widetilde{U}=CU$ , indicating that the treatments $X$ are confounded by $\widetilde{U}∈\mathbb{R}^{q_{0}}$ .
$$
X=\Lambda U+\epsilon_{x}=\widetilde{\Lambda}\widetilde{U}+\epsilon_{x}
$$
(ii.) When a row $\Lambda_{i}∈\mathbb{R}^{q}$ is zero, our algorithm remains effective. In such cases, a column in $B_{\Lambda^{\perp}}$ corresponds to $e_{i}$ , where $e_{i}∈\mathbb{R}^{p}$ is the unit vector with its $i$ th element as 1 and all other elements as 0. Consequently, a variable in SIV will be $X_{i}$ itself, and $\widehat{X}_{i}$ will also be $X_{i}$ . Regression of $Y$ on $\widehat{X}$ will yield the causal parameter $\beta$ , similar to the scenario when $q=1$ .
(iii.) In cases where some $q× q$ submatrices of $\Lambda∈\mathbb{R}^{p× q}$ are not invertible, we argue that such scenarios are rare. It would require identifying two groups of treatments, $\{X_{i1},X_{i2},...,X_{iq}\}$ and $\{X_{j1},X_{j2},...,X_{jq}\}$ , where the effects from $U$ to $\{X_{i1},X_{i2},...,X_{iq}\}$ are linearly dependent, while those to $\{X_{j1},X_{j2},...,X_{jq}\}$ are linearly independent. In practice, finding confounders with such a structure is unlikely.
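Case (i.) relies on a rank factorization of $\Lambda$, which can be obtained from a thin SVD. A small sketch with a synthetic rank-deficient loading matrix (all values illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, q0 = 8, 3, 2

# A loading matrix with deficient rank q0 < q, as in case (i.).
Lam = rng.standard_normal((p, q0)) @ rng.standard_normal((q0, q))

# Rank factorization Lam = Lam_tilde @ C via the thin SVD.
U, s, Vt = np.linalg.svd(Lam)
Lam_tilde = U[:, :q0] * s[:q0]   # p x q0, full column rank
C = Vt[:q0, :]                   # q0 x q, full row rank

assert np.linalg.matrix_rank(Lam) == q0
assert np.allclose(Lam_tilde @ C, Lam)
```

With this factorization, $\widetilde{U}=CU$ plays the role of a lower-dimensional confounder, reducing the analysis to the $q_{0}<q$ scenario described above.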
S.1.2 Discussion on Condition A4
We now discuss the plurality rule. First, consider the simplest case where $q=1$ . Define the adjusted loading matrix $\widetilde{\Lambda}=\text{Cov}^{-1}(X)\Lambda∈\mathbb{R}^{p× 1}$ . The plurality rule is violated only if there exist two indices $1≤ i,j≤ p$ such that
$$
\beta_{i}\neq 0,\;\;\beta_{j}\neq 0,\tag{a}
$$
$$
\frac{\beta_{i}}{\widetilde{\Lambda}_{i}}=\frac{\beta_{j}}{\widetilde{\Lambda}_{j}}.\tag{b}
$$
Equation (b) implies that the causal effects of $X_{i}$ and $X_{j}$ on the outcome $Y$ are exactly proportional to their corresponding adjusted factor loadings, which is unlikely to occur in practice.
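To make the condition concrete, here is a small numerical sketch (the values of $\Lambda$, $D$, and $\beta$ are illustrative, not from the paper) that computes the adjusted loadings and checks whether any two causal ratios coincide:

```python
import numpy as np

# Illustrative values: q = 1 confounder, p = 4 treatments.
Lam = np.array([1.0, 2.0, 1.5, 0.5])    # loadings Lambda
beta = np.array([1.0, 0.0, 2.0, 0.0])   # causal effects, s = 2
D = np.eye(4)                           # Cov(eps_x)

Sigma_X = D + np.outer(Lam, Lam)            # Cov(X)
Lam_adj = np.linalg.solve(Sigma_X, Lam)     # adjusted loadings Cov(X)^{-1} Lambda

# The plurality rule (q = 1) fails only if two causal ratios beta_i / Lam_adj_i coincide.
ratios = beta[beta != 0] / Lam_adj[beta != 0]
assert len(np.unique(np.round(ratios, 8))) == len(ratios)  # all distinct: rule holds
```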
We now discuss the more general setting where $q≥ 2$ . Define the adjusted loading matrix $\widetilde{\Lambda}=\text{Cov}^{-1}(X)\Lambda∈\mathbb{R}^{p× q}$ . Let $C^{*}_{(1)},...,C^{*}_{(q+1)}$ be subsets of $\{1,2,...,p\}$ with cardinality $q$ and $\dot{\beta}_{C^{*}_{(i)}}≠ 0$ . The plurality rule is violated only if the following equation holds:
$$
\widetilde{\Lambda}^{-1}_{\{C^{*}_{(1)},\cdot\}}\beta_{C^{*}_{(1)}}=\ldots=\widetilde{\Lambda}^{-1}_{\{C^{*}_{(q+1)},\cdot\}}\beta_{C^{*}_{(q+1)}}. \tag{c}
$$
It is unlikely to find $q+1$ different subsets $C^{*}_{(1)},...,C^{*}_{(q+1)}$ such that equation (c) holds.
These conditions are similar in spirit to the faithfulness assumption commonly assumed in the causal discovery literature (Pearl, 2009); we refer interested readers to Uhler et al. (2013) for more discussions related to this topic.
We now present an example where the plurality rule is violated.
**Example 1**
*Assume $q=1$ . $(X,Y)$ are generated via the equations $X=\Lambda U+\epsilon_{x}$ and $Y=X^{\top}\dot{\beta}+U\gamma+\epsilon_{y}$ , with parameters $\Lambda=(1,1,...,1)^{\top}∈\mathbb{R}^{p× 1}$ , $\dot{\beta}_{1}=\dot{\beta}_{2}=...=\dot{\beta}_{s}=1$ , $\dot{\beta}_{s+1}=...=\dot{\beta}_{p}=0$ , $\gamma=1$ , and random variables $U,\epsilon_{y}\sim\mathcal{N}(0,1)$ , $\epsilon_{x}\sim\mathcal{N}(0,I_{p})$ . We examine how the identifiability of $\dot{\beta}$ varies with different values of $s$ . Let $SIV=B_{\Lambda^{\perp}}^{\top}X$ and $\widetilde{X}=\mathbb{E}(X\mid SIV)$ . We have the following result for the regression $Y\sim\widetilde{X}$ :
$$
\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{p},\;\|\beta\|_{0}<p-1}\mathbb{E}(Y-\widetilde{X}\beta)^{2}=\{\dot{\beta},\;\dot{\beta}-1_{p}\}.
$$*
We now provide the proof of this example. From Lemma S.3, we know that
$$
\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{p}}\mathbb{E}(Y-\widetilde{X}\beta)^{2}=\{\dot{\beta}+\Sigma_{X}^{-1}\Lambda\alpha\mid\alpha\in\mathbb{R}\}.
$$
In this example, $\Sigma_{X}^{-1}\Lambda=(I_{p}+\Lambda\Lambda^{\top})^{-1}\Lambda=\Lambda/(1+p)$ . Adding the sparsity constraint, we have
$$
\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{p},\;\|\beta\|_{0}<p-1}\mathbb{E}(Y-\widetilde{X}\beta)^{2}=\{\dot{\beta}+\Lambda\alpha\mid\alpha\in\mathbb{R},\;\|\dot{\beta}+\Lambda\alpha\|_{0}<p-1\}=\{\dot{\beta},\;\dot{\beta}-1_{p}\},
$$
since $\|\dot{\beta}+\Lambda\alpha\|_{0}=p$ for any $\alpha\notin\{0,-1\}$ . This proves the claim in the example. The example demonstrates that $s<p-q$ does not guarantee the identifiability of $\beta$ , as the regression yields two possible solutions: $\dot{\beta}$ and $\dot{\beta}-1_{p}$ . In scenarios where the plurality rule is violated, as in this example, reduced-rank regression with the constraint $s<p-q$ cannot uniquely determine $\beta$ .
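The argument can be checked numerically by scanning the one-dimensional solution set $\{\dot{\beta}+\Lambda\alpha\}$ and recording which points satisfy the sparsity constraint. A small sketch with illustrative $p$ and $s$ (our choices, not from the paper):

```python
import numpy as np

p, s = 6, 2
beta_dot = np.concatenate([np.ones(s), np.zeros(p - s)])
Lam = np.ones(p)                  # Lambda = 1_p

def l0(v, tol=1e-10):
    """Count entries of v that are nonzero up to a numerical tolerance."""
    return int(np.sum(np.abs(v) > tol))

# Scan the unconstrained solution set {beta_dot + alpha * Lambda} on a grid
# that contains alpha = 0 and alpha = -1 exactly.
survivors = []
for a in range(-200, 201):
    alpha = a / 100
    cand = beta_dot + alpha * Lam
    if l0(cand) < p - 1:
        survivors.append(cand)

# Only alpha = 0 and alpha = -1 satisfy the sparsity constraint.
assert len(survivors) == 2
assert any(np.allclose(c, beta_dot) for c in survivors)
assert any(np.allclose(c, beta_dot - np.ones(p)) for c in survivors)
```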
S.1.3 Weak instruments and identification
In contrast to the setting described in Section S.1.1 and Figure S1, one reviewer pointed out a challenging scenario in which some variables are unconfounded and have no effect on the outcome. Specifically, consider the case where $p=5$ , $q=1$ , and $s=3$ , with only $X_{1}$ , $X_{2}$ , and $X_{3}$ affecting $Y$ , and loadings $\Lambda_{4}=\Lambda_{5}=0$ . In this setting, the parameter of interest is not identifiable. We now discuss the performance of our estimator in this challenging case.
It is still possible to test whether the parameter is identifiable in this scenario. We describe an approach based on assessing the uniqueness of the solution to the relevant optimization problem, which serves as a proxy for the identifiability of $\beta$ .
Setting, Observation, and Algorithm
Consider the scenario described above, where the latent confounder $U$ affects treatments $X_{1}$ , $X_{2}$ , and $X_{3}$ , and these three treatments also have nonzero causal effects on the outcome. Figure S2 provides a graphical illustration of this setting.
Figure S2: Graphical illustration of the setting described in Section S.1.3. $U$ affects $X_{1}$ , $X_{2}$ , $X_{3}$ (edge coefficients $\Lambda_{1}$ , $\Lambda_{2}$ , $\Lambda_{3}$ ) and $Y$ (coefficient $\gamma$ ), and $X_{1}$ , $X_{2}$ , $X_{3}$ affect $Y$ (coefficients $\dot{\beta}_{1}$ , $\dot{\beta}_{2}$ , $\dot{\beta}_{3}$ ).
When applying our algorithm, we obtain $\widehat{s}=\|\widehat{\beta}\|_{0}=2$ . Hence the sparsity condition $\widehat{s}+\widehat{q}=3<4=5-1=p-q$ is satisfied. A direct application of the sparsity check in equation (6) could therefore lead to the erroneous conclusion that $\beta$ is identifiable. Instead, motivated by Theorem 1, we conclude that $\beta$ is not identifiable by observing that the solution set
$$
\left\{\widehat{\beta}:\widehat{\beta}\in\underset{\beta\in\mathbb{R}^{5},\,\|\beta\|_{0}=2}{\operatorname*{arg\,min}}\|Y-\widehat{X}\beta\|_{2}^{2}\right\} \tag{S4}
$$
contains more than one element. This non-uniqueness arises both at the population level and in finite-sample numerical experiments.
Specifically, consider the following minimizers:
$$
\begin{split}
\widehat{\beta}^{(1,2)}&=\underset{\beta_{1}\neq 0,\;\beta_{2}\neq 0,\;\beta_{3}=\beta_{4}=\beta_{5}=0}{\operatorname*{arg\,min}}\|Y-\widehat{X}\beta\|_{2}^{2},\\
\widehat{\beta}^{(1,3)}&=\underset{\beta_{1}\neq 0,\;\beta_{3}\neq 0,\;\beta_{2}=\beta_{4}=\beta_{5}=0}{\operatorname*{arg\,min}}\|Y-\widehat{X}\beta\|_{2}^{2},\\
\widehat{\beta}^{(2,3)}&=\underset{\beta_{2}\neq 0,\;\beta_{3}\neq 0,\;\beta_{1}=\beta_{4}=\beta_{5}=0}{\operatorname*{arg\,min}}\|Y-\widehat{X}\beta\|_{2}^{2}.
\end{split}
$$
We observe that $\widehat{\beta}^{(1,2)}$ , $\widehat{\beta}^{(1,3)}$ , and $\widehat{\beta}^{(2,3)}$ all belong to the solution set (S4), confirming its non-uniqueness. Based on this observation, we develop Algorithm S3 to formally test the identifiability of $\beta$ . We also perform simulation studies, which demonstrate that this test performs well in finite samples.
Algorithm S3 Testing Identifiability of Causal Effects via Synthetic Instruments and Uniqueness
Input: ${\bf X}∈\mathbb{R}^{n× p}$ (centered), ${\bf Y}∈\mathbb{R}^{n× 1}$
1: Obtain $\widehat{\bm{X}}$ using Algorithm 2 in the manuscript;
2: Obtain $\widehat{\beta}$ and $\widehat{s}$ by solving (7) in the manuscript with cross-validation;
3: Set Identifiability = True;
4: Define the collection of index sets $M=\{\mathcal{A}\mid\mathcal{A}⊂\{1,2,3,...,p\},|\mathcal{A}|=\widehat{s}\}$ ;
5: For $\mathcal{A}∈ M$ , define the optimizer:
$$
\widetilde{\beta}^{(\mathcal{A})}:=\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{p}:\beta_{\mathcal{A}^{c}}=0}\|\bm{Y}-\widehat{\bm{X}}\beta\|_{2}^{2}.
$$
6: if $∃\;\widetilde{\beta}^{(\mathcal{A})}$ , where $\mathcal{A}∈ M$ , such that
$$
\|\widetilde{\beta}^{(\mathcal{A})}-\widehat{\beta}\|_{0}\neq 0,\quad\text{and}\quad\|\bm{Y}-\widehat{\bm{X}}\;\widetilde{\beta}^{(\mathcal{A})}\|_{2}^{2}-\|\bm{Y}-\widehat{\bm{X}}\;\widehat{\beta}\|_{2}^{2}\leq tol
$$
then Identifiability = False;
7: else if $\widehat{s}+\widehat{q}≥ p$ then
8: Identifiability = False;
9: return Identifiability.
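The uniqueness check at the core of Algorithm S3 can be sketched compactly as follows. This is an illustrative implementation, not the paper's code: the function name is ours, ordinary least squares is used for each candidate support, and scaling the tolerance by $n$ is our choice.

```python
import numpy as np
from itertools import combinations

def identifiability_test(X_hat, Y, s_hat, tol=1e-5):
    """Flag beta as non-identifiable if two different size-s_hat supports
    attain (near-)minimal residual sum of squares."""
    n, p = X_hat.shape

    def rss(support):
        # Least-squares fit of Y on the columns indexed by `support`.
        coef, *_ = np.linalg.lstsq(X_hat[:, support], Y, rcond=None)
        resid = Y - X_hat[:, support] @ coef
        return float(resid @ resid)

    fits = [(rss(list(A)), A) for A in combinations(range(p), s_hat)]
    best = min(r for r, _ in fits)
    near_optimal = [A for r, A in fits if r - best <= tol * n]
    return len(near_optimal) == 1   # a unique support suggests identifiability
```

For instance, duplicating a column of $\widehat{\bf X}$ makes two supports fit equally well, in which case the function returns False, mirroring the non-uniqueness of (S4) discussed above.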
Simulation and Results
We now evaluate the performance of Algorithm S3 in a finite-sample setting. We conducted a simulation study with 1,000 repetitions, each involving $p=5$ predictors. We considered scenarios in which $s=1,2,$ or $3$ of the predictors were causal. The data included a single unobserved confounder ( $q=1$ ), specified by a loading matrix $\Lambda=(3,1,2,0,0)^{\top}$ and an effect size of $\gamma=1$ . The true coefficients for the relevant predictors were $\beta_{1}=\cdots=\beta_{s}=1$ . In each repetition, we generated the unobserved confounder $U\sim\mathcal{N}(0,1)$ , followed by the treatments $X=U\Lambda+\epsilon_{X}$ , where $\epsilon_{X}\sim\mathcal{N}(0,I_{5})$ . The outcome variable was then generated as $Y=X\beta+U\gamma+\epsilon_{Y}$ , where $\epsilon_{Y}\sim\mathcal{N}(0,1)$ .
We applied our algorithm to determine whether the parameter $\beta$ is identifiable. The parameters described above were fixed, and the sample size varied from 2,000 to 10,000. To account for numerical precision, we set $tol=10^{-5}$ . The simulation results are presented in Figure S3. As shown in the figure, when $s=1$ and the parameter is identifiable, the algorithm correctly recognizes identifiability with probability close to 1 across all settings. When $s=2$ or $3$ , where the parameter is not identifiable, the algorithm correctly detects non-identifiability with high probability as the sample size increases. These results confirm that our algorithm performs well in finite samples.
Figure S3: The probability that Algorithm S3 concludes that the model is identifiable for different $s$ and $n$ . Results are based on 1,000 Monte Carlo runs.
S.1.4 Determining $q$
In practice, when the number of unmeasured confounders is unknown, it is necessary to conduct a test to determine this quantity. Our treatment model utilizes a factor model with $q$ latent factors, suggesting that techniques from factor analysis can be employed to identify the number of latent factors, and consequently, the number of unmeasured confounders. We adopt the estimator introduced by Onatski (2010b), which determines the number of unobserved confounders based on the maximum eigengap. The proposed criterion for the eigenvalue difference is as follows:
$$
\widehat{q}=\max\{i\leq r_{\max}:\widehat{\lambda}_{i}-\widehat{\lambda}_{i+1}\geq t_{0}\},
$$
where $t_{0}$ is a given threshold, $r_{\max}$ is the prespecified maximum number of factors, and $\widehat{\lambda}_{1},\widehat{\lambda}_{2},\ldots$ are the eigenvalues of the matrix $\bm{X}^{\mathrm{\scriptscriptstyle T}}\bm{X}/n$ in decreasing order. We select $t_{0}$ using the method provided in that paper and set $r_{\max}=10$ . We apply this algorithm directly in our simulations, where it performs well in both high- and low-dimensional settings.
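The eigengap criterion above is straightforward to implement. The following sketch is illustrative only: the function name `estimate_q`, the toy threshold `t0=1.0`, and the simulated data are our choices, whereas in practice $t_{0}$ is selected by Onatski's procedure.

```python
import numpy as np

def estimate_q(X, t0, r_max=10):
    """Eigengap estimator: q_hat = max{i <= r_max : lam_i - lam_{i+1} >= t0},
    where lam_1 >= lam_2 >= ... are eigenvalues of X^T X / n."""
    n = X.shape[0]
    lam = np.linalg.eigvalsh(X.T @ X / n)[::-1]  # eigenvalues in decreasing order
    gaps = lam[:r_max] - lam[1:r_max + 1]
    idx = np.flatnonzero(gaps >= t0)
    return int(idx[-1] + 1) if idx.size else 0

# toy check: two strong latent factors plus weak noise
rng = np.random.default_rng(0)
n, p, q = 2000, 20, 2
U = rng.standard_normal((n, q))
Lam = rng.standard_normal((q, p))
X = U @ Lam + 0.1 * rng.standard_normal((n, p))
print(estimate_q(X, t0=1.0))  # recovers the number of factors for this signal/noise ratio
```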
Appendix S.2 Lemmas
S.2.1 Lemmas for Identification Result
We state Lemmas S.1 to S.4 for Theorem 1.
**Lemma S.1**
*Under conditions A1 – A2, let $\widetilde{X}=\mathbb{E}(X\mid SIV)$ . We have:
$$
\widetilde{X}=FX,
$$
where $F=\Sigma_{X}B_{\Lambda^{\perp}}(B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}\Sigma_{X}B_{\Lambda^{\perp}})^{-1}B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}$ .*
**Lemma S.2**
*The matrix $F$ has the following properties:
1. $F^{2}=F$ ;
1. $F\Sigma_{X}F^{\mathrm{\scriptscriptstyle T}}=FDF^{\mathrm{\scriptscriptstyle T}}=FD=F\Sigma_{X}$ ;
1. $F=\Sigma_{X}B_{\Lambda^{\perp}}(B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}\Sigma_{X}B_{\Lambda^{\perp}})^{-1}B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}=DB_{\Lambda^{\perp}}(B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}DB_{\Lambda^{\perp}})^{-1}B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}$ .*
**Lemma S.3**
*Let $\Phi(\widetilde{\beta})=\mathbb{E}\{Y-\widetilde{X}^{\mathrm{\scriptscriptstyle T}}\widetilde{\beta}\}^{2}$ . Under models (1)–(3) and Conditions A1 – A2, we have:
$$
\underset{\widetilde{\beta}\in\mathbb{R}^{p}}{\operatorname*{arg\,min}}\Phi(\widetilde{\beta})=\{\widetilde{\beta}\mid F\Sigma_{X}(\widetilde{\beta}-\dot{\beta})=0\}=\{\dot{\beta}+\Sigma_{X}^{-1}\Lambda\alpha,\alpha\in\mathbb{R}^{q}\}.
$$*
**Lemma S.4**
*(Uniqueness of the Constrained Solution) Under Conditions A1 – A2, for a set $C⊂\{1,2,...,p\}$ where $|C|=q$ , the optimization problem:
$$
\underset{\widetilde{\beta}_{C}=0,\widetilde{\beta}\in\mathbb{R}^{p}}{\operatorname*{arg\,min}}\mathbb{E}\{Y-\widetilde{X}^{\mathrm{\scriptscriptstyle T}}\widetilde{\beta}\}^{2}
$$
has a unique solution.*
S.2.2 Lemmas for the low dimensional setting
We state Lemmas S.5 – S.11 for the low dimensional setting, where $p$ is fixed.
**Lemma S.5**
*In the low dimensional setting where $\widehat{\Lambda}$ is obtained from maximum likelihood estimation, we have:
$$
\widehat{\mathbf{X}}=\widehat{\mathbb{E}}(\mathbf{X}\mid\widehat{SIV})=\mathbf{X}\widehat{F}^{\mathrm{\scriptscriptstyle T}},
$$
where $\widehat{F}=\widehat{D}B_{\widehat{\Lambda}^{\perp}}(B_{\widehat{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\widehat{D}B_{\widehat{\Lambda}^{\perp}})^{-1}B_{\widehat{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}$ , $\widehat{D}=\frac{\mathbf{X}^{\mathrm{\scriptscriptstyle T}}\mathbf{X}}{n-1}-\widehat{\Lambda}\widehat{\Lambda}^{\mathrm{\scriptscriptstyle T}}$ , and $B_{\widehat{\Lambda}^{\perp}}∈\mathbb{R}^{p×(p-q)}$ is any semi-orthogonal matrix whose column space is orthogonal to the column space of $\widehat{\Lambda}$ .*
**Lemma S.6**
*Assuming $\widehat{F}$ is defined in Lemma S.5, it has the following properties:
1. $\widehat{F}^{2}=\widehat{F}$ ;
1. $\widehat{F}\mathbf{X}^{\mathrm{\scriptscriptstyle T}}\mathbf{X}\widehat{F}^{\mathrm{\scriptscriptstyle T}}=\widehat{F}\mathbf{X}^{\mathrm{\scriptscriptstyle T}}\mathbf{X}$ .*
**Lemma S.7**
*Assume we have three matrices $A∈\mathbb{R}^{p× q}$ , $B∈\mathbb{R}^{p×(p-q)}$ , and $W∈\mathbb{R}^{p× p}$ such that:
- Both $(A\;B)$ and $W$ are invertible.
- $A^{\mathrm{\scriptscriptstyle T}}WB=A^{\mathrm{\scriptscriptstyle T}}W^{\mathrm{\scriptscriptstyle T}}B=0$ .
We then have:
$$
I_{p}=A(A^{\mathrm{\scriptscriptstyle T}}WA)^{-1}A^{\mathrm{\scriptscriptstyle T}}W+B(B^{\mathrm{\scriptscriptstyle T}}WB)^{-1}B^{\mathrm{\scriptscriptstyle T}}W
$$*
**Lemma S.8**
*Assuming $\widehat{F}$ is defined in Lemma S.5, under assumptions B1 – B3, we have $||F-\widehat{F}||_{2}=O_{p}({1/\sqrt{n}})$ .*
**Lemma S.9**
*Under conditions B1 – B3, let $E=({\epsilon}_{y,1},...,{\epsilon}_{y,n})^{\mathrm{\scriptscriptstyle T}}∈\mathbb{R}^{n}$ be the vector of i.i.d. random variables in models (1) and (3). We have:
$$
\left|\left|\frac{\widehat{\mathbf{X}}^{\mathrm{\scriptscriptstyle T}}E}{n}\right|\right|_{\infty}=O_{p}\left(\frac{1}{\sqrt{n}}\right).
$$*
**Lemma S.10**
*Under conditions B1 – B3, $||g(\mathbf{U})^{\mathrm{\scriptscriptstyle T}}\mathbf{X}\widehat{F}^{\mathrm{\scriptscriptstyle T}}/n||_{2}=O_{p}(1/\sqrt{n})$ .*
**Lemma S.11**
*(Sparse Eigenvalue Condition, Low Dimensional Setting) There exists a constant $\pi_{0}>0$ such that:
$$
\liminf_{n}\mathbb{P}\{||\mathbf{\widehat{X}}\theta||_{2}\geq\pi_{0}\sqrt{n}||\theta||_{2},\forall||\theta||_{0}\leq 2s\}=1,
$$
under conditions B1 – B4.*
S.2.3 Lemmas for the high dimensional setting
We state Lemmas S.12 – S.21 for the high dimensional setting, in which $p$ is allowed to diverge. Note that in our proof, we assume $q$ , the number of unmeasured confounders, is known to us.
**Lemma S.12**
*In the high dimensional setting where $\widehat{\Lambda}$ is obtained from the principal component analysis, we have
$$
\widehat{\bf{X}}=\widehat{\mathbb{E}}({\bf{X}}\mid\widehat{SIV})={\bf{X}}\widehat{F}^{\mathrm{\scriptscriptstyle T}},
$$
where $\widehat{F}={B}_{{\widehat{\Lambda}}^{\perp}}{B}_{{\widehat{\Lambda}}^{\perp}}^{T}$ .*
**Lemma S.13**
*With $\widehat{F}$ as defined in Lemma S.12, we have
1. $\widehat{F}^{2}=\widehat{F}$ ;
1. $\widehat{F}{\bf{{X}^{\mathrm{\scriptscriptstyle T}}{X}}}\widehat{F}^{\mathrm{\scriptscriptstyle T}}/n=\widehat{F}{\bf{{X}^{\mathrm{\scriptscriptstyle T}}{X}}}/n$ .*
**Lemma S.14**
*Let $E=({\epsilon}_{y,1},...,{\epsilon}_{y,n})^{\mathrm{\scriptscriptstyle T}}∈\mathbb{R}^{n}$ be the vector of i.i.d. random variables defined at (3), and define
$$
\tau=A\sigma\sqrt{\frac{\log(p)}{n}}
$$
for a positive constant $A$. Under conditions C1 – C3, we have
$$
\mathbb{P}(||\frac{\widehat{\bf{X}}^{\mathrm{\scriptscriptstyle T}}E}{n}||_{\infty}\leq\tau)\geq 1-2p^{1-\frac{A^{2}}{C_{10}}}-p\exp(-C_{11}n),
$$
for some positive constants $C_{10},C_{11}$ .*
**Lemma S.15**
*Recall that $\gamma=\mathbb{E}(Ug(U))∈\mathbb{R}^{q× 1}$ . Under conditions C1 – C3, we have
$$
\left|\left|\frac{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\bm{X}}{n}-\gamma^{\mathrm{\scriptscriptstyle T}}\Lambda^{\mathrm{\scriptscriptstyle T}}\right|\right|_{\infty}=O_{p}(\sqrt{\frac{\log(p)}{n}})
$$*
**Lemma S.16**
*Define $O∈\mathbb{R}^{q× q}$ :
$$
O=\frac{1}{n}\text{diag}(1/\widehat{\lambda}_{1},\ldots,1/\widehat{\lambda}_{q})\widehat{U}^{\mathrm{\scriptscriptstyle T}}U\Lambda^{\mathrm{\scriptscriptstyle T}}\Lambda,
$$
where $\widehat{U}=(\widehat{\eta}_{1}\;...\;\widehat{\eta}_{q})∈\mathbb{R}^{n× q}$ . Under conditions C1 – C3, we have
$$
||O^{\mathrm{\scriptscriptstyle T}}O-I_{q}||_{2}=O_{p}(\frac{1}{\sqrt{p}}+\frac{1}{\sqrt{n}})
$$
$$
||OO^{\mathrm{\scriptscriptstyle T}}-I_{q}||_{2}=O_{p}(\frac{1}{\sqrt{p}}+\frac{1}{\sqrt{n}}).
$$*
**Lemma S.17**
*Let $\widehat{\Lambda}=(\sqrt{\widehat{\lambda}_{1}}\xi_{1}\;...\;\sqrt{\widehat{\lambda}_{q}}\xi_{q})$ . Under conditions C1 – C3, let $\Lambda_{j,·}$ , $\widehat{\Lambda}_{j,·}∈\mathbb{R}^{q}$ be the jth row of $\Lambda$ and $\widehat{\Lambda}$ respectively. We have,
$$
\max_{1\leq j\leq p}||O\Lambda_{j,\cdot}-\widehat{\Lambda}_{j,\cdot}||_{2}=O_{p}(\frac{1}{\sqrt{p}}+\sqrt{\frac{\log p}{n}}).
$$*
**Lemma S.18**
*Under conditions C1 – C3, there exists a vector $d∈\mathbb{R}^{1× p}$ such that
$$
\frac{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\bm{X}}{n}\widehat{F}^{\mathrm{\scriptscriptstyle T}}(\widehat{\beta}-\dot{\beta})\leq(1+q)||d||_{\infty}||\widehat{\beta}-\dot{\beta}||_{1},
$$
with $||d||_{∞}=O_{p}(\sqrt{\log(p)/n})$ .*
**Lemma S.19**
*Let ${\bf X}={\bf U}\Lambda+E_{x}$ , where ${\bf U}=(U_{1}\;U_{2}\;...\;U_{n})^{\mathrm{\scriptscriptstyle T}}$ , $E_{x}=(\epsilon_{x,1}\;\epsilon_{x,2}\;...\;\epsilon_{x,n})^{\mathrm{\scriptscriptstyle T}}∈\mathbb{R}^{n× p}$ . Under Conditions C1 – C2, with probability $1-\exp(-cn)$ for some $c>0$ , we have
$$
\displaystyle\min_{||\theta||_{2}=1,||\theta||_{0}\leq 2s}\theta^{\mathrm{\scriptscriptstyle T}}\frac{E_{x}^{\mathrm{\scriptscriptstyle T}}E_{x}}{n}\theta\geq 0.9\lambda_{\min}(D); \displaystyle\max_{||\theta||_{2}=1,||\theta||_{0}\leq 2s}\theta^{\mathrm{\scriptscriptstyle T}}\frac{E_{x}^{\mathrm{\scriptscriptstyle T}}E_{x}}{n}\theta\leq 1.1\lambda_{\max}(D).
$$*
**Lemma S.20**
*( $\ell_{1}$ error rate inequality) Under conditions C1 – C3, if $\widehat{\beta}$ is obtained via (7) with $k=s$ , then:
$$
||\widehat{\beta}-\dot{\beta}||_{1}=O_{p}\left(s\left\{(1+q)||d||_{\infty}+||\frac{E^{\mathrm{\scriptscriptstyle T}}\widehat{\bm{X}}}{n}||_{\infty}\right\}\right)
$$
for sufficiently large $n$ , where $d$ is defined at Lemma S.18, and $E=(\epsilon_{y,1},...,\epsilon_{y,n})^{\mathrm{\scriptscriptstyle T}}$ .*
**Lemma S.21**
*(Sparse eigenvalue condition) Under conditions C1 – C4, there exists a constant $\pi_{0}>0$ such that
$$
\liminf_{n}\mathbb{P}\{{||{\bf\widehat{X}}\theta||_{2}}\geq\pi_{0}\sqrt{n}{||\theta||_{2}},\forall||\theta||_{0}\leq 2s\}=1.
$$*
Appendix S.3 Proofs of Proposition 2 and Lemmas
Proof of Proposition 2
We now verify that the SIV satisfies the three core assumptions required for an instrumental variable: exclusion restriction, instrumental relevance, and unconfoundedness. By definition, the constructed SIV is given by $\text{SIV}=B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}X=B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}\epsilon_{x}$ .
First, conditional on all the treatments $X$ , SIV is constant, meaning it can only affect the outcome through the treatment, thus satisfying the exclusion restriction. Moreover, SIV is relevant to $X$ because it is a linear combination of $X$ , ensuring instrumental relevance. Finally, since SIV is a linear combination of $\epsilon_{x}$ , it is independent of $U$ , thereby satisfying the unconfoundedness assumption.
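These properties can also be probed by simulation. The sketch below is ours (dimensions, seed, and the tolerance 0.1 are arbitrary); it generates treatments in the row form $X_{i}=\Lambda U_{i}+\epsilon_{x,i}$ , confirms $B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}X=B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}\epsilon_{x}$ exactly, and checks that the sample covariance between the SIV and $U$ vanishes.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 5000, 6, 2
Lam = rng.standard_normal((p, q))            # loading matrix Lambda (p x q)
U = rng.standard_normal((n, q))              # unmeasured confounders
eps_x = rng.standard_normal((n, p))
X = U @ Lam.T + eps_x                        # rows follow X = Lam U + eps_x

# B_{Lambda^perp}: orthonormal basis of the orthogonal complement of col(Lambda)
B = np.linalg.svd(Lam, full_matrices=True)[0][:, q:]
SIV = X @ B                                  # synthetic instrument, n x (p - q)

# SIV depends only on eps_x: B^T X = B^T eps_x holds exactly since B^T Lam = 0
assert np.allclose(SIV, eps_x @ B)
# hence SIV is uncorrelated with U; the sample covariance is O_p(1/sqrt(n))
assert np.max(np.abs(SIV.T @ U / n)) < 0.1
```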
Proof of Lemma S.1
Recall that $SIV=B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}X$ . Given $\widetilde{X}=\mathbb{E}(X\mid SIV)$ and using the linear least squares projection, we obtain:
$$
\widetilde{X}=\mathbb{E}(X\mid SIV)=\text{Cov}(X,SIV)\text{Var}^{-1}(SIV)SIV=\Sigma_{X}B_{\Lambda^{\perp}}(B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}\Sigma_{X}B_{\Lambda^{\perp}})^{-1}B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}X=FX.
$$
Proof of Lemma S.2
These properties can be directly verified from the definition of $F$ .
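As a numerical sanity check, the three identities of Lemma S.2 can be verified on a randomly generated factor structure (a sketch; the dimensions, seed, and noise variances are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 6, 2
Lam = rng.standard_normal((p, q))            # loading matrix Lambda
D = np.diag(rng.uniform(0.5, 1.5, size=p))   # noise covariance; Sigma_X = Lam Lam^T + D
Sigma = Lam @ Lam.T + D

# B_{Lambda^perp}: orthonormal basis of the orthogonal complement of col(Lambda)
B = np.linalg.svd(Lam, full_matrices=True)[0][:, q:]   # p x (p - q)

F = Sigma @ B @ np.linalg.inv(B.T @ Sigma @ B) @ B.T

assert np.allclose(F @ F, F)                           # property 1: F^2 = F
assert np.allclose(F @ Sigma @ F.T, F @ Sigma)         # property 2: F Sigma_X F^T = F Sigma_X
assert np.allclose(F @ Sigma, F @ D)                   # property 2: F Sigma_X = F D (as F Lam = 0)
F_D = D @ B @ np.linalg.inv(B.T @ D @ B) @ B.T         # property 3: alternative representation
assert np.allclose(F, F_D)
```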
Proof of Lemma S.3
By the decomposition of $\Phi(\widetilde{\beta})$ and Lemma S.1, we have:
$$
\Phi(\widetilde{\beta})=\mathbb{E}(Y^{2})+\widetilde{\beta}^{\top}\mathbb{E}(FXX^{\top}F^{\top})\widetilde{\beta}-2\mathbb{E}(YX^{\top}F^{\top})\widetilde{\beta}.
$$
By Lemma S.2, we further have:
$$
\mathbb{E}(FXX^{\top}F^{\top})=F\Sigma_{X}F^{\top}=F\Sigma_{X}.
$$
By models (1), (3), and condition A1, we have:
$$
\mathbb{E}(YX^{\top})F^{\top}=\mathbb{E}\{(\dot{\beta}^{\top}X+g(U)+\epsilon_{y})X^{\top}\}F^{\top}=\dot{\beta}^{\top}\Sigma_{X}F^{\top}+\gamma^{\top}\Lambda^{\top}F^{\top}=\dot{\beta}^{\top}\Sigma_{X}F^{\top},
$$
where the last equality holds as $F\Lambda=\Sigma_{X}B_{\Lambda^{\perp}}(B_{\Lambda^{\perp}}^{\top}\Sigma_{X}B_{\Lambda^{\perp}})^{-1}B_{\Lambda^{\perp}}^{\top}\Lambda=0$ . Then, we have:
$$
\frac{\partial\Phi(\widetilde{\beta})}{\partial\widetilde{\beta}}=2F\Sigma_{X}\widetilde{\beta}-2F\Sigma_{X}\dot{\beta}=2F\Sigma_{X}(\widetilde{\beta}-\dot{\beta}).
$$
Thus, $\underset{\widetilde{\beta}∈\mathbb{R}^{p}}{\operatorname*{arg\,min}}\;\Phi(\widetilde{\beta})=\{\widetilde{\beta}\mid F\Sigma_{X}(\widetilde{\beta}-\dot{\beta})=0\}$ . Since $\text{Rank}(F)=p-q$ and $F\Lambda=0$ , it follows that $\{x\mid Fx=0\}=\{\Lambda\alpha,\alpha∈\mathbb{R}^{q}\}$ . Consequently, the second equality follows: $\{\widetilde{\beta}\mid F\Sigma_{X}(\widetilde{\beta}-\dot{\beta})=0\}=\{\dot{\beta}+\Sigma_{X}^{-1}\Lambda\alpha\mid\alpha∈\mathbb{R}^{q}\}$ .
Proof of Lemma S.4
Given Lemma S.3, $\{\dot{\beta}+\Sigma_{X}^{-1}\Lambda\alpha\}$ is the set of minimizers for the unconstrained problem:
$$
\underset{\widetilde{\beta}\in\mathbb{R}^{p}}{\operatorname*{arg\,min}}\;\mathbb{E}\{Y-\widetilde{X}^{\top}\widetilde{\beta}\}^{2}.
$$
Given Condition A1, the matrix $\{\Sigma_{X}^{-1}\Lambda\}_{C,·}$ is invertible. Thus, there exists a unique solution $\alpha_{C}$ for the matrix equation $\{\Sigma_{X}^{-1}\Lambda\}_{C,·}\alpha=-\dot{\beta}_{C}$ .
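The construction in this proof can be mirrored numerically: solve the $q\times q$ system for $\alpha$ and confirm that the resulting $\widetilde{\beta}$ satisfies both the constraint $\widetilde{\beta}_{C}=0$ and the minimizer characterization of Lemma S.3 (a sketch; dimensions, seed, and the index set $C$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
p, q = 6, 2
Lam = rng.standard_normal((p, q))
Sigma = Lam @ Lam.T + np.diag(rng.uniform(0.5, 1.5, size=p))   # Sigma_X
beta_dot = rng.standard_normal(p)
C = [0, 1]                                    # any index set with |C| = q

M = np.linalg.solve(Sigma, Lam)               # Sigma_X^{-1} Lambda
alpha = np.linalg.solve(M[C, :], -beta_dot[C])   # unique solution of {Sigma^{-1}Lam}_{C,.} alpha = -beta_C
beta_tilde = beta_dot + M @ alpha

assert np.allclose(beta_tilde[C], 0)          # constraint beta_tilde_C = 0 holds
B = np.linalg.svd(Lam, full_matrices=True)[0][:, q:]
F = Sigma @ B @ np.linalg.inv(B.T @ Sigma @ B) @ B.T
assert np.allclose(F @ Sigma @ (beta_tilde - beta_dot), 0)     # still a minimizer (Lemma S.3)
```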
S.3.1 Proofs of Lemmas S.5 – S.11
We provide proofs for Lemmas S.5 – S.11 for the low dimensional setting, where $p$ is fixed.
Proof of Lemma S.5
Given that $\widehat{\mathbf{X}}$ is derived from ordinary least squares, we have the following relationship:
$$
\widehat{\mathbb{E}}(\mathbf{X}\mid\widehat{SIV})=\widehat{SIV}\widehat{\beta}_{OLS},
$$
where $\widehat{\beta}_{OLS}∈\mathbb{R}^{(p-q)× p}$ represents the linear regression coefficients between $\mathbf{X}$ and $\widehat{SIV}$ . We observe that:
$$
\widehat{SIV}=\mathbf{X}B_{\widehat{\Lambda}^{\perp}}.
$$
We then derive:
$$
\begin{split}\widehat{\mathbf{X}}&=\mathbf{X}B_{\widehat{\Lambda}^{\perp}}\left(\frac{B_{\widehat{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\mathbf{X}^{\mathrm{\scriptscriptstyle T}}\mathbf{X}B_{\widehat{\Lambda}^{\perp}}}{n-1}\right)^{-1}\frac{B_{\widehat{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\mathbf{X}^{\mathrm{\scriptscriptstyle T}}\mathbf{X}}{n-1}\\
&=\mathbf{X}B_{\widehat{\Lambda}^{\perp}}(B_{\widehat{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\widehat{D}B_{\widehat{\Lambda}^{\perp}})^{-1}B_{\widehat{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\widehat{D}\\
&=\mathbf{X}\widehat{F}^{\mathrm{\scriptscriptstyle T}},\end{split}
$$
where the second equality uses $\mathbf{X}^{\mathrm{\scriptscriptstyle T}}\mathbf{X}/(n-1)=\widehat{D}+\widehat{\Lambda}\widehat{\Lambda}^{\mathrm{\scriptscriptstyle T}}$ and $B_{\widehat{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\widehat{\Lambda}=0$ , and $\widehat{F}=\widehat{D}B_{\widehat{\Lambda}^{\perp}}(B_{\widehat{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\widehat{D}B_{\widehat{\Lambda}^{\perp}})^{-1}B_{\widehat{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}$ .
Proof of Lemma S.6
The properties of $\widehat{F}$ can be directly verified from its definition. In particular,
$$
\widehat{F}\frac{\mathbf{X}^{\mathrm{\scriptscriptstyle T}}\mathbf{X}}{n-1}\widehat{F}^{\mathrm{\scriptscriptstyle T}}=\widehat{F}\frac{\mathbf{X}^{\mathrm{\scriptscriptstyle T}}\mathbf{X}}{n-1},
$$
which follows because $\widehat{F}$ is a projection matrix by its construction from $\widehat{D}$ and $B_{\widehat{\Lambda}^{\perp}}$ .
Proof of Lemma S.7
Write $M=A(A^{\mathrm{\scriptscriptstyle T}}WA)^{-1}A^{\mathrm{\scriptscriptstyle T}}W+B(B^{\mathrm{\scriptscriptstyle T}}WB)^{-1}B^{\mathrm{\scriptscriptstyle T}}W$ . Since $A^{\mathrm{\scriptscriptstyle T}}WB=0$ and $B^{\mathrm{\scriptscriptstyle T}}WA=(A^{\mathrm{\scriptscriptstyle T}}W^{\mathrm{\scriptscriptstyle T}}B)^{\mathrm{\scriptscriptstyle T}}=0$ , we have $MA=A$ and $MB=B$ , so $M(A\;B)=(A\;B)$ . As $(A\;B)$ is invertible, $M=I_{p}$ .
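The identity is easy to probe numerically. In the sketch below (dimensions and seed are arbitrary), taking $A=W^{-1}C$ for an orthonormal $C$ mirrors the choice $A=D^{-1}\Lambda$ used later in the proof of Lemma S.8 and guarantees the two orthogonality conditions:

```python
import numpy as np

rng = np.random.default_rng(3)
p, q = 7, 3
W = np.diag(rng.uniform(0.5, 2.0, size=p))       # symmetric, invertible (plays the role of D)
Q, _ = np.linalg.qr(rng.standard_normal((p, p))) # orthonormal columns
C, B = Q[:, :q], Q[:, q:]                        # columns of C orthogonal to columns of B
A = np.linalg.inv(W) @ C                         # then A^T W B = C^T B = 0

# conditions of Lemma S.7
assert np.allclose(A.T @ W @ B, 0) and np.allclose(A.T @ W.T @ B, 0)

rhs = (A @ np.linalg.inv(A.T @ W @ A) @ A.T @ W
       + B @ np.linalg.inv(B.T @ W @ B) @ B.T @ W)
assert np.allclose(rhs, np.eye(p))               # the decomposition of I_p holds
```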
Proof of Lemma S.8
Let $w∈\mathbb{R}^{p}$ such that $||w||_{2}=1$ , we aim to show that
$$
\sup_{||w||_{2}=1}||(F^{\mathrm{\scriptscriptstyle T}}-\widehat{F}^{\mathrm{\scriptscriptstyle T}})w||_{2}=O_{p}(1/\sqrt{n}).
$$
Applying Lemma S.7 with $A=D^{-1}\Lambda$ , $B=B_{\Lambda^{\perp}}$ , and $W=D$ , we have the following equation:
$$
\begin{split}w=I_{p}w&=D^{-1}\Lambda(\Lambda^{\mathrm{\scriptscriptstyle T}}D^{-1}\Lambda)^{-1}\Lambda^{\mathrm{\scriptscriptstyle T}}w+{B}_{{\Lambda}^{\perp}}({B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}D{B}_{{\Lambda}^{\perp}})^{-1}{B}_{{\Lambda}^{\perp}}^{T}Dw\\
:&=D^{-1}\Lambda\alpha_{1}+{B}_{{\Lambda}^{\perp}}\alpha_{2},\end{split}
$$
where $\alpha_{1}=(\Lambda^{\mathrm{\scriptscriptstyle T}}D^{-1}\Lambda)^{-1}\Lambda^{\mathrm{\scriptscriptstyle T}}w$ , $\alpha_{2}=({B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}D{B}_{{\Lambda}^{\perp}})^{-1}{B}_{{\Lambda}^{\perp}}^{T}Dw$ . The LHS of (S8) satisfies
$$
\sup_{||w||_{2}=1}||(F^{\mathrm{\scriptscriptstyle T}}-\widehat{F}^{\mathrm{\scriptscriptstyle T}})w||_{2}\leq\sup_{||w||_{2}=1}||(F^{\mathrm{\scriptscriptstyle T}}-\widehat{F}^{\mathrm{\scriptscriptstyle T}})D^{-1}\Lambda\alpha_{1}||_{2}+\sup_{||w||_{2}=1}||(F^{\mathrm{\scriptscriptstyle T}}-\widehat{F}^{\mathrm{\scriptscriptstyle T}})B_{\Lambda^{\perp}}\alpha_{2}||_{2}.
$$
For the first term on the RHS of (S10), since $F^{\mathrm{\scriptscriptstyle T}}D^{-1}\Lambda={B}_{{\Lambda}^{\perp}}({B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}D{B}_{{\Lambda}^{\perp}})^{-1}{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}DD^{-1}\Lambda={B}_{{\Lambda}^{\perp}}({B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}D{B}_{{\Lambda}^{\perp}})^{-1}{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\Lambda=0$ , we derive the following equation (S11):
$$
\begin{split}&\sup_{||w||_{2}=1}||(F^{\mathrm{\scriptscriptstyle T}}-\widehat{F}^{\mathrm{\scriptscriptstyle T}})D^{-1}\Lambda\alpha_{1}||_{2}\\
&=\sup_{||w||_{2}=1}||\widehat{F}^{\mathrm{\scriptscriptstyle T}}D^{-1}\Lambda\alpha_{1}||_{2}\\
&=\sup_{||w||_{2}=1}||\widehat{B}_{{\Lambda}^{\perp}}(\widehat{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\widehat{D}\widehat{B}_{{\Lambda}^{\perp}})^{-1}\widehat{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\widehat{D}D^{-1}\Lambda\alpha_{1}||_{2}\\
&\leq\sup_{||w||_{2}=1}||\widehat{B}_{{\Lambda}^{\perp}}(\widehat{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\widehat{D}\widehat{B}_{{\Lambda}^{\perp}})^{-1}\widehat{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\Lambda\alpha_{1}||_{2}+||\widehat{B}_{{\Lambda}^{\perp}}(\widehat{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\widehat{D}\widehat{B}_{{\Lambda}^{\perp}})^{-1}\widehat{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}(I_{p}-\widehat{D}D^{-1})\Lambda\alpha_{1}||_{2}\\
&=\sup_{||w||_{2}=1}||\widehat{B}_{{\Lambda}^{\perp}}(\widehat{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\widehat{D}\widehat{B}_{{\Lambda}^{\perp}})^{-1}\widehat{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}(\Lambda-\widehat{\Lambda}O^{\mathrm{\scriptscriptstyle T}})\alpha_{1}||_{2}\\
&+\sup_{||w||_{2}=1}||\widehat{B}_{{\Lambda}^{\perp}}(\widehat{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\widehat{D}\widehat{B}_{{\Lambda}^{\perp}})^{-1}\widehat{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}(D-\widehat{D})D^{-1}\Lambda\alpha_{1}||_{2}\\
&\leq||\widehat{B}_{{\Lambda}^{\perp}}(\widehat{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\widehat{D}\widehat{B}_{{\Lambda}^{\perp}})^{-1}\widehat{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}||_{2}(||\Lambda-\widehat{\Lambda}O^{\mathrm{\scriptscriptstyle T}}||_{2}+||D-\widehat{D}||_{2}||D^{-1}||_{2}||\Lambda||_{2})\sup_{||w||_{2}=1}||\alpha_{1}||_{2},\end{split}
$$
where $O$ is an orthogonal matrix defined at Condition B3. Note that
$$
||\Lambda-\widehat{\Lambda}O^{\mathrm{\scriptscriptstyle T}}||_{2}=||(\Lambda O-\widehat{\Lambda})O^{\mathrm{\scriptscriptstyle T}}||_{2}=O_{p}(1/\sqrt{n}),
$$
$$
||\widehat{D}-D||_{2}\leq||{\bf{X^{\mathrm{\scriptscriptstyle T}}X}}/n-\Sigma_{X}||_{2}+||\Lambda\Lambda^{\mathrm{\scriptscriptstyle T}}-\widehat{\Lambda}\widehat{\Lambda}^{\mathrm{\scriptscriptstyle T}}||_{2}=O_{p}(1/\sqrt{n})
$$
and
$$
\begin{split}||\widehat{B}_{{\Lambda}^{\perp}}(\widehat{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\widehat{D}\widehat{B}_{{\Lambda}^{\perp}})^{-1}\widehat{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}||_{2}&=||\widehat{B}_{{\Lambda}^{\perp}}(\widehat{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\widehat{D}\widehat{B}_{{\Lambda}^{\perp}})^{-1}\widehat{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\widehat{D}\widehat{D}^{-1}||_{2}\\
&\leq||\widehat{B}_{{\Lambda}^{\perp}}(\widehat{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\widehat{D}\widehat{B}_{{\Lambda}^{\perp}})^{-1}\widehat{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\widehat{D}||_{2}\,||\widehat{D}^{-1}||_{2}\\
&=1/\lambda_{\min}(\widehat{D})\\
&=O_{p}(1),\end{split}
$$
where the second equation holds as $\widehat{B}_{{\Lambda}^{\perp}}(\widehat{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\widehat{D}\widehat{B}_{{\Lambda}^{\perp}})^{-1}\widehat{B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\widehat{D}$ is a projection matrix. Since $\sup_{||w||_{2}=1}||\alpha_{1}||_{2}=\sup_{||w||_{2}=1}||(\Lambda^{\mathrm{\scriptscriptstyle T}}D^{-1}\Lambda)^{-1}\Lambda^{\mathrm{\scriptscriptstyle T}}w||_{2}≤||(\Lambda^{\mathrm{\scriptscriptstyle T}}D^{-1}\Lambda)^{-1}\Lambda^{\mathrm{\scriptscriptstyle T}}||_{2}=O(1)$ , we conclude that the first term of (S10) is $O_{p}(1/\sqrt{n})$ :
$$
\sup_{||w||_{2}=1}||(F^{\mathrm{\scriptscriptstyle T}}-\widehat{F}^{\mathrm{\scriptscriptstyle T}})D^{-1}\Lambda\alpha_{1}||_{2}=||\widehat{F}^{\mathrm{\scriptscriptstyle T}}D^{-1}\Lambda\alpha_{1}||_{2}=O_{p}(\frac{1}{\sqrt{n}}).
$$
For the second term of (S10), we let $(A,B,W)=(D^{-1}B_{\Lambda},B_{{\Lambda}^{\perp}},D)$ or $(\widehat{D}^{-1}\widehat{B}_{\Lambda},\widehat{B}_{{\Lambda}^{\perp}},\widehat{D})$ in Lemma S.7, where $B_{\Lambda}$ and $\widehat{B}_{\Lambda}∈\mathbb{R}^{p× q}$ are any semi-orthogonal matrices whose column spaces coincide with those of $\Lambda$ and $\widehat{\Lambda}$ , respectively. We have
$$
I_{p}=D^{-1}B_{\Lambda}(B_{\Lambda}^{\mathrm{\scriptscriptstyle T}}D^{-1}B_{\Lambda})^{-1}B_{\Lambda}^{\mathrm{\scriptscriptstyle T}}+B_{{\Lambda}^{\perp}}(B_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}DB_{{\Lambda}^{\perp}})^{-1}B_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}D=D^{-1}B_{\Lambda}(B_{\Lambda}^{\mathrm{\scriptscriptstyle T}}D^{-1}B_{\Lambda})^{-1}B_{\Lambda}^{\mathrm{\scriptscriptstyle T}}+F^{\mathrm{\scriptscriptstyle T}},
$$
which implies that the second term of (S10) can be rewritten as
$$
\begin{split}&\sup_{||w||_{2}=1}||(F^{\mathrm{\scriptscriptstyle T}}-\widehat{F}^{\mathrm{\scriptscriptstyle T}})B_{\Lambda^{\perp}}\alpha_{2}||_{2}\\
=&\sup_{||w||_{2}=1}||\{(I_{p}-\widehat{F}^{\mathrm{\scriptscriptstyle T}})-(I_{p}-F^{\mathrm{\scriptscriptstyle T}})\}B_{\Lambda^{\perp}}\alpha_{2}||_{2}\\
=&\sup_{||w||_{2}=1}||\{\widehat{D}^{-1}\widehat{B}_{\Lambda}(\widehat{B}_{\Lambda}^{\mathrm{\scriptscriptstyle T}}\widehat{D}^{-1}\widehat{B}_{\Lambda})^{-1}\widehat{B}_{\Lambda}^{\mathrm{\scriptscriptstyle T}}-D^{-1}\Lambda(\Lambda^{\mathrm{\scriptscriptstyle T}}D^{-1}\Lambda)^{-1}\Lambda^{\mathrm{\scriptscriptstyle T}}\}B_{\Lambda^{\perp}}\alpha_{2}||_{2}\\
=&\sup_{||w||_{2}=1}||\widehat{D}^{-1}\widehat{B}_{\Lambda}(\widehat{B}_{\Lambda}^{\mathrm{\scriptscriptstyle T}}\widehat{D}^{-1}\widehat{B}_{\Lambda})^{-1}\widehat{B}_{\Lambda}^{\mathrm{\scriptscriptstyle T}}B_{\Lambda^{\perp}}\alpha_{2}||_{2}\\
\leq&\;||\widehat{D}^{-1}\widehat{B}_{\Lambda}(\widehat{B}_{\Lambda}^{\mathrm{\scriptscriptstyle T}}\widehat{D}^{-1}\widehat{B}_{\Lambda})^{-1}||_{2}\,||\widehat{B}_{\Lambda}^{\mathrm{\scriptscriptstyle T}}B_{\Lambda^{\perp}}||_{2}\sup_{||w||_{2}=1}||\alpha_{2}||_{2},\end{split}
$$
We control the three terms on the right-hand side of (S13) as follows. For the first term of equation (S13):
$$
\begin{split}||\widehat{D}^{-1}\widehat{B}_{\Lambda}(\widehat{B}_{\Lambda}^{\mathrm{\scriptscriptstyle T}}\widehat{D}^{-1}\widehat{B}_{\Lambda})^{-1}||_{2}&=||\widehat{D}^{-1}\widehat{B}_{\Lambda}(\widehat{B}_{\Lambda}^{\mathrm{\scriptscriptstyle T}}\widehat{D}^{-1}\widehat{B}_{\Lambda})^{-1}\widehat{B}_{\Lambda}^{\mathrm{\scriptscriptstyle T}}\widehat{B}_{\Lambda}||_{2}\\
&\leq||\widehat{D}^{-1}\widehat{B}_{\Lambda}(\widehat{B}_{\Lambda}^{\mathrm{\scriptscriptstyle T}}\widehat{D}^{-1}\widehat{B}_{\Lambda})^{-1}\widehat{B}_{\Lambda}^{\mathrm{\scriptscriptstyle T}}||_{2}||\widehat{B}_{\Lambda}||_{2}\\
&\leq 1\times 1\\
&=1,\end{split}
$$
where the first equation holds due to the property of semi-orthogonal matrix, and the second inequality holds as $\widehat{D}^{-1}\widehat{B}_{\Lambda}(\widehat{B}_{\Lambda}^{\mathrm{\scriptscriptstyle T}}\widehat{D}^{-1}\widehat{B}_{\Lambda})^{-1}\widehat{B}_{\Lambda}^{\mathrm{\scriptscriptstyle T}}$ is a projection matrix. For the third term of (S13), we have
$$
\sup_{||w||_{2}=1}||\alpha_{2}||_{2}=\sup_{||w||_{2}=1}||({B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}D{B}_{{\Lambda}^{\perp}})^{-1}{B}_{{\Lambda}^{\perp}}^{T}Dw||\leq||({B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}D{B}_{{\Lambda}^{\perp}})^{-1}{B}_{{\Lambda}^{\perp}}^{T}D||_{2}=O(1). \tag{1}
$$
For the second term of (S13), since $B_{\Lambda}^{\mathrm{\scriptscriptstyle T}}B_{\Lambda^{\perp}}=0$ and $||\Lambda-\widehat{\Lambda}O^{\mathrm{\scriptscriptstyle T}}||_{2}=O_{p}(1/\sqrt{n})$ , the column space of $\widehat{B}_{\Lambda}$ converges to that of $B_{\Lambda}$ at the same rate, so that:
$$
||\widehat{B}_{\Lambda}^{\mathrm{\scriptscriptstyle T}}B_{\Lambda^{\perp}}||_{2}=O_{p}(1/\sqrt{n}).
$$
Taking the supremum over $||w||_{2}=1$ on both sides of (S13), we conclude that
$$
\sup_{||w||_{2}=1}||(F^{\mathrm{\scriptscriptstyle T}}-\widehat{F}^{\mathrm{\scriptscriptstyle T}})B_{\Lambda^{\perp}}({B}_{{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}D{B}_{{\Lambda}^{\perp}})^{-1}{B}_{{\Lambda}^{\perp}}^{T}Dw||_{2}=O_{p}(\frac{1}{\sqrt{n}}).
$$
So far we have shown that both terms on the RHS of (S10) are of order $O_{p}(1/\sqrt{n})$ , which completes the proof of the lemma.
Proof of Lemma S.9
We have the following decomposition of the target term:
$$
\left|\left|\frac{\widehat{\bm{X}}^{\mathrm{\scriptscriptstyle T}}E}{n}\right|\right|_{\infty}=\left|\left|\frac{\widehat{F}\bm{X}^{\mathrm{\scriptscriptstyle T}}E}{n}\right|\right|_{\infty}\leq\left|\left|\frac{\widehat{F}\bm{X}^{\mathrm{\scriptscriptstyle T}}E}{n}\right|\right|_{2}\leq||\widehat{F}||_{2}\left|\left|\frac{\bm{X}^{\mathrm{\scriptscriptstyle T}}E}{n}\right|\right|_{2}=O_{p}\left(\frac{1}{\sqrt{n}}\right).
$$
The second inequality holds as $\widehat{F}$ is a projection matrix and $||\widehat{F}||_{2}=1$ . The last equality is given by the element-wise $\sqrt{n}$ consistency of covariance estimation between $\epsilon_{y}$ and $X$ under the low dimensional setting, together with $\text{Cov}(\epsilon_{y},X)=0$ .
Proof of Lemma S.10
We observe that
$$
\Lambda^{\top}F^{\top}=\Lambda^{\top}{B}_{{\Lambda}^{\perp}}({B}_{{\Lambda}^{\perp}}^{\top}{\Sigma_{X}}{B}_{{\Lambda}^{\perp}})^{-1}{B}_{{\Lambda}^{\perp}}^{\top}{{\Sigma_{X}}}=0.
$$
From Lemma S.8, we have
$$
\begin{split}\left|\left|\frac{g({\bf U})^{\top}\bm{X}\widehat{F}^{\top}}{n}\right|\right|_{2}&\leq\left|\left|\left(\frac{g({\bf U})^{\top}\bm{X}}{n}-\text{Cov}(g({U}),X)\right)\widehat{F}^{\top}\right|\right|_{2}+\left|\left|\text{Cov}(g({U}),X)F^{\top}\right|\right|_{2}\\
&+\left|\left|\text{Cov}(g({U}),X)(\widehat{F}^{\top}-F^{\top})\right|\right|_{2}\\
&\leq\left|\left|\frac{g({\bf U})^{\top}\bm{X}}{n}-\text{Cov}(g({U}),X)\right|\right|_{2}\;||\widehat{F}^{\top}||_{2}+\left|\left|\text{Cov}(g(U),U)\Lambda^{\top}F^{\top}\right|\right|_{2}\\
&+||\text{Cov}(g(U),X)||_{2}\left|\left|\widehat{F}^{\top}-F^{\top}\right|\right|_{2}\\
&=O_{p}(\frac{1}{\sqrt{n}})+0+O_{p}(\frac{1}{\sqrt{n}}),\end{split}
$$
where the first term is controlled by the element-wise $\sqrt{n}$ consistency of the sample covariance matrix under the low-dimensional setting; the second term is 0 since $\Lambda^{\top}F^{\top}=0$ ; the third term is controlled by Lemma S.8.
Proof of Lemma S.11
For any $\delta>0$ , we are going to show that there exists an $n_{0}$ such that
$$
\inf_{n>n_{0}}\mathbb{P}\left\{\|\widehat{\bf{X}}\theta\|_{2}\geq\pi_{0}\sqrt{n}\|\theta\|_{2},\forall\|\theta\|_{0}\leq 2s\right\}\geq 1-\delta.
$$
We have the following decomposition:
$$
\begin{split}\frac{\widehat{F}{\bf{X^{\top}X}}\widehat{F}^{\top}}{n}&=\frac{\widehat{F}{\bf{X^{\top}X}}}{n}\\
&=(\widehat{F}-F)\frac{{\bf{X^{\top}X}}}{n}+F\left(\frac{{\bf{X^{\top}X}}}{n}-\Sigma_{X}\right)+F\Sigma_{X}\\
&=(\widehat{F}-F)\frac{{\bf{X^{\top}X}}}{n}+F\left(\frac{{\bf{X^{\top}X}}}{n}-\Sigma_{X}\right)+F\Sigma_{X}F^{\top},\end{split}
$$
where the first equality holds by Lemma S.6 and the last equality by Lemma S.2.
Given Lemma S.8 and the $\sqrt{n}$ consistency of the sample covariance under a low-dimensional setting, we have
$$
\left\|\left((\widehat{F}-F)\frac{{\bf{X^{\top}X}}}{n}+F\left(\frac{{\bf{X^{\top}X}}}{n}-\Sigma_{X}\right)\right)\right\|_{2}=O_{p}(1/\sqrt{n}),
$$
which suggests that there exists a constant $A_{\delta}$ and $n_{1}$ such that
$$
\inf_{n>n_{1}}\mathbb{P}\left(\left\|\left((\widehat{F}-F)\frac{{\bf{X^{\top}X}}}{n}+F\left(\frac{{\bf{X^{\top}X}}}{n}-\Sigma_{X}\right)\right)\right\|_{2}\leq\frac{A_{\delta}}{\sqrt{n}}\right)\geq 1-\delta.
$$
Define
$$
\pi_{1}=\inf\left\{\frac{\theta^{\top}\Sigma_{\widetilde{X}}\theta}{\|\theta\|_{2}^{2}}:\theta\in\mathbb{R}^{p},\;\|\theta\|_{0}\leq 2s\right\},
$$
which is greater than 0 by Condition B4. Let $\pi_{0}=\sqrt{\pi_{1}/2}$ , $n_{2}=4A^{2}_{\delta}/\pi_{1}^{2}$ , and $n_{0}=\max(n_{1},n_{2})$ . We have, with probability at least $1-\delta$ , for all $n>n_{0}$ and for all $\theta∈\mathbb{R}^{p}$ , $\|\theta\|_{0}≤ 2s$ ,
$$
\begin{split}\theta^{\top}\frac{\widehat{F}{\bf{X^{\top}X}}\widehat{F}^{\top}}{n}\theta&=\theta^{\top}F\Sigma_{X}F^{\top}\theta+\theta^{\top}\left((\widehat{F}-F)\frac{{\bf{X^{\top}X}}}{n}+F\left(\frac{{\bf{X^{\top}X}}}{n}-\Sigma_{X}\right)\right)\theta\\
&\geq\|\theta\|_{2}^{2}(\pi_{1}-\left\|\left((\widehat{F}-F)\frac{{\bf{X^{\top}X}}}{n}+F\left(\frac{{\bf{X^{\top}X}}}{n}-\Sigma_{X}\right)\right)\right\|_{2})=\|\theta\|_{2}^{2}\pi_{0}^{2},\end{split}
$$
which concludes the proof.
S.3.2 Proofs of Lemmas S.12 – S.21
We prove Lemmas S.12 through S.21 for the high-dimensional setting, where $p$ is allowed to diverge.
Proof of Lemma S.12
Since $\widehat{\Lambda}=(\sqrt{\lambda_{1}}\xi_{1}\;...\;\sqrt{\lambda_{q}}\xi_{q}),$ we can let ${B}_{\widehat{\Lambda}^{\perp}}=(\xi_{q+1},\;\xi_{q+2},\;...,\;\xi_{p})$ without loss of generality.
Given the singular value decomposition of $\bm{X}$ , the definition of $\widehat{SIV}$ , and the eigenvalues $\lambda_{1}≥\lambda_{2}≥...≥\lambda_{k}>0=\lambda_{k+1}=\lambda_{k+2}=...=\lambda_{p}$ , we have
$$
\begin{split}\widehat{SIV}={\bf X}{B}_{\widehat{\Lambda}^{\perp}}&=\left(\sum^{k}_{i=1}\sqrt{(n-1)\lambda_{i}}{\eta_{i}}\xi_{i}^{{\mathrm{\scriptscriptstyle T}}}\right)(\xi_{q+1}\;\ldots\;\xi_{p})\\
&=(\eta_{q+1}\sqrt{(n-1)\lambda_{q+1}},\;\ldots,\;\eta_{k}\sqrt{(n-1)\lambda_{k}},\;0,\;\ldots,\;0)\\
&=({\bf\dot{X}},\;0),\end{split}
$$
where ${\bf\dot{X}}=(\eta_{q+1}\sqrt{(n-1)\lambda_{q+1}},\;...,\;\eta_{k}\sqrt{(n-1)\lambda_{k}})∈\mathbb{R}^{n×(k-q)}$ .
Applying this in the regression framework, we find:
$$
\begin{split}\widehat{\bf{X}}=&\widehat{\mathbb{E}}({\bf X}\mid\dot{\bf X})\\
=&\dot{\bf X}\left(\widehat{Var}(\dot{\bf X})^{-1}\right)\widehat{Cov}(\dot{\bf X},{\bf X})\\
=&\dot{\bf X}\left\{\frac{\dot{\bf X}^{\mathrm{\scriptscriptstyle T}}\dot{\bf X}}{n-1}\right\}^{-1}\frac{\dot{\bf X}^{\mathrm{\scriptscriptstyle T}}{\bf X}}{n-1}\\
=&\sum^{k}_{i=q+1}\sqrt{(n-1)\lambda_{i}}\eta_{i}\xi_{i}^{\mathrm{\scriptscriptstyle T}}.\end{split}
$$
Finally, comparing this with the matrix $\bm{X}\widehat{F}^{\mathrm{\scriptscriptstyle T}}$ :
$$
\bm{X}\widehat{F}^{\mathrm{\scriptscriptstyle T}}=\sqrt{n-1}\,(\eta_{1}\;\ldots\;\eta_{p})\,\text{diag}(\sqrt{\lambda_{1}},\ldots,\sqrt{\lambda_{p}})\,(\xi_{1}\;\ldots\;\xi_{p})^{\mathrm{\scriptscriptstyle T}}(\xi_{q+1}\;\ldots\;\xi_{p})(\xi_{q+1}\;\ldots\;\xi_{p})^{\mathrm{\scriptscriptstyle T}}=\sum^{k}_{i=q+1}\sqrt{(n-1)\lambda_{i}}\,\eta_{i}\xi_{i}^{\mathrm{\scriptscriptstyle T}}=\widehat{\bf{X}}.
$$
We conclude the proof of Lemma S.12.
Proof of Lemma S.13
The first claim can be directly checked by definition.
For the second claim, we observe that $\frac{{\bf{X^{\mathrm{\scriptscriptstyle T}}X}}}{n-1}=\sum^{k}_{i=1}\lambda_{i}\xi_{i}\xi_{i}^{\mathrm{\scriptscriptstyle T}}$ , and with $\widehat{F}=(\xi_{q+1}\;...\;\xi_{p})(\xi_{q+1}\;...\;\xi_{p})^{\mathrm{\scriptscriptstyle T}}$ , we have:
$$
\frac{\widehat{F}{\bf X^{\mathrm{\scriptscriptstyle T}}X}\widehat{F}^{\mathrm{\scriptscriptstyle T}}}{n-1}=\frac{\widehat{F}{\bf{X^{\mathrm{\scriptscriptstyle T}}X}}}{n-1}=\sum^{k}_{i=q+1}\lambda_{i}\xi_{i}\xi_{i}^{\mathrm{\scriptscriptstyle T}}.
$$
Proof of Lemma S.14
We first work on the event
$$
\Omega=\{\max_{1\leq j\leq p}||{\bf{X}}_{\cdot,j}||^{2}_{2}/n<\max_{1\leq j\leq p}4(\Sigma_{X})_{j,j}\},
$$
where ${\bf{X}}_{·,j}$ is the $j$ th column of the design matrix $\bm{X}$ . Since $\{X_{i,j}\}_{i=1}^{n}$ is a sequence of i.i.d. mean-zero sub-Gaussian random variables with variance $(\Sigma_{X})_{j,j}$ , we have $\mathbb{E}\{X_{i,j}^{2}/(\Sigma_{X})_{j,j}\}=1$ for $i∈\{1,...,n\}$ . Applying Theorem 3.1.1 of Vershynin (2018) yields the following concentration result:
$$
\mathbb{P}(\left|\;\left|\left|\frac{\bm{X}_{\cdot,j}}{\sqrt{(\Sigma_{X})_{j,j}}}\right|\right|_{2}-\sqrt{n}\;\right|\geq t)\leq\exp(-ct^{2}/K_{j}^{4}),
$$
where $c$ is a universal constant and $K_{j}=||\bm{X}_{i,j}/\sqrt{(\Sigma_{X})_{j,j}}||_{\psi_{2}}$ . Setting $t=\sqrt{n}$ , we have
$$
\mathbb{P}(\frac{1}{n}||\bm{X}_{\cdot,j}||_{2}^{2}\geq 4(\Sigma_{X})_{j,j})\leq\exp(-cn/K_{j}^{4}).
$$
We derive the following result from Equation (S16):
$$
\begin{split}\mathbb{P}(\Omega^{c})&=\mathbb{P}\left(\max_{1\leq j\leq p}||{\bf{X}}_{\cdot,j}||^{2}_{2}/n\geq\max_{1\leq j\leq p}4(\Sigma_{X})_{j,j}\right)\\
&\leq\sum_{j=1}^{p}\mathbb{P}\left(\frac{1}{n}||{\bf{X}}_{\cdot,j}||_{2}^{2}\geq\max_{1\leq j\leq p}4(\Sigma_{X})_{j,j}\right)\\
&\leq\sum_{j=1}^{p}\mathbb{P}\left(\frac{1}{n}||{\bf{X}}_{\cdot,j}||_{2}^{2}\geq 4(\Sigma_{X})_{j,j}\right)\\
&\leq p\exp\left(-cn/\max_{j}K_{j}^{4}\right)\\
&\leq p\exp\left(-C_{10}n\right).\end{split}
$$
Since the $K_{j}$ are bounded under Condition C3 (where $\sigma_{j}$ is bounded), the last inequality holds with the positive constant $C_{10}=\frac{c}{\max_{j}K_{j}^{4}}$ .
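The exponential smallness of $\mathbb{P}(||\bm{X}_{\cdot,j}||_{2}^{2}/n\geq 4(\Sigma_{X})_{j,j})$ can be illustrated by simulation (a hedged sketch with Gaussian columns; the sample size, variance, and number of trials are arbitrary choices):

```python
import numpy as np

# Monte Carlo: for Gaussian columns, ||X_j||_2^2 / n very rarely exceeds
# 4 * Var(X_j); here the empirical exceedance frequency is essentially zero.
rng = np.random.default_rng(2)
n, trials = 100, 2000
var = 1.5                             # plays the role of (Sigma_X)_{j,j}
cols = rng.normal(0.0, np.sqrt(var), size=(trials, n))
norms = (cols ** 2).mean(axis=1)      # ||X_j||_2^2 / n for each trial
exceed = np.mean(norms >= 4 * var)    # empirical exceedance probability
assert exceed < 0.01
```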
Recall $E=(\epsilon_{y,1},...,\epsilon_{y,n})^{\mathrm{\scriptscriptstyle T}}$ . Conditional on $\bf X$ and the event $\Omega$ , the components of $\eta=\widehat{\bf{X}}^{\mathrm{\scriptscriptstyle T}}E/n=\widehat{F}{\bf{X}}^{\mathrm{\scriptscriptstyle T}}E/n$ are sub-Gaussian random variables, as they are linear combinations of independent sub-Gaussian random variables. The component $\eta_{j}$ is sub-Gaussian with mean zero and parameter $\theta_{j}={\sigma}||\widehat{\bf{X}}_{·,j}||_{2}/n$ , where $\widehat{\bf{X}}_{·,j}$ is the $j$ th column of $\widehat{\bf{X}}$ . Let $\tau_{1}=A\sigma\sqrt{\log(p)/n}$ . Define the event $\Omega_{1}$ :
$$
\Omega_{1}=\{||\eta||_{\infty}\geq\tau_{1}\}.
$$
From the tail bound for sub-Gaussian random variables, we have
$$
\begin{split}\mathbb{P}(\Omega_{1}\mid{\bf X},\Omega)&=\mathbb{P}(||\eta||_{\infty}\geq\tau_{1}\mid{\bf X},\Omega)\\
&\leq\sum^{p}_{j=1}\mathbb{P}(|\eta_{j}|\geq\tau_{1}\mid\bm{X},\Omega)\\
&\leq p\max_{j}\;2\exp(-\frac{\tau^{2}_{1}}{\theta_{j}^{2}})\\
&=2p\exp(-\frac{\tau^{2}_{1}}{\max_{j}\theta_{j}^{2}}).\end{split}
$$
We now attempt to bound $\theta_{j}$ from above, given the event $\Omega$ . Recall the following principal component decomposition:
$$
\begin{split}\frac{{\bf X}^{\mathrm{\scriptscriptstyle T}}{\bf X}}{n-1}&=\begin{pmatrix}{\xi}_{1}&\ldots&{\xi}_{p}\end{pmatrix}\text{diag}({\lambda}_{1},\ldots,{\lambda}_{p})\begin{pmatrix}{\xi}_{1}^{\mathrm{\scriptscriptstyle T}}\\
\ldots\\
{\xi}_{p}^{\mathrm{\scriptscriptstyle T}}\end{pmatrix}=\sum_{i=1}^{p}{\lambda}_{i}{\xi}_{i}{\xi}_{i}^{\mathrm{\scriptscriptstyle T}}\\
\frac{\bf\widehat{X}^{\mathrm{\scriptscriptstyle T}}\widehat{X}}{n}&=\begin{pmatrix}{\xi}_{q+1}&\ldots&{\xi}_{p}\end{pmatrix}\begin{pmatrix}{\xi}_{q+1}^{\mathrm{\scriptscriptstyle T}}\\
\ldots\\
{\xi}_{p}^{\mathrm{\scriptscriptstyle T}}\end{pmatrix}\begin{pmatrix}{\xi}_{1}&\ldots&{\xi}_{p}\end{pmatrix}\text{diag}({\lambda}_{1},\ldots,{\lambda}_{p})\begin{pmatrix}{\xi}_{1}^{\mathrm{\scriptscriptstyle T}}\\
\ldots\\
{\xi}_{p}^{\mathrm{\scriptscriptstyle T}}\end{pmatrix}\begin{pmatrix}{\xi}_{q+1}&\ldots&{\xi}_{p}\end{pmatrix}\begin{pmatrix}{\xi}_{q+1}^{\mathrm{\scriptscriptstyle T}}\\
\ldots\\
{\xi}_{p}^{\mathrm{\scriptscriptstyle T}}\end{pmatrix}\\
&=\begin{pmatrix}{\xi}_{q+1}&\ldots&{\xi}_{p}\end{pmatrix}\text{diag}({\lambda}_{q+1},\ldots,{\lambda}_{p})\begin{pmatrix}{\xi}_{q+1}^{\mathrm{\scriptscriptstyle T}}\\
\ldots\\
{\xi}_{p}^{\mathrm{\scriptscriptstyle T}}\end{pmatrix}=\sum_{i=q+1}^{p}{\lambda}_{i}{\xi}_{i}{\xi}_{i}^{\mathrm{\scriptscriptstyle T}}.\end{split}
$$
Given the above equality, we have:
| | $\displaystyle||\widehat{\bf X}_{·,j}||_{2}^{2}$ | $\displaystyle=({\bm{\widehat{X}}}^{\mathrm{\scriptscriptstyle T}}{\bf\widehat{X}})_{j,j}=(n-1)\sum_{i=q+1}^{p}{\lambda}_{i}{\xi}_{i,j}^{2},$ | |
| --- | --- | --- | --- |
from which we can infer that $||\widehat{\bm{X}}_{·,j}||^{2}_{2}≤||{\bm{X}}_{·,j}||_{2}^{2}$ . This implies
$$
\theta_{j}^{2}={\sigma}^{2}||\widehat{\bf{X}}_{\cdot,j}||^{2}_{2}/n^{2}\leq\frac{{\sigma}^{2}}{n}4(\Sigma_{X})_{j,j}\leq\frac{4{\sigma}^{2}C_{6}^{2}}{n},
$$
under event $\Omega$ , where $C_{6}$ is defined in Condition C3. So we have
$$
\exp(-\frac{\tau^{2}_{1}}{\max_{j}\theta_{j}^{2}})\leq\exp(-\frac{A^{2}\sigma^{2}\log(p)/n}{4\sigma^{2}C_{6}^{2}/n})=p^{-\frac{A^{2}}{4C_{6}^{2}}}.
$$
Define the positive constant $C_{11}=4C_{6}^{2}$ . Together with (S18), under the event $\Omega$ , we have
$$
\mathbb{P}(\Omega_{1}\mid\bm{X},\Omega)\leq 2p\exp(-\frac{\tau_{1}^{2}}{\max_{j}\theta_{j}^{2}})\leq 2p^{1-\frac{A^{2}}{C_{11}}}.
$$
Taking the expectation over the conditional distribution of $\bm{X}\mid\Omega$ , we have
$$
\begin{split}\mathbb{P}(\Omega_{1}\mid\Omega)\leq 2p\exp(-\frac{\tau_{1}^{2}}{\max_{j}\theta_{j}^{2}})\leq 2p^{1-\frac{A^{2}}{C_{11}}}.\end{split}
$$
Given Equations (S17) and (S18), we have the following probability statement:
$$
\begin{split}\mathbb{P}(||\frac{\widehat{\bf{X}}^{\mathrm{\scriptscriptstyle T}}E}{n}||_{\infty}\geq\tau_{1})&\leq\mathbb{P}(\Omega_{1})\\
&\leq\mathbb{P}(\Omega_{1}\mid\Omega)P(\Omega)+\mathbb{P}(\Omega_{1}\mid\Omega^{c})P(\Omega^{c})\\
&\leq\mathbb{P}(\Omega_{1}\mid\Omega)+P(\Omega^{c})\\
&\leq 2p^{1-\frac{A^{2}}{C_{11}}}+p\exp(-C_{10}n),\end{split}
$$
which concludes the proof.
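The rate in Lemma S.14 can be illustrated numerically: for Gaussian errors, $||\widehat{\bf X}^{\mathrm{\scriptscriptstyle T}}E/n||_{\infty}$ stays within a constant multiple of $\sqrt{\log(p)/n}$ (a sketch under Gaussian assumptions; the slack factor 3 and all dimensions are illustrative choices, not quantities from the paper):

```python
import numpy as np

# Sketch: ||Xhat^T E / n||_inf is of order sqrt(log(p) / n) for Gaussian errors.
rng = np.random.default_rng(3)
n, p, q, sigma = 400, 50, 3, 1.0
X = rng.standard_normal((n, p))
X -= X.mean(axis=0)
_, _, xiT = np.linalg.svd(X, full_matrices=False)
Xhat = X @ (xiT[q:].T @ xiT[q:])      # trailing-eigenspace projection of X

E = rng.normal(0.0, sigma, size=n)    # outcome-equation errors
eta_vec = Xhat.T @ E / n              # the vector eta from the proof
rate = np.sqrt(np.log(p) / n)
assert np.linalg.norm(eta_vec, np.inf) <= 3 * sigma * rate
```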
Proof of Lemma S.15
Recall $\bm{X}=\bm{U}\Lambda^{\mathrm{\scriptscriptstyle T}}+E_{x}$ , where $\bm{U}=(U_{1}\;U_{2}\;...\;U_{n})^{\mathrm{\scriptscriptstyle T}}∈\mathbb{R}^{n× q}$ , and $E_{x}=(\epsilon_{x,1}\;\epsilon_{x,2}\;...\;\epsilon_{x,n})^{\mathrm{\scriptscriptstyle T}}∈\mathbb{R}^{n× p}$ .
The target quantity can be decomposed as follows:
$$
\left|\left|\frac{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\bm{X}}{n}-\gamma^{\mathrm{\scriptscriptstyle T}}\Lambda^{\mathrm{\scriptscriptstyle T}}\right|\right|_{\infty}\leq\left|\left|\left(\frac{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\bm{U}}{n}-\gamma^{\mathrm{\scriptscriptstyle T}}\right)\Lambda^{\mathrm{\scriptscriptstyle T}}\right|\right|_{\infty}+\left|\left|\frac{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}E_{x}}{n}\right|\right|_{\infty}.
$$
We will control the first and second terms on the RHS of Equation (S22) separately.
For the first term on RHS of equation (S22), we have
$$
\begin{split}&\left|\left|\left(\frac{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\bm{U}}{n}-\gamma^{\mathrm{\scriptscriptstyle T}}\right)\Lambda^{\mathrm{\scriptscriptstyle T}}\right|\right|_{\infty}\\
=&\max_{1\leq j\leq p}\left|\left(\frac{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\bm{U}}{n}-\gamma^{\mathrm{\scriptscriptstyle T}}\right)\Lambda_{j,\cdot}\right|\\
\leq&\left|\left|\frac{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\bm{U}}{n}-\gamma^{\mathrm{\scriptscriptstyle T}}\right|\right|_{2}\max_{1\leq j\leq p}||\Lambda_{j,\cdot}||_{2},\end{split}
$$
where $\Lambda_{j,·}∈\mathbb{R}^{q}$ is the $j$ th row of $\Lambda$ . The inequality follows from the Cauchy-Schwarz inequality. We observe that $\frac{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\bm{U}}{n}-\gamma^{\mathrm{\scriptscriptstyle T}}$ is an empirical average minus its expectation for a $q× 1$ vector. Define the vector $a∈\mathbb{R}^{q× 1}$ whose $j$ th element is given by $a_{j}=g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\bm{U}_{·,j}/n-\gamma_{j}$ . Note that $\mathbb{E}(a_{j})=0$ and $\text{Var}(a_{j})=\text{Var}(U_{j}g(U))/n=\Gamma_{j,j}/n$ , where $\Gamma$ is defined in Condition C3. Let $t_{n}=\sqrt{\log(p)/n}$ ; we have the following result:
$$
\begin{split}&P(\left|\left|\frac{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\bm{U}}{n}-\gamma^{\mathrm{\scriptscriptstyle T}}\right|\right|_{2}\geq t_{n})\\
=&P(\left|\left|\frac{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\bm{U}}{n}-\gamma^{\mathrm{\scriptscriptstyle T}}\right|\right|_{2}^{2}\geq t_{n}^{2})\\
=&P(\left|\left|a^{\mathrm{\scriptscriptstyle T}}\right|\right|_{2}^{2}\geq t_{n}^{2})\\
\leq&\frac{\sum^{q}_{j=1}\mathbb{E}(a_{j}^{2})}{t_{n}^{2}}\quad\text{(Markov's inequality)}\\
\leq&\frac{\sum_{j=1}^{q}\Gamma_{j,j}}{nt_{n}^{2}}\\
\leq&\frac{C_{3}}{\log p}.\quad\text{(Condition C2)}\end{split}
$$
The quantity ${C_{3}}/{\log p}$ goes to $0$ as $p$ goes to infinity. So far, we have shown that
$$
\left|\left|\frac{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\bm{U}}{n}-\gamma^{\mathrm{\scriptscriptstyle T}}\right|\right|_{2}=O_{p}(\sqrt{\log(p)/{n}}).
$$
We observe that $\max_{1≤ j≤ p}||\Lambda_{j,·}||_{2}^{2}≤\max_{1≤ j≤ p}\text{Var}(X_{j})≤ C_{6}$ by Condition C3. Combining the above equations, we have
$$
\left|\left|\left(\frac{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\bm{U}}{n}-\gamma^{\mathrm{\scriptscriptstyle T}}\right)\Lambda^{\mathrm{\scriptscriptstyle T}}\right|\right|_{\infty}=O_{p}(\sqrt{\frac{\log(p)}{n}}).
$$
We now control the second term on the RHS of Equation (S22). The proof approach is very similar to that used for proving Lemma S.14. Let $\zeta=g(\bm{U})^{\mathrm{\scriptscriptstyle T}}E_{x}/n∈\mathbb{R}^{p}$ . We aim to show that:
$$
||\zeta||_{\infty}=O_{p}\left(\sqrt{\frac{\log(p)}{n}}\right).
$$
First, we demonstrate that $||g(\bm{U})||_{2}/\sqrt{n}$ is bounded with high probability. For a constant $A_{1}$ , consider the event $\Omega_{g}=\left\{\frac{||g(\bm{U})||_{2}^{2}}{n}<A_{1}\sigma_{g}^{2}\right\}$ . Applying Markov's inequality, we obtain the following result:
$$
\mathbb{P}(\Omega_{g}^{c})\leq\frac{1}{A_{1}}.
$$
Note that $\epsilon_{x,1},...,\epsilon_{x,n}$ are i.i.d. sub-Gaussian random vectors, and $g(\bm{U})$ is independent of $\epsilon_{x,i}$ . Consider the $j$ th element of the vector $\zeta$ :
$$
\zeta_{j}=\sum_{i=1}^{n}\frac{g(U_{i})\epsilon_{x,ij}}{n}.
$$
Conditional on $\bm{U}$ , $\zeta_{j}$ is a linear combination of $n$ independent sub-Gaussian random variables $\epsilon_{x,1j},\epsilon_{x,2j},...,\epsilon_{x,nj}$ , and is therefore sub-Gaussian with mean 0 and parameter $\theta_{j}=\widetilde{\sigma}_{j}||g(\bm{U})||_{2}/n$ .
Recall that $t_{n}=\sqrt{\log(p)/n}$ . For a positive constant $A_{2}$ , from the tail bound of sub-Gaussian random variables:
$$
\begin{split}P(||\zeta||_{\infty}\geq A_{2}t_{n}\mid\bm{U},\Omega_{g})&\leq\sum_{j=1}^{p}P(|\zeta_{j}|\geq A_{2}t_{n}\mid\bm{U},\Omega_{g})\\
&\leq 2p\exp\left(-\frac{A_{2}^{2}t_{n}^{2}}{\max_{j}\theta_{j}^{2}}\right)\leq 2p^{1-\frac{A_{2}^{2}}{C_{3}A_{1}}}.\end{split}
$$
The last inequality holds because under the event $\Omega_{g}$ , we have $\theta_{j}^{2}=\widetilde{\sigma}_{g}^{2}\frac{||g(\bm{U})||_{2}^{2}}{n^{2}}≤\frac{C_{3}A_{1}}{n}.$ Integrating this quantity with respect to $\bm{U}$ , conditional on $\Omega_{g}$ , we derive
$$
P(||\zeta||_{\infty}\geq A_{2}t_{n}\mid\Omega_{g})\leq 2p^{1-\frac{A_{2}^{2}}{C_{3}A_{1}}}.
$$
Combining Equations (S24) and (S25), we obtain
$$
\begin{split}\mathbb{P}(||\zeta||_{\infty}\geq A_{2}t_{n})&=\mathbb{P}(||\zeta||_{\infty}\geq A_{2}t_{n}\mid\Omega_{g})\mathbb{P}(\Omega_{g})+\mathbb{P}(||\zeta||_{\infty}\geq A_{2}t_{n}\mid\Omega_{g}^{c})\mathbb{P}(\Omega_{g}^{c})\\
&\leq\mathbb{P}(||\zeta||_{\infty}\geq A_{2}t_{n}\mid\Omega_{g})+\mathbb{P}(\Omega_{g}^{c})\\
&\leq 2p^{1-\frac{A_{2}^{2}}{C_{3}A_{1}}}+\frac{1}{A_{1}}.\end{split}
$$
We can select positive constants $A_{1}$ and $A_{2}$ to make the probability $\mathbb{P}(||\zeta||_{∞}≥ A_{2}t_{n})$ arbitrarily small. Thus far, we have demonstrated that
$$
\left|\left|\frac{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}E_{x}}{n}\right|\right|_{\infty}=O_{p}\left(\sqrt{\frac{\log(p)}{n}}\right).
$$
Combining Equations (S23) and (S26), we conclude the proof of Lemma S.15.
Proof of Lemma S.16
Refer to Lemma C.10 in Fan et al. (2013b). Our conditions C1 – C3 are sufficient to verify their assumptions, and $q$ is known in our setting.
Proof of Lemma S.17
Refer to Theorem 3.3 in Fan et al. (2013b). Our conditions C1 – C3 are sufficient to verify their assumptions, and $q$ is known in our setting.
Proof of Lemma S.18
Recall that $\widehat{\Lambda}=(\sqrt{\widehat{\lambda}_{1}}\widehat{\xi}_{1}\;...\;\sqrt{\widehat{\lambda}_{q}}\widehat{\xi}_{q})$ and $\widehat{\Lambda}^{\mathrm{\scriptscriptstyle T}}\widehat{F}^{\mathrm{\scriptscriptstyle T}}=0$ . Consider the following decomposition for the target quantity:
$$
\begin{split}&\frac{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\bm{X}}{n}\widehat{F}^{\mathrm{\scriptscriptstyle T}}(\widehat{\beta}-\dot{\beta})\\
&=\left[\left\{\frac{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\bm{X}}{n}-\gamma^{\mathrm{\scriptscriptstyle T}}\Lambda^{\mathrm{\scriptscriptstyle T}}\right\}+\gamma^{\mathrm{\scriptscriptstyle T}}\Lambda^{\mathrm{\scriptscriptstyle T}}\right]\widehat{F}^{\mathrm{\scriptscriptstyle T}}(\widehat{\beta}-\dot{\beta})\\
&=\left[\left\{\frac{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\bm{X}}{n}-\gamma^{\mathrm{\scriptscriptstyle T}}\Lambda^{\mathrm{\scriptscriptstyle T}}\right\}+\left\{\gamma^{\mathrm{\scriptscriptstyle T}}(I-O^{\mathrm{\scriptscriptstyle T}}O)\Lambda^{\mathrm{\scriptscriptstyle T}}\right\}+\gamma^{\mathrm{\scriptscriptstyle T}}O^{\mathrm{\scriptscriptstyle T}}(O\Lambda^{\mathrm{\scriptscriptstyle T}}-\widehat{\Lambda}^{\mathrm{\scriptscriptstyle T}})\right]\widehat{F}^{\mathrm{\scriptscriptstyle T}}(\widehat{\beta}-\dot{\beta})\\
&=(a+b+c)\widehat{F}^{\mathrm{\scriptscriptstyle T}}(\widehat{\beta}-\dot{\beta}),\end{split}
$$
where $a,b,c∈\mathbb{R}^{1× p}$ , and $O∈\mathbb{R}^{p× p}$ is defined in Lemma S.16. We now control the infinity norms of $a,b,$ and $c$ respectively. Note $||a||_{∞}=O_{p}(\sqrt{\log(p)/n})$ given Lemma S.15.
Let $\Lambda_{j,·}∈\mathbb{R}^{q× 1}$ be the $j$ th row of $\Lambda∈\mathbb{R}^{p× q}$ . For $||b||_{∞}$ , we have the following result:
$$
\begin{split}||b||_{\infty}&=\max_{1\leq j\leq p}|b_{j}|\\
&=\max_{1\leq j\leq p}|\gamma^{\mathrm{\scriptscriptstyle T}}(I-O^{\mathrm{\scriptscriptstyle T}}O)\Lambda_{j,\cdot}|\\
&\leq||\gamma^{\mathrm{\scriptscriptstyle T}}(I-O^{\mathrm{\scriptscriptstyle T}}O)||_{2}\max_{1\leq j\leq p}||\Lambda_{j,\cdot}||_{2}\\
&\leq||\gamma^{\mathrm{\scriptscriptstyle T}}||_{2}||(I-O^{\mathrm{\scriptscriptstyle T}}O)||_{2}\max_{1\leq j\leq p}||\Lambda_{j,\cdot}||_{2}\\
&\leq C_{3}C_{6}\;O_{p}\left(\frac{1}{\sqrt{n}}\right)\\
&=O_{p}\left(\frac{1}{\sqrt{n}}\right),\end{split}
$$
where the first inequality follows from the Cauchy-Schwarz inequality. The last inequality is derived using conditions C2, C3, and Lemma S.16, while considering condition C1: $n=O(p)$ .
For $||c||_{∞}$ , we first note that $||O^{\mathrm{\scriptscriptstyle T}}||_{2}=\sqrt{||OO^{\mathrm{\scriptscriptstyle T}}||_{2}}=O_{p}(1)$ . Consequently, we have the following bound for $||c||_{∞}$ :
$$
\begin{split}||c||_{\infty}&=\max_{1\leq j\leq p}|c_{j}|\\
&=\max_{1\leq j\leq p}|\gamma^{\mathrm{\scriptscriptstyle T}}O^{\mathrm{\scriptscriptstyle T}}(O\Lambda_{j,\cdot}-\widehat{\Lambda}_{j,\cdot})|\\
&\leq||\gamma^{\mathrm{\scriptscriptstyle T}}O^{\mathrm{\scriptscriptstyle T}}||_{2}\max_{1\leq j\leq p}||O\Lambda_{j,\cdot}-\widehat{\Lambda}_{j,\cdot}||_{2}\\
&\leq O_{p}(\frac{1}{\sqrt{p}}+\sqrt{\frac{\log(p)}{n}}).\end{split}
$$
The first inequality follows from the Cauchy-Schwarz inequality, and the last follows from $||O^{\mathrm{\scriptscriptstyle T}}||_{2}=O_{p}(1)$ , Condition C2, and Lemma S.17. Note that Condition C1 gives $n=O(p)$ , which implies $O_{p}\left(\frac{1}{\sqrt{p}}+\sqrt{\frac{\log p}{n}}\right)=O_{p}\left(\sqrt{\frac{\log p}{n}}\right)$ . Combining Lemma S.15 with Equations (S28) and (S29), we have:
$$
||a+b+c||_{\infty}=O_{p}(\sqrt{\frac{\log(p)}{n}}).
$$
We now control $(a+b+c)\widehat{F}^{\mathrm{\scriptscriptstyle T}}(\widehat{\beta}-\dot{\beta})$ . Observing that $\widehat{F}^{\mathrm{\scriptscriptstyle T}}={B}_{{\widehat{\Lambda}}^{\perp}}{B}_{{\widehat{\Lambda}}^{\perp}}^{\mathrm{\scriptscriptstyle T}}=\sum^{p}_{i=q+1}\widehat{\xi}_{i}\widehat{\xi}_{i}^{\mathrm{\scriptscriptstyle T}}$ and $I_{p}-\widehat{F}^{\mathrm{\scriptscriptstyle T}}=\sum^{q}_{i=1}\widehat{\xi}_{i}\widehat{\xi}_{i}^{\mathrm{\scriptscriptstyle T}}$ , we have
$$
\begin{split}&(a+b+c)\widehat{F}^{\mathrm{\scriptscriptstyle T}}(\widehat{\beta}-\dot{\beta})\\
=&(a+b+c)(\widehat{\beta}-\dot{\beta})-(a+b+c)(\sum^{q}_{i=1}\widehat{\xi}_{i}\widehat{\xi}_{i}^{\mathrm{\scriptscriptstyle T}})(\widehat{\beta}-\dot{\beta})\\
\leq&||(a+b+c)||_{\infty}||(\widehat{\beta}-\dot{\beta})||_{1}-\text{Trace}\{(a+b+c)(\sum^{q}_{i=1}\widehat{\xi}_{i}\widehat{\xi}_{i}^{\mathrm{\scriptscriptstyle T}})(\widehat{\beta}-\dot{\beta})\}\\
=&||(a+b+c)||_{\infty}||(\widehat{\beta}-\dot{\beta})||_{1}-\text{Trace}\{(\sum^{q}_{i=1}\widehat{\xi}_{i}\widehat{\xi}_{i}^{\mathrm{\scriptscriptstyle T}})(\widehat{\beta}-\dot{\beta})(a+b+c)\}\\
\leq&||(a+b+c)||_{\infty}||(\widehat{\beta}-\dot{\beta})||_{1}+|\text{Trace}(\sum^{q}_{i=1}\widehat{\xi}_{i}\widehat{\xi}_{i}^{\mathrm{\scriptscriptstyle T}})|\;|\text{Trace}\{(\widehat{\beta}-\dot{\beta})(a+b+c)\}|\\
=&||(a+b+c)||_{\infty}||(\widehat{\beta}-\dot{\beta})||_{1}+|\text{Trace}(\sum^{q}_{i=1}\widehat{\xi}_{i}\widehat{\xi}_{i}^{\mathrm{\scriptscriptstyle T}})|\;|\text{Trace}\{(a+b+c)(\widehat{\beta}-\dot{\beta})\}|\\
=&||(a+b+c)||_{\infty}||(\widehat{\beta}-\dot{\beta})||_{1}+|\text{Trace}(\sum^{q}_{i=1}\widehat{\xi}_{i}\widehat{\xi}_{i}^{\mathrm{\scriptscriptstyle T}})|\;|\{(a+b+c)(\widehat{\beta}-\dot{\beta})\}|\\
\leq&(1+q)||(a+b+c)||_{\infty}||(\widehat{\beta}-\dot{\beta})||_{1}.\end{split}
$$
Line 2 follows from $\widehat{F}^{\mathrm{\scriptscriptstyle T}}=I_{p}-\sum_{i=1}^{q}\widehat{\xi}_{i}\widehat{\xi}_{i}^{\mathrm{\scriptscriptstyle T}}$ . Line 3 uses Hölder’s inequality. Lines 4 and 6 employ the trace property $\operatorname{Trace}(AB)=\operatorname{Trace}(BA)$ . Line 5 follows from the Cauchy–Schwarz inequality for the trace. The last line follows from the fact that the trace of a matrix equals the sum of its eigenvalues. Define $d=a+b+c∈\mathbb{R}^{1× p}$ . This concludes the proof.
Proof of Lemma S.19
Since $0<C_{1}≤\lambda_{\min}(D)≤\lambda_{\max}(D)≤ C_{2}<∞$ , the matrix $D$ satisfies Equations (1.12) and (1.16) with finite $K(s,1,D)$ and $\rho_{\max}(s)$ of Zhou (2009).
Let $\theta_{T_{0}}$ be the subvector of $\theta$ confined to the locations of its $s$ largest coefficients. Define $E_{s}=\{\theta∈\mathbb{R}^{p}:\|D^{1/2}\theta\|_{2}=1,\|\theta_{T_{0}}\|_{1}≤\|\theta_{T_{0}^{C}}\|_{1}\}$ . We observe that each row of $E_{x}D^{-1/2}$ is an independent copy of an isotropic $\psi_{2}$ random vector on $\mathbb{R}^{p}$ . Under our Condition C1, $n\gg s\log(p)$ . From Theorem 1.6 of Zhou (2009) (with $k_{0}$ in that theorem taken as 1), with probability $1-\exp(-cn)$ for some $c>0$ , we have for all $\theta∈ E_{s}$ ,
$$
0.9^{1/2}\leq\frac{\|(E_{x}D^{-1/2})D^{1/2}\theta\|_{2}}{\sqrt{n}}=\frac{\|E_{x}\theta\|_{2}}{\sqrt{n}}\leq 1.1^{1/2}.
$$
We notice that
$$
\max_{\substack{||D^{1/2}\theta||_{2}=1\\ ||\theta_{T_{0}}||_{1}\leq||\theta_{T_{0}^{C}}||_{1}}}\frac{||E_{x}\theta||_{2}}{\sqrt{n}}=\max_{\substack{||D^{1/2}\theta||_{2}=1\\ ||\theta_{T_{0}}||_{1}\leq||\theta_{T_{0}^{C}}||_{1}}}\frac{||E_{x}\theta||_{2}}{\sqrt{n}||D^{1/2}\theta||_{2}}=\max_{||\theta_{T_{0}}||_{1}\leq||\theta_{T_{0}^{C}}||_{1}}\frac{||E_{x}\theta||_{2}}{\sqrt{n}||D^{1/2}\theta||_{2}}.
$$
Equation (S31) implies that
$$
\max_{\theta\in E_{s}^{A}}\frac{\theta^{\mathrm{\scriptscriptstyle T}}E_{x}^{\mathrm{\scriptscriptstyle T}}E_{x}\theta}{n}\leq 1.1\lambda_{\max}(D),
$$
where $E_{s}^{A}=\{\theta∈\mathbb{R}^{p}:||\theta||_{2}=1,||\theta_{T_{0}}||_{1}≤||\theta_{T_{0}^{C}}||_{1}\}$ . We let $E_{s}^{B}=\{\theta∈\mathbb{R}^{p}:||\theta||_{2}=1,0<||\theta||_{0}≤ 2s\}$ . Note that for any $\theta∈ E_{s}^{B}$ , the definition of $T_{0}$ gives $\theta∈ E_{s}^{A}$ , which means $E_{s}^{B}⊂ E_{s}^{A}$ . Thus, with the same probability,
$$
\max_{\theta\in E_{s}^{B}}\frac{\theta^{\mathrm{\scriptscriptstyle T}}E_{x}^{\mathrm{\scriptscriptstyle T}}E_{x}\theta}{n}\leq\max_{\theta\in E_{s}^{A}}\frac{\theta^{\mathrm{\scriptscriptstyle T}}E_{x}^{\mathrm{\scriptscriptstyle T}}E_{x}\theta}{n}\leq 1.1\lambda_{\max}(D),
$$
which completes the proof of Lemma S.19.
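The conclusion of Lemma S.19 can be checked by simulation over sparse directions (an illustrative sketch: Gaussian noise with a diagonal $D$; the sizes and number of sampled directions are arbitrary choices, not quantities from the paper):

```python
import numpy as np

# Sketch: for unit-norm 2s-sparse theta, theta^T (Ex^T Ex / n) theta stays
# below 1.1 * lambda_max(D) once n is large (Gaussian Ex, diagonal D).
rng = np.random.default_rng(4)
n, p, s = 2000, 50, 5
d = rng.uniform(0.5, 2.0, size=p)     # eigenvalues of D (diagonal here)
Ex = rng.standard_normal((n, p)) * np.sqrt(d)
G = Ex.T @ Ex / n                     # sample covariance of the noise

worst = 0.0
for _ in range(200):                  # random 2s-sparse unit directions
    idx = rng.choice(p, size=2 * s, replace=False)
    theta = np.zeros(p)
    theta[idx] = rng.standard_normal(2 * s)
    theta /= np.linalg.norm(theta)
    worst = max(worst, theta @ G @ theta)
assert worst <= 1.1 * d.max()
```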
Proof of Lemma S.20
Let $E=({\epsilon}_{y,1},...,{\epsilon}_{y,n})^{\mathrm{\scriptscriptstyle T}}∈\mathbb{R}^{n}$ . We have
$$
\bm{Y}={\bf X}\dot{\beta}+g(\bm{U})+E.
$$
Due to the optimality of $\widehat{\beta}$ , we have
$$
\begin{split}&\qquad\frac{||\bm{Y}-\widehat{\bf{X}}\widehat{\beta}||^{2}_{2}}{2n}\leq\frac{||Y-\widehat{\bf{X}}\dot{\beta}||^{2}_{2}}{2n}\\
\Longleftrightarrow&\qquad\frac{||\widehat{\bf{X}}(\widehat{\beta}-\dot{\beta})||^{2}_{2}}{2n}\leq\frac{(Y-\widehat{\bf{X}}\dot{\beta})^{\mathrm{\scriptscriptstyle T}}\widehat{\bf{X}}(\widehat{\beta}-\dot{\beta})}{n}.\end{split}
$$
We can decompose the RHS of (S32) as
$$
\begin{split}\frac{(Y-\widehat{\bf{X}}\dot{\beta})^{\mathrm{\scriptscriptstyle T}}\widehat{\bf{X}}(\widehat{\beta}-\dot{\beta})}{n}&=\frac{(\bm{X}\dot{\beta}+g(\bm{U})+E-\bm{X}\widehat{F}^{\mathrm{\scriptscriptstyle T}}\dot{\beta})^{\mathrm{\scriptscriptstyle T}}\bm{X}\widehat{F}^{\mathrm{\scriptscriptstyle T}}(\widehat{\beta}-\dot{\beta})}{n}\\
&=\frac{\{\bm{X}(I-\widehat{F}^{\mathrm{\scriptscriptstyle T}})\dot{\beta}+g(\bm{U})+E\}^{\mathrm{\scriptscriptstyle T}}\bm{X}\widehat{F}^{\mathrm{\scriptscriptstyle T}}(\widehat{\beta}-\dot{\beta})}{n}\\
&=\frac{(g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\bm{X}\widehat{F}^{\mathrm{\scriptscriptstyle T}}+E^{\mathrm{\scriptscriptstyle T}}\bm{X}\widehat{F}^{\mathrm{\scriptscriptstyle T}})(\widehat{\beta}-\dot{\beta})}{n}\\
&=\frac{(g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\widehat{\bf{X}}+E^{\mathrm{\scriptscriptstyle T}}\widehat{\bf{X}})(\widehat{\beta}-\dot{\beta})}{n}\\
&\leq(1+q)||d||_{\infty}||\widehat{\beta}-\dot{\beta}||_{1}+||\frac{E^{\mathrm{\scriptscriptstyle T}}\widehat{\bm{X}}}{n}||_{\infty}||\widehat{\beta}-\dot{\beta}||_{1}.\end{split}
$$
The third equality follows from Lemma S.13, and $d∈\mathbb{R}^{1× p}$ is defined in Lemma S.18.
Since $||\widehat{\beta}-\dot{\beta}||_{0}≤ 2s$ , we have $||\widehat{\beta}-\dot{\beta}||_{1}≤\sqrt{2s}||\widehat{\beta}-\dot{\beta}||_{2}$ .
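The inequality $||v||_{1}\leq\sqrt{2s}\,||v||_{2}$ for a $2s$-sparse vector is Cauchy-Schwarz applied on the support; a one-line numerical check (the sizes are arbitrary):

```python
import numpy as np

# Cauchy-Schwarz on the support: if ||v||_0 <= 2s, then ||v||_1 <= sqrt(2s)*||v||_2.
rng = np.random.default_rng(5)
p, s = 100, 4
v = np.zeros(p)
support = rng.choice(p, size=2 * s, replace=False)
v[support] = rng.standard_normal(2 * s)
assert np.sum(np.abs(v)) <= np.sqrt(2 * s) * np.linalg.norm(v) + 1e-12
```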
We now handle the LHS of (S32). Given Lemma S.21, for any $\delta>0$ , there exists an integer $n_{1}$ such that the event $\dot{\Omega}:=\{||\widehat{\bf X}(\widehat{\beta}-\dot{\beta})||^{2}_{2}/n≥\pi_{0}^{2}||\widehat{\beta}-\dot{\beta}||_{2}^{2}\}$ satisfies $\mathbb{P}(\dot{\Omega})≥ 1-\delta$ for all $n>n_{1}$ .
Conditional on the event $\dot{\Omega}$ and combining with Equation (S33), we can rewrite (S32) as
$$
\begin{split}\frac{\pi_{0}^{2}}{2}||\widehat{\beta}-\dot{\beta}||_{2}^{2}&\leq\frac{||\widehat{X}(\widehat{\beta}-\dot{\beta})||^{2}_{2}}{2n}\\
&\leq\left\{(1+q)||d||_{\infty}+||\frac{E^{\mathrm{\scriptscriptstyle T}}\widehat{\bm{X}}}{n}||_{\infty}\right\}||\widehat{\beta}-\dot{\beta}||_{1}\\
&\leq\left\{(1+q)||d||_{\infty}+||\frac{E^{\mathrm{\scriptscriptstyle T}}\widehat{\bm{X}}}{n}||_{\infty}\right\}\sqrt{2s}||\widehat{\beta}-\dot{\beta}||_{2}.\end{split}
$$
Cancelling the common factor $||\widehat{\beta}-\dot{\beta}||_{2}$ and using $||\widehat{\beta}-\dot{\beta}||_{1}≤\sqrt{2s}||\widehat{\beta}-\dot{\beta}||_{2}$ , we have
$$
\begin{split}||\widehat{\beta}-\dot{\beta}||_{1}&\leq\sqrt{2s}||\widehat{\beta}-\dot{\beta}||_{2}\\
&\leq\sqrt{2s}\;\frac{2}{\pi_{0}^{2}}\left\{(1+q)||d||_{\infty}+||\frac{E^{\mathrm{\scriptscriptstyle T}}\widehat{\bm{X}}}{n}||_{\infty}\right\}\sqrt{2s}\\
&=\frac{4s}{\pi_{0}^{2}}\left\{(1+q)||d||_{\infty}+||\frac{E^{\mathrm{\scriptscriptstyle T}}\widehat{\bm{X}}}{n}||_{\infty}\right\}\end{split}
$$
with high probability, which is the desired result.
Proof of Lemma S.21
For any $\delta>0$ , we will show that there exists an integer $n_{0}$ such that
$$
\inf_{n>n_{0}}\mathbb{P}\{{||\widehat{\bf{X}}\theta||_{2}}\geq\pi_{0}\sqrt{n}{||\theta||_{2}},\forall||\theta||_{0}\leq 2s\}\geq 1-\delta.
$$
We have ${\bf X}={\bf U}\Lambda^{\mathrm{\scriptscriptstyle T}}+E_{x}$ , where ${\bf U}=(U_{1}\;U_{2}\;...\;U_{n})^{\mathrm{\scriptscriptstyle T}}$ and $E_{x}=(\epsilon_{x,1}\;\epsilon_{x,2}\;...\;\epsilon_{x,n})^{\mathrm{\scriptscriptstyle T}}∈\mathbb{R}^{n× p}$ . We also notice that
$$
\begin{split}\widehat{\bf X}&={\bf X}\widehat{F}^{\mathrm{\scriptscriptstyle T}}={\bf X}-{\bf X}(I_{p}-\widehat{F}^{\mathrm{\scriptscriptstyle T}})\\
&=E_{x}+R,\end{split}
$$
where $R={\bf U}\Lambda^{\mathrm{\scriptscriptstyle T}}-{\bf X}(I_{p}-\widehat{F}^{\mathrm{\scriptscriptstyle T}})$ . By (S1) and Lemma S.12, we have ${\bf X}(I_{p}-\widehat{F}^{\mathrm{\scriptscriptstyle T}})=\sum^{q}_{i=1}\sqrt{\lambda_{i}(n-1)}\eta_{i}\xi_{i}^{\mathrm{\scriptscriptstyle T}}$ . We further have the following decomposition:
$$
\frac{\bf\widehat{X}^{\mathrm{\scriptscriptstyle T}}\widehat{X}}{n}=\frac{E_{x}^{\mathrm{\scriptscriptstyle T}}E_{x}}{n}+\frac{E_{x}^{\mathrm{\scriptscriptstyle T}}R}{n}+\frac{R^{\mathrm{\scriptscriptstyle T}}E_{x}}{n}+\frac{R^{\mathrm{\scriptscriptstyle T}}R}{n},
$$
which implies
$$
\begin{split}&\min_{||\theta||_{2}=1,||\theta||_{0}\leq 2s}\theta^{\mathrm{\scriptscriptstyle T}}\frac{\bf\widehat{X}^{\mathrm{\scriptscriptstyle T}}\widehat{X}}{n}\theta\geq\min_{||\theta||_{2}=1,||\theta||_{0}\leq 2s}\theta^{\mathrm{\scriptscriptstyle T}}\frac{E_{x}^{\mathrm{\scriptscriptstyle T}}E_{x}}{n}\theta\\
&-2\max_{||\theta||_{2}=1,||\theta||_{0}\leq 2s}\theta^{\mathrm{\scriptscriptstyle T}}\frac{E_{x}^{\mathrm{\scriptscriptstyle T}}R}{n}\theta-\max_{||\theta||_{2}=1,||\theta||_{0}\leq 2s}\theta^{\mathrm{\scriptscriptstyle T}}\frac{R^{\mathrm{\scriptscriptstyle T}}R}{n}\theta.\\
\end{split}
$$
We now control each term one by one:
For $\theta^{\mathrm{\scriptscriptstyle T}}{R^{\mathrm{\scriptscriptstyle T}}R}\theta/n$ , we have
$$
\begin{split}\max_{||\theta||_{2}=1,||\theta||_{0}\leq 2s}\theta^{\mathrm{\scriptscriptstyle T}}\frac{R^{\mathrm{\scriptscriptstyle T}}R}{n}\theta&=\max_{||\theta||_{2}=1,||\theta||_{0}\leq 2s}\frac{1}{n}\sum^{n}_{i=1}(\sum^{p}_{j=1}R_{i,j}\theta_{j})^{2}\\
&\leq\max_{||\theta||_{2}=1,||\theta||_{0}\leq 2s}\max_{i}(\sum^{p}_{j=1}R_{i,j}\theta_{j})^{2}\\
&\leq\max_{||\theta||_{2}=1,||\theta||_{0}\leq 2s}(||R||_{\infty}||\theta||_{1})^{2}\\
&\leq 2s||R||_{\infty}^{2}.\end{split}
$$
Given equation (S7), we have
$$
\begin{split}\max_{||\theta||_{2}=1,||\theta||_{0}\leq 2s}\theta^{\mathrm{\scriptscriptstyle T}}\frac{R^{\mathrm{\scriptscriptstyle T}}E_{x}}{n}\theta&\leq\max_{||\theta||_{2}=1,||\theta||_{0}\leq 2s}\sqrt{\theta^{\mathrm{\scriptscriptstyle T}}\frac{R^{\mathrm{\scriptscriptstyle T}}R}{n}\theta}\sqrt{\theta^{\mathrm{\scriptscriptstyle T}}\frac{E_{x}^{\mathrm{\scriptscriptstyle T}}E_{x}}{n}\theta}\\
&\leq\sqrt{2.2s\lambda_{\max}(D)}||R||_{\infty}.\end{split}
$$
Combining equations (S6), (S7), (S36), and (S37), we can rewrite (S35) as
$$
\min_{||\theta||_{2}=1,||\theta||_{0}\leq 2s}\theta^{\mathrm{\scriptscriptstyle T}}\frac{\bf\widehat{X}^{\mathrm{\scriptscriptstyle T}}\widehat{X}}{n}\theta\geq 0.9\lambda_{\min}(D)-2s||R||^{2}_{\infty}-2\sqrt{2.2s\lambda_{\max}(D)}||R||_{\infty},
$$
with probability $1-\exp(-cn)$ .
Hence, we only need to show that $s||R||^{2}_{∞}\overset{p}{→}0$ , which has been guaranteed by Lemma 6 of Guo et al. (2022b).
Appendix S.4 Proof of Theorems
S.4.1 Proof of Theorem 1
We aim to demonstrate that if there exists a vector
$$
{\beta}^{*}\in\underset{{\widetilde{\beta}}\in\mathbb{R}^{p}}{\operatorname*{arg\,min}}\;\mathbb{E}\{Y-\widetilde{X}^{\mathrm{\scriptscriptstyle T}}\widetilde{\beta}\}^{2}\text{, such that }||{\beta}^{*}||_{0}\leq p-q-1,
$$
and ${\beta}^{*}\not=\dot{\beta}$ , then a contradiction arises.
Given Lemma S.3, there exists an $\alpha∈\mathbb{R}^{q}$ such that ${\beta}^{*}=\dot{\beta}+\Sigma_{X}^{-1}\Lambda\alpha$ . Define $C:=\{j\mid{\beta}_{j}^{*}=0\}$ and $M:=\{j\mid{\beta}_{j}^{*}\not=0\}$ . We have $|M|≤ p-q-1$ and $|C|≥ q+1$ because $||{\beta}^{*}||_{0}≤ p-q-1$ . We first establish the following claim:
1. $|C\cap\mathcal{A}|≥ 2$
If not, we would have $|C\cap\mathcal{A}|≤ 1$ . Given that $|C|≥ q+1$ , it must follow that $|C\cap\mathcal{A}^{c}|≥ q$ . Consider $\widetilde{C}⊂ C\cap\mathcal{A}^{c}$ such that $|\widetilde{C}|=q$ . Examining the coordinates in $\widetilde{C}$ , we find:
$$
\begin{split}0={\beta}^{*}_{\widetilde{C}}&=[\dot{\beta}+\Sigma_{X}^{-1}\Lambda\alpha]_{\widetilde{C}}\\
&=\dot{\beta}_{\widetilde{C}}+[\Sigma_{X}^{-1}\Lambda\alpha]_{\widetilde{C}}\\
&=[\Sigma_{X}^{-1}\Lambda\alpha]_{\widetilde{C}}.\end{split}
$$
By the invertibility Assumption A1, we consequently have $\alpha=0$ , which implies ${\beta}^{*}=\dot{\beta}+\Sigma_{X}^{-1}\Lambda\alpha=\dot{\beta}$ , contradicting ${\beta}^{*}\not=\dot{\beta}$ . This establishes the claim.
We have $|C|≥ q+1$ and $|C\cap\mathcal{A}|≥ 2$ . Let $\{C^{(i)}\}_{i=1}^{q+1}$ be such that each $C^{(i)}$ is a proper subset of $C$ , $|C^{(i)}|=q$ , and $C^{(i)}\cap\mathcal{A}≠\emptyset$ . Define ${\beta}^{(i)}$ as follows:
$$
\beta^{(i)}=\underset{\widetilde{\beta}_{C^{(i)}}=0,\widetilde{\beta}\in\mathbb{R}^{p}}{\operatorname*{arg\,min}}\mathbb{E}\{Y-\widetilde{X}^{\mathrm{\scriptscriptstyle T}}\widetilde{\beta}\}^{2},\quad i=1,2,\ldots,q+1.
$$
From Lemma S.4, the set $\{\beta^{(i)}\}_{i=1}^{q+1}$ is uniquely defined. Given that $\emptyset\subsetneq C^{(i)}\subsetneq C$ , we have:
$$
\underset{\widetilde{\beta}\in\mathbb{R}^{p}}{\min}\mathbb{E}\{Y-\widetilde{X}^{\mathrm{\scriptscriptstyle T}}\widetilde{\beta}\}^{2}\leq\underset{\widetilde{\beta}_{C^{(i)}}=0,\widetilde{\beta}\in\mathbb{R}^{p}}{\min}\mathbb{E}\{Y-\widetilde{X}^{\mathrm{\scriptscriptstyle T}}\widetilde{\beta}\}^{2}\leq\underset{\widetilde{\beta}_{C}=0,\widetilde{\beta}\in\mathbb{R}^{p}}{\min}\mathbb{E}\{Y-\widetilde{X}^{\mathrm{\scriptscriptstyle T}}\widetilde{\beta}\}^{2}.
$$
Since ${\beta}^{*}∈\underset{\widetilde{\beta}∈\mathbb{R}^{p}}{\operatorname*{arg\,min}}\;\mathbb{E}\{Y-\widetilde{X}^{\mathrm{\scriptscriptstyle T}}\widetilde{\beta}\}^{2}$ and ${\beta}^{*}∈\underset{\widetilde{\beta}_{C}=0,\widetilde{\beta}∈\mathbb{R}^{p}}{\operatorname*{arg\,min}}\mathbb{E}\{Y-\widetilde{X}^{\mathrm{\scriptscriptstyle T}}\widetilde{\beta}\}^{2}$ , Equation (S38) can be rewritten as:
$$
\underset{\widetilde{\beta}\in\mathbb{R}^{p}}{\min}\mathbb{E}\{Y-\widetilde{X}^{\mathrm{\scriptscriptstyle T}}\widetilde{\beta}\}^{2}=\underset{\widetilde{\beta}_{C^{(i)}}=0,\widetilde{\beta}\in\mathbb{R}^{p}}{\min}\mathbb{E}\{Y-\widetilde{X}^{\mathrm{\scriptscriptstyle T}}\widetilde{\beta}\}^{2}=\underset{\widetilde{\beta}_{C}=0,\widetilde{\beta}\in\mathbb{R}^{p}}{\min}\mathbb{E}\{Y-\widetilde{X}^{\mathrm{\scriptscriptstyle T}}\widetilde{\beta}\}^{2},
$$
which implies that $\beta^{*}∈\underset{\widetilde{\beta}_{C^{(i)}}=0,\widetilde{\beta}∈\mathbb{R}^{p}}{\operatorname*{arg\,min}}\mathbb{E}\{Y-\widetilde{X}^{\mathrm{\scriptscriptstyle T}}\widetilde{\beta}\}^{2}=\{\beta^{(i)}\}$ , where the last equality is validated by Lemma S.4. Thus, we have $\beta^{*}=\beta^{(i)}$ for $i∈\{1,2,...,q+1\}$ , which violates Condition A4.
This contradiction indicates that the set $\{\underset{{\widetilde{\beta}}∈\mathbb{R}^{p}}{\operatorname*{arg\,min}}\;\mathbb{E}\{Y-\widetilde{X}^{\mathrm{\scriptscriptstyle T}}\widetilde{\beta}\}^{2}\text{ s.t. }||{\widetilde{\beta}}||_{0}≤ p-q-1\}=\{\dot{\beta}\}$ if $s≤ p-q-1$ , while $\{\underset{{\widetilde{\beta}}∈\mathbb{R}^{p}}{\operatorname*{arg\,min}}\;\mathbb{E}\{Y-\widetilde{X}^{\mathrm{\scriptscriptstyle T}}\widetilde{\beta}\}^{2}\text{ s.t. }||{\widetilde{\beta}}||_{0}≤ p-q-1\}=\emptyset$ if $s≥ p-q$ , corresponding to the two cases in Theorem 1.
S.4.2 Proofs of Theorem 2
Proof of the first part of Theorem 2
We are going to show that for any $\delta>0$ , there exist $A_{\delta}$ and $n_{0}$ such that
$$
||\widehat{\beta}-\dot{\beta}||_{1}\leq A_{\delta}/\sqrt{n}
$$
with probability at least $1-\delta$ for all $n>n_{0}$ .
Due to the optimality of $\widehat{\beta}$ , we have
$$
\begin{split}&\qquad\frac{||Y-\widehat{\bf{X}}\widehat{\beta}||^{2}_{2}}{2n}\leq\frac{||Y-\widehat{\bf{X}}\dot{\beta}||^{2}_{2}}{2n}\\
\Longleftrightarrow&\qquad\frac{||\widehat{\bf{X}}(\widehat{\beta}-\dot{\beta})||^{2}_{2}}{2n}\leq\frac{(Y-\widehat{\bf{X}}\dot{\beta})^{\mathrm{\scriptscriptstyle T}}\widehat{\bf{X}}(\widehat{\beta}-\dot{\beta})}{n}.\end{split}
$$
By models (1) and (3), we can decompose the term on the right-hand side of (S39) as
$$
\begin{split}\frac{(Y-\widehat{\bf{X}}\dot{\beta})^{\mathrm{\scriptscriptstyle T}}\widehat{\bf{X}}(\widehat{\beta}-\dot{\beta})}{n}&=\frac{(\bm{X}\dot{\beta}+g(\bm{U})+E-\bm{X}\widehat{F}^{\mathrm{\scriptscriptstyle T}}\dot{\beta})^{\mathrm{\scriptscriptstyle T}}\bm{X}\widehat{F}^{\mathrm{\scriptscriptstyle T}}(\widehat{\beta}-\dot{\beta})}{n}\\
&=\frac{\{\bm{X}(I-\widehat{F}^{\mathrm{\scriptscriptstyle T}})\dot{\beta}+g(\bm{U})+E\}^{\mathrm{\scriptscriptstyle T}}\bm{X}\widehat{F}^{\mathrm{\scriptscriptstyle T}}(\widehat{\beta}-\dot{\beta})}{n}\\
&=\frac{(g(\bm{U})^{\mathrm{\scriptscriptstyle T}}{\bf{X}}\widehat{F}^{\mathrm{\scriptscriptstyle T}}+E^{\mathrm{\scriptscriptstyle T}}{\bf{X}}\widehat{F}^{\mathrm{\scriptscriptstyle T}})(\widehat{\beta}-\dot{\beta})}{n}\\
&=\frac{(g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\widehat{\bf{X}}+E^{\mathrm{\scriptscriptstyle T}}\widehat{\bf{X}})(\widehat{\beta}-\dot{\beta})}{n}.\end{split}
$$
The third equality follows by Lemma S.6.
Since $||\widehat{\beta}-\dot{\beta}||_{0}≤ 2s$ , we have $||\widehat{\beta}-\dot{\beta}||_{1}≤\sqrt{2s}||\widehat{\beta}-\dot{\beta}||_{2}$ by the Cauchy–Schwarz inequality. Given Lemma S.11, there exists an integer $n_{1}$ such that $||\widehat{\bm{X}}(\widehat{\beta}-\dot{\beta})||^{2}_{2}/n≥\pi_{0}^{2}||\widehat{\beta}-\dot{\beta}||_{2}^{2}$ with probability at least $1-\delta/2$ for $n>n_{1}$ . Equation (S39) then yields
$$
\begin{split}\pi_{0}^{2}||\widehat{\beta}-\dot{\beta}||_{2}^{2}&\leq||{\bf\widehat{X}}(\widehat{\beta}-\dot{\beta})||^{2}_{2}/n\\
&\leq\frac{(g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\widehat{\bf{X}}+E^{\mathrm{\scriptscriptstyle T}}\widehat{\bf{X}})(\widehat{\beta}-\dot{\beta})}{n}\\
&\leq||\widehat{\beta}-\dot{\beta}||_{1}\left(\left|\left|\frac{{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\widehat{\bf{X}}}}{n}\right|\right|_{\infty}+\left|\left|\frac{{\bf\widehat{X}}^{\mathrm{\scriptscriptstyle T}}E}{n}\right|\right|_{\infty}\right)\\
&\leq\sqrt{2s}||\widehat{\beta}-\dot{\beta}||_{2}\left(\left|\left|\frac{{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\widehat{\bf{X}}}}{n}\right|\right|_{\infty}+\left|\left|\frac{{\bf\widehat{X}}^{\mathrm{\scriptscriptstyle T}}E}{n}\right|\right|_{\infty}\right),\end{split}
$$
with probability at least $1-\delta/2$ , where $\left|\left|{{\bf\widehat{X}}^{\mathrm{\scriptscriptstyle T}}E}/{n}\right|\right|_{∞}=O_{p}(1/\sqrt{n})$ by Lemma S.9. Besides, given Lemma S.10, we have
$$
\left|\left|\frac{{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\widehat{\bf{X}}}}{n}\right|\right|_{\infty}\leq\left|\left|\frac{{g(\bm{U})^{\mathrm{\scriptscriptstyle T}}\widehat{\bf{X}}}}{n}\right|\right|_{2}=O_{p}(\frac{1}{\sqrt{n}}).
$$
Cancelling a factor of $||\widehat{\beta}-\dot{\beta}||_{2}$ in (S40) and noting that $||\widehat{\beta}-\dot{\beta}||_{1}≤\sqrt{2s}||\widehat{\beta}-\dot{\beta}||_{2}$ yields the claim of the first part of Theorem 2.
Proof of the second part of Theorem 2
Let $C_{16}=\min_{j∈\mathcal{A}}|\dot{\beta}_{j}|>0$ . Taking $k=s$ , the event $\{\widehat{\mathcal{A}}≠\mathcal{A}\}$ is contained in $\{||\widehat{\beta}-\dot{\beta}||_{1}≥ C_{16}\}$ . From the first part of Theorem 2, we know that $||\widehat{\beta}-\dot{\beta}||_{1}=O_{p}\left(\frac{1}{\sqrt{n}}\right)$ , which means that for any $\delta>0$ , there exist a constant $M_{\delta}$ and an integer $n_{1}>0$ such that
$$
\sup_{n>n_{1}}\mathbb{P}\left(||\widehat{\beta}-\dot{\beta}||_{1}\geq\frac{M_{\delta}}{\sqrt{n}}\right)\leq\delta.
$$
For $n≥ n_{2}:=\max(n_{1},\frac{M_{\delta}^{2}}{C_{16}^{2}})$ , $C_{16}≥\frac{M_{\delta}}{\sqrt{n}}$ , so $\{\widehat{\mathcal{A}}≠\mathcal{A}\}⊂\{||\widehat{\beta}-\dot{\beta}||_{1}≥ C_{16}\}⊂\{||\widehat{\beta}-\dot{\beta}||_{1}≥\frac{M_{\delta}}{\sqrt{n}}\}$ .
Thus, we have
$$
\sup_{n\geq n_{2}}\mathbb{P}(\widehat{\mathcal{A}}\neq\mathcal{A})\leq\sup_{n\geq n_{2}}\mathbb{P}\left(||\widehat{\beta}-\dot{\beta}||_{1}\geq\frac{M_{\delta}}{\sqrt{n}}\right)\leq\sup_{n\geq n_{1}}\mathbb{P}\left(||\widehat{\beta}-\dot{\beta}||_{1}\geq\frac{M_{\delta}}{\sqrt{n}}\right)\leq\delta.
$$
Given the arbitrariness of $\delta$ , we conclude that $\mathbb{P}(\widehat{\mathcal{A}}≠\mathcal{A})→ 0$ as $n→∞$ .
S.4.3 Proofs of Theorem 3
Proof of the first part of Theorem 3
By Lemma S.20, we have
$$
||\widehat{\beta}-\dot{\beta}||_{1}=O_{p}\left(s\left\{(1+q)||d||_{\infty}+||\frac{E^{\mathrm{\scriptscriptstyle T}}\widehat{\bm{X}}}{n}||_{\infty}\right\}\right).
$$
Since $||d||_{∞}=O_{p}(\sqrt{{\log(p)}/{n}})$ by Lemma S.18 and $||E^{\mathrm{\scriptscriptstyle T}}\widehat{\bm{X}}/n||_{∞}=O_{p}(\sqrt{{\log(p)}/{n}})$ by Lemma S.14, we obtain the desired result:
$$
||\widehat{\beta}-\dot{\beta}||_{1}=O_{p}\left(s(1+q)\sqrt{\frac{\log(p)}{n}}\right).\qquad\square
$$
Proof of the second part of Theorem 3
Recall from Assumption C4 that there exist constants $C_{7},C_{8}>0$ such that $\underset{i∈\mathcal{A}}{\min}|\dot{\beta}_{i}|≥ n^{C_{7}-1/2}$ and $s^{2}(1+q)^{2}\log{p}≤ n^{2C_{7}-C_{8}}$ . Taking $k=s$ , the event $\{\widehat{\mathcal{A}}≠\mathcal{A}\}$ is contained in $\{||\widehat{\beta}-\dot{\beta}||_{1}≥ n^{C_{7}-1/2}\}$ . From Theorem 3 (a), we know that $||\widehat{\beta}-\dot{\beta}||_{1}=O_{p}(s(1+q)\sqrt{\log(p)/n})$ , which means that for any $\delta>0$ , there exist a constant $M_{\delta}$ and an integer $n_{1}>0$ such that
$$
\sup_{n>n_{1}}\mathbb{P}\left(||\widehat{\beta}-\dot{\beta}||_{1}\geq M_{\delta}s(q+1)\sqrt{\frac{\log(p)}{n}}\right)\leq\delta.
$$
For $n≥ n_{2}:=\max(n_{1},M_{\delta}^{2/C_{8}})$ , we have
$$
n^{C_{7}-1/2}=n^{C_{7}-C_{8}/2}\,n^{-1/2}\,n^{C_{8}/2}\geq s(q+1)\sqrt{\log(p)}\cdot n^{-1/2}\cdot M_{\delta}=M_{\delta}s(q+1)\sqrt{\frac{\log(p)}{n}},
$$
where the inequality uses $n^{C_{7}-C_{8}/2}\geq s(q+1)\sqrt{\log(p)}$ from Assumption C4 and $n^{C_{8}/2}\geq M_{\delta}$ from $n\geq M_{\delta}^{2/C_{8}}$ ,
which implies $\{\widehat{\mathcal{A}}≠\mathcal{A}\}⊂\{||\widehat{\beta}-\dot{\beta}||_{1}≥ n^{C_{7}-1/2}\}⊂\{||\widehat{\beta}-\dot{\beta}||_{1}≥ M_{\delta}s(q+1)\sqrt{\log(p)/n}\}$ .
We thus have
$$
\sup_{n\geq n_{2}}\mathbb{P}(\widehat{\mathcal{A}}\neq\mathcal{A})\leq\sup_{n\geq n_{2}}\mathbb{P}\left(||\widehat{\beta}-\dot{\beta}||_{1}\geq M_{\delta}s(q+1)\sqrt{\frac{\log(p)}{n}}\right)\leq\sup_{n\geq n_{1}}\mathbb{P}\left(||\widehat{\beta}-\dot{\beta}||_{1}\geq M_{\delta}s(q+1)\sqrt{\frac{\log(p)}{n}}\right)\leq\delta.
$$
Letting $\delta→ 0$ , we conclude that $\mathbb{P}(\widehat{\mathcal{A}}≠\mathcal{A})→ 0$ as $n→∞$ .
S.4.4 Proof of Theorem 4
The proof is very similar to the one we provided for Theorem 1.
We are going to show that the existence of a vector
$$
{\beta}^{*}\in\underset{{\widetilde{\beta}}\in\mathbb{R}^{p}}{\operatorname*{arg\,min}}\;G(\widetilde{\beta})\text{ such that }||{\beta}^{*}||_{0}\leq p-q-1
$$
and ${\beta}^{*}\neq\dot{\beta}$ leads to a contradiction.
(i.) We first show that $\beta^{*}=\dot{\beta}+\mathbb{E}^{-1}[X\frac{∂ f}{∂\beta^{\mathrm{\scriptscriptstyle T}}}|_{\beta=\widetilde{\beta}^{*}}]\Lambda\alpha^{*}$ for some $\alpha^{*}∈\mathbb{R}^{q}$ , where $\widetilde{\beta}^{*}$ is a vector between $\beta^{*}$ and $\dot{\beta}$ .
Since $G(\dot{\beta})=0$ , we must have $G(\beta^{*})=0$ . We have the following equation:
$$
\begin{split}G(\beta^{*})&=||\mathbb{E}[SIV\{Y-f(X;\beta^{*})\}]||_{2}^{2}\\
&=||\mathbb{E}[SIV\{f(X;\dot{\beta})+g(U)+\epsilon_{y}-f(X;\beta^{*})\}]||_{2}^{2}\\
&=||\mathbb{E}[SIV\{f(X;\dot{\beta})-f(X;\beta^{*})\}]||_{2}^{2}\\
&=||B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}\mathbb{E}(X\frac{\partial f}{\partial\beta^{\mathrm{\scriptscriptstyle T}}}|_{\beta=\widetilde{\beta}^{*}})(\dot{\beta}-\beta^{*})||_{2}^{2},\end{split}
$$
where $\widetilde{\beta}^{*}$ is a vector between $\beta^{*}$ and $\dot{\beta}$ . The third equality holds because $SIV\perp\!\!\!\perp g(U)+\epsilon_{y}$ , and the last equality follows from the mean value theorem. Since $0=G(\beta^{*})=||B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}\mathbb{E}(X\frac{\partial f}{\partial\beta^{\mathrm{\scriptscriptstyle T}}}|_{\beta=\widetilde{\beta}^{*}})(\dot{\beta}-\beta^{*})||_{2}^{2}$ and $\mathbb{E}(X\frac{\partial f}{\partial\beta^{\mathrm{\scriptscriptstyle T}}}|_{\beta=\widetilde{\beta}^{*}})$ is an invertible matrix by Assumption D1, we must have
$$
\beta^{*}=\dot{\beta}+\mathbb{E}^{-1}(X\frac{\partial f}{\partial\beta^{\mathrm{\scriptscriptstyle T}}}|_{\beta=\widetilde{\beta}^{*}})\Lambda\alpha^{*}
$$
for some $\alpha^{*}∈\mathbb{R}^{q}$ , which establishes claim (i.).
Let $C:=\{j\mid{\beta}_{j}^{*}=0\}$ , $M:=\{j\mid{\beta}_{j}^{*}\not=0\}$ . We have $|M|≤ p-q-1$ and $|C|≥ q+1$ by $||{\beta}^{*}||_{0}≤ p-q-1$ .
(ii.) We next show the following inequality: $|C\cap\mathcal{A}|≥ 2$ .
Otherwise, we have $|C\cap\mathcal{A}|≤ 1$ . Since $|C|≥ q+1$ , we must have $|C\cap\mathcal{A}^{c}|≥ q$ . Let $\widetilde{C}⊂ C\cap\mathcal{A}^{c}$ be such that $|\widetilde{C}|=q$ . We consider the subvector of ${\beta}^{*}$ indexed by $\widetilde{C}$ :
$$
\begin{split}0={\beta}^{*}_{\widetilde{C}}&=[\dot{\beta}+\mathbb{E}^{-1}(X\frac{\partial f}{\partial\beta^{\mathrm{\scriptscriptstyle T}}}|_{\beta=\widetilde{\beta}^{*}})\Lambda\alpha^{*}]_{\widetilde{C}}\\
&=\dot{\beta}_{\widetilde{C}}+[\mathbb{E}^{-1}(X\frac{\partial f}{\partial\beta^{\mathrm{\scriptscriptstyle T}}}|_{\beta=\widetilde{\beta}^{*}})\Lambda\alpha^{*}]_{\widetilde{C}}\\
&=[\mathbb{E}^{-1}(X\frac{\partial f}{\partial\beta^{\mathrm{\scriptscriptstyle T}}}|_{\beta=\widetilde{\beta}^{*}})\Lambda\alpha^{*}]_{\widetilde{C}}.\end{split}
$$
By the invertibility condition D1, we hence have $\alpha^{*}=0$ , which implies ${\beta}^{*}=\dot{\beta}+\mathbb{E}^{-1}(X\frac{\partial f}{\partial\beta^{\mathrm{\scriptscriptstyle T}}}|_{\beta=\widetilde{\beta}^{*}})\Lambda\alpha^{*}=\dot{\beta}$ and yields a contradiction.
(iii.) Finally, we construct a contradiction.
We have $|C|≥ q+1$ and $|C\cap\mathcal{A}|≥ 2$ . Choose $\{C^{(i)}\}^{q+1}_{i=1}$ such that $C^{(i)}\subsetneqq C$ , $|C^{(i)}|=q$ , and $C^{(i)}\cap\mathcal{A}\neq\emptyset$ . Define ${\beta}^{(i)}$ as follows:
$$
\beta^{(i)}=\underset{\widetilde{\beta}_{C^{(i)}}=0,\widetilde{\beta}\in\mathbb{R}^{p}}{\operatorname*{arg\,min}}G(\widetilde{\beta}),i=1,2,\ldots,q+1.
$$
From Condition D2, $\{\beta^{(i)}\}^{q+1}_{i=1}$ are uniquely defined. Since $\emptyset\subsetneqq C^{(i)}\subsetneqq C$ , we have
$$
\underset{\widetilde{\beta}\in\mathbb{R}^{p}}{\min}G(\widetilde{\beta})\leq\underset{\widetilde{\beta}_{C^{(i)}}=0,\widetilde{\beta}\in\mathbb{R}^{p}}{\min}G(\widetilde{\beta})\leq\underset{\widetilde{\beta}_{C}=0,\widetilde{\beta}\in\mathbb{R}^{p}}{\min}G(\widetilde{\beta}).
$$
Since ${\beta}^{*}∈\underset{\widetilde{\beta}∈\mathbb{R}^{p}}{\operatorname*{arg\,min}}\;G(\widetilde{\beta})$ and ${\beta}^{*}∈\underset{\widetilde{\beta}_{C}=0,\widetilde{\beta}∈\mathbb{R}^{p}}{\operatorname*{arg\,min}}G(\widetilde{\beta})$ , the equation (S41) can be rewritten as
$$
\underset{\widetilde{\beta}\in\mathbb{R}^{p}}{\min}\;G(\widetilde{\beta})=\underset{\widetilde{\beta}_{C^{(i)}}=0,\widetilde{\beta}\in\mathbb{R}^{p}}{\min}G(\widetilde{\beta})=\underset{\widetilde{\beta}_{C}=0,\widetilde{\beta}\in\mathbb{R}^{p}}{\min}G(\widetilde{\beta}),
$$
which means $\beta^{*}∈\underset{\widetilde{\beta}_{C^{(i)}}=0,\widetilde{\beta}∈\mathbb{R}^{p}}{\operatorname*{arg\,min}}G(\widetilde{\beta})=\{\beta^{(i)}\}$ , where the last equality holds by Lemma S.4. Thus $\beta^{*}=\beta^{(i)}$ for $i∈\{1,2,\ldots,q+1\}$ , which violates Condition A4.
This contradiction implies that $\{\underset{{\widetilde{\beta}}∈\mathbb{R}^{p}}{\operatorname*{arg\,min}}\;G(\widetilde{\beta})\text{ s.t. }||{\widetilde{\beta}}||_{0}≤ p-q-1\}=\{\dot{\beta}\}$ if $p≥ q+s+1$ , while $\{\underset{{\widetilde{\beta}}∈\mathbb{R}^{p}}{\operatorname*{arg\,min}}\;G(\widetilde{\beta})\text{ s.t. }||{\widetilde{\beta}}||_{0}≤ p-q-1\}=\emptyset$ if $p≤ q+s$ , corresponding to the two cases in Theorem 4.
Appendix S.5 Theoretical result for nonlinear outcome model
We now provide a theoretical result for our estimator (10). We focus on the low-dimensional setting where $p$ is fixed, and leave the high-dimensional case for future investigation.
We first clarify the notation and setting. Suppose we have $p$ treatments and $q$ latent confounders, and we observe $n$ i.i.d. samples generated from models (1) and (8), where the function $f$ is unknown and the parameter $\beta$ is to be estimated. Our goal is to investigate the properties of the estimator $\widehat{\beta}$ obtained from (10).
Let $\Sigma_{X}=\mathrm{Cov}(X)$ and $D=\mathrm{Cov}(\epsilon_{x})$ . Let $\bm{X}∈\mathbb{R}^{n× p}$ denote the design matrix, and let $\bm{Y}=(Y_{1},\ldots,Y_{n})^{\mathrm{\scriptscriptstyle T}}∈\mathbb{R}^{n× 1}$ be the response vector. Define
$$
f(\bm{X};\beta)=\left(f(X_{1};\beta),\ldots,f(X_{n};\beta)\right)^{\top}\in\mathbb{R}^{n\times 1}
$$
as the vector of nonlinear responses, and let $\bm{SIV}∈\mathbb{R}^{n×(p-q)}$ be the matrix of synthetic instrumental variables. The projection matrix defined by $\bm{SIV}$ is given by
$$
P_{\bm{SIV}}=\bm{SIV}\left(\bm{SIV}^{\top}\bm{SIV}\right)^{-1}\bm{SIV}^{\top}\in\mathbb{R}^{n\times n}.
$$
S.5.1 Assumptions and discussion
We make the following assumptions:
**Assumption 3**
*(Assumptions for nonlinear outcome models)
1. The coefficients ${\Lambda}$ and the measurable functions $g(·)$ and $f(·;\beta)$ in models (1) and (8) are fixed and do not change as $n→∞$ .
2. $U_{i}$ , $\epsilon_{x,i}$ , and $\epsilon_{y,i}$ are independent random draws from the joint distribution of $(U,\epsilon_{x},\epsilon_{y})$ such that $E(\epsilon_{x})=\bm{0}$ , $E(U)=\bm{0}$ , $\mathrm{Cov}(\epsilon_{x})=D$ , $\mathrm{Cov}(U)=I_{q}$ , and $(U,\epsilon_{x},\epsilon_{y})$ are mutually independent. Furthermore, assume that $\mathrm{Var}({\epsilon}_{y})={\sigma}^{2}$ and $\max_{1≤ j≤ p}\mathrm{Var}(X_{j})=\sigma_{x}^{2}$ ; these parameters are fixed and do not change as $n→∞$ .
3. For the maximum likelihood estimator $\widehat{\Lambda}$ , there exists an orthogonal matrix ${O}∈\mathbb{R}^{q× q}$ such that $\|\widehat{\Lambda}-\Lambda O\|_{2}=O_{p}(1/\sqrt{n})$ .
4. $\frac{\bm{X}^{\mathrm{\scriptscriptstyle T}}}{n}\frac{∂ f}{∂\beta}\big|_{\beta=\bar{\beta}}∈\mathbb{R}^{p× p}$ converges in probability to a matrix $M_{\bar{\beta}}$ uniformly in $\bar{\beta}$ , and $\|M_{\bar{\beta}}\|_{2}$ is bounded from above for all $\bar{\beta}$ .
5. Let $\bar{\Sigma}=M_{\bar{\beta}}^{\mathrm{\scriptscriptstyle T}}B_{\Lambda^{\perp}}(B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}\Sigma_{X}B_{\Lambda^{\perp}})^{-1}B_{\Lambda^{\perp}}^{{\mathrm{\scriptscriptstyle T}}}M_{\bar{\beta}}$ . We assume
$$
\min_{\theta\in\mathbb{R}^{p},\ 0<\|\theta\|_{0}\leq 2s}\frac{\theta^{\mathrm{\scriptscriptstyle T}}\bar{\Sigma}\theta}{\|\theta\|_{2}^{2}}>c
$$
for some positive constant $c$ .*
Assumptions E1 – E3 and E5 are standard in low-dimensional settings and are similar to those made in Assumptions B1 – B4. Assumption E4 is specifically required for nonlinear IV models (Amemiya, 1974). When $f$ is linear, we have
$$
\frac{\bm{X}^{\mathrm{\scriptscriptstyle T}}}{n}\left.\frac{\partial f}{\partial\beta}\right|_{\beta=\bar{\beta}}=\widehat{\Sigma}_{X},
$$
where $\widehat{\Sigma}_{X}$ is the sample covariance matrix of $\bm{X}$ . In this case, it converges to the population covariance matrix as $n→∞$ .
S.5.2 Theoretical results
**Theorem S.5**
*Under the conditions of Theorem 4 and Assumptions E1 – E5, if the tuning parameter satisfies $\widehat{k}=s$ , then the estimator $\widehat{\beta}$ obtained from (10) satisfies:*
1. *( $\ell_{1}$ -error rate) $\|\widehat{\beta}-\dot{\beta}\|_{1}=O_{p}(n^{-1/2})$ .*
2. *(Variable selection consistency) Let $\mathcal{A}=\{j:\dot{\beta}_{j}≠ 0\}$ and $\widehat{\mathcal{A}}=\{j:\widehat{\beta}_{j}≠ 0\}$ . Then $\mathbb{P}(\widehat{\mathcal{A}}=\mathcal{A})→ 1$ as $n→∞$ .*
S.5.3 Lemmas and their proof
**Lemma S.22 (Convergence of a Key Matrix)**
*Under Assumptions E1–E5, we have
$$
\left\|B_{\widehat{\Lambda}^{\perp}}\left(\frac{B_{\widehat{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\bm{X}^{\mathrm{\scriptscriptstyle T}}\bm{X}B_{\widehat{\Lambda}^{\perp}}}{n}\right)^{-1}B_{\widehat{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}-B_{\Lambda^{\perp}}\left(B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}DB_{\Lambda^{\perp}}\right)^{-1}B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}\right\|_{2}=O_{p}\left(\frac{1}{\sqrt{n}}\right).
$$*
Proof of Lemma S.22.
To simplify notation, define
$$
M_{1}=B_{\widehat{\Lambda}^{\perp}}\left(\frac{B_{\widehat{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\bm{X}^{\mathrm{\scriptscriptstyle T}}\bm{X}B_{\widehat{\Lambda}^{\perp}}}{n}\right)^{-1}B_{\widehat{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}},\quad M_{2}=B_{\Lambda^{\perp}}\left(B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}DB_{\Lambda^{\perp}}\right)^{-1}B_{\Lambda^{\perp}}^{\mathrm{\scriptscriptstyle T}}.
$$
Then,
$$
\begin{split}\|M_{1}-M_{2}\|_{2}&=\|\widehat{F}\widehat{D}^{-1}-FD^{-1}\|_{2}\\
&\leq\|(\widehat{F}-F)\widehat{D}^{-1}\|_{2}+\|F(\widehat{D}^{-1}-D^{-1})\|_{2}\\
&\leq\|\widehat{F}-F\|_{2}\cdot\|\widehat{D}^{-1}\|_{2}+\|F\|_{2}\cdot\|\widehat{D}^{-1}\|_{2}\cdot\|\widehat{D}-D\|_{2}\cdot\|D^{-1}\|_{2}\\
&=O_{p}\left(\frac{1}{\sqrt{n}}\right),\end{split}
$$
where $F$ and $\widehat{F}$ are defined in Lemma S.8, and their convergence is established therein. The final equality follows from the rate in equation (S12) together with Lemma S.8.
**Lemma S.23 (Sparse Eigenvalue Condition, Nonlinear Setting)**
*Under Conditions E1 – E5, there exists a constant $\pi_{0}>0$ such that
$$
\liminf_{n}\mathbb{P}\left\{\left\|P_{\mathrm{SIV}}\frac{\partial f}{\partial\beta}\big|_{\beta=\bar{\beta}}\theta\right\|_{2}\geq\pi_{0}\sqrt{n}\|\theta\|_{2},\;\forall\theta\in\mathbb{R}^{p},\|\theta\|_{0}\leq 2s\right\}=1.
$$*
Proof of Lemma S.23.
The proof closely follows the argument used in Lemma S.11.
Note that
$$
\frac{1}{n}\left(\frac{\partial f}{\partial\beta}\bigg|_{\beta=\bar{\beta}}\right)^{\mathrm{\scriptscriptstyle T}}P_{\mathrm{SIV}}\frac{\partial f}{\partial\beta}\bigg|_{\beta=\bar{\beta}}=\left(\frac{1}{n}\left(\frac{\partial f}{\partial\beta}\bigg|_{\beta=\bar{\beta}}\right)^{\mathrm{\scriptscriptstyle T}}\bm{X}^{\mathrm{\scriptscriptstyle T}}\right)B_{\widehat{\Lambda}^{\perp}}\left(\frac{B_{\widehat{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\bm{X}^{\mathrm{\scriptscriptstyle T}}\bm{X}B_{\widehat{\Lambda}^{\perp}}}{n}\right)^{-1}B_{\widehat{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}}\left(\frac{1}{n}\bm{X}\frac{\partial f}{\partial\beta}\bigg|_{\beta=\bar{\beta}}\right).
$$
By Condition E4 and Lemma S.22, we have
$$
\left\|\frac{1}{n}\left(\frac{\partial f}{\partial\beta}\bigg|_{\beta=\bar{\beta}}\right)^{\mathrm{\scriptscriptstyle T}}P_{\mathrm{SIV}}\frac{\partial f}{\partial\beta}\bigg|_{\beta=\bar{\beta}}-\bar{\Sigma}\right\|_{2}=O_{p}\left(\frac{1}{\sqrt{n}}\right).
$$
This implies that there exists a constant $A_{\delta}>0$ and an integer $n_{1}$ such that
$$
\inf_{n>n_{1}}\mathbb{P}\left(\left\|\frac{1}{n}\left(\frac{\partial f}{\partial\beta}\bigg|_{\beta=\bar{\beta}}\right)^{\mathrm{\scriptscriptstyle T}}P_{\mathrm{SIV}}\frac{\partial f}{\partial\beta}\bigg|_{\beta=\bar{\beta}}-\bar{\Sigma}\right\|_{2}\leq\frac{A_{\delta}}{\sqrt{n}}\right)\geq 1-\delta.
$$
Define
$$
\pi_{1}=\inf\left\{\frac{\theta^{\mathrm{\scriptscriptstyle T}}\bar{\Sigma}\theta}{\|\theta\|_{2}^{2}}:\theta\in\mathbb{R}^{p},\|\theta\|_{0}\leq 2s\right\},
$$
which is strictly positive by Condition E5. Set $\pi_{0}=\sqrt{\pi_{1}/2}$ , $n_{2}=4A_{\delta}^{2}/\pi_{1}^{2}$ , and let $n_{0}=\max(n_{1},n_{2})$ . Then, with probability at least $1-\delta$ , for all $n>n_{0}$ and all $\theta∈\mathbb{R}^{p}$ with $\|\theta\|_{0}≤ 2s$ , we have
$$
\begin{split}\theta^{\mathrm{\scriptscriptstyle T}}\left(\frac{1}{n}\left(\frac{\partial f}{\partial\beta}\bigg|_{\beta=\bar{\beta}}\right)^{\mathrm{\scriptscriptstyle T}}P_{\mathrm{SIV}}\frac{\partial f}{\partial\beta}\bigg|_{\beta=\bar{\beta}}\right)\theta&=\theta^{\mathrm{\scriptscriptstyle T}}\bar{\Sigma}\theta+\theta^{\mathrm{\scriptscriptstyle T}}\left(\frac{1}{n}\left(\frac{\partial f}{\partial\beta}\bigg|_{\beta=\bar{\beta}}\right)^{\mathrm{\scriptscriptstyle T}}P_{\mathrm{SIV}}\frac{\partial f}{\partial\beta}\bigg|_{\beta=\bar{\beta}}-\bar{\Sigma}\right)\theta\\
&\geq\|\theta\|_{2}^{2}\left(\pi_{1}-\left\|\frac{1}{n}\left(\frac{\partial f}{\partial\beta}\bigg|_{\beta=\bar{\beta}}\right)^{\mathrm{\scriptscriptstyle T}}P_{\mathrm{SIV}}\frac{\partial f}{\partial\beta}\bigg|_{\beta=\bar{\beta}}-\bar{\Sigma}\right\|_{2}\right)\\
&\geq\|\theta\|_{2}^{2}\pi_{0}^{2}.\end{split}
$$
This completes the proof.
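The sparse eigenvalue quantity $\pi_{1}$ used in this proof (and assumed positive in Condition E5) can be computed exactly for small $p$ by enumerating supports. A minimal sketch with an arbitrary positive definite matrix standing in for $\bar{\Sigma}$:

```python
import itertools

import numpy as np

def sparse_min_eigenvalue(Sigma, k):
    """min of theta' Sigma theta / ||theta||_2^2 over ||theta||_0 <= k.

    Any k-sparse theta lives in some size-k principal block, and by
    Cauchy interlacing smaller blocks cannot have a smaller minimum
    eigenvalue, so enumerating supports of size exactly k suffices.
    """
    p = Sigma.shape[0]
    return min(
        np.linalg.eigvalsh(Sigma[np.ix_(S, S)]).min()
        for S in map(list, itertools.combinations(range(p), k))
    )

# Illustrative positive definite matrix (not the paper's Sigma-bar).
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
Sigma = A @ A.T + 0.5 * np.eye(6)
pi1 = sparse_min_eigenvalue(Sigma, k=4)       # k = 2s with s = 2
print(pi1 > 0.5)  # True: the 0.5 * I shift bounds every block below
```

By eigenvalue interlacing, this sparse minimum is never smaller than the full minimum eigenvalue, so positive definiteness of $\bar{\Sigma}$ guarantees $\pi_{1}>0$.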
**Lemma S.24**
*Under Conditions E1 – E4, we have
$$
\left\|\frac{1}{n}(\bm{Y}-f(\bm{X};\dot{\beta}))^{\mathrm{\scriptscriptstyle T}}P_{\bf SIV}\left.\frac{\partial f}{\partial\beta}\right|_{\beta=\bar{\beta}}\right\|_{2}=O_{p}\left(\frac{1}{\sqrt{n}}\right),
$$
where $\bar{\beta}∈\mathbb{R}^{p}$ is an element of the parameter space.*
Proof of Lemma S.24.
Consider the data-generating mechanism $\bm{Y}=f(\bm{X};\dot{\beta})+g(\bm{U})+E_{y}$ . We decompose the target quantity as follows:
$$
\begin{split}\frac{1}{n}(\bm{Y}-f(\bm{X};\dot{\beta}))^{\mathrm{\scriptscriptstyle T}}P_{\bf SIV}\left.\frac{\partial f}{\partial\beta}\right|_{\beta=\bar{\beta}}&=\frac{(g(\bm{U})+E_{y})^{\mathrm{\scriptscriptstyle T}}\text{SIV}}{n}\left(\frac{\text{SIV}^{\mathrm{\scriptscriptstyle T}}\text{SIV}}{n}\right)^{-1}\frac{\text{SIV}^{\mathrm{\scriptscriptstyle T}}\left.\frac{\partial f}{\partial\beta}\right|_{\beta=\bar{\beta}}}{n}\\
&=ABC,\end{split}
$$
where
$$
A=\frac{(g(\bm{U})+E_{y})^{\mathrm{\scriptscriptstyle T}}\bm{X}}{n},\quad B=B_{\widehat{\Lambda}^{\perp}}\left(\frac{\text{SIV}^{\mathrm{\scriptscriptstyle T}}\text{SIV}}{n}\right)^{-1}B_{\widehat{\Lambda}^{\perp}}^{\mathrm{\scriptscriptstyle T}},\quad C=\frac{\bm{X}^{\mathrm{\scriptscriptstyle T}}\left.\frac{\partial f}{\partial\beta}\right|_{\beta=\bar{\beta}}}{n}.
$$
We now bound $\|AB\|_{2}$ and $\|C\|_{2}$ separately.
For the first term, using the definition $\text{SIV}=\bm{X}B_{\widehat{\Lambda}^{\perp}}$ , we have
$$
\begin{split}\|AB\|_{2}&=\left\|\frac{(g(\bm{U})+E_{y})^{\mathrm{\scriptscriptstyle T}}\bm{X}}{n}B\right\|_{2}\\
&\leq\left\|\left(\frac{(g(\bm{U})+E_{y})^{\mathrm{\scriptscriptstyle T}}\bm{X}}{n}-\text{Cov}(g(U),X)\right)B\right\|_{2}+\left\|\text{Cov}(g(U),X)B\right\|_{2}\\
&=O_{p}\left(\frac{1}{\sqrt{n}}\right)\|B\|_{2}+\left\|\text{Cov}(g(U),U)\left(\Lambda^{\mathrm{\scriptscriptstyle T}}-O^{\mathrm{\scriptscriptstyle T}}\widehat{\Lambda}^{\mathrm{\scriptscriptstyle T}}\right)B\right\|_{2}\\
&=O_{p}\left(\frac{1}{\sqrt{n}}\right)\|B\|_{2}\\
&=O_{p}\left(\frac{1}{\sqrt{n}}\right).\end{split} \tag{S44}
$$
The second line follows from the triangle inequality after centering the empirical covariance at $\text{Cov}(g(U),X)$ . The third line uses the identity $\text{Cov}(g(U),X)=\text{Cov}(g(U),U)\Lambda^{\mathrm{\scriptscriptstyle T}}$ and the orthogonality condition $\widehat{\Lambda}^{\mathrm{\scriptscriptstyle T}}B_{\widehat{\Lambda}^{\perp}}=0.$ The fourth line follows from Condition E3, which ensures that $\text{Cov}(g(U),U)$ is bounded, and from the fact that $B_{\widehat{\Lambda}^{\perp}}$ has orthonormal columns. The final line uses Lemma S.22.
For the second term, we have
$$
\begin{split}\|C\|_{2}&=\left\|\frac{\bm{X}^{\mathrm{\scriptscriptstyle T}}}{n}\left.\frac{\partial f}{\partial\beta}\right|_{\beta=\bar{\beta}}\right\|_{2}\\
&\leq\left\|\left(\frac{\bm{X}^{\mathrm{\scriptscriptstyle T}}}{n}\left.\frac{\partial f}{\partial\beta}\right|_{\beta=\bar{\beta}}-M_{\bar{\beta}}\right)\right\|_{2}+\left\|M_{\bar{\beta}}\right\|_{2}\\
&=O_{p}(1),\end{split} \tag{S45}
$$
where the decomposition and bound follow directly from Condition E4.
Combining equations (S44) and (S45), we conclude the proof of Lemma S.24.
S.5.4 Proof of Theorem S.5
Proof of the first part of Theorem S.5.
We focus on equation (10). Due to the optimality of $\widehat{\beta}$ compared to $\dot{\beta}$ , we have
$$
\begin{split}&\quad\frac{\|P_{\bf SIV}(\bm{Y}-f(\bm{X};\widehat{\beta}))\|^{2}_{2}}{2n}\leq\frac{\|P_{\bf SIV}(\bm{Y}-f(\bm{X};\dot{\beta}))\|^{2}_{2}}{2n}\\
\Longleftrightarrow&\quad\frac{\|P_{\bf SIV}(f(\bm{X};\widehat{\beta})-f(\bm{X};\dot{\beta}))\|^{2}_{2}}{2n}\leq\frac{(\bm{Y}-f(\bm{X};\dot{\beta}))^{\mathrm{\scriptscriptstyle T}}P_{\bf SIV}(f(\bm{X};\widehat{\beta})-f(\bm{X};\dot{\beta}))}{n}\\
\Longleftrightarrow&\quad\frac{1}{2n}\big\|P_{\bf SIV}\left.\frac{\partial f}{\partial\beta}\right|_{\beta=\bar{\beta}}(\widehat{\beta}-\dot{\beta})\big\|^{2}_{2}\leq\frac{1}{n}(\bm{Y}-f(\bm{X};\dot{\beta}))^{\mathrm{\scriptscriptstyle T}}P_{\bf SIV}\left.\frac{\partial f}{\partial\beta}\right|_{\beta=\bar{\beta}}(\widehat{\beta}-\dot{\beta}).\end{split}
$$
The last transformation follows from a Taylor expansion: there exists some $\bar{\beta}$ between $\widehat{\beta}$ and $\dot{\beta}$ such that
$$
f(\bm{X};\widehat{\beta})-f(\bm{X};\dot{\beta})=\left.\frac{\partial f}{\partial\beta}\right|_{\beta=\bar{\beta}}(\widehat{\beta}-\dot{\beta}).
$$
Since $\|\widehat{\beta}-\dot{\beta}\|_{0}≤ 2s$ , we have
$$
\|\widehat{\beta}-\dot{\beta}\|_{1}\leq\sqrt{2s}\,\|\widehat{\beta}-\dot{\beta}\|_{2}.
$$
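This inequality is the Cauchy–Schwarz inequality applied on the support: for an $m$-sparse $v$, $\|v\|_{1}=\langle\mathrm{sign}(v),v\rangle\leq\sqrt{m}\,\|v\|_{2}$. A quick numerical spot-check with arbitrary dimensions:

```python
import numpy as np

# Cauchy-Schwarz on the support: for any m-sparse v,
# ||v||_1 <= sqrt(m) * ||v||_2, with m playing the role of 2s.
rng = np.random.default_rng(3)
p, m = 50, 8
for _ in range(100):
    v = np.zeros(p)
    support = rng.choice(p, size=m, replace=False)
    v[support] = rng.standard_normal(m)
    assert np.abs(v).sum() <= np.sqrt(m) * np.linalg.norm(v) + 1e-12
print("l1 <= sqrt(m) * l2 held on all draws")
```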
By Lemma S.23, there exists an integer $n_{1}$ such that
$$
\frac{\|P_{\bf SIV}\left.\frac{\partial f}{\partial\beta}\right|_{\beta=\bar{\beta}}(\widehat{\beta}-\dot{\beta})\|^{2}_{2}}{2n}\geq\pi_{0}^{2}\|\widehat{\beta}-\dot{\beta}\|_{2}^{2}
$$
with probability at least $1-\delta/2$ for $n>n_{1}$ . Substituting into (S46) gives
$$
\begin{split}\pi_{0}^{2}\|\widehat{\beta}-\dot{\beta}\|_{2}^{2}&\leq\frac{\|P_{\bf SIV}\left.\frac{\partial f}{\partial\beta}\right|_{\beta=\bar{\beta}}(\widehat{\beta}-\dot{\beta})\|^{2}_{2}}{2n}\\
&\leq\frac{1}{n}(\bm{Y}-f(\bm{X};\dot{\beta}))^{\mathrm{\scriptscriptstyle T}}P_{\bf SIV}\left.\frac{\partial f}{\partial\beta}\right|_{\beta=\bar{\beta}}(\widehat{\beta}-\dot{\beta})\\
&\leq\|\widehat{\beta}-\dot{\beta}\|_{1}\left\|\frac{1}{n}(\bm{Y}-f(\bm{X};\dot{\beta}))^{\mathrm{\scriptscriptstyle T}}P_{\bf SIV}\left.\frac{\partial f}{\partial\beta}\right|_{\beta=\bar{\beta}}\right\|_{\infty}\\
&\leq\sqrt{2s}\,\|\widehat{\beta}-\dot{\beta}\|_{2}\left\|\frac{1}{n}(\bm{Y}-f(\bm{X};\dot{\beta}))^{\mathrm{\scriptscriptstyle T}}P_{\bf SIV}\left.\frac{\partial f}{\partial\beta}\right|_{\beta=\bar{\beta}}\right\|_{\infty}.\end{split}
$$
Canceling one factor of $\|\widehat{\beta}-\dot{\beta}\|_{2}$ in (S47) and using the bound from Lemma S.24 (together with $\|\cdot\|_{\infty}\leq\|\cdot\|_{2}$ ),
$$
\left\|\frac{1}{n}(\bm{Y}-f(\bm{X};\dot{\beta}))^{\mathrm{\scriptscriptstyle T}}P_{\bf SIV}\left.\frac{\partial f}{\partial\beta}\right|_{\beta=\bar{\beta}}\right\|_{\infty}=O_{p}\!\left(\frac{1}{\sqrt{n}}\right),
$$
we obtain $\|\widehat{\beta}-\dot{\beta}\|_{1}\leq\sqrt{2s}\,\|\widehat{\beta}-\dot{\beta}\|_{2}=O_{p}(1/\sqrt{n})$ , which proves the first part of Theorem S.5.
Proof of the second part of Theorem S.5.
Let $C_{16}=\min_{j∈\mathcal{A}}|\dot{\beta}_{j}|>0$ . Taking $k=s$ , the event $\{\widehat{\mathcal{A}}≠\mathcal{A}\}$ is contained in $\{\|\widehat{\beta}-\dot{\beta}\|_{1}≥ C_{16}\}$ . From the first part of Theorem S.5, we know that
$$
\|\widehat{\beta}-\dot{\beta}\|_{1}=O_{p}\!\left(\frac{1}{\sqrt{n}}\right),
$$
which means that for any $\delta>0$ , there exists a constant $M_{\delta}$ and an integer $n_{1}>0$ such that
$$
\sup_{n>n_{1}}\mathbb{P}\left(\|\widehat{\beta}-\dot{\beta}\|_{1}\geq\frac{M_{\delta}}{\sqrt{n}}\right)\leq\delta.
$$
For $n≥ n_{2}:=\max(n_{1},M_{\delta}^{2}/C_{16}^{2})$ , we have $C_{16}≥ M_{\delta}/\sqrt{n}$ , so
$$
\{\widehat{\mathcal{A}}\neq\mathcal{A}\}\subset\{\|\widehat{\beta}-\dot{\beta}\|_{1}\geq C_{16}\}\subset\{\|\widehat{\beta}-\dot{\beta}\|_{1}\geq M_{\delta}/\sqrt{n}\}.
$$
Thus,
$$
\sup_{n\geq n_{2}}\mathbb{P}(\widehat{\mathcal{A}}\neq\mathcal{A})\leq\sup_{n\geq n_{2}}\mathbb{P}\!\left(\|\widehat{\beta}-\dot{\beta}\|_{1}\geq\frac{M_{\delta}}{\sqrt{n}}\right)\leq\sup_{n\geq n_{1}}\mathbb{P}\!\left(\|\widehat{\beta}-\dot{\beta}\|_{1}\geq\frac{M_{\delta}}{\sqrt{n}}\right)\leq\delta.
$$
Since $\delta$ is arbitrary, we conclude that $\mathbb{P}(\widehat{\mathcal{A}}≠\mathcal{A})→ 0$ as $n→∞$ .
Appendix S.6 Results of comparison methods in the real data example
We include the genes identified by various comparison methods in the mouse obesity dataset described in Section 6.
The Lasso method identifies the following genes: Igfbp2, Ankhd1, Rab27a, Dct, Gck, Tex15, Wfdc15b, Rab6b, Avpr1a, Abca8a, F12, Arx, Gna14, Vwf, C4b, Zar1, Taf7, B4galnt4, Upk3a, Tiam2, Pex11a, Mmp1b, Cd36, Bglap-rs1, Prdm16, Olfr378, G6pc, Ccnl2, Ccnb1, Clstn3, Smok3a, Meox1, Fras1, Gstm2, Cfd, Gpx6, Efemp1, Osbpl6, Dok2, Plcl2, Cebpe, Plxnb1, Myl10, Tmem174, Insl6, Ifitm7, PqlC2, Oas1e, Itgad, Gldc, Rxfp1, Pgf, Adh7, Msr1, Vil1, Cyp26a1, Zfp30, Ggta1, Fanca, Xpo4, Doxl2, Sall2, Gprc6a, Pet2, Otop2, Epb4.2, BC029214, Frem1, Dcx, Xcl1, Olfr1033, Sntg2, Copz2, Angpt2, Il13, Dnase1l3, Olfr1501, Xdh, Rbm3, Il5ra, Galns, Nme2, Fbxo16, Egr2, Dhrs7b, Lpar2, and Npm3.
The 2SR method (Lin et al., 2015b) identifies the following genes: Igfbp2, Lamc1, Sirpa, Gstm2, Ccnl2, Glcci1, Vwf, Irx, Apoa4, Socs2, Avpr1a, Abca8a, Gpld1, Fam105a, Dscam, Slc22a3, and 2010002N04Rik.
The Auxiliary Variable method (Miao et al., 2023b) identifies the following genes: Gstm2, 2010002N04Rik, Igfbp2, and Avpr1a.
The Null Variable method (Miao et al., 2023b) identifies the following genes: Gstm2 and Dscam.
The Trim method (Ćevid et al., 2020b) identifies the following genes: Igfbp2, Ankhd1, Rab27a, and Dct.
The IV-Lasso method identifies the following genes: Igfbp2, Rab27a, Ankhd1, Hao2, Dct, Fras1, Gck, Tex15, Nox4, Insl6, Vwf, Txk, Padi2, and Gstm2.
Appendix S.7 Additional simulation results
S.7.1 Comparison between simulation and real data features
We define the signal-to-noise ratio (SNR) as
$$
\text{SNR}=\frac{\text{Var}(X\beta)}{\text{Var}(Y-X\beta)}.
$$
If $\beta$ is unknown, we estimate the signal-to-noise ratio from finite samples as
$$
\widehat{\text{SNR}}=\frac{\widehat{\text{Var}}_{n}(X\widehat{\beta})}{\widehat{\text{Var}}_{n}(Y-X\widehat{\beta})}.
$$
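As an illustrative sanity check of this plug-in estimator, the following sketch uses synthetic data with arbitrary parameters (not the mouse obesity dataset) chosen so that the population SNR equals one:

```python
import numpy as np

def estimate_snr(X, y, beta_hat):
    """Plug-in estimate Var_n(X beta_hat) / Var_n(y - X beta_hat)."""
    fitted = X @ beta_hat
    return np.var(fitted) / np.var(y - fitted)

# Three unit coefficients give Var(X beta) = 3, and noise variance 3
# then makes the population SNR equal to 1.
rng = np.random.default_rng(4)
n, p = 5000, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = 1.0
y = X @ beta + np.sqrt(3.0) * rng.standard_normal(n)
print(estimate_snr(X, y, beta))  # close to the population SNR of 1
```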
Table S1 presents a comparison between the application data and the simulated data, where $\sigma_{y}=5$ and $\text{Var}(\epsilon_{y})=\sigma_{y}^{2}$ . Sparsity is represented by the norms $||\beta||_{0}$ for the simulation and $||\widehat{\beta}||_{0}$ for the data. The “Number of Confounders” refers to $q$ in the simulation and $\widehat{q}$ in the data. The definition of the signal-to-noise ratio (SNR) is provided above. In the simulation, the reported SNR is the average over 1,000 Monte Carlo replications with different seeds to account for the randomness of $\gamma$ and $\Lambda$ . As shown in the table, sparsity, the number of confounders, and SNR exhibit strong similarities between the two settings.
Table S1: Comparison between application and simulated data. Sparsity is indicated by the norms $||\dot{\beta}||_{0}$ for the simulation and $||\widehat{\beta}||_{0}$ for the data. “Number of Confounders” refers to $q$ in the simulation and $\widehat{q}$ in the data. The SNR denotes the signal-to-noise ratio.
| | Application | Simulation |
| --- | --- | --- |
| Sparsity | 5 | 5 |
| Number of confounders | 3 | 3 |
| SNR | 0.965 | 0.969 |
S.7.2 Simulation for weak effects
We present simulation results for dense confounding with many weak effects. In our simulations, we set $n=1000$ , $p=100$ , $q=2$ , $\beta_{1}=\beta_{2}=...=\beta_{5}=1$ , and $\beta_{6}=\beta_{7}=...=\beta_{p}=h$ , where $h$ varies from 0 to 0.15. The other parameters and variables are generated as in the simulation setting described in Section 5 of the main paper. Since the true causal parameter $\beta$ is not identifiable in this setting, we report the $\ell_{1}$ -difference between $\widehat{\beta}$ and $\beta^{\#}$ for each method, where $\beta^{\#}_{1}=\beta^{\#}_{2}=...=\beta^{\#}_{5}=1$ and $\beta^{\#}_{6}=\beta^{\#}_{7}=...=\beta^{\#}_{p}=0$ . The simulation results are presented in Figure S4.
(a) Estimation errors $||\widehat{\beta}-\beta^{\#}||$ .
(b) False discovery rate.
Figure S4: Simulation results for SIV (blue), Lasso (red), Trim (purple), and Null (green), based on 1,000 Monte Carlo runs.
The findings in Figure S4 highlight the impact of weak effects. When weak effects are present but very small in magnitude, our method performs much as it does in sparse settings and remains more accurate than the alternatives. However, once $h$ exceeds a certain threshold, the weak effects become dominant, and all methods converge to similar performance.
S.7.3 Comparison with the moment selection estimator
One reviewer suggested that we could apply the algorithm in Andrews (1999b) to estimate $\beta$ . We discuss this method in this section.
Comparison between our procedure and the procedure in Andrews (1999b)
Andrews (1999b) focuses on the selection of true moment conditions. Suppose there are $r$ moment conditions, of which $r_{0}$ are correct, and assume that the number of parameters, denoted by $p$, is less than $r_{0}$. The paper introduces a method that identifies the $r_{0}$ correct moment conditions, based on which the true parameters can be consistently estimated. These results apply to the selection of invalid instrumental variables and to the over-identification problem.
In contrast, our method focuses on a different task: selecting true causal variables. In our proposal, there are $p-q$ instrumental variables, which provide $p-q$ moment conditions, and importantly, all of these are valid. Thus, we work with exactly $p-q$ correct moment conditions. Since there are $p$ parameters to identify, our situation corresponds to “under-identification.” However, identification and estimation become feasible once sparsity constraints are imposed on the treatment effects. To guarantee unique identification, the number of nonzero parameters must be fewer than the number of instrumental variables, that is, $s:=\|\beta\|_{0}<p-q$ .
In the following, we first review the method of Andrews (1999b) in the classical IV setting and discuss the pitfalls of extending their approach to our context. We then present simulations comparing their method against ours. The results show that our proposed estimator outperforms theirs in the scenario we evaluated.
Review of Andrews (1999b)’s method in the classical IV setting
Consider the classical IV setting with one treatment $X$ and three instruments $Z:=(Z^{(1)},Z^{(2)},Z^{(3)})$ . These instruments yield three moment conditions:
$$
\begin{split}g_{1}(\beta)&=\mathbb{E}\{(Y-X\beta)Z^{(1)}\},\\
g_{2}(\beta)&=\mathbb{E}\{(Y-X\beta)Z^{(2)}\},\\
g_{3}(\beta)&=\mathbb{E}\{(Y-X\beta)Z^{(3)}\}.\end{split} \tag{1}
$$
If $Z^{(i)}$ is a valid IV, then $g_{i}(\dot{\beta})=0$ at the true value $\dot{\beta}$ . In finite samples, empirical averages replace expectations, and the generalized method of moments (GMM) is used:
$$
\widehat{\beta}=\operatorname*{arg\,min}_{\beta\in\mathbb{R}}(g_{1},g_{2},g_{3})^{\mathrm{\scriptscriptstyle T}}W(g_{1},g_{2},g_{3}),
$$
where $W∈\mathbb{R}^{3× 3}$ is a weight matrix (assume $W=I_{3}$ for simplicity). Let $g(\beta)=(g_{1}(\beta),g_{2}(\beta),g_{3}(\beta))^{\mathrm{\scriptscriptstyle T}}$ . For a subset $A⊂\{1,2,3\}$ , denote by $g^{A}(\beta)$ the moment conditions using only indices in $A$ , and define $\widehat{\beta}^{A}=\operatorname*{arg\,min}_{\beta∈\mathbb{R}}(g^{A}(\beta))^{\mathrm{\scriptscriptstyle T}}g^{A}(\beta)$ .
The moment selection estimator selects the “correct” moment constraints $\widehat{A}$ by solving
$$
\widehat{A}=\operatorname*{arg\,min}_{A\subset\{1,2,3\}}n(g^{A}(\widehat{\beta}^{A}))^{\mathrm{\scriptscriptstyle T}}g^{A}(\widehat{\beta}^{A})-h(|A|)k_{n},
$$
where $|A|$ is the cardinality of $A$, $h(·)$ is a strictly increasing function, and $k_{n}→∞$ with $k_{n}=o(n)$. Because it enters with a negative sign, the penalty term $h(|A|)k_{n}$ rewards the use of more moment conditions.
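To fix ideas, the selection criterion can be run on a toy example with two valid instruments and one invalid one. The sketch below uses $h(|A|)=|A|$ and $k_{n}=\log n$, which satisfy the stated conditions but are otherwise our own choices, as are all data-generating values.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 5000
beta_true = 1.0

# Toy data: Z1, Z2 are valid instruments; Z3 is invalid because it
# enters the outcome equation directly.
Z = rng.normal(size=(n, 3))
X = Z[:, 0] + Z[:, 1] + 0.5 * Z[:, 2] + rng.normal(size=n)
Y = X * beta_true + Z[:, 2] + 0.5 * rng.normal(size=n)

def gmm_subset(A):
    """GMM estimate of beta using only the moment conditions in A (W = I)."""
    ZA = Z[:, list(A)]
    d = ZA.T @ X / n              # sample analogue of E(Z X)
    c = ZA.T @ Y / n              # sample analogue of E(Z Y)
    beta_hat = (d @ c) / (d @ d)  # argmin of ||c - d * beta||^2
    g = c - d * beta_hat
    return beta_hat, float(g @ g)

# Moment selection criterion: n * g'g - h(|A|) k_n, with the assumed
# choices h(|A|) = |A| and k_n = log n.
subsets = [A for r in (1, 2, 3) for A in itertools.combinations(range(3), r)]
crit = lambda A: n * gmm_subset(A)[1] - len(A) * np.log(n)
best = min(subsets, key=crit)
print(best)   # the invalid instrument Z3 (index 2) should be excluded
```

Just-identified subsets drive their sample moments exactly to zero, so the criterion can only distinguish them through the reward $h(|A|)k_{n}$; over-identified subsets containing the invalid instrument are penalized through a large $J$-type statistic.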
Potential pitfalls of the extension
1. Coherence issue: The moment selection estimator becomes substantially more complex when $q≥ 2$ , as it requires accounting for logical dependencies among moment constraints. For example, with two latent confounders and index sets $A=\{1,2\}$ , $B=\{2,3\}$ , and $C=\{1,3\}$ , we can construct moment constraints $g_{A}$ , $g_{B}$ , and $g_{C}$ . If both $g_{A}$ and $g_{B}$ are accepted, then $g_{C}$ must also be valid since $C⊂ A\cup B$ . Such coherence relationships complicate the selection process dramatically as $q$ grows.
2. Computational issue: In high-dimensional settings ( $p>n$ ), directly solving the moment selection estimator is infeasible, as it requires computing GMM estimators for all possible subsets of moment conditions. To our knowledge, no efficient algorithm addresses this in high dimensions. In contrast, our estimator can be implemented efficiently using the “abess” package.
Simulation results
We compared our estimator with the moment selection estimator in a simple setting with three treatments and one unmeasured confounder ( $p=3,q=1$ ). In this case, coherence issues do not arise, and computation is feasible. Even here, our estimator outperforms the moment selection estimator.
Specifically, consider the structural model:
$$
\begin{split}X&=\Lambda U+\epsilon_{x},\\
Y&=X^{\mathrm{\scriptscriptstyle T}}\beta+U^{\mathrm{\scriptscriptstyle T}}\gamma+\epsilon_{y},\end{split}
$$
with $X=(X_{1},X_{2},X_{3})$, one confounder $U$, and parameters $\Lambda=(1,-1,2)^{\mathrm{\scriptscriptstyle T}}$, $\gamma=1$, and $\beta=(1,0,0)^{\mathrm{\scriptscriptstyle T}}$. The random terms are generated as $U_{i}\sim\mathcal{N}(0,1)$, $\epsilon_{y,i}\sim\mathcal{N}(0,1)$, and $\epsilon_{x,i}\sim\mathcal{N}(0,I_{3})$. We ran simulations with $n∈\{500,1000,1500,...,5000\}$.
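As a quick illustration of why confounding matters in this structural model, it can be simulated directly with the stated parameter values; the sketch below shows that naive least squares converges to $\beta+(\Lambda\Lambda^{\mathrm{\scriptscriptstyle T}}+I)^{-1}\Lambda\gamma$ rather than $\beta$ (the seed and sample size are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
Lam = np.array([1.0, -1.0, 2.0])    # Lambda
beta = np.array([1.0, 0.0, 0.0])
gamma = 1.0

U = rng.normal(size=n)                          # unmeasured confounder
X = np.outer(U, Lam) + rng.normal(size=(n, 3))  # X = Lambda U + eps_x
Y = X @ beta + gamma * U + rng.normal(size=n)   # Y = X'beta + U gamma + eps_y

# Naive OLS is inconsistent: its population limit is
# beta + (Lam Lam' + I)^{-1} Lam * gamma = beta + Lam / 7 here.
beta_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
print(np.round(beta_ols, 2))   # roughly (1.14, -0.14, 0.29), not (1, 0, 0)
```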
The results, shown in Figure S5, report the $\ell_{1}$ error $||\widehat{\beta}-\dot{\beta}||_{1}$ . Our estimator consistently outperforms the moment selection estimator of Andrews (1999b).
Figure S5: Comparison between the SIV estimator (blue line) and the moment selection estimator (red line) (Andrews, 1999b), based on 1000 Monte Carlo simulations.
S.7.4 Details of Simulation Settings for Nondiagonal $\text{Cov}(\epsilon_{x})$
In the simulation setup for nondiagonal $\text{Cov}(\epsilon_{x})$ , we randomly selected 20 pairs from $i,j∈\{1,2,...,p\}$ and assigned $D_{i,j}=D_{j,i}=1$ . The list of these pairs is provided below.
$(5,87)$ , $(14,38)$ , $(15,85)$ , $(25,50)$ , $(32,46)$ , $(37,75)$ , $(44,37)$ , $(45,10)$ , $(52,33)$ , $(52,37)$ , $(60,92)$ , $(66,88)$ , $(66,100)$ , $(73,55)$ , $(74,34)$ , $(86,77)$ , $(87,31)$ , $(89,53)$ , $(91,82)$ , and $(97,96)$ .
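For illustration, the matrix $D$ can be assembled from these pairs as follows. The covariance form $I+\rho D$ and the value $\rho=0.3$ are hypothetical choices used only to verify that such an off-diagonal perturbation stays positive definite, not the exact construction used in the simulation.

```python
import numpy as np

# Pairs from the text (1-indexed there, converted to 0-indexed here).
pairs = [(5, 87), (14, 38), (15, 85), (25, 50), (32, 46), (37, 75),
         (44, 37), (45, 10), (52, 33), (52, 37), (60, 92), (66, 88),
         (66, 100), (73, 55), (74, 34), (86, 77), (87, 31), (89, 53),
         (91, 82), (97, 96)]
p = 100
D = np.zeros((p, p))
for i, j in pairs:
    D[i - 1, j - 1] = D[j - 1, i - 1] = 1.0

# Hypothetical covariance I + rho * D: positive definite whenever rho is
# below 1 / ||D||_2; rho = 0.3 is an illustrative value.
rho = 0.3
cov = np.eye(p) + rho * D
print(np.linalg.eigvalsh(cov).min() > 0)   # True
```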
S.7.5 An Alternative Cross-Validation Strategy for the IV-Lasso Estimator
As discussed in Section 5.1, the IV-Lasso estimator performs suboptimally in our simulation setting. This is primarily because standard cross-validation tends to select overly complex models, with the Lasso estimator often including more variables than necessary. To address this issue, we consider the one-standard-error (1-se) rule (Hastie et al., 2009; Kang et al., 2016), which selects the most regularized model whose cross-validation error lies within one standard error of the minimum. This approach favours simpler models that perform comparably to the best model identified by standard cross-validation.
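In code, the 1-se rule reduces to a few lines. The sketch below assumes a decreasing grid of tuning parameters with cross-validation means and standard errors (the quantities cv.glmnet stores as cvm and cvsd); the CV curve shown is hypothetical.

```python
import numpy as np

def lambda_1se(lambdas, cv_mean, cv_se):
    """Most regularized model whose CV error is within one standard
    error of the minimum (lambdas assumed sorted in decreasing order)."""
    i_min = int(np.argmin(cv_mean))
    threshold = cv_mean[i_min] + cv_se[i_min]
    eligible = np.flatnonzero(cv_mean <= threshold)
    return lambdas[eligible].max()   # largest lambda = simplest model

# Hypothetical CV curve: the minimum is at lambda = 0.1, but lambda = 0.5
# is within one standard error of it, so the 1-se rule picks 0.5.
lams = np.array([1.0, 0.5, 0.25, 0.1, 0.05])
cvm = np.array([0.90, 0.52, 0.50, 0.48, 0.55])
cvs = np.array([0.05, 0.05, 0.05, 0.05, 0.05])
print(lambda_1se(lams, cvm, cvs))   # 0.5
```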
To evaluate the potential benefit of the 1-se rule, we conduct a series of simulation studies. The outcome model is $f(X;\beta)=X^{\mathrm{\scriptscriptstyle T}}\beta$ , and the hidden variable model is $g(U)=U^{\mathrm{\scriptscriptstyle T}}\gamma$ . We set $q=3$ , $s=5$ , and define the true coefficient vector as $\beta=(1,1,1,1,1,0,...,0)^{\mathrm{\scriptscriptstyle T}}∈\mathbb{R}^{p}$ . The elements of both $\Lambda_{j,k}$ and $\gamma_{k}$ are independently drawn from the uniform distribution on $[-1,1]$ for $j=1,...,p$ and $k=1,...,q$ . The latent variables $U_{i,k}$ are generated independently from the standard normal distribution for $i=1,...,n$ and $k=1,...,q$ . The noise terms are generated as $\epsilon_{x}\sim\mathcal{N}(0,\sigma^{2}_{x}I_{p})$ and $\epsilon_{y}\sim\mathcal{N}(0,\sigma^{2})$ , with $\sigma_{x}=2$ and $\sigma=1$ .
We assess estimator performance under two regimes: (i) low-dimensional, with $p=100$ and $n∈\{200,600,1000,...,5000\}$ ; and (ii) high-dimensional, with $n=500$ and $p∈\{500,750,1000,...,3000\}$ . All results are averaged over 1000 Monte Carlo replications.
We compare the following estimators:
- The original sparse IV estimator defined in (7).
- The IV-Lasso estimator (Section 5.1), with tuning selected by standard cross-validation.
- The IV-Lasso estimator, with tuning selected by the 1-se rule.
Figure S6 reports the $L_{1}$ estimation errors across both regimes. The original IV-Lasso method performs worse than the SIV estimator. In contrast, IV-Lasso-1SE, by applying the 1-se rule, achieves estimation accuracy comparable to that of SIV in both low- and high-dimensional settings.
(a) Low-dimensional case: $p=100$ , $n$ varies from $200$ to $5000$ .
(b) High-dimensional case: $n=500$ , $p$ varies from $500$ to $3000$ .
Figure S6: $L_{1}$ estimation errors of SIV ( $\blacksquare$ ), IV-Lasso ( $×$ ), and IV-Lasso-1SE ( $\blacklozenge$ ), based on 1000 Monte Carlo replications.
S.7.6 Simulation Results for Statistical Inference
We include additional simulation results to evaluate the performance of statistical inference procedures. Under the original setting described in Section 5.1, we assess the empirical coverage of confidence intervals for $\beta_{1}$. To this end, we apply various methods to select the set of causal variables, denoted by $\widehat{A}$, and construct 95% confidence intervals using the ivreg function. Specifically, we call ivreg with the outcome specified as $Y$, the treatment as $X_{\{1\}\cup\widehat{A}}$ (where $\widehat{A}$ denotes the set of causal variables selected by a given algorithm, such as SIV, IV-Lasso with cross-validation, or IV-Lasso-1se), and the instrument as the constructed synthetic instrument $SIV$, and obtain a 95% confidence interval for $\beta_{1}$. The simulation results are summarised in Figure S7.
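For readers outside R, the interval construction can be mimicked with a minimal two-stage least squares sketch. The homoskedastic variance formula below matches the default normal-approximation interval that ivreg-style software reports; the data-generating step is a hypothetical stand-in with two valid observed instruments rather than a constructed SIV.

```python
import numpy as np

def tsls_ci(Y, X, Z, j=0):
    """2SLS estimate with a 95% normal-approximation CI for coefficient j,
    assuming homoskedastic errors."""
    n, p = X.shape
    Xhat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)   # first-stage fitted values
    beta = np.linalg.solve(Xhat.T @ X, Xhat.T @ Y)
    resid = Y - X @ beta                           # residuals use the raw X
    sigma2 = resid @ resid / (n - p)
    se = np.sqrt(sigma2 * np.linalg.inv(Xhat.T @ Xhat)[j, j])
    return beta[j] - 1.96 * se, beta[j] + 1.96 * se

rng = np.random.default_rng(2)
n = 2000
Z = rng.normal(size=(n, 2))                  # stand-in instruments
U = rng.normal(size=n)                       # unmeasured confounder
X = (Z @ np.ones(2) + U + rng.normal(size=n)).reshape(-1, 1)
Y = 2.0 * X[:, 0] + U + rng.normal(size=n)
lo, hi = tsls_ci(Y, X, Z)
print(lo, hi)   # an interval around the true coefficient 2.0
```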
Our findings indicate that both the IV-Lasso-1SE and SIV methods yield reasonably accurate inference results. In contrast, the original IV-Lasso method performs poorly in the high-dimensional setting, primarily due to inconsistent variable selection when the tuning parameter is chosen via cross-validation.
(a) Low-dimensional case: $p=100$ , $n$ varies from $200$ to $5000$ .
(b) High-dimensional case: $n=500$ , $p$ varies from $500$ to $3000$ .
Figure S7: Inference results for SIV ( $\blacksquare$ , blue), IV-Lasso ( $×$ , grey), and IV-Lasso-1SE ( $\blacklozenge$ , black), based on 1000 Monte Carlo runs.
S.7.7 SIV Method for Count Data
We extend the SIV method to accommodate count data models (Mullahy, 1997). The cited work considers a Poisson regression model with unmeasured confounders, where $Y∈\{0,1,2,...\}$ and
$$
\mathbb{E}(Y\mid X,U)=\exp(X^{\top}\beta+g(U)).
$$
Because the response is count-valued, a direct logarithmic transformation is not applicable. If the confounder $U$ were observed and $g(U)$ were linear, then $\beta$ could be consistently estimated via standard Poisson regression using the glm function with a log link. However, if $U$ is unobserved, Mullahy (1997) proposes using an instrumental variable $Z$ , in which case the following moment condition holds:
$$
\mathbb{E}\left\{\frac{Y}{\exp(X^{\top}\beta)}-1\mid Z\right\}=0,
$$
provided that $\beta$ equals the true parameter value.
In the absence of observed instruments, a synthetic instrument can be constructed, allowing us to proceed analogously to Equation (10) in the main manuscript. Specifically, we consider the optimization problem:
$$
\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{p}}\left\|\bm{SIV}({\bm{SIV}^{\top}\bm{SIV}})^{-1}\bm{SIV}^{\top}\left\{\frac{\bm{Y}}{\exp(\bm{X}\beta)}-1\right\}\right\|_{2}^{2}\quad\text{subject to }\|\beta\|_{0}\leq k,
$$
where $\frac{\bm{Y}}{\exp(\bm{X}\beta)}-1$ is an $n× 1$ vector with the $i$ th element given by $Y_{i}/\exp(X_{i}^{\top}\beta)-1$ . Note that Equation (10) is not directly applicable in the Poisson setting, since the residual $Y-\exp(X^{\top}\beta)$ remains dependent on the instrument under the Poisson data-generating process.
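For concreteness, the projected objective in the display can be evaluated as follows. This is a minimal sketch in which the $\ell_{0}$ constraint is omitted (it would be handled by a solver such as abess), `S` stands for the synthetic instrument matrix, and the quick check uses hypothetical unconfounded data with $S=X$ as a stand-in instrument matrix.

```python
import numpy as np

def siv_poisson_objective(beta, Y, X, S):
    """|| P_S (Y / exp(X beta) - 1) ||_2^2, where P_S projects onto the
    columns of the (synthetic) instrument matrix S."""
    r = Y / np.exp(X @ beta) - 1.0          # multiplicative residual
    proj = S @ np.linalg.solve(S.T @ S, S.T @ r)
    return float(proj @ proj)

# Sanity check on unconfounded toy data with S = X as a stand-in
# instrument matrix: the objective is smaller at the true beta than at
# a perturbed value.
rng = np.random.default_rng(3)
n, p = 4000, 3
X = rng.normal(size=(n, p))
beta_true = np.array([0.3, 0.3, 0.0])
Y = rng.poisson(np.exp(X @ beta_true)).astype(float)
S = X
print(siv_poisson_objective(beta_true, Y, X, S)
      < siv_poisson_objective(beta_true + 0.2, Y, X, S))   # True
```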
To illustrate the effectiveness of Equation (S48), we conduct a simulation study under a confounded Poisson regression framework. We set $q=2$ , $s=2$ , and $p=10$ . The treatment model is given by $X=\Lambda U+\epsilon_{x}$ , and the outcome $Y$ is generated from a Poisson distribution:
$$
Y_{i}\sim\text{Poisson}(\lambda_{i}),\quad\lambda_{i}=\exp(X_{i}^{\top}\beta+U_{i}^{\top}\gamma),
$$
where $\beta=(0.3,0.3,0,0,...,0)^{\top}∈\mathbb{R}^{10}$ . Each element of $\Lambda_{j,k}$ and $\gamma_{k}$ is independently drawn from $\mathcal{N}(0,1)$ for $j=1,...,p$ and $k=1,...,q$ . The latent variables $U_{i,k}$ are i.i.d. standard normal, and the noise terms are generated as $\epsilon_{x}\sim\mathcal{N}(0,\sigma_{x}^{2}I_{p})$ and $\epsilon_{y}\sim\mathcal{N}(0,\sigma^{2})$ , with $\sigma_{x}=2$ and $\sigma=1$ .
We assess estimation performance for $n∈\{1000,2000,...,5000\}$ . All results are based on 1,000 Monte Carlo replications. The $\ell_{1}$ estimation errors are reported in Figure S8. The results indicate that the proposed algorithm performs well in the confounded Poisson regression setting.
Figure S8: SIV method for confounded Poisson regression.
S.7.8 Additional Discussion of the Trim Method
S.7.8.1 An update on the implementation of the Trim method
Before discussing the performance of the Trim method, we note an update in the implementation code, which was adapted from https://github.com/zijguo/Doubly-Debiased-Lasso/blob/main/R/utils.R. On line 33 of their code, the coefficient is extracted as `betahat = as.matrix(coef(fit, S = fit$lambda.min)[-1])`, where fit is the cv.glmnet object. However, the argument should be written with a lowercase s rather than an uppercase S. When specified as S, R treats the argument s as missing and defaults to s = "lambda.1se" within the cv.glmnet object.
In our revised implementation, we corrected this line to
`betahat = as.matrix(coef(fit, s = fit$lambda.min)[-1])`,
consistent with the description in Ćevid et al. (2020a): “In all simulations, unless stated otherwise, the penalty level is chosen by cross-validation.” Accordingly, we updated the simulation results in Section 5.1 of the manuscript. This correction leads to a higher observed false discovery rate for the Trim method.
S.7.8.2 The Trim method
For the Trim method, the reasons for its poor performance differ between low- and high-dimensional settings:
- Low-dimensional setting: The Trim method is inconsistent because its consistency requires a stringent assumption, namely $\|b\|_{2}=O(1/\sqrt{n})$ (Ćevid et al., 2020a, Remark 5), where $b$ denotes the bias from unmeasured confounding variables. This condition typically holds in high-dimensional settings but not in fixed-dimensional (low-dimensional) cases, where $\|b\|_{2}$ remains constant.
- High-dimensional setting: The SIV method consistently outperforms the Trim method, mainly due to two factors:
- Improved variable selection – As shown in Figure 3(b) of the manuscript, the SIV estimator identifies causal variables more accurately.
- Reduced shrinkage bias – The Trim method employs $\ell_{1}$ -penalization, which induces shrinkage and biases estimates toward zero. In contrast, the SIV method uses $\ell_{0}$ -optimization, which mitigates shrinkage and better preserves signal strength.
Motivated by your Comment 2, we note that the variable selection with the Trim estimator may also be improved by replacing cross-validation with the one-standard-error (1se) rule. In addition, the penalization effect can be mitigated by refitting. To illustrate these improvements, we consider three variants of the Trim estimator:
- The original Trim transformation with the tuning parameter selected by cross-validation.
- The Trim transformation with the tuning parameter $\lambda$ selected using the 1se rule.
- The Trim-1se estimator with an additional refitting step on the selected variables:
$$
\begin{split}&\widehat{\mathcal{A}}=\{j:\widehat{\beta}_{j}^{\;\text{Trim-1se}}\neq 0,\;\;\widehat{\beta}^{\;\text{Trim-1se}}\text{ is the Trim estimator with the 1se rule}\},\\
&\widehat{\beta}^{\;\text{Trim-1se-refit}}:=\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{p},\;\beta_{\widehat{\mathcal{A}}^{c}}=0}\;\;\|F_{\text{Trim}}(Y-X\beta)\|_{2}^{2},\end{split}
$$
where $F_{\text{Trim}}\in\mathbb{R}^{n\times n}$ is the trimming transformation defined in Ćevid et al. (2020a).
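The refitting step above can be sketched numerically. The following Python sketch (our illustration for this response, not the R implementation used in the simulations) builds the trimming transform from the SVD of $X$ and refits by ordinary least squares on the support selected by the Trim-1se estimate:

```python
import numpy as np

def trim_transform(X, Y):
    """Trim transform of Cevid et al. (2020a): cap the singular values of X
    at the median singular value, and apply the same map F_Trim to X and Y."""
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    d_capped = np.minimum(d, np.median(d))
    F = U @ np.diag(d_capped / d) @ U.T   # F_Trim, an n x n linear map
    return F @ X, F @ Y

def trim_1se_refit(X, Y, beta_1se):
    """Refit by least squares, on the trimmed data, using only the
    variables selected by the 1se-rule Trim estimate beta_1se."""
    Xt, Yt = trim_transform(X, Y)
    A = np.flatnonzero(beta_1se)          # estimated active set
    beta = np.zeros(X.shape[1])
    if A.size:
        beta[A], *_ = np.linalg.lstsq(Xt[:, A], Yt, rcond=None)
    return beta
```

The refit removes the $\ell_{1}$ shrinkage on the selected coordinates while keeping the trimming transform, mirroring the Trim-1se-refit estimator defined above.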
<details>
<summary>2304.01098v4/Figures/revision2_fig/trim_fdr.png Details</summary>

### Visual Description
A line plot titled “High-d case, n = 500, s = 3”, with `p` (1000 to 3000) on the x-axis and FDR (0.00 to 1.00) on the y-axis; the legend on the right lists SIV (blue), Trim (purple), Trim+1se (green), and Trim+1se+refit (red). SIV stays near an FDR of 0.01 across all `p`; Trim increases steadily from roughly 0.60 at `p = 1000` to 0.80 at `p = 3000`; Trim+1se+refit rises gradually from roughly 0.05 to 0.15; the Trim+1se line is not separately visible, consistent with it coinciding with Trim+1se+refit.
</details>
(a) $n=500$ , $p$ varies from 500 to 3000.
<details>
<summary>2304.01098v4/Figures/revision2_fig/trim_l1.png Details</summary>

### Visual Description
A line plot titled “High-d case, n = 500, s = 3”, with `p` (1000 to 3000) on the x-axis and the $\ell_{1}$ bias (0 to 3) on the y-axis; the legend on the right lists SIV (blue), Trim (purple), Trim+1se (green), and Trim+1se+refit (red). SIV remains flat near 0.5; Trim increases steadily from roughly 2.0 to 2.9; Trim+1se hovers around 2.3–2.5; Trim+1se+refit rises gradually from roughly 0.7 to 1.1.
</details>
(b) $n=500$ , $p$ varies from 500 to 3000.
Figure S9: FDR and estimation results comparing the SIV method with various Trim methods. The false discovery rates of Trim-1se and Trim-1se-refit coincide because Trim-1se-refit uses the same subset of variables for refitting.
We evaluated the performance of the SIV, Trim, Trim-1se, and Trim-1se-refit estimators in the high-dimensional setting described in Section 5.1 of the manuscript. Figure S9 summarizes the simulation results. Figure S9(a) reports the false discovery rates (FDRs). The original Trim estimator exhibits a high FDR, which is substantially reduced when combined with the 1se rule. Figure S9(b) presents the $\ell_{1}$ -estimation errors. The Trim-1se-refit estimator performs significantly better than the original version, demonstrating that its performance can be empirically improved through the 1se rule and refitting. Nevertheless, the proposed SIV method remains the most favorable among all estimators considered.
Why does our method still perform better than Trim+1se+refit?
We believe that the superior performance of our estimator stems from the fact that the SIV method achieves identification of the causal parameter $\beta$ in the population sense, whereas the Trim transformation does not correspond to any identifiable population target.
To illustrate this distinction, we examine several “oracle” variants of the SIV and Trim estimators, assuming that the active set $\mathcal{A}:=\{j:\beta_{j}\neq 0\}$ is known.
- Let $\widehat{X}=\widehat{\mathbb{E}}(X\mid SIV)$ . The SIV-oracle estimator is defined as
$$
\widehat{\beta}:=\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{p},\;\beta_{\mathcal{A}^{c}}=0}\|Y-\widehat{X}\beta\|_{2}^{2}.
$$
This estimator corresponds to the oracle version of the SIV method.
- Following Ćevid et al. (2020a), let $\widetilde{X}:=F_{\text{Trim}}X$ and $\widetilde{Y}:=F_{\text{Trim}}Y$ , where $F_{\text{Trim}}X$ caps all singular values of $X$ that exceed the median singular value at that threshold, while leaving smaller singular values unchanged. The Trim-oracle estimator is defined as
$$
\widehat{\beta}:=\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{p},\;\beta_{\mathcal{A}^{c}}=0}\|\widetilde{Y}-\widetilde{X}\beta\|_{2}^{2}.
$$
This estimator corresponds to the oracle version of the Trim method.
- We further introduce a new variant that directly targets the directions of unmeasured confounding. Specifically, as discussed in Ćevid et al. (2020a), the top $q$ singular values of $X$ correspond to the $q$ unmeasured confounders, while the remaining singular values capture the signal of the causal variables. Intuitively, setting the top $q$ singular values to zero removes the influence of unmeasured confounders while retaining the signal from the causal variables. Formally, let $\widetilde{X}:=F_{\text{Trim},q}X$ and $\widetilde{Y}:=F_{\text{Trim},q}Y$ , where $F_{\text{Trim},q}X$ sets the top $q$ singular values of $X$ to zero and leaves the remaining singular values unchanged. The Trim-oracle-top- $q$ estimator is defined as
$$
\widehat{\beta}:=\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{p},\;\beta_{\mathcal{A}^{c}}=0}\|\widetilde{Y}-\widetilde{X}\beta\|_{2}^{2}.
$$
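For concreteness, the $F_{\text{Trim},q}$ transform and the oracle refit on a known active set can be sketched as follows (a minimal Python sketch under the stated SVD conventions, not the simulation code):

```python
import numpy as np

def trim_top_q_transform(X, q):
    """F_{Trim,q}: the n x n map that sets the top q singular values of X
    to zero and leaves the remaining singular values unchanged."""
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    scale = np.ones_like(d)
    scale[:q] = 0.0                       # remove the top-q directions
    return U @ np.diag(scale) @ U.T

def trim_oracle_top_q(X, Y, active, q):
    """Oracle restricted least squares on the transformed data,
    with the true active set assumed known."""
    F = trim_top_q_transform(X, q)
    Xt, Yt = F @ X, F @ Y
    beta = np.zeros(X.shape[1])
    beta[active], *_ = np.linalg.lstsq(Xt[:, active], Yt, rcond=None)
    return beta
```

Intuitively, the top $q$ singular directions absorb the unmeasured confounders, so zeroing them removes the confounding bias while leaving the remaining signal intact, as in the Trim-oracle-top-$q$ estimator above.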
We focus on the high-dimensional setting described in Section 5.1, with a minor adjustment to the outcome model:
$$
Y_{i}=X_{i}\beta+U_{i}\gamma+\epsilon_{y,i}.
$$
Here, we set $\epsilon_{y,i}=0$ to isolate the impact of unmeasured confounders, excluding the influence of independent noise. The confounding parameters are specified as $\gamma_{1}=\cdots=\gamma_{q}=\sqrt{p/500}$ , so that the strength of confounding grows slightly with the number of treatments $X$ . Let $b$ denote the bias for $\beta$ introduced by the unmeasured confounder $U$ . We can show that the bias for the oracle variable satisfies $\|b_{\mathcal{A}}\|_{2}^{2}=o\!\left(\tfrac{1}{p}\right)$ under the scenario considered here. All other aspects of the data-generating mechanism remain the same as in Section 5.1.
<details>
<summary>2304.01098v4/Figures/revision2_fig/trim_oracle_new.png Details</summary>

### Visual Description
A line plot titled “High-d case, n = 500, s = 3”, with `p` (1000 to 3000) on the x-axis and the $\ell_{1}$ bias (0.00 to 0.08) on the y-axis; the legend on the right lists SIV-oracle (blue), Trim-oracle (purple), and Trim-oracle-top-q (green). SIV-oracle and Trim-oracle-top-q are both essentially flat around 0.022–0.024 and nearly overlap; Trim-oracle increases steadily from roughly 0.045 at `p = 1000` to 0.078 at `p = 3000`.
</details>
Figure S10: Estimation results comparing various oracle estimators. The SIV-oracle and Trim-oracle-top- $q$ estimators perform nearly identically, so their lines overlap.
Figure S10 summarizes the simulation results for the oracle estimators. As shown, when $\mathcal{A}$ , the set of causal variables, is known a priori, the SIV estimator better recovers the true causal relationship $Y\sim X_{\mathcal{A}}$ compared to the Trim transformation. The comparison between Trim-oracle-top- $q$ and SIV-oracle demonstrates that our estimator is equivalent to removing the top $q$ singular values of $X$ , corresponding to the unmeasured confounders. This simulation also suggests that the Trim estimator’s performance can be improved by modifying its singular value adjustment strategy, as implemented in Trim-oracle-top- $q$ . These results reinforce our interpretation that the SIV method’s superior performance arises from its population-level identification of the causal parameter, rather than from arbitrary spectral regularization.
S.7.9 Simulation results for nonlinear outcome models with nondiagonal $\text{Cov}(\epsilon_{x})$
We provide additional simulation results to evaluate the performance of the proposed estimator in (10) under nonlinear outcome models with nondiagonal covariance structures for $\text{Cov}(\epsilon_{x})$ . Notably, the GMM procedure does not require $\text{Cov}(\epsilon_{x})$ to be diagonal. Even when $\text{Cov}(\epsilon_{x})$ is nondiagonal, the moment condition $SIV\perp\!\!\!\perp Y-f(X;\beta)$ continues to hold, allowing valid application of the GMM framework for estimating the nonlinear causal function.
When $\text{Cov}(\epsilon_{x})$ is nondiagonal, it is necessary to estimate the latent factor loading matrix $\Lambda$ using alternative methods. In low-dimensional settings where $\text{Cov}(\epsilon_{x})$ is assumed sparse, we apply the stable principal component pursuit approach (Zhou et al., 2010). For high-dimensional scenarios, the POET estimator (Fan et al., 2013a) provides a viable alternative.
In this simulation, we induce a nondiagonal covariance structure by setting $D_{i,j}=D_{j,i}=1$ for four selected pairs $(i,j)\in\{(2,4),(5,6),(5,9),(6,10)\}$ , and $D_{i,i}=4$ for $i=1,...,10$ . All other aspects of the data-generating mechanism remain unchanged. The low-rank structure $\widehat{\Lambda}\widehat{\Lambda}^{\mathrm{\scriptscriptstyle T}}$ is estimated via stable principal component pursuit, from which we recover $\widehat{\Lambda}$ . We then implement the SIV method from (10), along with the U-hat1 and U-hat2 methods described in Section 5.2.
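To illustrate the factor-recovery step, suppose stable principal component pursuit has returned a low-rank estimate $\widehat{L}\approx\widehat{\Lambda}\widehat{\Lambda}^{\mathrm{\scriptscriptstyle T}}$ (the pursuit step itself is not reproduced here). Then $\widehat{\Lambda}$ can be recovered, up to an orthogonal rotation, from the top $q$ eigenpairs (a minimal sketch):

```python
import numpy as np

def loading_from_lowrank(L, q):
    """Recover a p x q loading matrix from a symmetric positive
    semidefinite low-rank estimate L ~ Lambda Lambda^T; the result
    is unique only up to an orthogonal rotation on the right."""
    vals, vecs = np.linalg.eigh(L)              # ascending eigenvalues
    top = np.argsort(vals)[::-1][:q]            # q largest eigenvalues
    return vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))
```

The rotation ambiguity is harmless here: any $\widehat{\Lambda}O$ with $O$ orthogonal yields the same $\widehat{\Lambda}\widehat{\Lambda}^{\mathrm{\scriptscriptstyle T}}$ and the same column space, and the SIV construction depends on $\widehat{\Lambda}$ only through its column space.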
Figure S11 presents the results. Across both nonlinear settings, only the SIV method yields consistent estimates of $\beta$ . In contrast, U-hat1 and U-hat2 exhibit substantial bias, particularly under the exponential outcome model, where their $\ell_{1}$ errors remain large even as the sample size increases.
<details>
<summary>2304.01098v4/Figures/revision2_fig/non_diagnonalX3.png Details</summary>

### Visual Description
A line plot titled $Y=X^{3}\beta+U^{3}\gamma+\epsilon_{y}$ , with the sample size `n` on the x-axis and the $\ell_{1}$ error on the y-axis. The red line (SIV) drops sharply, from roughly 0.012 to 0.006, as `n` grows; the green line (U-hat1) decreases more slowly, from roughly 0.024 to 0.017; the blue line (U-hat2) remains highest, around 0.028–0.030, with no decline.
</details>
(a) Nonlinear setting 1.
<details>
<summary>2304.01098v4/Figures/revision2_fig/non_diagnonalexp.png Details</summary>

### Visual Description
A line plot titled $Y=\exp(X\beta)+U^{3}\gamma+\epsilon_{y}$ , with `n` on a logarithmic x-axis and the $\ell_{1}$ error on the y-axis. The red line (SIV) declines steeply, from roughly 0.75 to 0.40; the blue line (U-hat2) decreases from roughly 1.01 to 0.80; the green line (U-hat1) stays around 1.00–1.05 with no clear decline.
</details>
(b) Nonlinear setting 2.
Figure S11: Simulation results for nonlinear models with $p=10$ and $n=1000,2000,...,5000$ . Methods shown: SIV (red), U-hat1 (green), U-hat2 (blue).
Appendix S.8 Further discussions on the U-hat1 Method
In Section 5.2 of our manuscript, we considered the so-called U-hat1 method as a comparison procedure in our simulation study. Recall that the U-hat1 method for the linear outcome model ( $Y=X^{\mathrm{\scriptscriptstyle T}}\beta+U^{\mathrm{\scriptscriptstyle T}}\gamma+\epsilon_{y}$ ) proceeds as follows:
- Estimate $U$ by $\widehat{U}=X\widehat{\gamma}$ , where $\widehat{\gamma}=\widehat{\Sigma}_{X}^{-1}\widehat{\Lambda}$ .
- Run the regression $Y\sim X+\widehat{U}$ subject to the constraint $\|\beta\|_{0}\leq k$ , where $k$ is a tuning parameter.
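For small $p$, these two steps can be sketched with an exhaustive search over supports of size $k$ (a simplification we adopt purely for illustration; in the manuscript the $\ell_{0}$ constraint is handled by off-the-shelf optimization software):

```python
import numpy as np
from itertools import combinations

def u_hat1(X, Y, Lam, k):
    """U-hat1: form U_hat = X Sigma_X^{-1} Lambda_hat, then best-subset
    regression of Y on (X, U_hat) subject to ||beta||_0 <= k.
    Supports of size k are enumerated exhaustively (small p only)."""
    n, p = X.shape
    Sigma = X.T @ X / n
    U_hat = X @ np.linalg.solve(Sigma, Lam)        # n x q estimate of U
    best_beta, best_rss = np.zeros(p), np.inf
    for S in combinations(range(p), k):
        S = list(S)
        Z = np.hstack([X[:, S], U_hat])
        coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
        rss = np.sum((Y - Z @ coef) ** 2)
        if rss < best_rss:
            beta = np.zeros(p)
            beta[S] = coef[:k]
            best_beta, best_rss = beta, rss
    return best_beta
```

In the linear outcome model this coincides with the SIV method, as discussed next, so with the correct $\Lambda$ it recovers the causal coefficients despite the unmeasured confounding.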
In the linear outcome model, the U-hat1 method coincides with our proposed SIV method. However, as shown in Section 5.2, under more general and realistic nonlinear models, our approach enables both the identification and estimation of $f$ , whereas the U-hat1 method does not. In what follows, we explain why the U-hat1 method aligns with our proposed approach in the linear setting and why it fails in the nonlinear setting.
Comparison of U-hat1 and SIV Methods
Equivalence under the Linear Outcome Model
We first discuss the equivalence between the U-hat1 and SIV methods under the linear outcome model. Let $\bm{Y}\in\mathbb{R}^{n}$ denote the vector of outcomes, $\bm{X}\in\mathbb{R}^{n\times p}$ the matrix of treatments, and $\widehat{\bm{U}}\in\mathbb{R}^{n\times q}$ the matrix of estimated confounders. The U-hat1 method regresses $\bm{Y}$ on $\bm{X}$ and $\widehat{\bm{U}}$ simultaneously:
$$
\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{p},\;\gamma\in\mathbb{R}^{q}}\|\bm{Y}-\bm{X}\beta-\widehat{\bm{U}}\gamma\|^{2}_{2}\quad\text{subject to }\|\beta\|_{0}\leq k.
$$
In contrast, the second-stage regression of the proposed method solves
$$
\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{p}}\|\bm{Y}-\widehat{\bm{X}}\beta\|^{2}_{2}\quad\text{subject to }\|\beta\|_{0}\leq k.
$$
It can be shown (see Section S.8.1) that
$$
\widehat{\bm{X}}=(I_{n}-\widehat{\bm{U}}(\widehat{\bm{U}}^{\mathrm{\scriptscriptstyle T}}\widehat{\bm{U}})^{-1}\widehat{\bm{U}}^{\mathrm{\scriptscriptstyle T}})\bm{X},
$$
which is the residual from regressing $\bm{X}$ on $\widehat{\bm{U}}$ . Thus, the U-hat1 and SIV methods yield identical results under the linear outcome model.
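This equivalence is an instance of the Frisch–Waugh–Lovell identity and is easy to confirm numerically; the sketch below checks it without the sparsity constraint (dropping the constraint for simplicity, since it restricts both problems to the same support):

```python
import numpy as np

np.random.seed(0)
n, p, q = 200, 5, 2
X = np.random.randn(n, p)
U_hat = np.random.randn(n, q)             # any full-rank n x q matrix works
Y = np.random.randn(n)

# Joint regression Y ~ X + U_hat: keep the block of coefficients on X.
Z = np.hstack([X, U_hat])
coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
beta_joint = coef[:p]

# Residualize X on U_hat, then regress Y on the residualized X-hat.
P_U = U_hat @ np.linalg.solve(U_hat.T @ U_hat, U_hat.T)
X_hat = (np.eye(n) - P_U) @ X
beta_resid, *_ = np.linalg.lstsq(X_hat, Y, rcond=None)

assert np.allclose(beta_joint, beta_resid)   # identical coefficients on X
```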
Inequivalence under a Nonlinear Outcome Model
We now explain why this equivalence does not extend to nonlinear outcome models. Consider the structural equation models:
$$
\displaystyle X \displaystyle=\Lambda U+\epsilon_{x}, \displaystyle Y \displaystyle=f(X;\beta)+g(U)+\epsilon_{y}.
$$
The U-hat1 method entails fitting the nonlinear regression
$$
\bm{Y}\sim f(\bm{X};\beta)+g(\widehat{\bm{U}}).
$$
Since $U$ is unmeasured, the data contain no information about the function $g$ . One might impose a working model, such as $g(U)=U$ , leading to the regression
$$
\bm{Y}\sim f(\bm{X};\beta)+\widehat{\bm{U}},
$$
or equivalently,
$$
\bm{Y}\sim(I_{n}-\widehat{\bm{U}}(\widehat{\bm{U}}^{\mathrm{\scriptscriptstyle T}}\widehat{\bm{U}})^{-1}\widehat{\bm{U}}^{\mathrm{\scriptscriptstyle T}})f(\bm{X};\beta)+\widehat{\bm{U}}.
$$
By contrast, the SIV method employs an estimating-equation approach. Since the synthetic instrument (SIV) is a linear combination of $\epsilon_{x}$ , it is independent of $U$ , and hence of any measurable function $g(U)$ :
$$
SIV\perp\!\!\!\perp U\;\;\Rightarrow\;\;SIV\perp\!\!\!\perp g(U).
$$
Using this property, we construct the moment condition
$$
\mathbb{E}[\,SIV\{Y-f(X;\beta)\}\,]=0,
$$
which is equivalent to fitting the regression
$$
\bm{Y}\sim\bm{SIV}(\bm{SIV}^{\mathrm{\scriptscriptstyle T}}\bm{SIV})^{-1}\bm{SIV}^{\mathrm{\scriptscriptstyle T}}f(\bm{X};\beta).
$$
In general (see Section S.8.1),
$$
\bm{SIV}(\bm{SIV}^{\mathrm{\scriptscriptstyle T}}\bm{SIV})^{-1}\bm{SIV}^{\mathrm{\scriptscriptstyle T}}f(\bm{X};\beta)\;\neq\;(I_{n}-\widehat{\bm{U}}(\widehat{\bm{U}}^{\mathrm{\scriptscriptstyle T}}\widehat{\bm{U}})^{-1}\widehat{\bm{U}}^{\mathrm{\scriptscriptstyle T}})f(\bm{X};\beta),
$$
whenever $f$ is nonlinear. Hence, the equivalence established in the linear case does not hold in nonlinear models.
In our manuscript, we show numerically that the U-hat1 method is inconsistent under nonlinear outcome models, both when using a working specification $g(U)=U$ and even in the unrealistic case where $g(U)$ is correctly specified. In contrast, the proposed SIV method consistently estimates the treatment parameter $\beta$ .
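To make the estimating-equation approach concrete, the sample analogue of the moment condition $\mathbb{E}[\,SIV\{Y-f(X;\beta)\}\,]=0$ can be solved with a nonlinear least-squares routine. The sketch below uses an identity weighting matrix and takes the constructed $SIV$ matrix as given (both simplifications relative to the manuscript):

```python
import numpy as np
from scipy.optimize import least_squares

def siv_gmm(SIV, X, Y, f, beta0):
    """Solve the sample moment condition SIV^T {Y - f(X, beta)} = 0
    in the least-squares sense (identity weighting matrix)."""
    n = len(Y)
    def moments(beta):
        return SIV.T @ (Y - f(X, beta)) / n
    return least_squares(moments, beta0).x
```

With a linear $f$ this reduces to instrumental-variable least squares; for a nonlinear $f$ the routine performs a Gauss–Newton search over $\beta$.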
S.8.1 A Proposition and Its Proof
**Proposition S.3**
*Consider the low-dimensional setting where $p<n$ . Let $X\in\mathbb{R}^{n\times p}$ denote the design matrix of treatments, and let $f(X;\beta)\in\mathbb{R}^{n\times 1}$ be the vector of causal effects. Recall that
$$
\begin{split}&SIV=XB_{\widehat{\Lambda}^{\perp}},\\
&\widehat{U}=X\;\widehat{\text{Cov}}^{-1}(X)\widehat{\Lambda}.\end{split}
$$
We have the following results:
$$
\begin{split}&\{I_{n}-SIV(SIV^{\mathrm{\scriptscriptstyle T}}SIV)^{-1}SIV^{\mathrm{\scriptscriptstyle T}}-\widehat{U}(\widehat{U}^{\mathrm{\scriptscriptstyle T}}\widehat{U})^{-1}\widehat{U}^{\mathrm{\scriptscriptstyle T}}\}X=0,\quad\text{(S54)}\\
&\{I_{n}-SIV(SIV^{\mathrm{\scriptscriptstyle T}}SIV)^{-1}SIV^{\mathrm{\scriptscriptstyle T}}-\widehat{U}(\widehat{U}^{\mathrm{\scriptscriptstyle T}}\widehat{U})^{-1}\widehat{U}^{\mathrm{\scriptscriptstyle T}}\}f(X;\beta)\neq 0\quad\text{if $f(X;\beta)$ is nonlinear in $X$.}\quad\text{(S55)}\end{split}
$$*
Proof of (S54).
Note that $(B_{\widehat{\Lambda}^{\perp}},\widehat{\text{Cov}}^{-1}(X)\widehat{\Lambda})\in\mathbb{R}^{p\times p}$ is invertible. The columns of $(B_{\widehat{\Lambda}^{\perp}},\widehat{\text{Cov}}^{-1}(X)\widehat{\Lambda})$ therefore form a basis of $\mathbb{R}^{p}$ . Thus, any $\alpha\in\mathbb{R}^{p}$ can be written as
$$
\alpha=B_{\widehat{\Lambda}^{\perp}}\alpha_{1}+\widehat{\text{Cov}}^{-1}(X)\widehat{\Lambda}\alpha_{2},
$$
where $\alpha_{1}\in\mathbb{R}^{p-q}$ and $\alpha_{2}\in\mathbb{R}^{q}$ . We now show that, for any $\alpha\in\mathbb{R}^{p}$ ,
$$
\{I_{n}-SIV(SIV^{\top}SIV)^{-1}SIV^{\top}-\widehat{U}(\widehat{U}^{\top}\widehat{U})^{-1}\widehat{U}^{\top}\}X\alpha=0,
$$
which establishes (S54).
For the first term of (S54), we have
$$
X\alpha=X\{B_{\widehat{\Lambda}^{\perp}}\alpha_{1}+\widehat{\text{Cov}}^{-1}(X)\widehat{\Lambda}\alpha_{2}\}=SIV\alpha_{1}+\widehat{U}\alpha_{2}.
$$
For the second term of (S54), we compute
$$
\begin{split}&\{SIV(SIV^{\mathrm{\scriptscriptstyle T}}SIV)^{-1}SIV^{\mathrm{\scriptscriptstyle T}}+\widehat{U}(\widehat{U}^{\mathrm{\scriptscriptstyle T}}\widehat{U})^{-1}\widehat{U}^{\mathrm{\scriptscriptstyle T}}\}X\alpha\\
&=\{SIV(SIV^{\mathrm{\scriptscriptstyle T}}SIV)^{-1}SIV^{\mathrm{\scriptscriptstyle T}}+\widehat{U}(\widehat{U}^{\mathrm{\scriptscriptstyle T}}\widehat{U})^{-1}\widehat{U}^{\mathrm{\scriptscriptstyle T}}\}\{SIV\alpha_{1}+\widehat{U}\alpha_{2}\}\\
&=SIV\alpha_{1}+\widehat{U}\alpha_{2},\end{split}
$$
where the last equality uses the orthogonality condition $\widehat{U}^{\mathrm{\scriptscriptstyle T}}SIV=0$ . Thus, (S54) holds.
Before proving (S55), we establish the following claim:
$$
X(X^{\mathrm{\scriptscriptstyle T}}X)^{-1}X^{\mathrm{\scriptscriptstyle T}}=SIV(SIV^{\mathrm{\scriptscriptstyle T}}SIV)^{-1}SIV^{\mathrm{\scriptscriptstyle T}}+\widehat{U}(\widehat{U}^{\mathrm{\scriptscriptstyle T}}\widehat{U})^{-1}\widehat{U}^{\mathrm{\scriptscriptstyle T}}.
$$
Proof of the claim.
Let $A=X(X^{\mathrm{\scriptscriptstyle T}}X)^{-1}X^{\mathrm{\scriptscriptstyle T}}$ and $B=SIV(SIV^{\mathrm{\scriptscriptstyle T}}SIV)^{-1}SIV^{\mathrm{\scriptscriptstyle T}}+\widehat{U}(\widehat{U}^{\mathrm{\scriptscriptstyle T}}\widehat{U})^{-1}\widehat{U}^{\mathrm{\scriptscriptstyle T}}$ . From (S54), we have
$$
(I_{n}-B)A=0.
$$
Moreover,
$$
X(X^{\mathrm{\scriptscriptstyle T}}X)^{-1}X^{\mathrm{\scriptscriptstyle T}}SIV=SIV\quad\text{(a)},\qquad X(X^{\mathrm{\scriptscriptstyle T}}X)^{-1}X^{\mathrm{\scriptscriptstyle T}}\widehat{U}=\widehat{U}\quad\text{(b)}.
$$
Combining (a) and (b), we obtain
$$
\begin{split}AB&=SIV(SIV^{\mathrm{\scriptscriptstyle T}}SIV)^{-1}SIV^{\mathrm{\scriptscriptstyle T}}+\widehat{U}(\widehat{U}^{\mathrm{\scriptscriptstyle T}}\widehat{U})^{-1}\widehat{U}^{\mathrm{\scriptscriptstyle T}}\\
&=B.\end{split}
$$
Finally,
$$
\begin{split}(A-B)(A-B)^{\mathrm{\scriptscriptstyle T}}&=(A-B)(A-B)\\
&=A-AB-BA+B\\
&=0,\end{split}
$$
where the first equality uses the symmetry of $A$ and $B$ , the second follows from idempotence ( $A^{2}=A$ , $B^{2}=B$ ), and the last follows from (S56) and (S57). Thus, the claim is proved.
Proof of (S55).
Consider the decomposition of $f(X;\beta)\in\mathbb{R}^{n\times 1}$ :
$$
\begin{split}f(X;\beta)&=X(X^{\mathrm{\scriptscriptstyle T}}X)^{-1}X^{\mathrm{\scriptscriptstyle T}}f(X;\beta)+(I_{n}-X(X^{\mathrm{\scriptscriptstyle T}}X)^{-1}X^{\mathrm{\scriptscriptstyle T}})f(X;\beta)\\
&=a+b,\end{split}
$$
where $a$ and $b$ denote the linear and nonlinear components of $f(X;\beta)$ , respectively.
We first show that $b\neq 0$ . If $b=0$ , then
$$
f(X;\beta)=X(X^{\mathrm{\scriptscriptstyle T}}X)^{-1}X^{\mathrm{\scriptscriptstyle T}}f(X;\beta)=X\alpha,
$$
for some $\alpha=(X^{\mathrm{\scriptscriptstyle T}}X)^{-1}X^{\mathrm{\scriptscriptstyle T}}f(X;\beta)\in\mathbb{R}^{p\times 1}$ , implying that $f(X;\beta)$ is linear in $X$ . This contradicts the assumption that $f$ is nonlinear. Hence $b\neq 0$ .
Finally, using the claim above,
$$
(I_{n}-B)f(X;\beta)=(I_{n}-A)f(X;\beta)=b\neq 0,
$$
which proves (S55).
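Proposition S.3 can also be verified numerically. In the sketch below, $B_{\widehat{\Lambda}^{\perp}}$ is taken to be a basis of the null space of $\widehat{\Lambda}^{\mathrm{\scriptscriptstyle T}}$ and the uncentered sample covariance $X^{\mathrm{\scriptscriptstyle T}}X/n$ is used, which makes $\widehat{U}^{\mathrm{\scriptscriptstyle T}}SIV=0$ hold exactly in sample (conventions assumed for this sketch):

```python
import numpy as np
from scipy.linalg import null_space

np.random.seed(0)
n, p, q = 100, 6, 2
X = np.random.randn(n, p)
Lam = np.random.randn(p, q)               # plays the role of Lambda-hat

B = null_space(Lam.T)                     # p x (p-q), so Lam^T B = 0
SIV = X @ B
Cov = X.T @ X / n                         # uncentered sample covariance
U_hat = X @ np.linalg.solve(Cov, Lam)

def proj(M):
    """Orthogonal projection onto the column space of M."""
    return M @ np.linalg.solve(M.T @ M, M.T)

R = np.eye(n) - proj(SIV) - proj(U_hat)   # I_n - P_SIV - P_Uhat

assert np.allclose(R @ X, 0.0, atol=1e-8)          # (S54): vanishes on X
f_nonlin = np.tanh(X[:, 0])                        # a nonlinear f(X; beta)
assert not np.allclose(R @ f_nonlin, 0.0)          # (S55): does not vanish
```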
S.8.2 A Necessary Condition
To further explain why the U-hat1 method fails in the nonlinear setting, we have expanded Section S.8.2 of the supplementary material to derive a necessary condition for identification and to provide additional analysis and simulation evidence. Specifically, we show that the U-hat1 method satisfies the identification condition when $f(X;\beta)$ is linear but may fail when $f(X;\beta)$ is nonlinear, even if the unmeasured confounder–outcome relationship $g(U)$ is linear. Below, we include the detailed derivation and results added to the supplementary material.
Specifically, we derive a necessary condition (S65) that the U-hat1 method must satisfy to identify the causal parameter. We further demonstrate, through a counterexample, that this condition may fail when the treatment–outcome relationship is nonlinear, even if the unmeasured confounder–outcome relationship remains linear.
Suppose the data-generating mechanism is $Y=f(X;\beta^{*})+g(U)+\epsilon_{y},$ where $\beta^{*}$ is the true parameter of interest with $\|\beta^{*}\|_{0}=s$ , and $f(X;\beta)$ denotes the causal function parameterized by $\beta$ (e.g., $\exp(X^{\mathrm{\scriptscriptstyle T}}\beta)$ ). We assume that the functional form of $f$ is known.
We focus on the population version of the U-hat1 method, where $\widehat{\Lambda}$ and $\widehat{\text{Cov}}^{-1}(X)$ are replaced by their population counterparts, $\Lambda$ and $\text{Cov}^{-1}(X)$ , respectively. We further assume that the sparsity level of $\beta^{*}$ is known to be $s$ , and that the U-hat1 method is optimized under the constraint $\|\beta\|_{0}=s$ . The population version of the U-hat1 method is defined as
$$
\widehat{U}:=\Lambda^{\top}\text{Cov}^{-1}(X)X,\quad(\widehat{\beta},\widehat{\gamma})=\operatorname*{arg\,min}_{\|\beta\|_{0}=s,\;\gamma\in\mathbb{R}^{q}}\;\mathbb{E}\!\left[\left(Y-f(X;\beta)-\widehat{U}^{\top}\gamma\right)^{2}\right].
$$
To analyze the optimization problem of the U-hat1 method, we define the residuals $f_{r}$ and $Y_{r}$ as follows:
$$
f_{r}(X;\beta)=f(X;\beta)-\eta_{f}\widehat{U},\quad Y_{r}=Y-\eta_{Y}\widehat{U},
$$
where
$$
\eta_{f}=\text{Cov}\{f(X;\beta),\widehat{U}\}\,\text{Cov}^{-1}(\widehat{U})\in\mathbb{R}^{1\times q},\quad\eta_{Y}=\text{Cov}(Y,\widehat{U})\,\text{Cov}^{-1}(\widehat{U})\in\mathbb{R}^{1\times q}.
$$
Using (S60), we obtain
$$
\begin{split}\mathbb{E}\!\left[\big(Y-f(X;\beta)-\widehat{U}^{\top}\gamma\big)^{2}\right]&=\mathbb{E}\!\left[\big(Y_{r}-f_{r}(X;\beta)+\widehat{U}^{\top}(\eta_{Y}^{\top}-\eta_{f}^{\top}-\gamma)\big)^{2}\right]\\
&=\mathbb{E}\!\left[\big(Y_{r}-f_{r}(X;\beta)\big)^{2}\right]+\mathbb{E}\!\left[\big(\widehat{U}^{\top}(\eta_{Y}^{\top}-\eta_{f}^{\top}-\gamma)\big)^{2}\right],\end{split}
$$
where the last equality follows from the orthogonality conditions $\text{Cov}(\widehat{U},Y_{r})=0$ and $\text{Cov}(\widehat{U},f_{r}(X;\beta))=0$ .
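These two orthogonality conditions can also be verified numerically. The following sketch simulates a large sample from an illustrative factor model (the specific $\Lambda$, $f$, and error laws below are our own choices, not the paper's), residualizes $Y$ and $f(X;\beta)$ on $\widehat{U}$ using sample covariances, and checks that the resulting sample covariances with $\widehat{U}$ vanish:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 200_000, 5, 1

# Illustrative data-generating process (assumed values): X = Lambda U + eps_x.
Lam = np.ones((p, q))
U = rng.normal(size=(n, q))
X = U @ Lam.T + rng.normal(size=(n, p))
beta_star = np.arange(1.0, p + 1.0)
Y = X @ beta_star + U[:, 0] + rng.normal(size=n)   # f linear in this toy example

# Uhat = Lambda^T Cov^{-1}(X) X, with the population Cov(X) known in closed form here.
Sigma_x = Lam @ Lam.T + np.eye(p)
u = (X @ np.linalg.inv(Sigma_x) @ Lam)[:, 0]       # q = 1, so one scalar per sample

def residualize(v, u):
    """Project v off u via eta = Cov(v, u) Cov^{-1}(u), as in the definitions of Y_r, f_r."""
    C = np.cov(v, u)
    return v - (C[0, 1] / C[1, 1]) * u

Y_r = residualize(Y, u)
f_r = residualize(X @ beta_star, u)

# Both sample covariances cancel exactly by construction (up to floating point).
print(np.cov(u, Y_r)[0, 1], np.cov(u, f_r)[0, 1])
```

Because the projection coefficients are computed from the same sample covariances, the cancellation is exact up to floating-point error, mirroring the population identities $\text{Cov}(\widehat{U},Y_{r})=0$ and $\text{Cov}(\widehat{U},f_{r}(X;\beta))=0$.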
Recall that
$$
\begin{split}Y_{r}&=Y-\text{Cov}(Y,\widehat{U})\,\text{Cov}^{-1}(\widehat{U})\,\widehat{U}\\
&=\big(f(X;\beta^{*})+g(U)+\epsilon_{y}\big)-\text{Cov}\!\big(f(X;\beta^{*})+g(U),\widehat{U}\big)\,\text{Cov}^{-1}(\widehat{U})\,\widehat{U}\\
&=f_{r}(X;\beta^{*})+\widetilde{\epsilon}_{y},\end{split}
$$
where
$$
\widetilde{\epsilon}_{y}:=\epsilon_{y}+g(U)-\text{Cov}\{g(U),\widehat{U}\}\,\text{Cov}^{-1}(\widehat{U})\,\widehat{U}.
$$
We can further simplify $\widetilde{\epsilon}_{y}$ :
$$
\begin{split}\widetilde{\epsilon}_{y}&=\epsilon_{y}+g(U)-\text{Cov}(g(U),\Lambda^{\top}\text{Cov}^{-1}(X)X)\,\text{Cov}^{-1}(\widehat{U})\,\Lambda^{\top}\text{Cov}^{-1}(X)X\\
&=\epsilon_{y}+g(U)-\text{Cov}(g(U),U)\,\Lambda^{\top}\text{Cov}^{-1}(X)X,\end{split}
$$
where the simplification uses $X=\Lambda U+\epsilon_{x}$ , $U\perp\epsilon_{x}$ , and the linearity of covariance.
Equations (S61) and (S62) suggest that the optimization problem (S59) can be rewritten as
$$
\widehat{\beta}=\operatorname*{arg\,min}_{\|\beta\|_{0}=s}\;\mathbb{E}\!\left[\left(Y_{r}-f_{r}(X;\beta)\right)^{2}\right]=\operatorname*{arg\,min}_{\|\beta\|_{0}=s}\;\mathbb{E}\!\left[\left(f_{r}(X;\beta^{*})+\widetilde{\epsilon}_{y}-f_{r}(X;\beta)\right)^{2}\right].
$$
A necessary condition for $\widehat{\beta}=\beta^{*}$ is
$$
\mathbb{E}\left[\widetilde{\epsilon}_{y}\,\frac{\partial f_{r}(X;\beta)}{\partial\beta}\bigg|_{\beta=\beta^{*}}\right]=0.
$$
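To spell out why this is necessary: writing the population loss as $L(\beta)=\mathbb{E}[(Y_{r}-f_{r}(X;\beta))^{2}]$ and substituting $Y_{r}=f_{r}(X;\beta^{*})+\widetilde{\epsilon}_{y}$, the gradient at the truth is

```latex
\nabla_{\beta}L(\beta)\Big|_{\beta=\beta^{*}}
=-2\,\mathbb{E}\!\left[\big(Y_{r}-f_{r}(X;\beta^{*})\big)\,
\frac{\partial f_{r}(X;\beta)}{\partial\beta}\Big|_{\beta=\beta^{*}}\right]
=-2\,\mathbb{E}\!\left[\widetilde{\epsilon}_{y}\,
\frac{\partial f_{r}(X;\beta)}{\partial\beta}\Big|_{\beta=\beta^{*}}\right],
```

so if the expectation above is nonzero, $\beta^{*}$ is not even a stationary point of $L$, and the constrained minimizer must differ from $\beta^{*}$.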
If (S65) is violated, then there exists a $\widetilde{\beta}$ in the neighborhood of $\beta^{*}$ such that $\widetilde{\beta}$ achieves a smaller loss than $\beta^{*}$ , in the sense that
$$
\mathbb{E}\!\left[\left(Y_{r}-f_{r}(X;\widetilde{\beta})\right)^{2}\right]<\mathbb{E}\!\left[\left(Y_{r}-f_{r}(X;\beta^{*})\right)^{2}\right].
$$
We analyze whether (S65) holds in two scenarios:
1. $f(X;\beta)$ is linear;
2. $f(X;\beta)$ is nonlinear.
Case 1: $f(X;\beta)=X^{\top}\beta$ (linear). When $f(X;\beta)=X^{\top}\beta$ , condition (S65) always holds, providing additional justification for why the U-hat1 method is valid in the linear setting, as discussed in Section S.8 of the supplementary material. We now verify this condition explicitly.
Given $f(X;\beta)=X^{\top}\beta$ and $\widehat{U}=\Lambda^{\top}\text{Cov}^{-1}(X)X$ , we obtain the following expression after a straightforward (though tedious) calculation:
$$
\begin{split}f_{r}(X;\beta)&=X^{\top}\beta-\beta^{\top}\Lambda(\Lambda^{\top}\text{Cov}^{-1}(X)\Lambda)^{-1}\Lambda^{\top}\text{Cov}^{-1}(X)X\\
&=X_{r}^{\top}\beta,\end{split}
$$
where
$$
X_{r}=X-\Lambda(\Lambda^{\top}\text{Cov}^{-1}(X)\Lambda)^{-1}\Lambda^{\top}\text{Cov}^{-1}(X)X.
$$
The optimization problem (S64) then becomes
$$
\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{p},\,\|\beta\|_{0}=s}\;\mathbb{E}\!\left\{X_{r}^{\top}\beta^{*}+\widetilde{\epsilon}_{y}-X_{r}^{\top}\beta\right\}^{2},
$$
which is a least squares problem. Since $\partial f_{r}/\partial\beta=X_{r}$ , condition (S65) reduces to $\mathbb{E}\big[\widetilde{\epsilon}_{y}X_{r}\big]=0$ , and in this setting this condition is both necessary and sufficient for $\widehat{\beta}=\beta^{*}$ .
We now verify that (S65) holds in this case through direct calculation:
$$
\begin{split}\mathbb{E}(\widetilde{\epsilon}_{y}X_{r})&=\text{Cov}\!\left(\widetilde{\epsilon}_{y},X-\Lambda(\Lambda^{\top}\text{Cov}^{-1}(X)\Lambda)^{-1}\Lambda^{\top}\text{Cov}^{-1}(X)X\right)\\
&=\text{Cov}(g(U),X)-\text{Cov}(g(U),U)\Lambda^{\top}\text{Cov}^{-1}(X)\text{Cov}(X)\\
&\quad-\text{Cov}(g(U),X)\text{Cov}^{-1}(X)\Lambda(\Lambda^{\top}\text{Cov}^{-1}(X)\Lambda)^{-1}\Lambda^{\top}\\
&\quad+\text{Cov}(g(U),U)\Lambda^{\top}\text{Cov}^{-1}(X)\text{Cov}(X)\text{Cov}^{-1}(X)\Lambda(\Lambda^{\top}\text{Cov}^{-1}(X)\Lambda)^{-1}\Lambda^{\top}\\
&=0.\end{split}
$$
The first line follows from (S63). In the second line, the first and second terms cancel, and the third and fourth terms also cancel. Thus, in the linear setting, condition (S65) is satisfied, which explains why the U-hat1 method estimates $\beta^{*}$ consistently when $f$ is linear.
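The term-by-term cancellation above can be checked mechanically with population matrix algebra. The sketch below picks arbitrary illustrative values for $\Lambda$, $\text{Cov}(X)$, and $c=\text{Cov}(g(U),U)$ (these are our own assumed inputs; only the identity $\text{Cov}(g(U),X)=\text{Cov}(g(U),U)\Lambda^{\top}$ from the factor model is used) and evaluates the four terms:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 6, 2

# Arbitrary illustrative population quantities (assumptions for this check):
Lam = rng.normal(size=(p, q))                 # factor loadings Lambda
A = rng.normal(size=(p, p))
Sx = A @ A.T + p * np.eye(p)                  # Cov(X), symmetric positive definite
c = rng.normal(size=(1, q))                   # Cov(g(U), U)

Sx_inv = np.linalg.inv(Sx)
M = Sx_inv @ Lam @ np.linalg.inv(Lam.T @ Sx_inv @ Lam) @ Lam.T

cov_gU_X = c @ Lam.T                          # Cov(g(U), X) = Cov(g(U), U) Lambda^T
t1 = cov_gU_X                                 # first term
t2 = c @ Lam.T @ Sx_inv @ Sx                  # second term (cancels t1)
t3 = cov_gU_X @ M                             # third term
t4 = c @ Lam.T @ Sx_inv @ Sx @ M              # fourth term (cancels t3)
E = t1 - t2 - t3 + t4
print(np.max(np.abs(E)))                      # ~0 up to floating point
```

Since $\text{Cov}^{-1}(X)\text{Cov}(X)$ is the identity, the cancellation holds for any positive definite $\text{Cov}(X)$, which is exactly the structure of the display above.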
Case 2: $f(X;\beta)$ nonlinear. In this case, the necessary condition (S65) may fail, even if $g(U)$ is linear in $U$ . We illustrate this with a counterexample. We set $q=1$ , $s=2$ , and $p=10$ . The functions are defined as
$$
f(X;\beta)=\sum_{j=1}^{10}\cos^{3}(X_{j})\beta_{j},\quad g(U)=U\gamma.
$$
We set $\Lambda_{1}=\cdots=\Lambda_{10}=1$ and $\gamma=1$ . The latent variables $U_{i,1}$ are i.i.d. from the uniform distribution $U(0,3)$ for $i=1,\ldots,n$ . The random errors $\epsilon_{x,i,j}$ are i.i.d. from the uniform distribution $U(0,5)$ , and the $\epsilon_{y,i}$ are i.i.d. from the standard uniform distribution $U(0,1)$ . In this example, after a lengthy calculation, we obtain
$$
\mathbb{E}\left[\widetilde{\epsilon}_{y}\,\frac{\partial f_{r}(X;\beta)}{\partial\beta}\bigg|_{\beta=\beta^{*}}\right]=(-0.013,-0.013,\ldots,-0.013)^{\top},
$$
indicating that the necessary condition (S65) fails under a nonlinear $f$ .
We further evaluate the finite-sample performance of our estimator and the U-hat1 method for sample sizes $n\in\{1000,\ldots,5000\}$ . All simulation results are based on 1000 Monte Carlo replications. The estimators considered are:
1. (SIV) We obtain $\widehat{\beta}$ by solving
$$
\widehat{\beta}=\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{10}}\left\|\bm{SIV}(\bm{SIV}^{\top}\bm{SIV})^{-1}\bm{SIV}^{\top}\big(\bm{Y}-\cos^{3}(\bm{X})\beta\big)\right\|^{2}_{2}\quad\text{subject to }\|\beta\|_{0}\leq k.
$$
2. (U-hat1) First, we obtain $\widehat{\bm{U}}\in\mathbb{R}^{n\times q}$ using $\widehat{\bm{U}}=\bm{X}\widehat{\text{Cov}}(X)^{-1}\widehat{\Lambda}$ . Next, we obtain $\widehat{\beta}$ by solving
$$
\widehat{\beta}=\underset{\beta\in\mathbb{R}^{10},\,\gamma\in\mathbb{R}}{\operatorname*{arg\,min}}\|\bm{Y}-\cos^{3}(\bm{X})\beta-\widehat{\bm{U}}\gamma\|_{2}^{2}\quad\text{subject to}\quad\|\beta\|_{0}\leq k.
$$
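Because $f(X;\beta)=\cos^{3}(X)\beta$ is linear in $\beta$, the U-hat1 problem above reduces to best-subset least squares of $\bm{Y}$ on the transformed features $\cos^{3}(\bm{X})$ together with $\widehat{\bm{U}}$. The following sketch implements this exhaustively over supports of size $k$ (feasible here since $\binom{10}{2}=45$); for simplicity it takes $\widehat{\Lambda}$ as given rather than estimating it, and the function name is ours:

```python
import numpy as np
from itertools import combinations

def uhat1_best_subset(X, Y, Lam_hat, k):
    """Sketch of the U-hat1 estimator with an exact L0 constraint.

    For each support S of size k, regress Y on [cos^3(X)_S, Uhat] by OLS and
    keep the support with the smallest residual sum of squares. Lam_hat is
    assumed supplied (a real implementation would estimate it from X).
    """
    n, p = X.shape
    Uhat = X @ np.linalg.inv(np.cov(X, rowvar=False)) @ Lam_hat   # n x q
    F = np.cos(X) ** 3
    best_rss, best_beta = np.inf, None
    for S in combinations(range(p), k):
        D = np.column_stack([F[:, S], Uhat])      # design: selected features + Uhat
        coef = np.linalg.lstsq(D, Y, rcond=None)[0]
        r = Y - D @ coef
        rss = r @ r
        if rss < best_rss:
            best_rss = rss
            best_beta = np.zeros(p)
            best_beta[list(S)] = coef[:k]         # gamma coefficients are discarded
    return best_beta

# Illustrative usage on data from the counterexample DGP:
rng = np.random.default_rng(3)
n, p = 2000, 10
U = rng.uniform(0, 3, size=(n, 1))
X = U @ np.ones((1, p)) + rng.uniform(0, 5, size=(n, p))
Y = 2 * np.cos(X[:, 0]) ** 3 + 1.5 * np.cos(X[:, 1]) ** 3 + U[:, 0] \
    + rng.uniform(0, 1, size=n)
beta_hat = uhat1_best_subset(X, Y, np.ones((p, 1)), k=2)
```

The SIV estimator in item 1 has the same best-subset structure, differing only in the projection matrix built from $\bm{SIV}$, whose construction is given in the main text.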
[Figure: $\ell_{1}$ bias versus sample size $n$ under the model $Y=\cos^{3}(X)\beta+U+\epsilon_{y}$ ; the SIV estimator (red) decreases from about 0.125 at $n=1000$ to about 0.025 at $n=5000$ , while the U-hat1 estimator (blue) stays roughly constant near 1.9.]
Figure S12: Performance of the U-hat1 and SIV methods under nonlinear $f(X;\beta)$ .
The simulation results in Figure S12 show that the U-hat1 method can fail even if $g(U)$ is linear, whereas our proposed method achieves consistent estimation, with estimation error decreasing as the sample size grows.