## InfoGram and Admissible Machine Learning
## Deep Mukhopadhyay
deep@unitedstatalgo.com
## Abstract
We have entered a new era of machine learning (ML), where the most accurate algorithm with superior predictive power may not even be deployable unless it is admissible under regulatory constraints. This has led to great interest in developing fair, transparent, and trustworthy ML methods. The purpose of this article is to introduce a new information-theoretic learning framework (admissible machine learning) and algorithmic risk-management tools (InfoGram, L-features, ALFA-testing) that can guide an analyst to redesign off-the-shelf ML methods to be regulatory compliant, while maintaining good prediction accuracy. We illustrate our approach using several real-data examples from the financial sector, biomedical research, marketing campaigns, and the criminal justice system.
Keywords: Admissible machine learning; InfoGram; L-features; Information theory; ALFA-testing; Algorithmic risk management; Fairness; Interpretability; COREml; FINEml.
## Contents

- 1 Introduction
- 2 Information-Theoretic Principles and Methods
  - 2.1 Notation
  - 2.2 Conditional Mutual Information
  - 2.3 Net-Predictive Information
  - 2.4 Nonparametric Estimation Algorithm
  - 2.5 Model-based Bootstrap
  - 2.6 A Few Examples
- 3 Elements of Admissible Machine Learning
  - 3.1 COREml: Algorithmic Interpretability
    - 3.1.1 From Predictive Features to Core Features
    - 3.1.2 InfoGram and L-Features
    - 3.1.3 COREtree: High-dimensional Microarray Data Analysis
    - 3.1.4 COREglm: Breast Cancer Wisconsin Data
  - 3.2 FINEml: Algorithmic Fairness
    - 3.2.1 FINE-ML: Approaches and Limitations
    - 3.2.2 InfoGram and Admissible Feature Selection
    - 3.2.3 FINEtree and ALFA-Test: Financial Industry Applications
    - 3.2.4 Admissible Criminal Justice Risk Assessment
    - 3.2.5 FINEglm and Application to Marketing Campaign
- 4 Conclusion
- Appendix
  - A.1 Proof of Theorem 1
  - A.2 Two Cultures of Machine Learning
  - A.3 COREtree: Iris Data
  - A.4 Revisiting COMPAS Data
  - A.5 Fair Housing Act's Disparate Impact Standard
  - A.6 Beware of The 'Spurious Bias' Problem
  - A.7 The Algorithmic Accountability Act
  - A.8 EU's Artificial Intelligence Act
## Category: Fairness, Explainability, and Algorithm Bias
Machine learning (ML) methods are rapidly becoming an essential part of automated decision-making systems that directly affect human lives. While substantial progress has been made toward developing more powerful computational algorithms, the widespread adoption of these technologies still faces several barriers, the biggest being adherence to regulatory requirements without compromising too much accuracy. Naturally, the question arises: how does one systematically build such regulatory-compliant, fair, and trustworthy algorithms? This paper offers new statistical principles and information-theoretic graphical exploratory tools that engineers can use to 'detect, mitigate, and remediate' off-the-shelf ML algorithms, thereby making them admissible under appropriate laws and regulatory scrutiny.
## 1 Introduction
First-generation 'prediction-only' machine learning technology has served the tech and eCommerce industry quite well. However, ML is now rapidly expanding beyond its traditional domains into highly regulated or safety-critical areas, such as healthcare, criminal justice, transportation, financial markets, and national security, where achieving high predictive accuracy is often as important as ensuring regulatory compliance and transparency, which underpin trustworthiness. We thus focus on developing admissible machine learning technology that balances fairness, interpretability, and accuracy in the best manner possible. How does one systematically build such algorithms in a fast and scalable manner? This article introduces some new statistical learning theory and information-theoretic graphical exploratory tools to address this question.
Going Beyond 'Pure' Prediction Algorithms. Predictive accuracy is not the be-all and end-all for judging the 'quality' of a machine learning model. Here is a striking example: researchers at the Icahn School of Medicine at Mount Sinai in New York City found (Zech et al., 2018; Reardon, 2019) that a deep-learning algorithm, which showed more than 90% accuracy on the x-rays produced at Mount Sinai, performed poorly when tested on data from other institutions. Later it was found that 'the algorithm was also factoring in the odds of a positive finding based on how common pneumonia was at each institution-not something they expected or wanted.' This sort of unreliable and inconsistent performance
can clearly be dangerous. As a result of these safety concerns, despite lots of hype and hysteria around AI in imaging, only about 30% of radiologists currently use machine learning (ML) in their everyday clinical practice (Allen et al., 2021). To apply machine learning appropriately and safely, especially when human life is at stake, we have to think beyond predictive accuracy. The deployed algorithm needs to be comprehensible (by end-users like doctors, judges, regulators, researchers, etc.) in order to make sure it has learned relevant and admissible features from the data, features that are meaningful in light of investigators' domain knowledge. The fact of the matter is, an algorithm that is solely focused on what is learned, without reasoning about how it learned what it has learned, is not intelligent enough. We next expand on this issue using two real data applications.
Admissible ML for Industry. Consider the UCI Credit Card data (discussed in more detail in Sec. 3.2.3), collected in October 2005 from an important Taiwan-based bank. We have records of $n = 30{,}000$ cardholders. The data consist of a response variable $Y$ denoting default payment status (Yes = 1, No = 0), along with $p = 23$ predictor variables (e.g., gender, education, age, history of past payment, etc.). The goal is to accurately predict the probability of default given the profile of a particular customer.

On the surface, this seems to be a straightforward classification problem for which we have a large inventory of powerful algorithms. Yeh and Lien (2009) performed an exhaustive comparison of six machine learning methods (logistic regression, K-nearest neighbors, neural net, etc.) and finally selected the neural network model, which attained 83% accuracy on an 80-20 train-test split of the data. However, traditionally built ML models are not deployable unless they are admissible under financial regulatory constraints (Wall, 2018; see footnote 1), which demand that (i) the method should not discriminate against people on the basis of protected features (see footnote 2), here gender and age; and (ii) the method should be simple to interpret and transparent (compared to big neural nets or ensemble models like random forest and gradient boosting).

To improve fairness, one may remove the sensitive variables and go back to business as usual by fitting the model on the rest of the features, a practice known as 'fairness through unawareness.' Obviously this is not going to work, because there will be some proxy attributes (e.g., zip code or profession) that share some degree of correlation (information-sharing) with race,
1 The Equal Credit Opportunity Act (ECOA) is a major federal financial regulation law enacted in 1974.
2 https://en.wikipedia.org/wiki/Protected_group
Figure 1: A shallow admissible tree classifier for the UCI credit card data with four decision nodes, which is as accurate as the most complex state-of-the-art ML model.
gender, or age. These proxy variables can then lead to the same unfair results. It is not clear how to define and detect such proxy variables in order to mitigate hidden biases in the data. In fact, in a recent review of algorithmic fairness, Chouldechova and Roth (2020) forthrightly stated:
' But despite the volume and velocity of published work, our understanding of the fundamental questions related to fairness and machine learning remain in its infancy. '
Currently, there exists no systematic method to directly construct an admissible algorithm that can mitigate bias. To quote a real practitioner from a reputed AI industry: 'I ran 40,000 different random forest models with different features and hyper-parameters to search for a fair model.' This ad hoc and inefficient strategy could be a significant barrier to efficient large-scale implementation of admissible AI technologies. Fig. 1 shows a fair and shallow tree classifier with four decision nodes, which attains 82.65% accuracy; it was built in a completely automated manner without any hand-crafted manual tuning. Section 2 will introduce the required theory and methods behind our procedure. The simple and transparent anatomy of the final model makes it easy to convey the key drivers of the model: the variables Pay_0 and Pay_2 (see footnote 3) are the most important indicators of default. These variables have two key characteristics: they are highly predictive and, at the same time, safe to use, in the sense that they share very little predictive information with the sensitive attributes age and gender; for that reason, we call them admissible features. The model also conveys how the key variables impact credit risk: the simple decision tree shown in Fig. 1 is fairly self-explanatory, and its clarity facilitates an easy explanation of its predictions.

3 Pay_0 and Pay_2 denote the repayment status over the last two months (-1 = pay duly, 1 = payment delay for one month, 2 = payment delay for two months, and so on).
Admissible ML for Science. Legal requirements are not the only reason to build admissible ML. In scientific investigations, it is important to know whether the deployed algorithm helps researchers better understand the phenomenon by refining their 'mental model.' Consider, for example, the prostate cancer data, where we have $p = 6033$ gene expression measurements from 52 tumor and 50 normal specimens. Fig. 2 shows a 95% accurate classification model for the prostate data with only two 'core' driver genes! This compact model is admissible in the sense that it confers the following benefits: (i) it identifies a two-gene signature (composed of gene-1627 and gene-2327) as the top factor associated with prostate cancer; the two genes are jointly overexpressed in the tumor samples, yet interestingly they carry very little marginal information (they are not individually differentially expressed, as shown in Fig. 6), so traditional linear-model-based analysis will fail to detect this gene-pair as a key biomarker. (ii) The simple decision tree model in Fig. 2 provides a mechanistic understanding and justification of why the algorithm thinks a patient has prostate cancer or not. (iii) Finally, it provides the needed guidance on what to do next by giving control over the system: a cancer biologist can choose between different diagnosis and treatment plans with the goal of regulating those two oncogenes.
Goals and Organization. The primary goal of this paper is to introduce new fundamental concepts and tools that lay the foundation of admissible machine learning: models that are efficient (enjoy good predictive accuracy), fair (prevent discrimination against minority groups), and interpretable (provide mechanistic understanding) to the best possible extent.
Our statistical learning framework is grounded in the foundational concepts of information theory. The required statistical formalism (nonparametric estimation and inference methods) and information-theoretic principles (entropy, conditional entropy, relative entropy, and conditional mutual information) are introduced in Section 2. A new nonparametric estimation technique for conditional mutual information (CMI) is proposed that scales to large
Figure 2: A two-gene admissible tree classifier for prostate cancer data with $p = 6033$ gene expression measurements on 50 control and 52 cancer patients.
datasets by leveraging the power of machine learning. For statistical inference, we devise a new model-based bootstrap strategy. The method is applied to the problem of conditional independence testing and to integrative genomics (breast cancer multi-omics data from The Cancer Genome Atlas). Based on this theoretical foundation, Section 3 lays out the basic elements of admissible machine learning. Section 3.1 focuses on algorithmic interpretability: how can we efficiently search for and design self-explanatory algorithmic models by balancing accuracy and robustness to the best possible extent? Can we do it in a completely model-agnostic manner? Key concepts and tools introduced in this section are: core features, the infogram, L-features, net-predictive information, and COREml. The procedure is applied to several real datasets, including high-dimensional microarray gene expression datasets (prostate cancer and SRBCT data), the MONK's problems, and the Wisconsin breast cancer data. Section 3.2 focuses on algorithmic fairness, which tackles the challenging problem of designing admissible ML algorithms that are simultaneously efficient, interpretable, and equitable. Several key techniques are introduced in this section: admissible feature selection, ALFA-testing, a graphical risk-assessment tool, and FINEml. We illustrate the proposed methods using examples from the criminal justice system (ProPublica's COMPAS recidivism data), the financial services industry (Adult income data, Taiwan credit card data), and a marketing ad campaign. We conclude the paper in Section 4 by reviewing the challenges and opportunities of next-generation admissible ML technologies.
## 2 Information-Theoretic Principles and Methods
The foundation of admissible machine learning relies on information-theoretic principles and nonparametric methods. The key theoretical ideas and results are presented in this section to develop a deeper understanding of the conceptual basis of our new framework.
## 2.1 Notation
Let $Y$ be the response variable taking values in $\{1, \ldots, k\}$; $\mathbf{X} = (X_1, \ldots, X_p)$ denotes a $p$-dimensional feature vector; and $\mathbf{S} = (S_1, \ldots, S_q)$ is an additional set of $q$ covariates (e.g., a collection of sensitive attributes like race, gender, age, etc.). A variable is called mixed when it can take discrete, continuous, or even categorical values, i.e., completely unrestricted data-types. Throughout, we allow both $\mathbf{X}$ and $\mathbf{S}$ to be mixed. We write $Y \perp\!\!\!\perp \mathbf{X}$ to denote the independence of $Y$ and $\mathbf{X}$, while the conditional independence of $Y$ and $\mathbf{X}$ given $\mathbf{S}$ is denoted by $Y \perp\!\!\!\perp \mathbf{X} \mid \mathbf{S}$. For a continuous random variable, $f$ and $F$ denote the probability density and distribution functions, respectively. For a discrete random variable, the probability mass function is denoted by $p$ with the proper subscript.
## 2.2 Conditional Mutual Information
Our theory starts with an information-theoretic view of conditional dependence. Under conditional independence,
$$Y \perp\!\!\!\perp \mathbf{X} \mid \mathbf{S},$$
the following decomposition holds for all $(y, x, s)$:
$$f_{Y,\mathbf{X}|\mathbf{S}}(y, x \,|\, s) \;=\; f_{Y|\mathbf{S}}(y|s)\, f_{\mathbf{X}|\mathbf{S}}(x|s).$$
More than testing independence, often the real interest lies in quantifying the conditional dependence: the average deviation of the ratio
$$\frac{f_{Y,\mathbf{X}|\mathbf{S}}(y, x \,|\, s)}{f_{Y|\mathbf{S}}(y|s)\, f_{\mathbf{X}|\mathbf{S}}(x|s)}, \qquad (2.1)$$
which can be measured by conditional mutual information (Wyner, 1978).
Definition 1. The conditional mutual information (CMI) between $Y$ and $\mathbf{X}$ given $\mathbf{S}$ is defined as
$$\mathrm{MI}(Y, \mathbf{X} \,|\, \mathbf{S}) \;=\; \iiint_{y,x,s} \log\left( \frac{f_{Y,\mathbf{X}|\mathbf{S}}(y, x | s)}{f_{Y|\mathbf{S}}(y|s)\, f_{\mathbf{X}|\mathbf{S}}(x|s)} \right) f_{Y,\mathbf{X},\mathbf{S}}(y, x, s) \, \mathrm{d}y \, \mathrm{d}x \, \mathrm{d}s. \qquad (2.2)$$
Two Important Properties. (P1) One of the striking features of CMI is that it captures multivariate non-linear conditional dependencies between the variables in a completely nonparametric manner. (P2) CMI provides a necessary and sufficient criterion for conditional independence, in the sense that
$$\mathrm{MI}(Y, \mathbf{X} \,|\, \mathbf{S}) = 0 \ \text{ if and only if } \ Y \perp\!\!\!\perp \mathbf{X} \mid \mathbf{S}.$$
A conditional independence relation can be described using a graphical model (also known as a Markov network), as shown in the figure below:
Figure 3: Representing conditional independence graphically, where each node is a random variable (or random vector). The edge between Y and X passes through S.
## 2.3 Net-Predictive Information
Much of the significance of CMI as a measure of conditional dependence comes from its interpretation as the additional 'information gain' on $Y$ learned through $\mathbf{X}$ when we already know $\mathbf{S}$. In other words, CMI measures the net-predictive information (NPI) of $\mathbf{X}$: the exclusive information content of $\mathbf{X}$ for $Y$ beyond what is already subsumed by $\mathbf{S}$. To formally arrive at this interpretation, we have to look at CMI from a different angle, by expressing it in terms of conditional entropy. Entropy is a fundamental information-theoretic measure of uncertainty: for a random variable $Z$, the entropy is defined as $H(Z) = -\mathbb{E}_Z[\log f_Z(Z)]$.
Definition 2. The conditional entropy $H(Y|\mathbf{S})$ is defined as the expected entropy of $Y \mid \mathbf{S} = s$:
$$H(Y|\mathbf{S}) \;=\; \int_s H(Y \,|\, \mathbf{S} = s) \, \mathrm{d}F_{\mathbf{S}}(s),$$
which measures how much uncertainty remains in Y after knowing S , on average.
Theorem 1. For $Y$ discrete and $(\mathbf{X}, \mathbf{S})$ mixed multidimensional random vectors, $\mathrm{MI}(Y, \mathbf{X} \,|\, \mathbf{S})$ can be expressed as the difference of two conditional-entropy statistics:
$$\mathrm{MI}(Y, \mathbf{X} \,|\, \mathbf{S}) \;=\; H(Y|\mathbf{S}) \,-\, H(Y|\mathbf{S}, \mathbf{X}). \qquad (2.5)$$
The proof involves some standard algebraic manipulations, and is given in Appendix A.1.
Remark 1 (Uncertainty Reduction). The alternative way of defining CMI through eq. (2.5) allows us to interpret it from a new angle: conditional mutual information $\mathrm{MI}(Y, \mathbf{X} \,|\, \mathbf{S})$ measures the net impact of $\mathbf{X}$ in reducing the uncertainty of $Y$, given $\mathbf{S}$. This new perspective will prove to be vital for our subsequent discussions. Note that if $H(Y|\mathbf{S}, \mathbf{X}) = H(Y|\mathbf{S})$, then $\mathbf{X}$ carries no net-predictive information about $Y$.
## 2.4 Nonparametric Estimation Algorithm
The basic formula (2.2) for conditional mutual information (CMI) presented in the earlier section is, unfortunately, not readily applicable, for two reasons. First, the practical side: in its current form, (2.2) requires estimation of $f_{Y,\mathbf{X}|\mathbf{S}}$ and $f_{\mathbf{X}|\mathbf{S}}$, which can be a herculean task, especially when $\mathbf{X} = (X_1, \ldots, X_p)$ and $\mathbf{S} = (S_1, \ldots, S_q)$ are high-dimensional. Second, the theoretical side: since the triplet $(Y, \mathbf{X}, \mathbf{S})$ is mixed (not all discrete or continuous random vectors), the expression (2.2) is not even a valid representation. The necessary reformulation is given in the next theorem.
Theorem 2. Let $Y$ be a discrete random variable taking values in $\{1, \ldots, k\}$, and let $(\mathbf{X}, \mathbf{S})$ be a mixed pair of random vectors. Then the conditional mutual information can be rewritten as
$$\mathrm{MI}(Y, \mathbf{X} \,|\, \mathbf{S}) \;=\; \mathbb{E}_{\mathbf{X},\mathbf{S}} \left[ \mathrm{KL}\left( p_{Y|\mathbf{X},\mathbf{S}} \,\big\|\, p_{Y|\mathbf{S}} \right) \right], \qquad (2.6)$$
where the Kullback-Leibler (KL) divergence from $p_{Y|\mathbf{X}=x,\mathbf{S}=s}$ to $p_{Y|\mathbf{S}=s}$ is defined as
$$\mathrm{KL}\left( p_{Y|x,s} \,\big\|\, p_{Y|s} \right) \;=\; \sum_y p_{Y|\mathbf{X},\mathbf{S}}(y|x,s) \, \log\left( \frac{p_{Y|\mathbf{X},\mathbf{S}}(y|x,s)}{p_{Y|\mathbf{S}}(y|s)} \right). \qquad (2.7)$$
To prove it, first rewrite the dependence-ratio (2.1) solely in terms of the conditional distribution of $Y$:
$$\frac{\Pr(Y=y \,|\, \mathbf{X}=x, \mathbf{S}=s)}{\Pr(Y=y \,|\, \mathbf{S}=s)} \;=\; \frac{p_{Y|\mathbf{X},\mathbf{S}}(y|x,s)}{p_{Y|\mathbf{S}}(y|s)}.$$
Next, substitute this into (2.2) and express it as
$$\mathrm{MI}(Y, \mathbf{X} \,|\, \mathbf{S}) \;=\; \iint_{x,s} \left[ \sum_y p_{Y|\mathbf{X},\mathbf{S}}(y|x,s) \log\left( \frac{p_{Y|\mathbf{X},\mathbf{S}}(y|x,s)}{p_{Y|\mathbf{S}}(y|s)} \right) \right] \mathrm{d}F_{\mathbf{X},\mathbf{S}}(x,s).$$
Replace the part inside the square brackets by (2.7) to finish the proof.
Remark 2. CMI measures how much information is shared exclusively between $\mathbf{X}$ and $Y$, beyond what is contained in $\mathbf{S}$. Theorem 2 makes this interpretation explicit.
Estimator. The goal is to develop a practical nonparametric algorithm for estimating CMI from $n$ i.i.d. samples $\{(x_i, y_i, s_i)\}_{i=1}^n$ that works in large-$(n, p, q)$ settings. Theorem 2 immediately leads to the following estimator of (2.6):
$$\widehat{\mathrm{MI}}(Y, \mathbf{X} \,|\, \mathbf{S}) \;=\; \frac{1}{n} \sum_{i=1}^n \log \frac{\widehat{\Pr}(Y=y_i \,|\, x_i, s_i)}{\widehat{\Pr}(Y=y_i \,|\, s_i)}. \qquad (2.8)$$
Algorithm 1. Conditional mutual information estimation: the proposed ML-powered nonparametric estimation method consists of the following simple steps (a schematic implementation is sketched after Step 4):
Step 1. Choose a machine learning classifier (e.g., support vector machine, random forest, gradient boosted trees, deep neural network, etc.), and call it $\text{ML}_0$.
Step 2. Train the following two models:
$$\begin{array}{rcl}
\text{ML.train}_{y|x,s} &\leftarrow& \text{ML}_0\left( Y \sim [\mathbf{X}, \mathbf{S}] \right) \\
\text{ML.train}_{y|s} &\leftarrow& \text{ML}_0\left( Y \sim \mathbf{S} \right)
\end{array}$$
Step 3. Extract the conditional probability estimates $\widehat{\Pr}(Y = y_i \,|\, x_i, s_i)$ from $\text{ML.train}_{y|x,s}$, and $\widehat{\Pr}(Y = y_i \,|\, s_i)$ from $\text{ML.train}_{y|s}$, for $i = 1, \ldots, n$.
Step 4. Return $\widehat{\mathrm{MI}}(Y, \mathbf{X} \,|\, \mathbf{S})$ by applying formula (2.8).
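To make the recipe concrete, here is a minimal sketch of Algorithm 1 in Python, assuming scikit-learn's `GradientBoostingClassifier` as the base learner $\text{ML}_0$; the function name `estimate_cmi` and its interface are ours for illustration, not part of the paper's software:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def estimate_cmi(y, X, S):
    """Plug-in estimate of MI(Y, X | S) in bits, following formula (2.8).

    y: (n,) array of class labels; X: (n, p) and S: (n, q) feature arrays.
    """
    n = len(y)
    XS = np.hstack([X, S])
    m_xs = GradientBoostingClassifier().fit(XS, y)    # Step 2: Y ~ [X, S]
    m_s  = GradientBoostingClassifier().fit(S, y)     # Step 2: Y ~ S
    col = np.searchsorted(m_xs.classes_, y)           # probability column of each y_i
    p_xs = m_xs.predict_proba(XS)[np.arange(n), col]  # Pr-hat(Y = y_i | x_i, s_i)
    p_s  = m_s.predict_proba(S)[np.arange(n), col]    # Pr-hat(Y = y_i | s_i)
    # Step 4: average log-ratio; log base 2 gives the answer in bits (cf. Remark 4)
    return float(np.mean(np.log2(np.clip(p_xs, 1e-12, None) /
                                 np.clip(p_s, 1e-12, None))))
```

For simplicity the conditional probabilities are extracted in-sample, exactly as Step 3 prescribes; cross-fitting the two models is a natural variant when overfitting is a concern.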
Remark 3. We will be using the gradient boosting machine (gbm) of Friedman (2001) in our numerical examples (obviously, one can use other methods). Its convergence behavior is well studied in the literature (Breiman et al., 2004, Zhang, 2004), where it was shown that, under some very general conditions, the empirical risk (probability of misclassification) of the gbm classifier approaches the optimal Bayes risk. This Bayes-risk consistency property carries over to our conditional probability estimates in (2.8), which justifies the good empirical performance of our method on real datasets.
Remark 4. Taking the base of the log in (2.8) to be 2, we get the measure in units of bits. If the natural log (base $e$) is used instead, the measure is in nats. We use log base 2 in all our computations.
The proposed style of nonparametric estimation provides some important practical benefits:

- Flexibility: unlike traditional conditional independence testing procedures (Candes et al., 2018, Berrett et al., 2019), our approach requires neither knowledge of the exact parametric form of the high-dimensional $F_{X_1,\ldots,X_p}$ nor knowledge of the conditional distribution of $\mathbf{X} \mid \mathbf{S}$, both of which are generally unknown in practice.
- Applicability: (i) data-type: the method can be safely used for mixed $\mathbf{X}$ and $\mathbf{S}$ (any combination of discrete, continuous, or even categorical variables); (ii) data-dimension: the method is applicable to high-dimensional $\mathbf{X} = (X_1, \ldots, X_p)$ and $\mathbf{S} = (S_1, \ldots, S_q)$.
- Scalability: unlike traditional nonparametric methods (such as kernel density or $k$-nearest-neighbor-based methods), our procedure scales to big datasets with large $(n, p, q)$.
## 2.5 Model-based Bootstrap
One can even perform statistical inference for our ML-powered conditional-mutual-information statistic. To test $H_0: Y \perp\!\!\!\perp \mathbf{X} \mid \mathbf{S}$, we obtain a bootstrap-based p-value by noting that under the null, $\Pr(Y=y \,|\, \mathbf{X}=x, \mathbf{S}=s)$ reduces to $\Pr(Y=y \,|\, \mathbf{S}=s)$.
Algorithm 2. Model-based bootstrap: the inference scheme proceeds as follows.
Step 1. Extract
$$\widehat{p}_{i|s} \;=\; \widehat{\Pr}(Y_i = 1 \,|\, \mathbf{S} = s_i), \quad \text{for } i = 1, \ldots, n,$$
from the (already estimated) model $\text{ML.train}_{y|s}$ (Step 2 of Algorithm 1).
Step 2. Generate the null $\mathbf{Y}^*_{n \times 1} = (Y_1^*, \ldots, Y_n^*)$ by
$$Y_i^* \,\leftarrow\, \mathrm{Bernoulli}(\widehat{p}_{i|s}), \quad \text{for } i = 1, \ldots, n.$$
Step 3. Compute $\widehat{\mathrm{MI}}(\mathbf{Y}^*, \mathbf{X} \,|\, \mathbf{S})$ using Algorithm 1.
Step 4. Repeat the process $B$ times (say, $B = 500$); compute the bootstrap null distribution, and return the p-value.
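A companion sketch of Algorithm 2 for binary $Y$, reusing the hypothetical `estimate_cmi` helper above (again our own illustrative code, not the paper's implementation):

```python
def cmi_pvalue(y, X, S, B=500, seed=0):
    """Model-based bootstrap p-value for H0: Y independent of X given S (y in {0, 1})."""
    rng = np.random.default_rng(seed)
    mi_obs = estimate_cmi(y, X, S)                  # observed test statistic
    # Step 1: p-hat_{i|s} from the model Y ~ S (refit here to keep the sketch self-contained)
    p_hat = GradientBoostingClassifier().fit(S, y).predict_proba(S)[:, 1]
    mi_null = np.empty(B)
    for b in range(B):
        y_star = rng.binomial(1, p_hat)             # Step 2: Y*_i ~ Bernoulli(p-hat_{i|s})
        mi_null[b] = estimate_cmi(y_star, X, S)     # Step 3: statistic under the null
    return float(np.mean(mi_null >= mi_obs))        # Step 4: bootstrap p-value
```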
Remark 5. A parametric version of this inference was proposed by Rosenbaum (1984) in the context of observational causal studies. His scheme resamples $Y$ by estimating $\Pr(Y = 1 \,|\, \mathbf{S})$ using a logistic regression model. The procedure was called the conditional permutation test.
## 2.6 A Few Examples
Example 1. Model: $X \sim \mathrm{Bernoulli}(0.5)$; $S \sim \mathrm{Bernoulli}(0.5)$; $Y = X$ when $S = 0$, and $Y = 1 - X$ when $S = 1$. In this case it is easy to see that the true $\mathrm{MI}(Y, X | S) = 1$. We simulated $n = 500$ i.i.d. triples $(x_i, y_i, s_i)$ from this model and computed our estimate using (2.8). We repeated the process 50 times to assess the variability of the estimate. Our estimate is
$$\widehat{\mathrm{MI}}(Y, X | S) \;=\; 0.994 \pm 0.00234,$$
with the (avg.) p-value being almost zero. We repeated the same experiment with $Y \sim \mathrm{Bernoulli}(0.5)$ (i.e., now the true $\mathrm{MI}(Y, X | S) = 0$), which yields
$$\widehat{\mathrm{MI}}(Y, X | S) \;=\; 0.0022 \pm 0.0017,$$
with the (avg.) p-value being 0.820.
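For instance, one replication of Example 1 can be reproduced with the sketches above along the following lines (exact numbers will vary with the base learner and the seed):

```python
rng = np.random.default_rng(1)
n = 500
x = rng.integers(0, 2, size=n)              # X ~ Bernoulli(0.5)
s = rng.integers(0, 2, size=n)              # S ~ Bernoulli(0.5)
y = np.where(s == 0, x, 1 - x)              # Y = X if S = 0, else 1 - X
X, S = x.reshape(-1, 1), s.reshape(-1, 1)
print(estimate_cmi(y, X, S))                # should be close to 1 bit
print(cmi_pvalue(y, X, S, B=200))           # should be near 0
```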
Example 2. Integrative genomics. The wide availability of multi-omics data has revolutionized the field of biology. It is a general consensus among practitioners that combining individual omics datasets (mRNA, microRNA, CNV, DNA methylation, etc.) leads to improved prediction. However, before undertaking such an analysis, it is worthwhile to check how much additional information we gain from a combined analysis relative to a single-platform one. To illustrate this point, we use the breast cancer multi-omics data that is part of The Cancer Genome Atlas (TCGA, http://cancergenome.nih.gov/). It contains three kinds of omics datasets from three kinds of breast cancer samples ($n = 150$: Basal, Her2, and LumA): $\mathbf{X}_1$, a $150 \times 184$ matrix of miRNA; $\mathbf{X}_2$, a $150 \times 200$ matrix of mRNA; and $\mathbf{X}_3$, a $150 \times 142$ matrix of proteomics. Applying Algorithms 1 and 2, we find:
$$\begin{array}{ll}
\mathrm{MI}(Y, \mathbf{X}_2 \,|\, \mathbf{X}_1) = 0.013; & \text{p-value} = 0.356 \\
\mathrm{MI}(Y, \mathbf{X}_3 \,|\, \mathbf{X}_1) = 0.0186; & \text{p-value} = 0.235 \\
\mathrm{MI}\left( Y, \{\mathbf{X}_2, \mathbf{X}_3\} \,|\, \mathbf{X}_1 \right) = 0.0192; & \text{p-value} = 0.501.
\end{array}$$
This shows that neither mRNA nor proteomics adds any substantial information beyond what is already captured by miRNAs.
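In terms of the earlier sketches, these quantities correspond to calls of the following form (hypothetical variable names, with `y` holding the three subtype labels):

```python
# X1, X2, X3: numpy arrays of shapes (150, 184), (150, 200), (150, 142)
mi_mrna = estimate_cmi(y, X2, X1)                     # MI(Y, X2 | X1)
mi_prot = estimate_cmi(y, X3, X1)                     # MI(Y, X3 | X1)
mi_both = estimate_cmi(y, np.hstack([X2, X3]), X1)    # MI(Y, {X2, X3} | X1)
```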
## 3 Elements of Admissible Machine Learning
How does one design admissible machine learning algorithms with enhanced efficiency, interpretability, and equity? (See footnote 4.) A systematic pipeline for developing such admissible ML models is laid out in this section, grounded in the earlier information-theoretic concepts and nonparametric modeling ideas.
## 3.1 COREml: Algorithmic Interpretability
## 3.1.1 From Predictive Features to Core Features
One of the first tasks of any predictive modeling exercise is to identify the key drivers affecting the response $Y$. Here we discuss a new information-theoretic graphical tool to quickly spot the 'core' decision-making variables, which will be vital in building interpretable models. One of the advantages of this method is that it works even in the presence of correlated features, as the following example illustrates; also see Appendix A.7.
Example 3. Correlated features. $Y \sim \mathrm{Bernoulli}(\pi(\mathbf{x}))$, where $\pi(\mathbf{x}) = 1/(1+e^{-\mathcal{M}(\mathbf{x})})$ and
$$\mathcal{M}(\mathbf{x}) \;=\; 3\sin(X_1) - 2X_2.$$
4 However, the general premise of admissible ML is extremely broad and flexible, and will continue to evolve with the regulatory requirements to ensure rapid development of trustworthy algorithmic methods.
Let $X_1, \ldots, X_{p-1}$ be i.i.d. $\mathcal{N}(0,1)$ random variables, and
$$X_p \;=\; 2X_1 - X_2 + \epsilon, \quad \text{where } \epsilon \sim \mathcal{N}(0,2), \qquad (3.2)$$
which means $X_p$ has no additional predictive value beyond what is already captured by the core variables $X_1$ and $X_2$. Another way of saying this is that $X_p$ is redundant: the conditional mutual information between $Y$ and $X_p$ given $\{X_1, X_2\}$ is zero:
$$\mathrm{MI}\left( Y, X_p \,|\, \{X_1, X_2\} \right) \;=\; 0.$$
The top of Fig. 4 depicts this graphically. The following nomenclature will be useful for discussing our method:
$$\begin{array}{rcl}
\text{CoreSet} &=& \{X_1, X_2\} \\
\text{Imitator} &=& \{X_p\} \\
\text{Probes} &=& \{X_3, \ldots, X_{p-1}\}.
\end{array}$$
Note that the imitator $X_p$ is highly predictive of $Y$ due to its association with the core variables. We simulated $n = 500$ samples with $p = 50$. For each feature we compute
$$R_j \;=\; \text{overall relevance score of the } j\text{th predictor}, \quad j = 1, \ldots, p. \qquad (3.3)$$
The bottom-left corner of Fig. 4 shows the relative importance scores (scaled between 0 and 1) for the top seven features using the gbm algorithm (see footnote 5), which correctly finds $\{X_1, X_2, X_{50}\}$ as the important predictors. However, it is important to recognize that this modus operandi, irrespective of the ML algorithm, cannot distinguish the 'fake imitator' $X_{50}$ from the real ones, $X_1$ and $X_2$. To enable a refined characterization of the variables, we have to 'add more dimension' to the classical machine-learning feature-importance tools.
## 3.1.2 InfoGram and L-Features
We introduce a tool for identifying core admissible features based on the concept of the net-predictive information (NPI) of a feature $X_j$.
Definition 3. The net-predictive (conditional) information of $X_j$ given all the rest of the variables $\mathbf{X}_{-j} = \{X_1, \ldots, X_p\} \setminus \{X_j\}$ is defined in terms of conditional mutual information:
$$C_j \;=\; \mathrm{MI}(Y, X_j \,|\, \mathbf{X}_{-j}), \quad \text{for } j = 1, \ldots, p. \qquad (3.4)$$
5 Based on whether a particular variable was selected to split on during tree learning, and how much it improves the Gini impurity or information gain.
Figure 4: Top: the graphical representation of Example 3. Bottom-left: the gbm feature importance scores for the top seven features; the rest are almost zero and thus not shown. Bottom-right: the infogram separates the core variables $\{X_1, X_2\}$ from $X_{50}$. The L-shaped area of width 0.1 is highlighted in red; it contains inadmissible variables with either low relevance or high redundancy.
For ease of interpretation, we standardize $C_j$ by $\max_j C_j$, rescaling it to lie between 0 and 1. The infogram, which is an abbreviation of 'information diagram,' is the scatter plot of $\{(R_j, C_j)\}_{j=1}^p$ over the unit square $[0,1]^2$; see the bottom-right corner of Fig. 4.
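In code, the infogram coordinates can be assembled from any classifier's relevance scores together with the CMI estimator of Section 2.4. The sketch below is our own illustrative helper (not the paper's software), using gbm impurity importances as the relevance scores $R_j$:

```python
def infogram_coordinates(y, X):
    """Standardized (R_j, C_j) pairs for the infogram scatter plot."""
    p = X.shape[1]
    R = GradientBoostingClassifier().fit(X, y).feature_importances_    # relevance, (3.3)
    C = np.array([estimate_cmi(y, X[:, [j]], np.delete(X, j, axis=1))  # NPI, (3.4)
                  for j in range(p)])
    return R / R.max(), C / C.max()        # rescale both coordinates to [0, 1]
```

Plotting these pairs and shading the region where $R_j < 0.1$ or $C_j < 0.1$ reproduces the L-shaped zone of Fig. 4; note that computing all $p$ values of $C_j$ requires refitting the classifier for each feature, which is the price of the leave-one-out conditioning.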
L-Features. The highlighted L-shaped area contains features that are either irrelevant or redundant. For example, notice the position of $X_{50}$ in the plot, indicating that it is highly predictive yet contains no new complementary information for the response. Clearly, the opposite scenario is also possible: a variable may carry valuable net individual information for $Y$ despite being only moderately relevant (not ranked among the top few); see Sec. 3.1.4.
Remark 6 (Predictive Features vs. CoreSet). Recall that in Example 3 the irrelevant feature $X_{50}$ is strongly correlated with the relevant ones, $X_1$ and $X_2$, through (3.2), thus violating the so-called 'irrepresentable condition'; for more details, see the bibliographic notes section of Hastie et al. (2015, p. 311). In this scenario (which may easily arise in practice), it is hard to recover the 'important' variables using traditional variable selection methods. The bottom line is: identifying the CoreSet is a much more difficult undertaking than merely selecting the most predictive features. The goal of the infogram is to facilitate this process of discovering the key variables that drive the outcome.
Remark 7 (CoreML). Two additional comments before diving into real data examples. First, machine learning models based on 'core' features (CoreML) show improved stability, especially when there exists considerable correlation among the features (see footnote 6). This will be demonstrated in the next two sections. Second, our approach is not tied to any particular machine learning method; it is completely model-agnostic and can be integrated with any algorithm: choose a specific classifier $\text{ML}_0$ and compute (3.3) and (3.4) to generate the associated infogram.
Example 4. MONK's problems (Thrun et al., 1991). This is a collection of three binary artificial classification problems (MONK-1, MONK-2, and MONK-3) with $p = 6$ attributes, available in the UCI Machine Learning Repository. As shown in Fig. 5, the infogram selects $\{X_1, X_2, X_5\}$ for the MONK-1 data and $\{X_2, X_5\}$ for the MONK-3 data as the core features. MONK-2 is an idiosyncratic case, where all six features turn out to be core! This indicates the possibly complex nature of the classification rule for the MONK-2 problem.
## 3.1.3 COREtree: High-dimensional Microarray Data Analysis
How does one distill a compact (parsimonious) ML model by balancing accuracy, robustness, and interpretability to the best extent? To answer that, we introduce COREtree, whose
6 Numerous studies have found that many current methods like partial dependence plots, LIME, and SHAP could be highly misleading, particularly when there is strong dependence among features.
Figure 5: Infograms of the MONK's problems. CoreSets are denoted in blue.
construction is guided by the infogram. The methodology is illustrated using two real datasets, namely the prostate cancer and SRBCT tumor data. The main findings are striking: they show how one can systematically search for and construct robust, interpretable, shallow decision-tree models (often with just two or three genes) for noisy high-dimensional microarray datasets that are as powerful as the most elaborate and complex machine learning methods.
Example 5. Prostate cancer gene expression data. The data consist of $p = 6033$ gene expression measurements on 50 control and 52 prostate cancer patients; they are available at https://web.stanford.edu/~hastie/CASI_files/DATA/prostate.html. Our analysis is summarized below.
Step 1. Identifying CoreGenes. The GBM-selected top 50 genes are shown in Fig. 6. We generate the infogram of these 50 variables (see footnote 7), displayed in the top-right corner, which identifies five core genes: $\{1627, 2327, 77, 1511, 1322\}$.
Step 2. Rank-transform: Robustness and Interpretability. Instead of operating directly on the gene expression values, we transform them into their ranks. Let $\{x_{j1}, \ldots, x_{jn}\}$ be the measurements on the $j$th gene with empirical cdf $\widetilde{F}_j$. Convert the raw $x_{ji}$ to $u_{ji}$ by
$$u_{ji} \;=\; \widetilde{F}_j(x_{ji}), \quad i = 1, \ldots, n,$$
and work with the resulting $\mathbf{U}_{n \times p}$ matrix instead of the original $\mathbf{X}_{n \times p}$. We do this transformation for two reasons. First, to robustify, since gene expressions are known to be inherently noisy. Second, to make the measurements unit-free, since raw gene expression values depend on the type
7 To reduce unnecessary clutter, we display the infogram using the top 50 features, since the rest of the genes would be cramped inside the nonessential L-zone anyway.
Figure 6: Prostate data analysis. Top panel: the gbm feature importance graph, along with the infogram for the top 50 genes. Bottom-left: the scatter plot of gene 1627 vs. gene 2327. For clarity, we have plotted them in the quantile domain $(u_i, v_i)$, where $u_i = \mathrm{rank}(x_{i,1627})/n$ and $v_i = \mathrm{rank}(x_{i,2327})/n$. The black dots denote control samples (class $y = 0$) and the red triangles denote prostate cancer samples (class $y = 1$). Bottom-right: the estimated CoreTree with just two decision nodes, which is good enough to be 95% accurate.
of preprocessing, thus carries much less scientific meaning. On the other hand, percentiles are much more easily interpretable to convey 'how overexpressed a gene is.'
Step 3. Shallow Robust Tree. We build a single decision tree using the infogram-selected coregenes. This is displayed in the bottom-right panel of Fig. 6. Interestingly, the CoreTree retained only two genes {1627, 2327}, whose scatter plot (in the rank-transform domain) is shown in the bottom-left corner of Fig. 6. A simple eyeball estimate of the discrimination surfaces is shown in bold (black and red) lines, which closely matches the decision-tree rule. It is quite remarkable that we have reduced the original 6033-dimensional problem to a simple bivariate two-sample one, just by wisely selecting the features based on the infogram. A minimal code sketch of this step is given below.
Step 4. Stability. Note that the tree we build is based only on the infogram-selected core features. These features have low redundancy and high relevance, which lends extraordinary stability (over different runs on the same dataset) to the decision tree, a highly desirable characteristic.
Step 5. Accuracy. The accuracy of our single decision tree (on a randomly selected 20% test set, averaged over 100 runs) is more than 95%. On the other hand, the full-data gbm (with p = 6033 genes) is only 75% accurate. A huge simplification of the model architecture with a significant gain in predictive performance!
Step 6. Gene Hunting: Beyond Marginal Screening. We compute the two-sample t-test statistic for all p = 6033 genes and rank them according to their absolute values (the gene with the largest absolute t-statistic gets rank 1, being the most differentially expressed gene). The t-scores for the coregenes, along with their p-values and ranks, are:
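As a concrete illustration, here is a minimal R sketch of this step using the rpart package; the data frame `core` (rank-transformed expressions of the two coregenes plus the cancer label `y`) is a hypothetical setup, not code from the paper:

```
library(rpart)
# Shallow CoreTree on the two infogram-selected coregenes; maxdepth = 2
# mirrors the two-split tree of Fig. 6.
core.tree <- rpart(y ~ X1627 + X2327, data = core, method = "class",
                   control = rpart.control(maxdepth = 2))
```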
$$\begin{aligned}
|t_{1627}| &= 0.15; \quad \text{p-value} = 0.88; \quad \text{rank} = 5383. \\
|t_{2327}| &= 1.40; \quad \text{p-value} = 0.17; \quad \text{rank} = 1228.
\end{aligned}$$
Thus, it is hopeless to find coregenes by any marginal-screening method: they are far too weak marginally (in isolation), yet jointly they form an extremely strong predictor. The good news is that our approach can find those multivariate hidden gems in a completely nonparametric fashion.
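The marginal screen of Step 6 takes a few lines of R; a sketch, assuming an n x p expression matrix `X` and binary cancer labels `y` (hypothetical names):

```
# Marginal two-sample t-screening: one t-statistic per gene.
t.stat <- apply(X, 2, function(g) t.test(g[y == 1], g[y == 0])$statistic)
gene.rank <- rank(-abs(t.stat))      # rank 1 = most differentially expressed
gene.rank[c(1627, 2327)]             # the coregenes rank poorly in isolation
```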
Step 7. Lasso Analysis and Results. We have used the glmnet R-package. Lasso with λ_min (the lambda with minimum cross-validation error) selects 70 genes, whereas λ_1se (the largest lambda such that the error is within 1 standard error of the minimum) selects 60 genes; a minimal code sketch follows the findings below. The main findings are:
Figure 7: SRBCT data analysis. Top-left: GBM-feature importance plot; top 50 genes are shown. Top-right: The associated infogram. Bottom panel: The estimated coretree with just three decision nodes.
(i) The coregenes {1627, 2327} were never selected, probably because they are marginally very weak; their significant interaction is not detectable by standard lasso.
(ii) The accuracy of lasso with λ_min is around 78% (each time we randomly selected 85% of the data for training, computed λ_cv for making predictions, and averaged over 100 runs).
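For reference, a minimal sketch of the Step 7 analysis, assuming the expression matrix `X` (n x p) and binary label `y` from the prostate data; the object names are illustrative:

```
library(glmnet)
# Cross-validated lasso on all 6033 genes.
cvfit <- cv.glmnet(X, y, family = "binomial")
# Number of genes selected at the two standard choices of lambda
# (subtract 1 to drop the intercept from the count).
sum(coef(cvfit, s = "lambda.min") != 0) - 1
sum(coef(cvfit, s = "lambda.1se") != 0) - 1
```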
Step 8. Explainability. The final 'two-gene model' is so simple and elegant that it can be easily communicated to doctors and medical practitioners: a patient with overexpressed genes 1627 and 2327 has a higher risk of prostate cancer. Biologists can use these two genes as robust prognostic markers for decision-making (or for recommending the proper drug). It is hard to imagine an algorithm that is more accurate while remaining even nearly as compact as the 'two-gene model.' We should not forget that the success behind this dramatic model-reduction hinges on discovering multivariate coregenes, which: (i) help us gain insight into the biological mechanism [clarifying 'who' and 'how'], and (ii) provide a simple explanation of the predictions [justifying 'why'].
Example 6. SRBCT Gene Expression Data. This is a microarray experiment on Small Round Blue Cell Tumors (SRBCT) taken from a childhood cancer study. It contains information on p = 2,308 genes for 63 training samples and 25 test samples. Among the n = 63 tumor samples, 8 are Burkitt Lymphoma (BL), 23 are Ewing Sarcoma (EWS), 12 are neuroblastoma (NB), and 20 are rhabdomyosarcoma (RMS). The dataset is available in the plsgenomics R-package. The top-right panel of Fig. 7 shows the infogram, which identifies five core genes {123, 742, 1954, 246, 2050}. The associated coretree with only three decision nodes is shown in the bottom panel; it accurately classifies 95% of the test cases. In addition, it enjoys all the advantages ascribed to the prostate data analysis, which we do not repeat here.
Remark 8. We end this section with a general remark: when applying machine learning algorithms in scientific applications, it is of the utmost importance to design models that can clearly explain the 'why and how' behind their decision-making process. We should not forget that scientists mainly use machine learning as a tool to gain a mechanistic understanding, so that they can judiciously intervene and control the system. Sticking with the old way of building inscrutable predictive black-box models will severely slow down the adoption of ML methods in scientific disciplines like medicine and healthcare.
## 3.1.4 COREglm: Breast Cancer Wisconsin Data
Example 7. Wisconsin Breast Cancer Data. The Breast Cancer dataset is available in the UCI machine learning repository. It contains n = 569 malignant and benign tumor cell
Figure 8: Breast Cancer Wisconsin Data. Infogram reveals where the crux of the information is hidden. Infogram-guided admissible decision tree: a compact yet accurate classifier.
samples. The task is to build an admissible (interpretable and accurate) ML classifier based on p = 31 features extracted from cell nuclei images.
Step 1. Infogram Construction: Fig. 8 displays the infogram, which provides a quick understanding of the phenomenon by revealing its 'core.' Noteworthy points: (i) There are three highly predictive inadmissible features (green bubbles in the plot: perimeter_worst, area_worst, and concave_points_worst) which have large overall predictive importance but almost zero net individual contribution. We have called such variables 'Imitators' in Sec. 3.1.1. (ii) Three of the four 'core' admissible features (texture_worst, concave_points_mean, and texture_mean) are not among the top features by the usual predictive information, yet they contain a considerable amount of new, exclusive information (net-predictive information) that is useful for separating malignant and benign tumor cells. In simple terms, the infogram helps us track down where the 'core' discriminatory information is hidden.
Step 2. Core-Scatter plot. The right panel of Fig. 8 shows the scatter plot of the top two core features and how they separate the malignant and benign tumor cells.
Step 3. Infogram-assisted CoreGLM model: The simplest possible model one could build is a logistic regression based on those four admissible features. Interestingly, Akaike information criterion (AIC) based model selection further drops the variable texture_mean, which is hardly surprising considering that it has the least net and total information among the four admissible core features. The final logistic regression model with three core variables is displayed below (output of the glm R-function):
```
#COREglm Model: UCI breast cancer data
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -29.42361 3.85131 -7.640 2.17e-14 ***
concave_points_mean 96.48880 16.11261 5.988 2.12e-09 ***
radius_worst 0.99767 0.16792 5.941 2.83e-09 ***
texture_worst 0.30451 0.05302 5.744 9.27e-09 ***
```
This simple parametric model achieves a competitive accuracy of 96.50% (on a 15% test set; averaged over 50 trials). Compare this with full-fledged big ML models (like gbm, random forest, etc.), which attain accuracy in the range of 95-97%. This example again shows how the infogram can guide the design of a highly transparent and interpretable CoreGLM model with a handful of variables, one that is as powerful as complex black-box ML methods.
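For completeness, a minimal sketch of the fit behind the table above, assuming the UCI data sit in a data frame `bc` with a 0/1 column `malignant` (the names here are illustrative):

```
# CoreGLM: logistic regression on the three infogram-selected core features.
core.glm <- glm(malignant ~ concave_points_mean + radius_worst + texture_worst,
                data = bc, family = binomial)
summary(core.glm)   # coefficient table comparable to the one shown above
```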
Remark 9 (Integrated statistical modeling culture). One should bear in mind that the process by which we arrived at these simple admissible models actually utilizes the power of modern machine learning, which is needed to estimate the formula (3.4) of Definition 3, as described by the theory laid out in Section 2. For more discussion on this topic, see Appendix A.6 and Mukhopadhyay and Wang (2020). In short, we have developed a process for constructing an admissible (explainable and efficient) ML procedure starting from a 'pure prediction' algorithm.
## 3.2 FINEml: Algorithmic Fairness
ML-systems are increasingly used for automated decision-making in various high-stakes domains such as credit scoring, employment screening, insurance eligibility, medical diagnosis, criminal justice sentencing, and other regulated areas. To ensure that we are making responsible decisions using such algorithms, we have to deploy admissible models that balance Fairness, INterpretability, and Efficiency (FINE) to the best possible extent. This section discusses principles and tools for designing such FINE-algorithms.
## 3.2.1 FINE-ML: Approaches and Limitations
Imagine that a machine learning algorithm is used by a bank to accurately predict whether to approve or deny a loan application based on the probability of default. This ML-based
risk-assessing tool has access to the following historical data:
- Y ∈ {0, 1}: loan status variable; 1 if the loan was approved and 0 if denied.
- S: collection of protected attributes {gender, marital status, age, race}.
- X: feature matrix {income, loan amount, education, credit history, zip code}.
To automate the loan-eligibility decision-making process, the bank wants to develop an accurate classifier that will not discriminate among applicants on the basis of their protected features. Naturally, the question is: how to go about designing such ML-systems that are accurate and at the same time provide safeguards against algorithmic discrimination?
Approach 1 (Non-constructive): We can construct a myriad of ML models by changing and tuning different hyper-parameters, base learners, etc., and keep building models until we find one that avoids adverse legal and regulatory issues. There are at least two problems with this 'try until you get it right' approach. First, it is non-constructive: the whole process gives zero guidance on how to rectify the algorithm to make it less biased. Second, there is no single definition of fairness; more than twenty different definitions have been proposed over the last few years (Narayanan, 2018). And the troubling part is that these different fairness measures are mutually incompatible and cannot all be satisfied simultaneously (Kleinberg, 2018); see Appendix A.4. Hence this laborious process could end up being a wild-goose chase, resulting in a huge waste of computation.
Approach 2 (Constructive): Here we seek to construct ML models that, by design, mitigate bias and discrimination. To execute this task successfully, we must first identify and remove proxy variables (e.g., zip code) from the learning set, which prevent a classification algorithm from achieving the desired fairness. But how do we define a proper mathematical criterion to detect those surrogate variables? Can we develop easily interpretable graphical exploratory tools to systematically uncover those problematic variables? If we succeed, then ML developers can use them as data-filtration tools to quickly spot and remove potential sources of bias in the pre-modeling (data-curation) stage, mitigating fairness issues in the downstream analysis.
Figure 9: Infogram maps variables in a two-dimensional (effectiveness vs. safety) diagram. It is a pre-modeling nonparametric exploratory tool for admissible feature selection. The infogram is interpreted through the graphical (conditional) independence structure. In real problems, all variables will have some degree of correlation with the protected attributes. The important part is to quantify the 'degree,' which is measured through eq. (3.6), as indicated by the varying thicknesses of the edges (bold to dotted). Ultimately, the purpose of this graphical diagnostic is to provide the guardrails needed to construct a learning algorithm that retains as much predictive accuracy as possible while defending against unforeseen biases: a tool for risk-benefit analysis.
## 3.2.2 InfoGram and Admissible Feature Selection
We offer a diagnostic tool for identification of admissible features that are predictive and safe. Before going any further, it is instructive to formally define what we mean by 'safe.'
Definition 4 (Safety-index and Inadmissibility). Define the safety-index for variable X_j as

$$F_j \,=\, MI\big(Y, X_j \mid \{S_1, \ldots, S_q\}\big). \quad (3.6)$$

This quantifies how much extra information X_j carries about Y that is not acquired through the sensitive variables S = (S_1, ..., S_q). For interpretation purposes, we standardize F_j between zero and one by dividing by max_j F_j. Variables with 'small' F-values (F stands for fairness) will be called inadmissible, as they possess little or no informational value beyond their use as a dummy for protected characteristics.
Construction. In the context of fairness, we construct the infogram by plotting {(R_j, F_j)} for j = 1, ..., p, where recall that R_j denotes the relevance score (3.3) for X_j. The goal of this graphical tool is to assist the identification of admissible features, which have little or no information-overlap with the sensitive attributes S yet are reasonably predictive for Y.

Interpretation. Fig. 9 displays an infogram with six covariates. The L-shaped highlighted region contains variables that are either inadmissible (the horizontal slice of L) or inadequate (the vertical slice of L) for prediction. The complementary set L^c comprises the desired admissible features. Focus on variables A and B: both have the same predictive power, but gain it in completely different ways. Variable B gathered its information about Y entirely through the protected features (verify this from the graphical representation of B), and is thus inadmissible. On the other hand, variable A carries direct informational value, having no connection with the protected S, and is thus totally admissible. Unfortunately, though, reality is usually more complex than this clear-cut black-and-white A-B situation. The fact of the matter is: admissibility (or fairness, per se) is not a yes/no concept but a matter of degree 8, as explained in the bottom two rows of Fig. 9 using variables C to F.
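A minimal sketch of this construction, assuming two hypothetical helpers: `relevance()` (the score R_j of eq. 3.3) and `cmi()` (a nonparametric conditional mutual information estimator in the spirit of Section 2); neither is from an existing package:

```
# Infogram coordinates: relevance-index vs. safety-index for each feature.
rel    <- sapply(seq_len(ncol(X)), function(j) relevance(y, X[, j]))       # R_j, eq. (3.3)
safety <- sapply(seq_len(ncol(X)), function(j) cmi(y, X[, j], given = S))  # F_j, Definition 4
rel <- rel / max(rel);  safety <- safety / max(safety)    # standardize to [0, 1]
plot(rel, safety, xlab = "Relevance-index", ylab = "Safety-index")
# Illustrative cutoffs for the L-shaped region (0.2 and 0.1 in the figures):
admissible <- which(rel > 0.2 & safety > 0.1)
```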
Remark 10. The graphical exploratory nature of the infogram makes the whole learning process much more transparent, interactive, and human-centered.
Legal doctrine. Note that in our framework the protected variables are used only in the pre-deployment phase, to determine which other (admissible) attributes to include in the algorithm to mitigate unforeseen downstream bias; this is completely legal (Hellman, 2020). It is also advisable, once inadmissible variables are identified using an infogram, not to throw them out (especially the highly predictive ones such as feature B in Fig. 9) blindly without consulting domain experts: including some of them may not necessarily imply a violation of the law, and ultimately it is up to the policymakers and judiciary to determine their appropriateness (legal permissibility) in the given context. Our job as statisticians is to discover those hidden inadmissible L-features (preferably in a fully data-driven and automated manner) and raise a red flag for further investigation.
8 'Zero bias' is an illusion. All models are biased (to differing degrees), but some are admissible. The real question is how to methodically construct those admissible ones from possibly biased data.
## 3.2.3 FINEtree and ALFA-Test: Financial Industry Applications
Example 8. The Census Income Data. The dataset is extracted from the 1994 United States Census Bureau database, available in the UCI Machine Learning Repository. It is also known as the 'Adult Income' dataset, and contains n = 45,222 records involving personal details such as yearly income (whether it exceeds $50,000 or not), education level, age, gender, marital status, occupation, etc. The classification task is to determine whether a person makes more than $50k per year based on a set of 14 attributes, of which four are protected:
$$S \,=\, \{\text{Age},\ \text{Gender},\ \text{Race},\ \text{Marital.Status}\}.$$
Step 1. Trust in data. Is there any evidence of built-in bias in the data? That is to say, was a 'significant' portion of the decision-making (Y: whether income is greater or less than $50k per year) influenced by the sensitive attributes S beyond what is already captured by the other covariates X? One may be tempted to use MI(Y, S | X) as a measure for assessing fairness. But we need to be careful while interpreting the value of MI(Y, S | X). It can take a 'small' value for two reasons. First, a genuine case of fair decision-making, where individuals with similar x received a similar outcome irrespective of their age, gender, and other protected characteristics; see Appendix A.4 for one such example. Second, a collusion between X and S, in the sense that X contains some proxies of S which reduce its effect-size, leading one to falsely declare a decision-rule fair when it is not.
Remark 11 (Shielding Effect). The presence of a highly correlated surrogate variable in the conditioning set drastically reduces the size of the CMI-statistic. We call this contraction of effect-size in the presence of a proxy feature the 'shielding effect.' To guard against this effect-distortion phenomenon, we first have to identify the admissible features from the infogram.
Step 2 . Infogram to identify inadmissible proxy features. The infogram, shown in the left
panel of Fig. 10, finds four admissible features
$$X_A \,=\, \{\text{Capital.gain},\ \text{Capital.loss},\ \text{Occupation},\ \text{Education}\}.$$
They share very little information with S yet are highly predictive. In other words, they enjoy high relevance and high safety-index. Next, we also see that there is a feature that appears at the lower-right corner
$$X_R \,=\, \{\text{Relationship}\},$$
which is the prime source of bias; the subscript 'R' stands for risky. The variable relationship represents the respondent's role in the family, i.e., whether the breadwinner is husband, wife, child, or other relative.
Remark 12. Since X R is highly predictive, most unguided 'pure prediction' ML algorithms will include it in their models, even though it is quite unsafe. Admissible ML models should avoid using variables like relationship to reduce unwanted bias. 9 A careful examination reveals that there could be some unintended association between relationship and other protected attributes due to social constructs. Without any formal method, it is a hopeless task (especially for practitioners and policymakers; see Lakkaraju and Bastani 2020, Sec. 5.2) to identify these innocent-looking proxy variables in a scalable and automated way.
Step 3. ALFA-test and encoded bias. We can construct an honest fairness-assessment metric by conditioning the CMI on X_A (instead of X):
$$\widehat{MI}(Y, S \mid X_A) \,=\, 0.13, \ \text{with p-value almost } 0. \quad (3.7)$$
This strongly suggests that historical bias or discrimination is encoded in the data. Our approach not only quantifies bias but also provides ways to mitigate it and create an admissible prediction rule; this will be discussed in Step 4. The preceding discussion necessitates the following new general class of fairness metrics.
Definition 5 (Admissible Fairness Criterion). To check whether an algorithmic decision is fair given the sensitive attributes and the set of admissible features (identified from the infogram), define the AdmissibLe FAirness criterion, in short the ALFA-test, as

$$\alpha_Y \,:=\, \alpha(Y \mid S, X_A) \,=\, MI(Y, S \mid X_A). \quad (3.8)$$
Three Different Interpretations. The ALFA-statistic (3.8) can be interpreted from three different angles.
9 or at least should be assessed by experts to determine their appropriateness.
Figure 10: Census Income Data. The left plot shows the infogram; the FINEtree is displayed on the right.
- It quantifies the trade-off between fairness and model performance: how much net-predictive value is contained within S (and its close associates)? This is the price we pay in terms of accuracy to ensure a higher degree of fairness.
- A small α -inadmissibility value ensures that individuals with similar 'admissible characteristics' receive a similar outcome. Note that our strategy of comparing individuals with respect to only (infogram-learned) 'admissible' features allows us to avoid the (direct and indirect) influences of sensitive attributes on the decision making.
- Lastly, the α-statistic can also be interpreted as 'bias in response Y.' For a given problem, if we have access to several 'comparable' outcome variables 10, then we choose the one that minimizes the α-inadmissibility measure. In this way, we can minimize the loss of predictive accuracy while mitigating the bias as best we can.
Remark 13 (Generalizability) . Note that, unlike traditional fairness measures, the proposed ALFA-statistic is valid for multi-class problems with a set of multivariate mixed protected attributes-which is, in itself, a significant step forward.
Step 4. FINEtree. The inherent historical footprints of bias (as noted in eq. 3.7) need to be deconstructed to build a less-discriminatory classification model for the income data. Fig. 10 shows FINEtree: a simple decision tree based on the four admissible features, which attains 83.5% accuracy.
10 e.g., Obermeyer et al. (2019) showed that healthcare cost can be a poor proxy of health, especially for Black patients; similarly, Blattner and Nelson (2021) showed that credit scores could be a poor proxy for creditworthiness, especially for low-income and minority groups.
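A minimal sketch of such a FINEtree fit, assuming the census records live in a data frame `adult` with the income label `income` and the four admissible features (illustrative names, not from the paper):

```
library(rpart)
# FINEtree: decision tree restricted to the infogram-selected admissible features.
fine.tree <- rpart(income ~ capital_gain + capital_loss + occupation + education,
                   data = adult, method = "class")
```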
Remark 14. FINEtree is an inherently explainable, fair, and highly competent (decent accuracy) model whose design was guided by the principles of admissible machine learning.
Step 5. Trust in algorithm through risk assessment and ALFA-ranking: The current standard for evaluating ML models is based primarily on predictive accuracy on a test set, which is narrow and inadequate. For an algorithm to be deployable it has to be admissible; unguided ML carries the danger of inheriting bias from the data. To see this, consider the following two models:
Model A: Decision tree based on X_A (FINEtree).
Model R: Decision tree based on X_A ∪ {relationship}.
Both models have comparable accuracy, around 83.5%. Let Ŷ_A and Ŷ_R denote the predicted labels based on these two models, respectively. Our goal is to compare and rank different models based on their risk of discrimination using the ALFA-statistic:
$$\widehat{\alpha}_A \,=\, \widehat{MI}(\widehat{Y}_A, S \mid X_A) \,=\, 0.00042, \ \text{with p-value } 0.95; \quad (3.9)$$
$$\widehat{\alpha}_R \,=\, \widehat{MI}(\widehat{Y}_R, S \mid X_A) \,=\, 0.195, \ \text{with p-value almost } 0. \quad (3.10)$$
The α-inadmissibility statistic measures how much the final decision (prediction) was impacted by the protected features. A smaller value is better, in the sense that it indicates improved fairness of the algorithm's decisions. Eqs. (3.9)-(3.10) immediately imply that Model A is better (less discriminatory without being inefficient) than Model R, and can be safely put into production.
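The ALFA-ranking computation itself is a one-liner per model; a sketch, reusing the hypothetical `cmi()` estimator from Sec. 3.2.2 and assuming `model.A` and `model.R` are the two fitted trees:

```
# Predicted labels on held-out data.
yhat.A <- predict(model.A, newdata = test, type = "class")
yhat.R <- predict(model.R, newdata = test, type = "class")
# alpha-inadmissibility: CMI between predictions and protected attributes,
# conditioned on the admissible features.
alpha.A <- cmi(yhat.A, S, given = X.A)   # near 0: decisions untainted by S
alpha.R <- cmi(yhat.R, S, given = X.A)   # large: `relationship` leaks S into decisions
```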
Remark 15. Infogram and ALFA-testing can be used (by an oversight board or regulators) as a fully automated exploratory auditing tool that can systematically monitor and discover signs of bias or other potential gaps in compliance 11; see Appendix A.3.
Example 9. Taiwanese Credit Card Data. This dataset was collected in October 2005 from a Taiwan-based bank (a cash and credit card issuer). It is available in the UCI Machine Learning Repository. We have records of n = 30,000 cardholders, and for each we have
11 Under the Algorithmic Accountability Act, large AI-driven corporations have to perform broader 'admissibility' tests to keep a check on their algorithms' fairness and trustworthiness; see Appx. A.2.
Figure 11: Left: Infogram of the UCI credit card data. It selects two admissible features (i.e., relevant and less-biased ones) that lie in the complement of the 'L'-shaped region. Right: The FINEtree (test-data accuracy 82%).
a response variable Y denoting default payment status (Yes = 1, No = 0), along with p = 23 predictor variables, including demographic factors, credit data, history of payment, etc. Among these 23 features, we have two protected attributes: gender and age.
The infogram, shown in the left panel of Fig. 11, clearly selects the variables Pay_0 and Pay_2 as the key admissible factors that determine the likelihood of default. Once we know the admissible features, the next question is: how are Pay_0 and Pay_2 impacting the credit risk? Can we extract an admissible decision rule? For that we construct the FINEtree: a decision tree model based on the infogram-selected admissible features; see Fig. 11. The resulting predictive model is extremely transparent (with shallow yet accurate decision trees 12) and also mitigates unwanted bias by avoiding inadmissible variables. Lenders, regulators, and bank managers can use this model for automating credit decisions.
12 One can slightly improve accuracy by combining hundreds or thousands of trees (based on only the admissible features) using random forest or boosting. But the opacity of such models renders them unfit for deployment in the financial and banking sectors (Fahner, 2018).
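A sketch of this FINEtree, assuming the UCI default data sit in a data frame `credit` with a 0/1 label `default` (names illustrative):

```
library(rpart)
# FINEtree for the credit data: only the two admissible repayment-status features.
fine.credit <- rpart(default ~ PAY_0 + PAY_2, data = credit, method = "class")
```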
Figure 12: ProPublica's COMPAS Data. Top row: infogram and the estimated FINEtree. Bottom row: the two-sample distributions of the continuous variable end and the binary event show their usefulness for predicting whether a defendant will recidivate or not.
## 3.2.4 Admissible Criminal Justice Risk Assessment
Example 10. ProPublica's COMPAS Data. COMPAS, an acronym for Correctional Offender Management Profiling for Alternative Sanctions, is one of the most widely used commercial algorithms within the criminal justice system for predicting recidivism risk (the likelihood of re-offending). The data 13, compiled by a team of journalists from ProPublica, comprise all criminal defendants who were subject to COMPAS screening in Broward County, Florida, during 2013 and 2014. For each defendant, p = 14 features were gathered, including demographic information, criminal history, and other administrative information. The dataset also records whether the defendant did in fact recidivate within two years of the COMPAS administration date (i.e., through the end of March 2016), along with 3 additional sensitive attributes (gender, race, and age) for each case.
The goal is to develop an accurate and fairer algorithm to predict whether a defendant will engage in violent crime or fail to appear in court if released. Fig. 12 shows our results. The infogram selects event and end as the vital admissible features. The bottom row of Fig. 12 confirms their predictive power. Unfortunately, these two variables are not explicitly defined by ProPublica in the data repository. Based on Brennan et al. (2009), we believe that event indicates some kind of crime that resulted in a prison sentence during a past observation period (we suspect the assessments were conducted by local probation officers sometime between January 2001 and December 2004), and that the variable end denotes the number of days under observation (first event or end of study, whichever occurred first). The associated FINEtree recidivism algorithm based on event and end reaches 93% accuracy with AUC 0.92 on a test set (consisting of 20% of the data). Also see Appendix A.5.
## 3.2.5 FINEglm and Application to Marketing Campaign
We are interested in the following question: how does one systematically build fairness-enhancing parametric statistical algorithms, such as a generalized linear model (GLM)?
Example 11. Thera Bank Financial Marketing Campaign. This is a case study of Thera Bank, the majority of whose customers are liability customers (depositors) with varying sizes of deposits; among them, very few are borrowers (asset customers). The bank wants to expand its client network to bring in more loan business and, in the process, earn more through the interest on loans. To test the viability of this business idea, it ran a small marketing campaign with n = 5,000 customers, of whom 480 (9.6%) accepted the personal loan offer. Motivated by the healthy conversion rate, the marketing department wants to devise a much more targeted digital campaign to boost loan applications with a minimal budget.
13 Data: https://github.com/propublica/compas-analysis/raw/master/compas-scores-two-years.csv
Figure 13: Thera Bank marketing campaign data. Left: infogram. Right: scatter plot based on the two admissible features; the colors blue and red indicate the two different classes.
Data and the problem. For each of the 5000 customers, we have a binary response Y (the customer's response to the last personal loan campaign) and 12 other features, such as the customer's annual income, family size, education level, and the value of any house mortgage. Among these 12 variables there are two protected features: age and zip code. We treat zip code as a sensitive attribute, since it often acts as a proxy for race.
Based on this data, we want to devise an automatic and fair digital marketing campaign that will maximize the targeting effectiveness of the advertising campaign while minimizing the discriminatory impact on protected classes to avoid legal landmines.
Customer targeting using admissible machine learning . Our approach is summarized below:
Step 1. Graphical tool for algorithmic risk management. Fig. 13 shows the infogram, which identifies two admissible features for the loan decision: Income (annual income in $000) and CCAvg (average monthly credit-card spending). However, the two highly predictive variables education (education level: undergraduate, graduate, or advanced) and family (family size of the customer) turn out to be inadmissible, even though they look completely 'reasonable' on the surface. Consequently, including these variables in a model can do more harm than good by discriminating against minority applicants.
Remark 16. It is evident that the infogram can be used as an algorithmic risk-management tool to quickly identify and combat unwanted hidden bias. Financial regulators can likewise use it to spot and remediate issues of historic discrimination; see Appendix A.3.
Remark 17. The infogram runs a 'combing operation' that distills a large, complex problem down to the core that holds the bulk of the 'admissible information.' In our problem, the useful information is mostly concentrated in two variables, Income and CCAvg, as the scatter diagram shows.
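On the practical side, an infogram like the one in Fig. 13 can be produced along the following lines with the open-source h2o implementation. This is a sketch under two assumptions: a recent h2o release (3.36 or later) that exposes h2o.infogram() is installed, and the file and column names below are placeholders for the actual Thera Bank data dump.

```r
library(h2o)
h2o.init()
bank <- h2o.importFile("thera_bank.csv")          # placeholder file name
bank$Personal.Loan <- as.factor(bank$Personal.Loan)
ig <- h2o.infogram(y = "Personal.Loan",
                   training_frame = bank,
                   protected_columns = c("Age", "ZIP.Code"))  # sensitive features
plot(ig)  # admissible features fall outside the shaded low-relevance/low-safety region
```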
Step 2. FINE-Logistic model: we train a logistic regression on the two admissible features, leading to the following model:
$$\operatorname{logit}\{\mu(x)\} \,=\, -6.13 \,+\, 0.04\,\mathrm{Income} \,+\, 0.06\,\mathrm{CCAvg},$$
where $\mu(x) = \Pr(Y = 1 \,|\, X = x)$. This simple model achieves 91% accuracy. It provides a clear understanding of the 'core' factors that are driving the model's recommendation.
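In R, this fit is a one-liner. The sketch below assumes the campaign data sit in a data frame thera with a 0/1 response column Loan and the two admissible features; the column names are illustrative.

```r
# FINE-Logistic: a plain GLM restricted to the two admissible features.
fine_glm <- glm(Loan ~ Income + CCAvg, data = thera, family = binomial)
summary(fine_glm)    # coefficients comparable to the fitted logit model above
mu_hat <- predict(fine_glm, type = "response")   # estimated Pr(Y = 1 | x)
```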
Remark 18 (Trustworthy algorithmic decision-making). FINEml models provide a transparent and self-explainable algorithmic decision-making system that comes with protection against unfair discrimination, which is essential for earning the trust and confidence of customers. The financial services industry can benefit immensely from this tool.
Step 3. FINElasso. A natural question is how to extend this idea to high-dimensional GLMs. In particular: is there a way to directly embed 'admissibility' into the lasso regression model? The key idea is to use adaptive regularization, choosing the weights to be the inverses of the safety-indices computed in formula (3.6) of Definition 4. Estimate the FINElasso model by solving the following adaptive version:
$$\hat{\beta}_{FINE} \,=\, \arg\min_{\beta}\, \sum_{i=1}^{n} \left[ -y_i\, (x_i^T \beta) \,+\, \log\!\left(1 + e^{x_i^T \beta}\right) \right] \,+\, \lambda \sum_{j=1}^{p} w_j\, |\beta_j|, \quad (3.12)$$
where the weights are defined as
$$w_j^{-1} \,=\, MI\!\left(Y, X_j \,\big|\, \{S_1, \ldots, S_q\}\right).$$
The adaptive penalization in (3.12) acts as a bias-mitigation mechanism by dropping (that is, heavily penalizing) the variables with very low safety-indices. The whole procedure can be easily implemented using the penalty.factor argument of the glmnet R-package (Friedman et al., 2010). No doubt a similar strategy can be adopted for other regularized methods such as ridge or elastic-net. For an excellent review of different kinds of regularization procedures, see Hastie (2020).
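A minimal sketch of this recipe with glmnet is given below; it assumes X is the n × p model matrix, y the 0/1 response, and safety the p-vector of safety-indices from (3.6), all precomputed.

```r
library(glmnet)
# FINElasso: adaptive lasso with weights w_j = 1 / safety-index_j.
w <- 1 / safety            # inadmissible (low-safety) features get large penalties
fine_fit <- cv.glmnet(X, y, family = "binomial",
                      penalty.factor = w)     # per-coefficient penalty weights
coef(fine_fit, s = "lambda.min")   # low-safety surrogates are shrunk toward zero
```

Note that glmnet rescales penalty.factor internally so only the relative sizes of the weights matter, and a feature with safety-index zero receives an infinite weight, which excludes it from the model outright.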
Remark 19. A full lasso on X selects the strong surrogates (the variables family and education) among its top features due to their high predictive power, and hence carries an enhanced risk of being discriminatory. In contrast, an infogram-guided FINElasso provides an automatic defense mechanism for combating bias without significantly compromising accuracy.
Remark 20 (Towards A Systematic Recipe). This idea of data-adaptive 're-weighting' as a bias-mitigation strategy can be easily translated to other types of machine learning models. For example, to incorporate fairness into the traditional random forest method, choose the splitting variables at each node by weighted random sampling, with selection probability determined by
$$\Pr(\text{selecting variable } X_j) \,=\, \frac{F_j}{\sum_{j'} F_{j'}},$$
where the F-values $F_j$ are defined in equation (3.6). This can be easily operationalized using the mtry.select.prob argument of the randomForest() function in the iRF R-package, as sketched below. Following this line of thought, one can (re)design a variety of less-discriminatory ML techniques without changing the architecture of the original algorithms at all.
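A sketch of this reweighted random forest follows, assuming F_vals holds the F-values $F_j$ of equation (3.6) for the columns of the feature matrix X (both objects assumed precomputed).

```r
library(iRF)
# Fairness-reweighted random forest: candidate split variables are sampled
# with probability proportional to the F-values of (3.6).
sel_prob <- F_vals / sum(F_vals)               # Pr(selecting variable X_j)
rf_fair <- randomForest(x = X, y = as.factor(y),
                        mtry.select.prob = sel_prob)  # weighted variable sampling
```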
## 4 Conclusion
Faced with the profound changes that AI technologies can produce, pressure for 'more' and 'tougher' regulation is probably inevitable. (Stone et al., 2019).
Over the last 60 years or so, since the early 1960s, there has been an explosion of powerful ML algorithms with ever-increasing predictive performance. The challenge for the next few decades, however, will be to develop sound theoretical principles and computational mechanisms that transform those conventional ML methods into safer, more reliable, and more trustworthy ones.
The fact of the matter is that doing machine learning in a 'responsible' way is much harder than developing another complex ML technique. A highly accurate algorithm that does not comply with regulations is (or will soon be) unfit for deployment, especially in safety-critical areas that directly affect human lives. For example, the Algorithmic Accountability Act 14 (see Appx. A.2), introduced in April 2019, requires large corporations (including tech companies, as well as banks, insurers, retailers, and many other consumer businesses) to be
14 Also see, EU's 'Artificial Intelligence Act' released on April 21, 2021, whose key points are summarized in Appendix A.8.
cognizant of the potential for biased decision-making due to algorithmic methods; otherwise, civil lawsuits can be filed against those firms. As a result, it is becoming necessary to develop tools and methods that enhance the interpretability and efficiency of classical ML models while guarding against bias. With this goal in mind, this paper introduces a new kind of statistical learning technology, along with information-theoretic automated monitoring tools, that can guide a modeler to quickly build 'better' algorithms that are less biased, more interpretable, and sufficiently accurate.
One thing is clear: rather than being passive recipients of complex automated ML technologies, we need more general-purpose statistical risk-management tools for algorithmic accountability and oversight. This is critical to the responsible adoption of regulatory-compliant AI systems. This paper has taken some important steps towards this goal by introducing the concepts and principles of 'Admissible Machine Learning.'
## Acknowledgement
The author thanks the editor, associate editor, and four anonymous reviewers for their helpful suggestions, and specially thanks Erin LeDell for bringing this problem to the author's attention. The author benefited from many useful discussions with Michael Guerzhoy, Hany Farid, Julia Dressel, Beau Coker, and Hanchen Wang on demystifying some aspects of the COMPAS data, and with Daniel Osei on the data pre-processing steps for the Lending Club loan data. This research was supported by H2O.ai.
## References
- Allen, B., S. Agarwal, L. Coombs, C. Wald, and K. Dreyer (2021). 2020 ACR Data Science Institute Artificial Intelligence Survey. Journal of the American College of Radiology .
- Berrett, T. B., Y. Wang, R. F. Barber, and R. J. Samworth (2019). The conditional permutation test for independence while controlling for confounders. Journal of the Royal Statistical Society: Series B (Statistical Methodology) .
- Blattner, L. and S. Nelson (2021). How costly is noise? Data and disparities in consumer credit. arXiv preprint:2105.07554 .
- Breiman, L. (2004). Population theory for boosting ensembles. The Annals of Statistics 32 (1), 1-11.
- Brennan, T., W. Dieterich, and B. Ehret (2009). Evaluating the predictive validity of the compas risk and needs assessment system. Criminal Justice and Behavior 36 (1), 21-40.
- Candes, E., Y. Fan, L. Janson, and J. Lv (2018). Panning for gold: 'model-x' knockoffs for high dimensional controlled variable selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 80 (3), 551-577.
- Chouldechova, A. and A. Roth (2020). A snapshot of the frontiers of fairness in machine learning. Communications of the ACM 63 (5), 82-89.
- Fahner, G. (2018). Developing transparent credit risk scorecards more effectively: An explainable artificial intelligence approach. Data Anal 2018 , 17.
- Friedman, J., T. Hastie, and R. Tibshirani (2010). Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software 33 (1), 1-22.
- Friedman, J. H. (2001). Greedy function approximation: a gradient boosting machine. Annals of Statistics 29 (5), 1189-1232.
- Hastie, T. (2020). Ridge regularization: An essential concept in data science. Technometrics 62 (4), 426-433.
- Hastie, T., R. Tibshirani, and M. Wainwright (2015). Statistical learning with sparsity: the lasso and generalizations . CRC press.
- Hellman, D. (2020). Measuring algorithmic fairness. Va. L. Rev. 106 , 811.
- Kleinberg, J. (2018). Inherent trade-offs in algorithmic fairness. In Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems , pp. 40-40.
- Lakkaraju, H. and O. Bastani (2020). 'How do I fool you?' manipulating user trust via misleading black box explanations. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society , pp. 79-85.
- Mukhopadhyay, S. and K. Wang (2020). Breiman's 'Two Cultures' revisited and reconciled. arXiv:2005.13596 , 1-51.
- Narayanan, A. (2018). Translation tutorial: 21 fairness definitions and their politics. In Proc. Conf. Fairness Accountability Transp., New York, USA , Volume 1170.
- Obermeyer, Z., B. Powers, C. Vogeli, and S. Mullainathan (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science 366 (6464), 447-453.
- Reardon, S. (2019). Rise of robot radiologists. Nature 576 (7787), S54.
- Rosenbaum, P. R. (1984). Conditional permutation tests and the propensity score in observational studies. Journal of the American Statistical Association 79 (387), 565-574.
- Stone, P., R. Brooks, E. Brynjolfsson, et al. (2019). One hundred year study on artificial intelligence. Stanford University; https://ai100.stanford.edu .
- Thrun, S. B., J. Bala, E. Bloedorn, I. Bratko, B. Cestnik, J. Cheng, K. De Jong, S. Dzeroski, S. E. Fahlman, D. Fisher, et al. (1991). The MONK's problems: A performance comparison of different learning algorithms. Technical Report CMU-CS-91-197, Carnegie Mellon University.
- Wall, L. D. (2018). Some financial regulatory implications of artificial intelligence. Journal of Economics and Business 100 , 55-63.
- Wyner, A. D. (1978). A definition of conditional mutual information for arbitrary ensembles. Information and Control 38 (1), 51-59.
- Yeh, I.-C. and C.-h. Lien (2009). The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients. Expert Systems with Applications 36 (2), 2473-2480.
- Zech, J. R., M. A. Badgeley, M. Liu, A. B. Costa, J. J. Titano, and E. K. Oermann (2018). Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLoS medicine 15 (11), e1002683.
- Zhang, T. (2004). Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics 32 (1), 56-85.
## 5 Appendix
## A.1 Proof of Theorem 1
The conditional entropy $H(Y \,|\, X, S)$ can be expressed as
$$\begin{aligned}
H(Y \,|\, X, S) \;&=\; \iint H(Y \,|\, X = x, S = s)\, dF_{x,s} \\
&=\; \iint \left\{ -\int_y f_{Y|X,S}(y \,|\, x, s)\, \log\!\left(f_{Y|X,S}(y \,|\, x, s)\right) dy \right\} dF_{x,s} \\
&=\; -\iiint \log\!\left(f_{Y|X,S}(y \,|\, x, s)\right) dF_{x,s,y}. \quad (6.1)
\end{aligned}$$
Similarly,
$$\begin{aligned}
H(Y \,|\, S) \;&=\; \int_s H(Y \,|\, S = s)\, dF_s \\
&=\; \int_s \left\{ -\int_y f_{Y|S}(y \,|\, s)\, \log\!\left(f_{Y|S}(y \,|\, s)\right) dy \right\} dF_s \\
&=\; -\iiint \log\!\left(f_{Y|S}(y \,|\, s)\right) dF_{x,s,y}. \quad (6.2)
\end{aligned}$$
Take the difference $H(Y \,|\, S) - H(Y \,|\, X, S)$ by substituting (6.2) and (6.1) to complete the proof.
## A.2 The Algorithmic Accountability Act
This bill 15 was introduced by Senators Cory Booker (D-NJ) and Ron Wyden (D-OR) in the Senate and by Rep. Yvette Clarke (D-NY) in the House in April 2019. It requires large companies to conduct automated-decision-system impact assessments of their algorithms. Entities that develop, acquire, and/or utilize AI must be cognizant of the potential for biased decision-making and outcomes resulting from its use; otherwise, civil lawsuits can be filed against those firms. Interestingly, on Jan. 13, 2020, the Office of Management and Budget released a draft memorandum 16 to make sure the federal government does not over-regulate industry AI to the extent that it hampers innovation and development.
15 https://www.congress.gov/bill/116th-congress/house-bill/2231/all-info
16 The draft memo is available at: whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf
## A.3 Fair Housing Act's Disparate Impact Standard
Detecting inadmissible (proxy) variables can serve as a first defense against algorithmic bias. Consider the Fair Housing Act's Disparate Impact Standard 17 (U.S., Aug. 19, 2019): according to § 100.500 (c)(2)(i) of the Act, a defendant can rebut a claim of discrimination by showing that 'none of the factors used in the algorithm rely in any material part on factors which are substitutes or close proxies for protected classes under the Fair Housing Act.' Regulators, judges, and model developers can therefore use the infogram as a statistical diagnostic tool to keep a check on the algorithmic disparity of automated decision systems.
## A.4 Beware of The 'Spurious Bias' Problem
Using a real-data example, we here alert practitioners to some of the flaws of current fairness criteria and discuss their remedies. Consider the admission data shown in Table 1. We want to know: is there a gender bias in the admission process?
Marginal analysis: the overall acceptance rate across the two departments is 37% for female applicants, whereas for male applicants it is roughly 50%. The disparity can be quantified using the adverse impact ratio (AIR), also known as disparate impact:
$$AIR(Y, G) \,=\, \frac{\Pr(Y = 1 \,|\, G = \text{female})}{\Pr(Y = 1 \,|\, G = \text{male})} \,=\, \frac{.37}{.50} \,=\, 0.74 \,<\, 0.80. \quad (6.3)$$
The conventional '80% rule' 18 indicates that the admission process is biased.
The bias-reversal phenomenon: within Department I, the admission rate is 63% for males and 68% for females; within Department II, it is 33% for males and 35% for females. Thus, when we investigate the admissions by department, the discrimination against women vanishes; in fact, the bias gets reversed (in favor of women)!
Department-specific 'subgroup' analysis: Here we investigate the adverse impact ratio (AIR) within each department.
For Dept I (no bias):
$$AIR(Y, G \,|\, D = \mathrm{I}) \,=\, \frac{\Pr(Y = 1 \,|\, G = \text{male})}{\Pr(Y = 1 \,|\, G = \text{female})} \,=\, .63/.68 \,=\, 0.92 \,>\, 0.80. \quad (6.4)$$
17 https://www.govinfo.gov/content/pkg/FR-2019-08-19/pdf/2019-17542.pdf
18 The US Equal Employment Opportunity Commission states that fair employment should abide by the 80% rule: the acceptance rate for any group should be no less than 80% of that of the highest-accepted group.
Table 1: Admission data classified by gender and departments. This is actually a part of the 1973 UC Berkeley graduate admission data; here, for simplicity, we have taken the data of Departments B and D.
| Dept (D) | Gender (G) | Admitted ($y = 1$) | Rejected ($y = 0$) |
|------------|--------------|-----------------------------|-----------------------------|
| I | Male | 353 | 207 |
| I | Female | 17 | 8 |
| II | Male | 138 | 279 |
| II | Female | 131 | 244 |
For Dept II (no bias):
$$AIR(Y, G \,|\, D = \mathrm{II}) \,=\, \frac{\Pr(Y = 1 \,|\, G = \text{male})}{\Pr(Y = 1 \,|\, G = \text{female})} \,=\, .33/.35 \,=\, 0.94 \,>\, 0.80. \quad (6.5)$$
Eqs. (6.3)-(6.5) present us with a paradoxical situation. What should our final conclusion be about the fairness of the admission process? How can we resolve the paradox in a principled way?
A resolution: compute a measure of overall (university-wide) discrimination using the ALFA-statistic (see Definition 5 for more details):
$$\alpha_Y \,:=\, MI(Y, G \,|\, D) \;=\; \sum_{d \in \{\mathrm{I}, \mathrm{II}\}} \Pr(D = d)\, MI(Y, G \,|\, D = d), \quad (6.6)$$
where the α-inadmissibility statistic measures the discrimination (how predictive gender G is for the admission variable Y) within a particular department's admissions. Applying formula (2.6), we get
$$\widehat{\alpha}_Y \,=\, \widehat{MI}(Y, G \,|\, D) \,=\, 0.000285, \quad \text{with } p\text{-value } 0.715.$$
This suggests $Y \perp\!\!\!\perp G \,|\, D$, i.e., gender contains no additional predictive information for admission beyond what is already captured by the department variable. The apparent gender bias can be 'explained away' by the choice of department. Graphically, this can be represented as a Markov chain:
[Diagram: a Markov chain with nodes Y, D, and G, with edges Y–D and D–G, and no direct edge between Y and G.]
Note that there is no direct link between gender (G) and admission (Y). Conclusion: there is no evidence of any direct sex discrimination in the admission process.
Improved AIR measure: one can generalize the (marginal) adverse impact ratio (6.3) to the following conditional version, which is similar in spirit to eq. (6.6):
$$\mathrm{CAIR}(Y, G \,|\, D) \,=\, \int AIR(Y, G \,|\, D = d)\, dF_D, \quad (6.7)$$
which, in this case, can be decomposed as
$$\mathrm{CAIR}(Y, G \,|\, D) \,=\, \Pr(D = \mathrm{I})\, AIR(Y, G \,|\, D = \mathrm{I}) \,+\, \Pr(D = \mathrm{II})\, AIR(Y, G \,|\, D = \mathrm{II}). \quad (6.8)$$
Applying (6.8) to our Berkeley example data yields the following estimate:
$$\widehat{\mathrm{CAIR}}(Y, G \,|\, D) \,=\, 0.43 \times 0.92 \,+\, 0.57 \times 0.94 \,=\, 0.93 \,>\, 0.80.$$
This shows no evidence of sex bias in graduate admissions! The moral: beware of spurious bias, and be aware of the two types of errors that an incorrect fairness metric can produce: falsely rejecting a fair algorithm as unfair (Type-I fairness error), and falsely accepting an unfair algorithm as fair (Type-II fairness error).
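The arithmetic of this appendix is easy to verify. The R snippet below recomputes the marginal AIR (6.3), the department-specific AIRs (6.4)-(6.5), and the CAIR (6.8) directly from the counts in Table 1; the data frame is constructed inline, so nothing beyond base R is assumed.

```r
# Table 1 counts: Departments I and II, admitted/rejected by gender.
adm <- data.frame(dept   = c("I", "I", "II", "II"),
                  gender = c("M", "F", "M", "F"),
                  yes    = c(353, 17, 138, 131),
                  no     = c(207, 8, 279, 244))
adm$total <- adm$yes + adm$no
adm$rate  <- adm$yes / adm$total

# Marginal AIR (6.3): overall female vs. male acceptance rates.
rate_g <- tapply(adm$yes, adm$gender, sum) / tapply(adm$total, adm$gender, sum)
unname(rate_g["F"] / rate_g["M"])       # 0.74 < 0.80

# Department-specific AIRs (6.4)-(6.5): lower acceptance rate over higher.
air_d <- tapply(adm$rate, adm$dept, function(r) min(r) / max(r))
air_d                                   # ~0.93 and ~0.95 (0.92, 0.94 with rounded rates)

# CAIR (6.8): mix the department AIRs with weights Pr(D = d).
pr_d <- tapply(adm$total, adm$dept, sum) / sum(adm$total)
sum(pr_d * air_d)                       # ~0.94 > 0.80 (text reports 0.93 from rounded inputs)
```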
## A.5 Revisiting COMPAS Data
There is another version of the COMPAS data 19 (with binarized features) that researchers have used for evaluating the accuracy of their algorithms. This dataset contains a list of $p = 22$ hand-picked features over $n = 10{,}747$ criminal records. The goal is to build an interpretable and accurate recidivism prediction model. The infogram-selected COREtree is displayed below.
10-fold cross-validation shows $(72 \pm 1.50)\%$ classification accuracy for our model, which is close to the best known performance on this version of the COMPAS data.
## A.6 Two Cultures of Machine Learning
Black-box ML culture: build large, complex models with only predictive accuracy in mind. White-box ML culture: directly build interpretable models, often by enforcing domain-knowledge-based constraints on traditional ML algorithms such as decision trees or neural nets. Orthodox 'black-or-white thinkers' of each camp have been at loggerheads for some time. This raises the question: is there any way to get the best of both worlds? If so, how?
19 https://raw.githubusercontent.com/Jimmy-Lin/TreeBenchmark/master/datasets/compas/data.csv
Figure 14: Infogram-selected COREtree.
(The tree's splits involve only the features Age_first_offense, Misdem_count, and Probation.)
An Integrated (third?) culture: In this paper, we have taken the middle path between the two extremes. We leverage (instead of boycotting) the power (scalability and flexibility) of modern machine learning methods by viewing them as a heavy-duty 'toolkit' that can efficiently drill through big, complex datasets to systematically search for the hidden admissible models.
## A.7 COREtree: Iris Data
The dataset includes three kinds of iris flowers (setosa, versicolor, and virginica), with 50 samples from each class. The task is to develop a model (preferably a compact one based on only the important features) that accurately classifies iris flowers based on the lengths and widths of their sepals and petals ($p = 4$). Before we start our analysis, it is important to be aware of the highly correlated nature of the four features; the estimated $4 \times 4$ correlation matrix is displayed below:
$$\hat{\Sigma}_\rho \,=\, \begin{bmatrix} 1.000 & -0.118 & 0.872 & 0.818 \\ -0.118 & 1.000 & -0.428 & -0.366 \\ 0.872 & -0.428 & 1.000 & 0.963 \\ 0.818 & -0.366 & 0.963 & 1.000 \end{bmatrix}$$
The infogram for the iris data, constructed using the recipe given in section 3.1, is shown at the top-left corner of Fig. 15, which clearly identifies petal.length and petal.width as
the core relevant features. Since we have reduced the problem to a bivariate one (variables: petal.length and petal.width ), we can now simply plot the data. This is done in the top-right of Fig. 15. We can even visually draw the linear decision surfaces to separate the three classes; see the red and blue lines in the scatter plot. Finally, we train a decision tree classifier based on the selected core features: petal.length and petal.width . The estimated COREtree is shown in the bottom panel, which gives a beautifully crisp (readily interpretable) decision rule for classifying iris flowers.
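The entire analysis can be reproduced in a few lines of R, since the iris data ship with base R; the sketch below uses rpart as the tree learner, and the quoted cutoffs are the standard ones rpart finds on these data.

```r
library(rpart)
# COREtree sketch for the iris data: a tree grown only on the two
# infogram-selected core features.
core_tree <- rpart(Species ~ Petal.Length + Petal.Width, data = iris)
print(core_tree)
# Fitted rule: Petal.Length < 2.45 isolates setosa; among the remaining
# flowers, Petal.Width < 1.75 separates versicolor from virginica.
```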
## A.8 EU's Artificial Intelligence Act
On 21 April 2021, the European Union (EU) unveiled strict regulations 20 to govern high-risk AI systems, providing one of the first formal and comprehensive regulatory frameworks for AI. A few key takeaways from the report:
- A risk management system shall be established, implemented, documented, and maintained in relation to high-risk AI systems.
- In identifying the most appropriate risk management measures, the following shall be ensured: elimination or reduction of risks as far as possible through adequate design and development.
- Bias monitoring, detection, and correction mechanisms should be in place for high-risk AI systems, in the pre- as well as the post-deployment stages.
- High-risk AI systems shall be designed and developed in such a way that their operation is sufficiently transparent to enable users to interpret the system's output and use it appropriately.
- High-risk AI systems should be equipped with appropriate human-machine interface tools that allow the systems to be effectively overseen by natural persons during the period in which the AI system is in use.
- Providers of high-risk AI technology shall ensure that their systems undergo regulatory-compliance assessments. If an AI system does not conform to the requirements, they must take the necessary corrective actions before putting it into service. Companies that fail to do so could face fines of up to 6% of their global sales.
20 The full report is available online at https://bit.ly/EUAI act. Also see the New York Times article https://www.nytimes.com/2021/04/16/business/artificial-intelligence-regulation.html
Figure 15: Iris data analysis. Top left: infogram; top right: scatter plot of the data based on the selected core features, with the three classes shown in red, green, and blue; bottom: the estimated decision tree classifier using the variables petal.length and petal.width.