## InfoGram and Admissible Machine Learning
## Deep Mukhopadhyay
deep@unitedstatalgo.com
## Abstract
We have entered a new era of machine learning (ML), where the most accurate algorithm with superior predictive power may not even be deployable unless it is admissible under the regulatory constraints. This has led to great interest in developing fair, transparent, and trustworthy ML methods. The purpose of this article is to introduce a new information-theoretic learning framework (admissible machine learning) and algorithmic risk-management tools (InfoGram, L-features, ALFA-testing) that can guide an analyst in redesigning off-the-shelf ML methods to be regulatory compliant while maintaining good prediction accuracy. We illustrate our approach using several real-data examples from the financial sector, biomedical research, marketing campaigns, and the criminal justice system.
Keywords: Admissible machine learning; InfoGram; L-features; Information theory; ALFA-testing; Algorithmic risk management; Fairness; Interpretability; COREml; FINEml.
## Contents

1. Introduction
2. Information-Theoretic Principles and Methods
   - 2.1 Notation
   - 2.2 Conditional Mutual Information
   - 2.3 Net-Predictive Information
   - 2.4 Nonparametric Estimation Algorithm
   - 2.5 Model-based Bootstrap
   - 2.6 A Few Examples
3. Elements of Admissible Machine Learning
   - 3.1 COREml: Algorithmic Interpretability
     - 3.1.1 From Predictive Features to Core Features
     - 3.1.2 InfoGram and L-Features
     - 3.1.3 COREtree: High-dimensional Microarray Data Analysis
     - 3.1.4 COREglm: Breast Cancer Wisconsin Data
   - 3.2 FINEml: Algorithmic Fairness
     - 3.2.1 FINE-ML: Approaches and Limitations
     - 3.2.2 InfoGram and Admissible Feature Selection
     - 3.2.3 FINEtree and ALFA-Test: Financial Industry Applications
     - 3.2.4 Admissible Criminal Justice Risk Assessment
     - 3.2.5 FINEglm and Application to Marketing Campaign
4. Conclusion

Appendix:
- A.1 Proof of Theorem 1
- A.2 Two Cultures of Machine Learning
- A.3 COREtree: Iris Data
- A.4 Revisiting COMPAS Data
- A.5 Fair Housing Act's Disparate Impact Standard
- A.6 Beware of The 'Spurious Bias' Problem
- A.7 The Algorithmic Accountability Act
- A.8 EU's Artificial Intelligence Act
## Category: Fairness, Explainability, and Algorithm Bias
Machine learning (ML) methods are rapidly becoming an essential part of automated decision-making systems that directly affect human lives. While substantial progress has been made toward developing more powerful computational algorithms, the widespread adoption of these technologies still faces several barriers, the biggest being adherence to regulatory requirements without compromising too much accuracy. Naturally, the question arises: how does one systematically build such regulatory-compliant, fair, and trustworthy algorithms? This paper offers new statistical principles and information-theoretic graphical exploratory tools that engineers can use to 'detect, mitigate, and remediate' off-the-shelf ML algorithms, thereby making them admissible under appropriate laws and regulatory scrutiny.
## 1 Introduction
First-generation 'prediction-only' machine learning technology has served the tech and e-commerce industries quite well. However, ML is now rapidly expanding beyond its traditional domains into highly regulated and safety-critical areas, such as healthcare, criminal justice, transportation, financial markets, and national security, where achieving high predictive accuracy is often as important as ensuring regulatory compliance and transparency, both of which are needed for trustworthiness. We thus focus on developing admissible machine learning technology that balances fairness, interpretability, and accuracy in the best manner possible. How does one systematically build such algorithms in a fast and scalable manner? This article introduces new statistical learning theory and information-theoretic graphical exploratory tools to address this question.
Going Beyond 'Pure' Prediction Algorithms. Predictive accuracy is not the be-all and end-all for judging the 'quality' of a machine learning model. Here is a dazzling example: researchers at the Icahn School of Medicine at Mount Sinai in New York City found (Zech et al., 2018, Reardon, 2019) that a deep-learning algorithm, which showed more than 90% accuracy on the x-rays produced at Mount Sinai, performed poorly when tested on data from other institutions. Later it was found that 'the algorithm was also factoring in the odds of a positive finding based on how common pneumonia was at each institution-not something they expected or wanted.' This sort of unreliable and inconsistent performance
can clearly be dangerous. As a result of these safety concerns, and despite lots of hype and hysteria around AI in imaging, only about 30% of radiologists currently use machine learning (ML) in their everyday clinical practice (Allen et al., 2021). To apply machine learning appropriately and safely, especially when human life is at stake, we have to think beyond predictive accuracy. The deployed algorithm needs to be comprehensible (by end-users like doctors, judges, regulators, and researchers) in order to make sure it has learned relevant and admissible features from the data that are meaningful in light of the investigators' domain knowledge. The fact of the matter is, an algorithm that is solely focused on what is learned, without reasoning about how it learned what it has learned, is not intelligent enough. We next expand on this issue using two real data applications.
Admissible ML for Industry. Consider the UCI Credit Card data (discussed in more detail in Sec 3.2.3), collected in October 2005 from an important Taiwan-based bank. We have records of $n = 30{,}000$ cardholders. The data comprise a response variable $Y$ denoting default payment status (Yes = 1, No = 0), along with $p = 23$ predictor variables (e.g., gender, education, age, history of past payment, etc.). The goal is to accurately predict the probability of default given the profile of a particular customer.
On the surface, this seems to be a straightforward classification problem for which we have a large inventory of powerful algorithms. Yeh and Lien (2009) performed an exhaustive comparison of six machine learning methods (logistic regression, K-nearest neighbor, neural net, etc.) and finally selected the neural network model, which attained 83% accuracy on an 80-20 train-test split of the data. However, traditionally built ML models are not deployable unless they are admissible under the financial regulatory constraints¹ (Wall, 2018), which demand that (i) the method should not discriminate against people on the basis of protected features², here gender and age; and (ii) the method should be simple to interpret and transparent (compared to big neural nets or ensemble models like random forest and gradient boosting).
To improve fairness, one may remove the sensitive variables and go back to business as usual by fitting the model on the rest of the features, known as 'fairness through unawareness.' Obviously this is not going to work, because there will be some proxy attributes (e.g., zip code or profession) that share some degree of correlation (information-sharing) with race,
¹ The Equal Credit Opportunity Act (ECOA) is a major federal financial regulation law enacted in 1974.
² https://en.wikipedia.org/wiki/Protected_group
Figure 1: A shallow admissible tree classifier for the UCI credit card data with four decision nodes, which is as accurate as the most complex state-of-the-art ML model.
gender, or age. These proxy variables can then lead to the same unfair results. It is not clear how to define and detect such proxy variables in order to mitigate hidden biases in the data. In fact, in a recent review of algorithmic fairness, Chouldechova and Roth (2020) forthrightly stated:
' But despite the volume and velocity of published work, our understanding of the fundamental questions related to fairness and machine learning remain in its infancy. '
Currently, there exists no systematic method to directly construct an admissible algorithm that can mitigate bias. To quote a real practitioner from a reputed AI industry: 'I ran 40,000 different random forest models with different features and hyper-parameters to search a fair model.' This ad hoc and inefficient strategy could be a significant barrier to efficient large-scale implementation of admissible AI technologies. Fig. 1 shows a fair and shallow tree classifier with four decision nodes, which attains 82.65% accuracy; it was built in a completely automated manner without any hand-crafted manual tuning. Section 2 will introduce the required theory and methods behind our procedure. The simple and transparent anatomy of the final model makes it easy to convey the key drivers of the model: variables PAY_0 and PAY_2³ are the most important indicators of
³ PAY_0 and PAY_2 denote the repayment status of the last two months (-1 = pay duly, 1 = payment delay for one month, 2 = payment delay for two months, and so on).
default. These variables have two key characteristics: they are highly predictive and, at the same time, safe to use, in the sense that they share very little predictive information with the sensitive attributes age and gender; for that reason, we call them admissible features. The model also conveys how the key variables impact credit risk: the simple decision tree shown in Fig. 1 is fairly self-explanatory, and its clarity facilitates an easy explanation of the predictions.
Admissible ML for Science. Legal requirements are not the only reason to build admissible ML. In scientific investigations, it is important to know whether the deployed algorithm helps researchers better understand the phenomenon by refining their 'mental model.' Consider, for example, the prostate cancer data with $p = 6033$ gene expression measurements from 52 tumor and 50 normal specimens. Fig. 2 shows a 95% accurate classification model for the prostate data with only two 'core' driver genes! This compact model is admissible in the sense that it confers the following benefits: (i) it identifies a two-gene signature (composed of gene-1627 and gene-2327) as the top factor associated with prostate cancer; the two genes are jointly overexpressed in the tumor samples, yet, interestingly, they carry very little marginal information (they are not individually differentially expressed, as shown in Fig. 6). Accordingly, traditional linear-model-based analysis will fail to detect this gene-pair as a key biomarker. (ii) The simple decision tree model in Fig. 2 provides a mechanistic understanding and justification of why the algorithm thinks a patient has prostate cancer or not. (iii) Finally, it provides the needed guidance on what to do next by offering control over the system. In particular, a cancer biologist can choose between different diagnosis and treatment plans with the goal of regulating those two oncogenes.
Goals and Organization. The primary goal of this paper is to introduce new fundamental concepts and tools that lay the foundation of admissible machine learning: models that are efficient (enjoy good predictive accuracy), fair (prevent discrimination against minority groups), and interpretable (provide mechanistic understanding) to the best possible extent.
Our statistical learning framework is grounded in the foundational concepts of information theory. The required statistical formalism (nonparametric estimation and inference methods) and information-theoretic principles (entropy, conditional entropy, relative entropy, and conditional mutual information) are introduced in Section 2. A new nonparametric estimation technique for conditional mutual information (CMI) is proposed that scales to large
Figure 2: A two-gene admissible tree classifier for prostate cancer data with $p = 6033$ gene expression measurements on 50 control and 52 cancer patients.
datasets by leveraging the power of machine learning. For statistical inference, we devise a new model-based bootstrap strategy. The method is applied to the problem of conditional independence testing and integrative genomics (breast cancer multi-omics data from The Cancer Genome Atlas). Based on this theoretical foundation, Section 3 lays out the basic elements of admissible machine learning. Section 3.1 focuses on algorithmic interpretability: how can we efficiently search for and design self-explanatory algorithmic models by balancing accuracy and robustness to the best possible extent? Can we do it in a completely model-agnostic manner? Key concepts and tools introduced in this section are core features, the infogram, L-features, net-predictive information, and COREml. The procedure is applied to several real datasets, including high-dimensional microarray gene expression datasets (prostate cancer and SRBCT data), the MONK's problems, and the Wisconsin breast cancer data. Section 3.2 focuses on algorithmic fairness, tackling the challenging problem of designing admissible ML algorithms that are simultaneously efficient, interpretable, and equitable. Several key techniques are introduced in this section: admissible feature selection, ALFA-testing, a graphical risk assessment tool, and FINEml. We illustrate the proposed methods using examples from the criminal justice system (ProPublica's COMPAS recidivism data), the financial services industry (Adult income data, Taiwan credit card data), and a marketing ad campaign. We conclude the paper in Section 4 by reviewing the challenges and opportunities of next-generation admissible ML technologies.
## 2 Information-Theoretic Principles and Methods
The foundation of admissible machine learning relies on information-theoretic principles and nonparametric methods. The key theoretical ideas and results are presented in this section to develop a deeper understanding of the conceptual basis of our new framework.
## 2.1 Notation
Let $Y$ be the response variable taking values in $\{1, \ldots, k\}$, let $\mathbf{X} = (X_1, \ldots, X_p)$ denote a $p$-dimensional feature vector, and let $\mathbf{S} = (S_1, \ldots, S_q)$ be an additional set of $q$ covariates (e.g., a collection of sensitive attributes like race, gender, age, etc.). A variable is called mixed when it can take discrete, continuous, or even categorical values, i.e., completely unrestricted data-types. Throughout, we allow both $\mathbf{X}$ and $\mathbf{S}$ to be mixed. We write $Y \perp\!\!\!\perp X$ to denote the independence of $Y$ and $X$, while the conditional independence of $Y$ and $X$ given $S$ is denoted by $Y \perp\!\!\!\perp X \mid S$. For a continuous random variable, $f$ and $F$ denote the probability density and distribution functions, respectively. For a discrete random variable, the probability mass function is denoted by $p$ with the proper subscript.
## 2.2 Conditional Mutual Information
Our theory starts with an information-theoretic view of conditional dependence. Under conditional independence,
$$Y \perp\!\!\!\perp X \mid S,$$
the following decomposition holds for all $(y, x, s)$:
$$f_{Y, X \mid S}(y, x \mid s) \;=\; f_{Y \mid S}(y \mid s)\, f_{X \mid S}(x \mid s).$$
More than testing independence, often the real interest lies in quantifying the conditional dependence: the average deviation of the ratio
$$\frac{f_{Y, X \mid S}(y, x \mid s)}{f_{Y \mid S}(y \mid s)\, f_{X \mid S}(x \mid s)}, \qquad (2.1)$$
which can be measured by conditional mutual information (Wyner, 1978).
Definition 1. The conditional mutual information (CMI) between $Y$ and $X$ given $S$ is defined as
$$MI(Y, X \mid S) \;=\; \iiint_{y, x, s} \log\left( \frac{f_{Y, X \mid S}(y, x \mid s)}{f_{Y \mid S}(y \mid s)\, f_{X \mid S}(x \mid s)} \right) f_{Y, X, S}(y, x, s) \, dy \, dx \, ds. \qquad (2.2)$$
Two Important Properties. (P1) One of the striking features of CMI is that it captures multivariate non-linear conditional dependencies between the variables in a completely nonparametric manner. (P2) CMI is a necessary and sufficient measure of conditional independence, in the sense that
$$MI(Y, X \mid S) = 0 \;\text{ if and only if }\; Y \perp\!\!\!\perp X \mid S.$$
The conditional independence relation can be described using a graphical model (also known as a Markov network), as shown in the figure below:
Figure 3: Representing conditional independence graphically, where each node is a random variable (or random vector). The edge between $Y$ and $X$ passes through $S$.
## 2.3 Net-Predictive Information
One of the major significances of CMI as a measure of conditional dependence comes from its interpretation as the additional 'information gain' on $Y$ learned through $X$ when we already know $S$. In other words, CMI measures the Net-Predictive Information (NPI) of $X$: the exclusive information content of $X$ for $Y$ beyond what is already subsumed by $S$. To formally arrive at this interpretation, we have to look at CMI from a different angle, by expressing it in terms of conditional entropy. Entropy is a fundamental information-theoretic uncertainty measure: for a random variable $Z$, the entropy $H(Z)$ is defined as $-\mathbb{E}_Z[\log f_Z]$.
Definition 2. The conditional entropy $H(Y \mid S)$ is defined as the expected entropy of $Y \mid S = s$:
$$H(Y \mid S) \;=\; \int_s H(Y \mid S = s)\, dF_S(s),$$
which measures how much uncertainty remains in Y after knowing S , on average.
Theorem 1. For $Y$ discrete and $(\mathbf{X}, \mathbf{S})$ mixed multidimensional random vectors, $MI(Y, \mathbf{X} \mid \mathbf{S})$ can be expressed as the difference between two conditional-entropy statistics:
$$MI(Y, X \mid S) \;=\; H(Y \mid S) \,-\, H(Y \mid S, X). \qquad (2.5)$$
The proof involves some standard algebraic manipulations, and is given in Appendix A.1.
Remark 1 (Uncertainty Reduction). The alternative way of defining CMI through eq. (2.5) allows us to interpret it from a new angle: the conditional mutual information $MI(Y, X \mid S)$ measures the net impact of $X$ in reducing the uncertainty of $Y$, given $S$. This new perspective will prove vital for our subsequent discussions. Note that if $H(Y \mid S, X) = H(Y \mid S)$, then $X$ carries no net-predictive information about $Y$.
## 2.4 Nonparametric Estimation Algorithm
The basic formula (2.2) for conditional mutual information, presented in the previous section, is unfortunately not readily applicable, for two reasons. First, the practical side: in its current form, (2.2) requires estimating $f_{Y, \mathbf{X} \mid \mathbf{S}}$ and $f_{\mathbf{X} \mid \mathbf{S}}$, which could be a herculean task, especially when $\mathbf{X} = (X_1, \ldots, X_p)$ and $\mathbf{S} = (S_1, \ldots, S_q)$ are high-dimensional. Second, the theoretical side: since the triplet $(Y, \mathbf{X}, \mathbf{S})$ is mixed (not all discrete or continuous random vectors), the expression (2.2) is not even a valid representation. The necessary reformulation is given in the next theorem.
Theorem 2. Let $Y$ be a discrete random variable taking values in $\{1, \ldots, k\}$, and let $(\mathbf{X}, \mathbf{S})$ be a mixed pair of random vectors. Then the conditional mutual information can be rewritten as
$$MI(Y, X \mid S) \;=\; \mathbb{E}_{X, S}\left[\, KL\left( p_{Y \mid X, S} \,\|\, p_{Y \mid S} \right) \right], \qquad (2.6)$$
where the Kullback-Leibler (KL) divergence from $p_{Y \mid X = x, S = s}$ to $p_{Y \mid S = s}$ is defined as
$$KL\left( p_{Y \mid x, s} \,\|\, p_{Y \mid s} \right) \;=\; \sum_y p_{Y \mid X, S}(y \mid x, s) \, \log\left( \frac{p_{Y \mid X, S}(y \mid x, s)}{p_{Y \mid S}(y \mid s)} \right). \qquad (2.7)$$
To prove it, first rewrite the dependence-ratio (2.1) solely in terms of the conditional distribution of $Y$:
$$\frac{\Pr(Y = y \mid \mathbf{X} = x, \mathbf{S} = s)}{\Pr(Y = y \mid \mathbf{S} = s)} \;=\; \frac{p_{Y \mid X, S}(y \mid x, s)}{p_{Y \mid S}(y \mid s)}.$$
Next, substitute this into (2.2) and express it as
$$MI(Y, X \mid S) \;=\; \iint_{x, s} \left[ \sum_y p_{Y \mid X, S}(y \mid x, s) \log\left( \frac{p_{Y \mid X, S}(y \mid x, s)}{p_{Y \mid S}(y \mid s)} \right) \right] dF_{X, S}(x, s).$$
Replacing the part inside the square brackets by (2.7) finishes the proof.
Remark 2. CMI measures the information that is shared exclusively between $X$ and $Y$ and is not contained in $S$. Theorem 2 makes this interpretation explicit.
Estimator. The goal is to develop a practical nonparametric algorithm for estimating CMI from $n$ i.i.d. samples $\{(\mathbf{x}_i, y_i, \mathbf{s}_i)\}_{i=1}^n$ that works in large $(n, p, q)$ settings. Theorem 2 immediately leads to the following estimator of (2.6):
$$\widehat{MI}(Y, X \mid S) \;=\; \frac{1}{n} \sum_{i=1}^n \log \frac{\widehat{\Pr}(Y = y_i \mid \mathbf{x}_i, \mathbf{s}_i)}{\widehat{\Pr}(Y = y_i \mid \mathbf{s}_i)}. \qquad (2.8)$$
Algorithm 1. Conditional mutual information estimation: the proposed ML-powered nonparametric estimation method consists of four simple steps.
Step 1. Choose a machine learning classifier (e.g., support vector machines, random forest, gradient boosted trees, deep neural network, etc.), and call it $\text{ML}_0$.
Step 2 . Train the following two models:
$$\begin{aligned} \text{ML.train}_{y \mid x, s} \;&\leftarrow\; \text{ML}_0\left( Y \sim [\mathbf{X}, \mathbf{S}] \right) \\ \text{ML.train}_{y \mid s} \;&\leftarrow\; \text{ML}_0\left( Y \sim \mathbf{S} \right) \end{aligned}$$
Step 3. Extract the conditional probability estimates $\widehat{\Pr}(Y = y_i \mid \mathbf{x}_i, \mathbf{s}_i)$ from ML.train$_{y \mid x, s}$, and $\widehat{\Pr}(Y = y_i \mid \mathbf{s}_i)$ from ML.train$_{y \mid s}$, for $i = 1, \ldots, n$.
Step 4. Return $\widehat{MI}(Y, X \mid S)$ by applying formula (2.8).
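To make the recipe concrete, here is a minimal Python sketch of Algorithm 1, using scikit-learn's gradient boosting as a stand-in for the base learner $\text{ML}_0$; the helper name `estimate_cmi`, the in-sample probability extraction, and the zero-probability guard are our own illustrative choices, not the authors' released code.

```python
# A minimal sketch of Algorithm 1 (formula 2.8), in bits (log base 2).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def estimate_cmi(y, X, S, random_state=0):
    """Estimate MI(Y, X | S) from n i.i.d. samples (y_i, x_i, s_i)."""
    y, X, S = np.asarray(y), np.asarray(X), np.asarray(S)
    XS = np.hstack([X, S])
    # Step 2: train the two models Y ~ [X, S] and Y ~ S.
    m_xs = GradientBoostingClassifier(random_state=random_state).fit(XS, y)
    m_s = GradientBoostingClassifier(random_state=random_state).fit(S, y)
    # Step 3: extract Pr(Y = y_i | x_i, s_i) and Pr(Y = y_i | s_i).
    idx = np.searchsorted(m_xs.classes_, y)   # probability column of each y_i
    rows = np.arange(len(y))
    p_xs = m_xs.predict_proba(XS)[rows, idx]
    p_s = m_s.predict_proba(S)[rows, idx]
    # Step 4: average the log2 probability ratios, guarding against zeros.
    eps = 1e-12
    return float(np.mean(np.log2((p_xs + eps) / (p_s + eps))))
```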
Remark 3. We will be using the gradient boosting machine (gbm) of Friedman (2001) in our numerical examples (obviously, one can use other methods); its convergence behavior is well studied in the literature (Breiman et al., 2004, Zhang, 2004), where it was definitively shown that, under some very general conditions, the empirical risk (probability of misclassification) of the gbm classifier approaches the optimal Bayes risk. This Bayes-risk consistency property carries over to our conditional probability estimates in (2.8), which justifies the good empirical performance of our method on real datasets.
Remark 4. Taking the base of the log in (2.8) to be 2, we get the measure in units of bits; if the natural log (base $e$) is used instead, the measure is in nats. We use $\log_2$ in all our computations.
The proposed style of nonparametric estimation provides some important practical benefits:
- Flexibility: Unlike traditional conditional independence testing procedures (Candes et al., 2018, Berrett et al., 2019), our approach requires neither knowledge of the exact parametric form of the high-dimensional $F_{X_1, \ldots, X_p}$ nor knowledge of the conditional distribution of $\mathbf{X} \mid \mathbf{S}$, which are generally unknown in practice.
- Applicability: (i) Data-type: the method can be safely used for mixed $\mathbf{X}$ and $\mathbf{S}$ (any combination of discrete, continuous, or even categorical variables). (ii) Data-dimension: the method is applicable to high-dimensional $\mathbf{X} = (X_1, \ldots, X_p)$ and $\mathbf{S} = (S_1, \ldots, S_q)$.
- Scalability: Unlike traditional nonparametric methods (such as kernel-density- or $k$-nearest-neighbor-based methods), our procedure scales to big datasets with large $(n, p, q)$.
## 2.5 Model-based Bootstrap
One can even perform statistical inference for our ML-powered conditional mutual information statistic. To test $H_0 \colon Y \perp\!\!\!\perp X \mid S$, we obtain a bootstrap-based p-value by noting that, under the null, $\Pr(Y = y \mid \mathbf{X} = x, \mathbf{S} = s)$ reduces to $\Pr(Y = y \mid \mathbf{S} = s)$.
Algorithm 2 . Model-based Bootstrap : The inference scheme proceeds as follows:
Step 1. Let
$$\widehat{p}_{i \mid s} \;=\; \widehat{\Pr}(Y_i = 1 \mid \mathbf{S} = \mathbf{s}_i), \quad \text{for } i = 1, \ldots, n,$$
as extracted from the already-estimated model ML.train$_{y \mid s}$ (Step 2 of Algorithm 1).
Step 2. Generate the null response vector $\mathbf{Y}^* = (Y_1^*, \ldots, Y_n^*)$ by
$$Y_i^* \;\leftarrow\; \mathrm{Bernoulli}(\widehat{p}_{i \mid s}), \quad \text{for } i = 1, \ldots, n.$$
Step 3. Compute $\widehat{MI}(\mathbf{Y}^*, \mathbf{X} \mid \mathbf{S})$ using Algorithm 1.
Step 4. Repeat the process $B$ times (say, $B = 500$); compute the bootstrap null distribution, and return the p-value.
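The sketch below implements Algorithm 2 for a binary response, reusing `estimate_cmi` from the previous sketch; the wrapper name `cmi_pvalue` is our own, and the binary-$Y$ restriction mirrors the Bernoulli resampling step above.

```python
# A sketch of Algorithm 2 for binary y in {0, 1}; cmi_pvalue is our own
# illustrative wrapper around estimate_cmi from the previous sketch.
def cmi_pvalue(y, X, S, B=500, random_state=0):
    rng = np.random.default_rng(random_state)
    observed = estimate_cmi(y, X, S)
    # Step 1: p_hat_{i|s} = Pr(Y_i = 1 | S = s_i) from the model Y ~ S.
    m_s = GradientBoostingClassifier(random_state=random_state).fit(S, y)
    p1 = m_s.predict_proba(S)[:, 1]
    # Steps 2-4: draw Y* ~ Bernoulli(p_hat), recompute the statistic B times,
    # and return the bootstrap p-value.
    null_stats = [estimate_cmi(rng.binomial(1, p1), X, S) for _ in range(B)]
    return float(np.mean(np.array(null_stats) >= observed))
```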
Remark 5. A parametric version of this inference scheme was proposed by Rosenbaum (1984) in the context of observational causal studies. His scheme resamples $Y$ by estimating $\Pr(Y = 1 \mid \mathbf{S})$ using a logistic regression model; the procedure was called the conditional permutation test.
## 2.6 A Few Examples
Example 1. Model: $X \sim \mathrm{Bernoulli}(0.5)$; $S \sim \mathrm{Bernoulli}(0.5)$; $Y = X$ when $S = 0$, and $Y = 1 - X$ when $S = 1$. In this case, it is easy to see that the true $MI(Y, X \mid S) = 1$. We simulated $n = 500$ i.i.d. $(x_i, y_i, s_i)$ from this model and computed our estimate using (2.8), repeating the process 50 times to assess the variability of the estimate. Our estimate is
$$\widehat{MI}(Y, X \mid S) \;=\; 0.994 \pm 0.00234,$$
with the (avg.) p-value being almost zero. We repeated the same experiment making $Y \sim \mathrm{Bernoulli}(0.5)$ (i.e., now the true $MI(Y, X \mid S) = 0$), which yields
$$\widehat{MI}(Y, X \mid S) \;=\; 0.0022 \pm 0.0017,$$
with the (avg.) p-value being 0.820.
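A quick simulation in the spirit of Example 1 can be run with the earlier helpers; this is a sketch under the same model, with the seed as our own choice.

```python
# Simulating Example 1: the true MI(Y, X | S) is 1 bit.
rng = np.random.default_rng(7)
n = 500
x = rng.binomial(1, 0.5, n)
s = rng.binomial(1, 0.5, n)
y = np.where(s == 0, x, 1 - x)          # Y = X if S = 0, else Y = 1 - X
print(estimate_cmi(y, x.reshape(-1, 1), s.reshape(-1, 1)))  # close to 1
```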
Example 2. Integrative Genomics. The wide availability of multi-omics data has revolutionized the field of biology. It is a general consensus among practitioners that combining individual omics data sets (mRNA, microRNA, CNV, DNA methylation, etc.) leads to improved prediction. However, before undertaking such an analysis, it is worthwhile to check how much additional information we gain from a combined analysis compared to a single-platform one. To illustrate this point, we use the breast cancer multi-omics data that is part of The Cancer Genome Atlas (TCGA, http://cancergenome.nih.gov/). It contains three kinds of omics data sets, miRNA, mRNA, and proteomics, from three kinds of breast cancer samples ($n = 150$): Basal, Her2, and LumA. $\mathbf{X}_1$ is the $150 \times 184$ matrix of miRNA, $\mathbf{X}_2$ the $150 \times 200$ matrix of mRNA, and $\mathbf{X}_3$ the $150 \times 142$ matrix of proteomics.
$$\begin{aligned} MI(Y, \mathbf{X}_2 \mid \mathbf{X}_1) &= 0.013; \quad p\text{-value} = 0.356 \\ MI(Y, \mathbf{X}_3 \mid \mathbf{X}_1) &= 0.0186; \quad p\text{-value} = 0.235 \\ MI\left(Y, \{\mathbf{X}_2, \mathbf{X}_3\} \mid \mathbf{X}_1\right) &= 0.0192; \quad p\text{-value} = 0.501. \end{aligned}$$
These results show that neither mRNA nor proteomics adds any substantial information beyond what is already captured by the miRNAs.
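In code, the multi-omics check amounts to conditioning on an entire omics block. A hypothetical sketch, assuming the TCGA matrices `X1` ($150 \times 184$ miRNA) and `X2` ($150 \times 200$ mRNA) and the subtype labels `y` are already loaded as arrays:

```python
# Does mRNA add information beyond miRNA? The conditioning set S is the
# whole miRNA block X1; variable names are placeholders for the loaded data.
mi_mrna_given_mirna = estimate_cmi(y, X2, X1)
```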
## 3 Elements of Admissible Machine Learning
How does one design admissible machine learning algorithms with enhanced efficiency, interpretability, and equity?⁴ A systematic pipeline for developing such admissible ML models is laid out in this section, grounded in the earlier information-theoretic concepts and nonparametric modeling ideas.
## 3.1 COREml: Algorithmic Interpretability
## 3.1.1 From Predictive Features to Core Features
One of the first tasks of any predictive modeling exercise is to identify the key drivers affecting the response $Y$. Here we discuss a new information-theoretic graphical tool to quickly spot the 'core' decision-making variables, which are vital for building interpretable models. One of the advantages of this method is that it works even in the presence of correlated features, as the following example illustrates; also see Appendix A.7.
Example 3. Correlated features. $Y \sim \mathrm{Bernoulli}(\pi(\mathbf{x}))$, where $\pi(\mathbf{x}) = 1/(1 + e^{-\mathcal{M}(\mathbf{x})})$ and
$$\mathcal{M}(\mathbf{x}) \;=\; 3 \sin(X_1) \,-\, 2 X_2. \qquad (3.1)$$
⁴ However, the general premise of admissible ML is extremely broad and flexible, and will continue to evolve with regulatory requirements to ensure the rapid development of trustworthy algorithmic methods.
Let $X_1, \ldots, X_{p-1}$ be i.i.d. $\mathcal{N}(0, 1)$ random variables, and
$$X_p \;=\; 2 X_1 - X_2 + \epsilon, \quad \text{where } \epsilon \sim \mathcal{N}(0, 2), \qquad (3.2)$$
which means $X_p$ has no additional predictive value beyond what is already captured by the core variables $X_1$ and $X_2$. Another way of saying this is that $X_p$ is redundant: the conditional mutual information between $Y$ and $X_p$ given $\{X_1, X_2\}$ is zero,
$$MI\left( Y, X_p \mid \{X_1, X_2\} \right) \;=\; 0.$$
The top of Fig. 4 depicts this graphically. The following nomenclature will be useful for discussing our method:
$$\begin{aligned} \text{CoreSet} &= \{X_1, X_2\} \\ \text{Imitator} &= \{X_p\} \\ \text{Probes} &= \{X_3, \ldots, X_{p-1}\}. \end{aligned}$$
Note that the imitator $X_p$ is highly predictive of $Y$ due to its association with the core variables. We simulated $n = 500$ samples with $p = 50$. For each feature we compute
$$R_j \;=\; \text{overall relevance score of the } j\text{th predictor}, \quad j = 1, \ldots, p. \qquad (3.3)$$
The bottom-left corner of Fig. 4 shows the relative importance scores (scaled between 0 and 1) for the top seven features using the gbm algorithm⁵, which correctly finds $\{X_1, X_2, X_{50}\}$ as the important predictors. However, it is important to recognize that this modus operandi, irrespective of the ML algorithm, cannot distinguish the 'fake imitator' $X_{50}$ from the real ones $X_1$ and $X_2$. To enable a refined characterization of the variables, we have to 'add more dimension' to the classical machine learning feature importance tools.
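For reproducibility, here is a sketch of Example 3's data-generating process (our own simulation code; we read $\mathcal{N}(0, 2)$ as variance 2, which is an assumption).

```python
# Simulating Example 3: two core features, an imitator X_p, and 47 probes.
rng = np.random.default_rng(1)
n, p = 500, 50
X = rng.standard_normal((n, p))
X[:, -1] = 2 * X[:, 0] - X[:, 1] + rng.normal(0, np.sqrt(2), n)  # imitator X_p
eta = 3 * np.sin(X[:, 0]) - 2 * X[:, 1]                          # M(x)
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))
```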
## 3.1.2 InfoGram and L-Features
We introduce a tool for identifying core admissible features, based on the concept of the net-predictive information (NPI) of a feature $X_j$.
Definition 3. The net-predictive (conditional) information of $X_j$ given all the rest of the variables $\mathbf{X}_{-j} = \{X_1, \ldots, X_p\} \setminus \{X_j\}$ is defined in terms of conditional mutual information:
$$C_j \;=\; MI(Y, X_j \mid \mathbf{X}_{-j}), \quad \text{for } j = 1, \ldots, p. \qquad (3.4)$$
⁵ Based on whether a particular variable was selected to split on during tree learning, and how much it improves the Gini impurity or information gain.
Figure 4: Top: the graphical representation of Example 3. Bottom-left: the gbm feature-importance scores for the top seven features; the rest are almost zero and thus not shown. Bottom-right: the infogram identifies the core variables $\{X_1, X_2\}$ and separates them from $X_{50}$. The L-shaped area of width 0.1 is highlighted in red; it contains inadmissible variables with either low relevance or high redundancy.
For ease of interpretation, we standardize $C_j$ by $\max_j C_j$ to bring it between 0 and 1. The infogram, an abbreviation of 'information diagram,' is a scatter plot of $\{(R_j, C_j)\}_{j=1}^p$ over the unit square $[0, 1]^2$; see the bottom-right corner of Fig. 4.
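A sketch of how the infogram coordinates could be computed with the earlier `estimate_cmi` helper; the function name, and the use of scikit-learn's impurity-based importances for $R_j$, are our own assumptions about one reasonable instantiation.

```python
# Infogram coordinates: relevance R_j from a fitted gbm, and net-predictive
# information C_j = MI(Y, X_j | X_-j) via Algorithm 1, both rescaled to [0, 1].
def infogram_coordinates(y, X):
    gbm = GradientBoostingClassifier(random_state=0).fit(X, y)
    R = gbm.feature_importances_ / gbm.feature_importances_.max()
    C = np.array([estimate_cmi(y, X[:, [j]], np.delete(X, j, axis=1))
                  for j in range(X.shape[1])])
    C = np.clip(C, 0, None)
    C = C / C.max()
    return R, C   # plot C_j against R_j; L-features have R_j or C_j below ~0.1
```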
L-Features. The highlighted L-shaped area contains features that are either irrelevant or redundant. For example, notice the position of $X_{50}$ in the plot, indicating that it is highly predictive but contains no new complementary information for the response. Clearly, the opposite scenario is also possible: a variable may carry valuable net individual information for $Y$ despite being only moderately relevant (not ranked among the top few); see Sec. 3.1.4.
Remark 6 (Predictive Features vs. CoreSet). Recall that in Example 3 the irrelevant feature $X_{50}$ is strongly correlated with the relevant ones $X_1$ and $X_2$ through (3.2), thus violating the so-called 'irrepresentable condition'; for more details, see the bibliographic notes section of Hastie et al. (2015, p. 311). In this scenario (which may easily arise in practice), it is hard to recover the 'important' variables using traditional variable selection methods. The bottom line is: identifying the CoreSet is a much more difficult undertaking than merely selecting the most predictive variables. The goal of the infogram is to facilitate this process of discovering the key variables that drive the outcome.
Remark 7 (CoreML). Two additional comments before diving into real data examples. First, machine learning models based on 'core' features (CoreML) show improved stability, especially when there is considerable correlation among the features.⁶ This will be demonstrated in the next two sections. Second, our approach is not tied to any particular machine learning method; it is completely model-agnostic and can be integrated with any algorithm: choose a specific classifier $\text{ML}_0$ and compute (3.3) and (3.4) to generate the associated infogram.
Example 4. MONK's problems (Thrun et al., 1991). This is a collection of three binary artificial classification problems (MONK-1, MONK-2, and MONK-3) with $p = 6$ attributes, available in the UCI Machine Learning Repository. As shown in Fig. 5, the infogram selects $\{X_1, X_2, X_5\}$ as the core features for the MONK-1 data and $\{X_2, X_5\}$ for the MONK-3 data. MONK-2 is an idiosyncratic case, where all six features turn out to be core! This indicates the possibly complex nature of the classification rule for the MONK-2 problem.
## 3.1.3 COREtree: High-dimensional Microarray Data Analysis
How does one distill a compact (parsimonious) ML model by balancing accuracy, robustness, and interpretability to the best possible extent? To answer this, we introduce COREtree, whose
⁶ Numerous studies have found that many current methods, like partial dependence plots, LIME, and SHAP, can be highly misleading, particularly when there is strong dependence among features.
Figure 5: Infograms of the MONK's problems. CoreSets are denoted in blue.
construction is guided by the infogram. The methodology is illustrated using two real datasets: the prostate cancer and SRBCT tumor data. The main findings are striking: they show how one can systematically search for, and construct, robust and interpretable shallow decision-tree models (often with just two or three genes) for noisy high-dimensional microarray datasets that are as powerful as the most elaborate and complex machine learning methods.
Example 5. Prostate cancer gene expression data. The data consist of $p = 6033$ gene expression measurements on 50 control and 52 prostate cancer patients; they are available at https://web.stanford.edu/~hastie/CASI_files/DATA/prostate.html. Our analysis is summarized below.
Step 1. Identifying CoreGenes. The GBM-selected top 50 genes are shown in Fig. 6. We generate the infogram⁷ of these 50 variables (displayed in the top-right corner), which identifies five core genes: $\{1627, 2327, 77, 1511, 1322\}$.
Step 2. Rank-transform: Robustness and Interpretability. Instead of operating directly on the gene expression values, we transform them into their ranks. Let $\{x_{j1}, \ldots, x_{jn}\}$ be the measurements on the $j$th gene with empirical cdf $\widetilde{F}_j$. Convert the raw $x_{ji}$ to $u_{ji}$ by
$$u_{ji} \;=\; \widetilde{F}_j(x_{ji}), \quad i = 1, \ldots, n,$$
and work with the resulting $\mathbf{U}_{n \times p}$ matrix instead of the original $\mathbf{X}_{n \times p}$. We do this transformation for two reasons: first, to robustify, since gene expressions are known to be inherently noisy; second, to make the measurements unit-free, since raw gene expression values depend on the type
⁷ To reduce unnecessary clutter, we display the infogram using the top 50 features, since the rest of the genes would be cramped inside the nonessential L-zone anyway.
Figure 6: Prostate data analysis. Top panel: the gbm feature-importance graph, along with the infogram for the top 50 genes. Bottom-left: the scatter plot of gene 1627 vs. gene 2327. For clarity, we plot them in the quantile domain $(u_i, v_i)$, where $u = \mathrm{rank}(X[, 1627])/n$ and $v = \mathrm{rank}(X[, 2327])/n$. The black dots denote control samples with $y = 0$ and the red triangles denote prostate cancer samples with $y = 1$. Bottom-right: the estimated CoreTree with just two decision nodes, which is good enough to be 95% accurate.
of preprocessing, thus carries much less scientific meaning. On the other hand, percentiles are much more easily interpretable to convey 'how overexpressed a gene is.'
Step 3. Shallow Robust Tree. We build a single decision tree using the infogram-selected coregenes. This is displayed in the bottom-right panel of Fig. 6. Interestingly, the CoreTree retained only two genes {1627, 2327}, whose scatter plot (in the rank-transform domain) is shown in the bottom-left corner of Fig. 6. A simple eyeball estimate of the discrimination surfaces is shown in bold (black and red) lines, which closely matches the decision-tree rule. It is quite remarkable that we have reduced the original 6033-dimensional problem to a simple bivariate two-sample one, just by wisely selecting the features based on the infogram.
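A minimal sketch of this step, assuming the expression matrix X (genes in columns), the 0/1 labels y, and rpart as the tree learner (all object names are illustrative, not the paper's code):

```
library(rpart)

# Sketch only. Assumes: X is the n x 6033 expression matrix, y the 0/1 labels.
n <- nrow(X)
core <- data.frame(
  g1627 = rank(X[, 1627]) / n,   # quantile (rank) transform, as in Fig. 6
  g2327 = rank(X[, 2327]) / n,
  y     = factor(y)
)

# a deliberately shallow tree: at most two decision-nodes
fit <- rpart(y ~ g1627 + g2327, data = core,
             control = rpart.control(maxdepth = 2, cp = 0))
```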
Step 4. Stability. Note that the tree we build is based only on the infogram-selected core features. These features have low redundancy and high relevance, which lends extraordinary stability (over different runs on the same dataset) to the decision tree, a highly desirable characteristic.
Step 5. Accuracy. The accuracy of our single decision tree (on a randomly selected 20% test set, averaged over 100 times) is more than 95%. On the other hand, the full-data gbm (with p = 6033 genes) is only 75% accurate. A huge simplification of the model architecture with a significant gain in predictive performance!
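A sketch of this repeated-split accuracy estimate, reusing the core data frame from the sketch above:

```
# Sketch of the accuracy estimate: 100 random 80/20 train/test splits.
set.seed(1)
acc <- replicate(100, {
  test <- sample(n, round(0.2 * n))
  tr   <- rpart(y ~ g1627 + g2327, data = core[-test, ],
                control = rpart.control(maxdepth = 2, cp = 0))
  mean(predict(tr, core[test, ], type = "class") == core$y[test])
})
mean(acc)   # the text reports an average above 95%
```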
Step 6. Gene Hunting: Beyond Marginal Screening. We compute the two-sample t-test statistic for all p = 6033 genes and rank them according to their absolute values (the gene with the largest absolute t-statistic gets rank 1, i.e., the most differentially expressed gene). The t-scores for the coregenes, along with their p-values and ranks, are:
$$\begin{aligned} |t_{1627}| &= 0.15; \quad p\text{-value} = 0.88; \quad \text{rank} = 5383. \\ |t_{2327}| &= 1.40; \quad p\text{-value} = 0.17; \quad \text{rank} = 1228. \end{aligned}$$
Thus, it is hopeless to find coregenes by any marginal-screening method: they are too weak marginally (in isolation), yet jointly they form an extremely strong predictor. The good news is that our approach can find these multivariate hidden gems in a completely nonparametric fashion.
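The marginal ranking above can be reproduced along the following lines; a sketch in base R (X and y as before):

```
# Marginal screening: two-sample t-statistic for each of the p genes.
tstat <- apply(X, 2, function(g) t.test(g[y == 1], g[y == 0])$statistic)
marg_rank <- rank(-abs(tstat))     # rank 1 = most differentially expressed
marg_rank[c(1627, 2327)]           # the coregenes land far down this list
```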
Step 7. Lasso Analysis and Results. We have used the glmnet R-package. Lasso with $\lambda_{\min}$ (minimum cross-validation error) selects 70 genes, whereas $\lambda_{1\mathrm{se}}$ (the largest lambda such that the error is within 1 standard error of the minimum) selects 60 genes. The main findings are:
Figure 7: SRBCT data analysis. Top-left: GBM-feature importance plot; top 50 genes are shown. Top-right: The associated infogram. Bottom panel: The estimated coretree with just three decision nodes.
(i) The coregenes {1627, 2327} were never selected, probably because they are marginally very weak, and their significant interaction is not detectable by standard lasso.
(ii) The accuracy of lasso with $\lambda_{\min}$ is around 78% (each time we randomly selected 85% of the data for training, computed $\lambda_{\mathrm{cv}}$ for making predictions, and averaged over 100 runs).
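A minimal sketch of this lasso analysis (X and y as before; glmnet as named in the text):

```
library(glmnet)

# Sketch of the lasso analysis of Step 7.
cvfit <- cv.glmnet(X, y, family = "binomial")
sum(coef(cvfit, s = "lambda.min")[-1] != 0)   # genes kept at lambda.min
sum(coef(cvfit, s = "lambda.1se")[-1] != 0)   # genes kept at lambda.1se
```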
Step 8. Explainability. The final 'two-gene model' is so simple and elegant that it can be easily communicated to doctors and medical practitioners: a patient with overexpressed gene 1627 and gene 2327 has a higher risk of getting prostate cancer. Biologists can use these two genes as robust prognostic markers for decision-making (or for recommending the proper drug). It is hard to imagine a more accurate algorithm that is at least as compact as the 'two-gene model.' We should not forget that the success behind this dramatic model-reduction hinges on discovering multivariate coregenes, which: (i) help us gain insight into the biological mechanism [clarifying 'who' and 'how'], and (ii) provide a simple explanation of the predictions [justifying 'why'].
Example 6. SRBCT Gene Expression Data. This is a microarray experiment on Small Round Blue Cell Tumors (SRBCT) taken from a childhood cancer study. It contains information on p = 2,308 genes for 63 training samples and 25 test samples. Among the n = 63 tumor examples, 8 are Burkitt Lymphoma (BL), 23 are Ewing Sarcoma (EWS), 12 are neuroblastoma (NB), and 20 are rhabdomyosarcoma (RMS). The dataset is available in the plsgenomics R-package. The top panel of Fig. 7 shows the infogram, which identifies five coregenes {123, 742, 1954, 246, 2050}. The associated coretree with only three decision-nodes is shown in the bottom panel, which accurately classifies 95% of the test cases. In addition, it enjoys all the advantages that were ascribed to the prostate data analysis; we do not repeat them here.
Remark 8. We end this section with a general remark: when applying machine learning algorithms in scientific applications, it is of the utmost importance to design models that can clearly explain the 'why and how' behind their decision-making process. We should not forget that scientists mainly use machine learning as a tool to gain a mechanistic understanding, so that they can judiciously intervene in and control the system. Sticking with the old way of building inscrutable black-box predictive models will severely slow down the adoption of ML methods in scientific disciplines like medicine and healthcare.
## 3.1.4 COREglm: Breast Cancer Wisconsin Data
Example 7. Wisconsin Breast Cancer Data. The Breast Cancer dataset is available in the UCI machine learning repository. It contains n = 569 malignant and benign tumor cell
Figure 8: Breast Cancer Wisconsin Data. The infogram reveals where the crux of the information is hidden. The infogram-guided admissible decision tree is a compact yet accurate classifier.
samples. The task is to build an admissible (interpretable and accurate) ML classifier based on p = 31 features extracted from cell nuclei images.
Step 1. Infogram Construction. Fig. 8 displays the infogram, which provides a quick understanding of the phenomenon by revealing its 'core.' Noteworthy points: (i) There are three highly predictive inadmissible features (green bubbles in the plot: perimeter_worst, area_worst, and concave_points_worst), which have large overall predictive importance but almost zero net individual contribution. We called these variables 'Imitators' in Sec. 3.1.1. (ii) Three of the four 'core' admissible features (texture_worst, concave_points_mean, and texture_mean) are not among the top features based on the usual predictive information, yet they contain a considerable amount of new exclusive information (net-predictive information) that is useful for separating malignant and benign tumor cells. In simple terms, the infogram helps us track down where the 'core' discriminatory information is hidden.
Step 2. Core-Scatter plot. The right panel of Fig. 8 shows the scatter plot of the top two core features and how they separate the malignant and benign tumor cells.
Step 3. Infogram-assisted CoreGLM model. The simplest possible model one could build is a logistic regression based on those four admissible features. Interestingly, Akaike information criterion (AIC) based model selection further drops the variable texture_mean, which is hardly surprising considering that it has the least net and total information among the four admissible core features. The final logistic regression model with three core variables is displayed below (output of the glm R-function):
```
#COREglm Model: UCI breast cancer data
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -29.42361 3.85131 -7.640 2.17e-14 ***
concave_points_mean 96.48880 16.11261 5.988 2.12e-09 ***
radius_worst 0.99767 0.16792 5.941 2.83e-09 ***
texture_worst 0.30451 0.05302 5.744 9.27e-09 ***
```
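For reference, the fitting call behind such output is an ordinary glm; a hedged sketch, assuming a hypothetical data frame bc holding the UCI features and a 0/1 label diagnosis:

```
# Sketch of the CoreGLM fit; 'bc' and 'diagnosis' are illustrative names.
fit0 <- glm(diagnosis ~ concave_points_mean + radius_worst +
              texture_worst + texture_mean,
            family = binomial, data = bc)
fit  <- step(fit0, trace = 0)   # AIC-based selection drops texture_mean
summary(fit)
```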
This simple parametric model achieves a competitive accuracy of 96.50% (on a 15% test set, averaged over 50 trials). Compare this with full-fledged big ML models (like gbm, random forest, etc.), which attain accuracy in the range of 95-97%. This example again shows how the infogram can guide the design of a highly transparent and interpretable CoreGLM model with a handful of variables, one that is as powerful as complex black-box ML methods.
Remark 9 (Integrated statistical modeling culture). One should bear in mind that the process by which we arrived at these simple admissible models actually utilizes the power of modern machine learning, which is needed to estimate the formula (3.4) of Definition 3, following the theory laid out in Section 2. For more discussion on this topic, see Appendix A.6 and Mukhopadhyay and Wang (2020). In short, we have developed a process for constructing an admissible (explainable and efficient) ML procedure starting from a 'pure prediction' algorithm.
## 3.2 FINEml: Algorithmic Fairness
ML-systems are increasingly used for automated decision-making in various high-stakes domains such as credit scoring, employment screening, insurance eligibility, medical diagnosis, criminal justice sentencing, and other regulated areas. To ensure that we are making responsible decisions using such algorithms, we have to deploy admissible models that can balance Fairness, INterpretability, and Efficiency (FINE) to the best possible extent. This section discusses principles and tools for designing such FINE-algorithms.
## 3.2.1 FINE-ML: Approaches and Limitations
Imagine that a machine learning algorithm is used by a bank to accurately predict whether to approve or deny a loan application based on the probability of default. This ML-based
risk-assessing tool has access to the following historical data:
- Y ∈ {0, 1}: loan status variable, 1 if the loan was approved and 0 if denied.
- S: collection of protected attributes {gender, marital status, age, race}.
- X: feature matrix {income, loan amount, education, credit history, zip code}.
To automate the loan-eligibility decision-making process, the bank wants to develop an accurate classifier that will not discriminate among applicants on the basis of their protected features. Naturally, the question is: how do we design ML-systems that are accurate and at the same time provide safeguards against algorithmic discrimination?
Approach 1 (Non-constructive): We can construct a myriad of ML models by changing and tuning different hyper-parameters, base learners, etc. One can keep building different models until one finds a perfect one that avoids adverse legal and regulatory issues. There are at least two problems with this 'try until you get it right' approach. First, it is non-constructive: the whole process gives zero guidance on how to rectify the algorithm to make it less biased. Second, there is no single definition of fairness; more than twenty different definitions have been proposed over the last few years (Narayanan, 2018). The troubling part is that these fairness measures are mutually incompatible and cannot all be satisfied simultaneously (Kleinberg, 2018); see Appendix A.4. Hence this laborious process could end up being a wild-goose chase, resulting in a huge waste of computation.
Approach 2 (Constructive): Here we seek to construct ML models that, by design, mitigate bias and discrimination. To execute this task successfully, we must first identify and remove proxy variables (e.g., zip code) from the learning set, which prevent a classification algorithm from achieving the desired fairness. But how do we define a proper mathematical criterion to detect those surrogate variables? Can we develop easily interpretable graphical exploratory tools to systematically uncover those problematic variables? If we succeed, then ML developers can use them as a data-filtration tool to quickly spot and remove potential sources of bias in the pre-modeling (data-curation) stage, mitigating fairness issues in the downstream analysis.
Figure 9: Infogram maps variables in a two-dimensional (effectiveness vs. safety) diagram. It is a pre-modeling nonparametric exploratory tool for admissible feature selection. The infogram is interpreted based on graphical (conditional) independence structure. In real problems, all variables will have some degree of correlation with the protected attributes. The important part is to quantify the 'degree,' which is measured through eq. (3.6), as indicated by the varying thickness of the edges (bold to dotted). Ultimately, the purpose of this graphical diagnostic tool is to provide the necessary guardrails to construct an appropriate learning algorithm that can retain as much of the predictive accuracy as possible while defending against unforeseen biases: a tool for risk-benefit analysis.
## 3.2.2 InfoGram and Admissible Feature Selection
We offer a diagnostic tool for identifying admissible features that are predictive and safe. Before going any further, it is instructive to formally define what we mean by 'safe.'
Definition 4 (Safety-index and Inadmissibility) . Define the safety-index for variable X j as
$$F_j \,=\, MI\big(Y, X_j \,\big|\, \{S_1, \ldots, S_q\}\big) \tag{3.6}$$
This quantifies how much extra information $X_j$ carries about Y that is not acquired through the sensitive variables $S = (S_1, \ldots, S_q)$. For interpretation purposes, we standardize $F_j$ between zero and one by dividing by $\max_j F_j$. Variables with 'small' F-values (F stands for fairness) will be called inadmissible, as they possess little or no informational value beyond their use as a dummy for protected characteristics.
Construction. In the context of fairness, we construct the infogram by plotting $\{(R_j, F_j)\}_{j=1}^{p}$, where recall that $R_j$ denotes the relevance score (3.3) for $X_j$. The goal of this graphical tool is to assist in identifying admissible features that have little or no information-overlap with the sensitive attributes S, yet are reasonably predictive of Y.

Interpretation. Fig. 9 displays an infogram with six covariates. The L-shaped highlighted region contains variables that are either inadmissible (the horizontal slice of L) or inadequate (the vertical slice of L) for prediction. The complementary set $L^c$ comprises the desired admissible features. Focus on variables A and B: both have the same predictive power, but gained it in completely different manners. Variable B gathered its information about Y entirely through the protected features (verify this from the graphical representation of B), and is thus inadmissible. On the other hand, variable A carries direct informational value, having no connection with the prohibited S, and is thus totally admissible. Unfortunately, reality is usually more complex than this clear-cut black-and-white A-B situation. The fact of the matter is: admissibility (or fairness, per se) is not a yes/no concept but a matter of degree,8 as explained in the bottom two rows of Fig. 9 using variables C to F.
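A hedged sketch of the construction, in which a discretized plug-in CMI estimate from the infotheo package stands in for the nonparametric estimator of Section 2, and plain MI(Y, X_j) stands in for the relevance score (3.3); all object names are illustrative:

```
library(infotheo)   # discretize(), mutinformation(), condinformation()

# Illustrative only: crude plug-in estimates, not the paper's estimator.
# y: class label (factor); X: numeric features; S: protected attributes,
# assumed already discrete (or pre-discretized).
infogram_scores <- function(y, X, S, nbins = 10) {
  Xd <- discretize(X, nbins = nbins)
  Fj <- sapply(Xd, function(xj) condinformation(y, xj, S))  # safety-index
  Rj <- sapply(Xd, function(xj) mutinformation(y, xj))      # relevance proxy
  data.frame(feature = names(X), R = Rj / max(Rj), F = Fj / max(Fj))
}

# L-features: reasonably relevant, but with a small normalized safety-index
# scores <- infogram_scores(y, X, S)
# subset(scores, F < 0.1 & R > 0.1)
```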
Remark 10. The graphical exploratory nature of the infogram makes the whole learning process much more transparent, interactive, and human-centered.
Legal doctrine. Note that in our framework the protected variables are used only in the pre-deployment phase, to determine which other (admissible) attributes to include in the
8 'Zero bias' is an illusion. All models are biased (to a different degree), but some are admissible. The real question is how to methodically construct those admissible ones from possibly biased data.
algorithm to mitigate unforeseen downstream bias, which is completely legal (Hellman, 2020). It is also advisable, once inadmissible variables are identified using an infogram, not to throw them out blindly (especially the highly predictive ones, such as feature B in Fig. 9) without consulting domain experts: including some of them may not necessarily violate the law; ultimately, it is up to the policymakers and judiciary to determine their appropriateness (legal permissibility) in the given context. Our job as statisticians is to discover those hidden inadmissible L-features (preferably in a fully data-driven and automated manner) and raise a red flag for further investigation.
## 3.2.3 FINEtree and ALFA-Test: Financial Industry Applications
Example 8. The Census Income Data. The dataset is extracted from the 1994 United States Census Bureau database and is available in the UCI Machine Learning Repository. It is also known as the 'Adult Income' dataset; it contains n = 45,222 records involving personal details such as yearly income (whether it exceeds $50,000 or not), education level, age, gender, marital status, occupation, etc. The classification task is to determine whether a person makes over $50k per year based on a set of 14 attributes, of which four are protected:
$$S = \{\text{Age}, \text{Gender}, \text{Race}, \text{Marital.Status}\}.$$
Step 1. Trust in data. Is there any evidence of built-in bias in the data? That is to say, was a 'significant' portion of the decision-making (whether Y is greater or less than $50k per year) influenced by the sensitive attributes S beyond what is already captured by the other covariates X? One may be tempted to use MI(Y, S | X) as a measure for assessing fairness. But we need to be careful while interpreting the value of MI(Y, S | X). It can take a 'small' value for two reasons. First, a genuine case of fair decision-making, where individuals with similar x received a similar outcome irrespective of their age, gender, and other protected characteristics; see Appendix A.4 for one such example. Second, a collusion between X and S, in the sense that X contains some proxies of S which reduce its effect-size, leading one to falsely declare a decision-rule fair when it is not.
Remark 11 (Shielding Effect). The presence of a highly correlated surrogate variable in the conditioning set drastically reduces the size of the CMI statistic. We call this contraction of effect-size in the presence of a proxy feature the 'shielding effect.' To guard against this effect-distortion phenomenon, we first have to identify the admissible features from the infogram.
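The shielding effect is easy to reproduce in simulation; a toy sketch (synthetic data, illustrative plug-in CMI estimates via infotheo):

```
library(infotheo)

# Toy reproduction of the shielding effect (all quantities synthetic).
set.seed(7)
n <- 5000
S     <- rbinom(n, 1, 0.5)                       # protected attribute
proxy <- ifelse(runif(n) < 0.9, S, 1 - S)        # near-copy of S inside X
Xa    <- rbinom(n, 1, 0.5)                       # a genuinely admissible feature
Y     <- rbinom(n, 1, plogis(-1 + 2 * S + 1.5 * Xa))  # outcome encodes bias

condinformation(Y, S, data.frame(Xa, proxy))  # small: the proxy shields S
condinformation(Y, S, Xa)                     # larger: the bias is exposed
```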
Step 2. Infogram to identify inadmissible proxy features. The infogram, shown in the left panel of Fig. 10, finds four admissible features
$$X_A \,=\, \{\text{Capital gain}, \text{Capital loss}, \text{Occupation}, \text{Education}\}.$$
They share very little information with S yet are highly predictive. In other words, they enjoy high relevance and a high safety-index. Next, we also see that there is a feature appearing at the lower-right corner,
$$X_R \,=\, \{\text{Relationship}\},$$
which is the prime source of bias; the subscript 'R' stands for risky. The variable relationship represents the respondent's role in the family, i.e., whether the breadwinner is the husband, wife, child, or another relative.
Remark 12. Since X_R is highly predictive, most unguided 'pure prediction' ML algorithms will include it in their models, even though it is quite unsafe. Admissible ML models should avoid using variables like relationship to reduce unwanted bias.9 A careful examination reveals that there could be some unintended association between relationship and other protected attributes due to social constructs. Without a formal method, it is a hopeless task (especially for practitioners and policymakers; see Lakkaraju and Bastani 2020, Sec. 5.2) to identify these innocent-looking proxy variables in a scalable and automated way.
Step 3. ALFA-test and encoded bias. We can construct an honest fairness-assessment metric by conditioning the CMI on X_A (instead of X):
$$\widehat{MI}(Y, S \,|\, X_A) \,=\, 0.13, \quad \text{with } p\text{-value} \approx 0. \tag{3.7}$$
This strongly suggests that historical bias or discrimination is encoded in the data. Our approach not only quantifies bias but also offers ways to mitigate it and create an admissible prediction rule; we discuss this in Step 4. The preceding discussion necessitates the following new general class of fairness metrics.
Definition 5 (Admissible Fairness Criterion) . To check whether an algorithmic decision is fair given the sensitive attributes and the set of admissible features (identified from infogram), define AdmissibLe FAirness criterion, in short the ALFA-test, as
$$\alpha_Y \,:=\, \alpha(Y \,|\, S, X_A) \,=\, MI(Y, S \,|\, X_A). \tag{3.8}$$
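A sketch of the test itself, calibrating the ALFA-statistic with a naive permutation null rather than the model-based bootstrap of Section 2.5 (illustrative plug-in CMI estimate again; y, S, XA assumed discrete or pre-discretized):

```
library(infotheo)

# ALFA-test sketch: alpha = MI(Y, S | X_A), permutation p-value.
alfa_test <- function(y, S, XA, B = 200) {
  alpha <- condinformation(y, S, XA)
  null  <- replicate(B, condinformation(sample(y), S, XA))
  list(alpha = alpha, p.value = mean(null >= alpha))
}
```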
Three Different Interpretations . The ALFA-statistic (3.8) can be interpreted from three different angles.
9 or at least should be assessed by experts to determine their appropriateness.
Figure 10: Census Income Data. The left plot shows the infogram; FINEtree is displayed on the right.
- It quantifies the trade-off between fairness and model performance: how much net-predictive value is contained within S (and its close associates)? This is the price we pay in terms of accuracy to ensure a higher degree of fairness.
- A small α-inadmissibility value ensures that individuals with similar 'admissible characteristics' receive a similar outcome. Note that our strategy of comparing individuals with respect to only the (infogram-learned) 'admissible' features allows us to avoid the (direct and indirect) influences of sensitive attributes on the decision-making.
- Lastly, the α-statistic can also be interpreted as 'bias in the response Y.' For a given problem, if we have access to several 'comparable' outcome variables,10 then we choose the one that minimizes the α-inadmissibility measure. In this way, we can minimize the loss of predictive accuracy while mitigating the bias as best we can.
Remark 13 (Generalizability). Note that, unlike traditional fairness measures, the proposed ALFA-statistic is valid for multi-class problems with a set of multivariate mixed protected attributes, which is, in itself, a significant step forward.
Step 4. FINEtree. The inherent historical footprints of bias (as noted in eq. 3.7) need to be deconstructed to build a less-discriminatory classification model for the income data. Fig. 10 shows FINEtree, a simple decision tree based on the four admissible features, which attains 83.5% accuracy.

10 e.g., Obermeyer et al. (2019) showed that healthcare cost can be a poor proxy of health, especially for Black patients; similarly, Blattner and Nelson (2021) showed that credit scores could be a poor proxy for creditworthiness, especially for low-income and minority groups.
Remark 14. FINEtree is an inherently explainable, fair, and highly competent (decent accuracy) model whose design was guided by the principles of admissible machine learning.
Step 5. Trust in algorithm through risk assessment and ALFA-ranking. The current standard for evaluating ML models is based primarily on predictive accuracy on a test set, which is narrow and inadequate. For an algorithm to be deployable it has to be admissible; an unguided ML model carries the danger of inheriting bias from the data. To see this, consider the following two models:
Model A: decision tree based on $X_A$ (FINEtree);
Model R: decision tree based on $X_A \cup \{\text{relationship}\}$.
Both models have comparable accuracy, around 83.5%. Let $\widehat{Y}_A$ and $\widehat{Y}_R$ be the predicted labels based on these two models, respectively. Our goal is to compare and rank the models based on their risk of discrimination using the ALFA-statistic:
$$\widehat{\alpha}_A \,=\, \widehat{MI}(\widehat{Y}_A, S \,|\, X_A) \,=\, 0.00042, \quad \text{with } p\text{-value } 0.95; \tag{3.9}$$
$$\widehat{\alpha}_R \,=\, \widehat{MI}(\widehat{Y}_R, S \,|\, X_A) \,=\, 0.195, \quad \text{with } p\text{-value} \approx 0. \tag{3.10}$$
The α-inadmissibility statistic measures how much the final decision (prediction) was impacted by the protected features. A smaller value is better, in the sense that it indicates improved fairness of the algorithm's decisions. Eqs. (3.9)-(3.10) immediately imply that Model A is better (less discriminatory without being inefficient) than Model R, and can safely be put into production.
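A sketch of this model-audit step, reusing the alfa_test() sketch from Definition 5; XA (the four admissible features), relationship, y (the income label), and S (the protected attributes) are all hypothetical object names:

```
library(rpart)

# ALFA-ranking of the two candidate models.
mA <- rpart(y ~ ., data = data.frame(XA, y = factor(y)))                # Model A
mR <- rpart(y ~ ., data = data.frame(XA, relationship, y = factor(y)))  # Model R

alfa_test(predict(mA, type = "class"), S, XA)  # alpha ~ 0: deployable
alfa_test(predict(mR, type = "class"), S, XA)  # alpha large: risky
```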
Remark 15. The infogram and ALFA-testing can be used (by an oversight board or regulators) as a fully-automated exploratory auditing tool that can systematically monitor and discover signs of bias or other potential gaps in compliance;11 see Appendix A.3.
Example 9. Taiwanese Credit Card Data. This dataset was collected in October 2005 from a Taiwan-based bank (a cash and credit card issuer); it is available in the UCI Machine Learning Repository. We have records of n = 30,000 cardholders, and for each we have
11 Under the Algorithmic Accountability Act, large AI-driven corporations have to perform broader 'admissibility' tests to keep a check on their algorithms' fairness and trustworthiness; see Appx. A.2.
Figure 11: Left: Infogram of the UCI credit card data. It selects two admissible features (i.e., those that are relevant and less biased) that lie in the complement of the 'L'-shaped region. Right: The FINEtree (test-data accuracy 82%).
a response variable Y denoting default payment status (Yes = 1, No = 0), along with p = 23 predictor variables, including demographic factors, credit data, history of payment, etc. Among these 23 features there are two protected attributes: gender and age.
The infogram, shown in the left panel of Fig. 11, clearly selects the variables Pay_0 and Pay_2 as the key admissible factors that determine the likelihood of default. Once we know the admissible features, the next question is how Pay_0 and Pay_2 impact the credit risk: can we extract an admissible decision rule? For that we construct the FINEtree, a decision-tree model based on the infogram-selected admissible features; see Fig. 11. The resulting predictive model is extremely transparent (a shallow yet accurate decision tree12) and also mitigates unwanted bias by avoiding inadmissible variables. Lenders, regulators, and bank managers can use this model for automating credit decisions.
12 One can slightly improve accuracy by combining hundreds or thousands of trees (based on only the admissible features) using random forest or boosting. But the opacity of such models renders them unfit for deployment in the financial and banking sectors (Fahner, 2018).
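A hedged sketch of this FINEtree; credit is a hypothetical data frame with the UCI column names, and default the 0/1 response:

```
library(rpart)

# FINEtree sketch: shallow tree on the two infogram-selected features.
fine_tree <- rpart(factor(default) ~ PAY_0 + PAY_2, data = credit,
                   control = rpart.control(maxdepth = 3))
# shallow and transparent; the text reports about 82% test accuracy
```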
Figure 12: ProPublica's COMPAS Data. Top row: infogram and the estimated FINEtree. Bottom row: The two-sample distributions of the continuous variable end and the binary variable event show their usefulness for predicting whether a defendant will recidivate.
## 3.2.4 Admissible Criminal Justice Risk Assessment
Example 10. ProPublica's COMPAS Data. COMPAS, an acronym for Correctional Offender Management Profiling for Alternative Sanctions, is one of the most widely used commercial algorithms within the criminal justice system for predicting recidivism risk (the likelihood of re-offending). The data,13 compiled by a team of journalists from ProPublica, comprise all criminal defendants who were subject to COMPAS screening in Broward County, Florida, during 2013 and 2014. For each defendant, p = 14 features were gathered, including demographic information, criminal history, and other administrative information. In addition, the dataset contains information on whether the defendant did in fact recidivate (or not) within two years of the COMPAS administration date (i.e., through the end of March 2016), along with 3 additional sensitive attributes (gender, race, and age) for each case.
The goal is to develop an accurate and fairer algorithm to predict whether a defendant will engage in violent crime or fail to appear in court if released. Fig. 12 shows our results. The infogram selects event and end as the vital admissible features. The bottom row of Fig. 12 confirms their predictive power. Unfortunately, these two variables are not explicitly defined by ProPublica in the data repository. Based on Brennan et al. (2009), we believe that event indicates some kind of crime that resulted in a prison sentence during a past observation period (we suspect the assessments were conducted by local probation officers sometime between January 2001 and December 2004), and that the variable end denotes the number of days under observation (first event or end of study, whichever occurred first). The associated FINEtree recidivism algorithm based on event and end reaches 93% accuracy with an AUC of 0.92 on a test set (consisting of 20% of the data). Also see Appendix A.5.
## 3.2.5 FINEglm and Application to Marketing Campaign
We are interested in the following question: how does one systematically build fairness-enhancing parametric statistical algorithms, such as a generalized linear model (GLM)?
Example 11. Thera Bank Financial Marketing Campaign. This is a case study about Thera Bank, the majority of whose customers are liability customers (depositors) with varying sizes of deposits; among them, very few are borrowers (asset customers). The bank wants to expand its client network to bring in more loan business and, in the process, earn more through the interest on loans. To test the viability of this business idea, they ran a small marketing campaign with n = 5,000 customers, of whom 480 (= 9.6%) accepted the personal loan offer. Motivated by the healthy conversion rate, the marketing department wants to devise a much more targeted digital campaign to boost loan applications with a minimal budget.
13 Data: https://github.com/propublica/compas-analysis/raw/master/compas-scores-two-years.csv
Figure 13: Thera Bank marketing campaign data. Left: infogram. Right: scatter plot based on the two admissible features; the colors blue and red indicate the two classes.
Data and the problem . For each of the 5000 customers, we have a binary response Y : the customer's response to the last personal loan campaign, along with 12 other features, such as the customer's annual income, family size, education level, and value of house mortgage, if any. Among these 12 variables there are two protected features: age and zip code . We treat zip code as a sensitive attribute, since it often acts as a proxy for race.
Based on this data, we want to devise an automatic and fair digital marketing campaign that will maximize the targeting effectiveness of the advertising campaign while minimizing the discriminatory impact on protected classes to avoid legal landmines.
Customer targeting using admissible machine learning . Our approach is summarized below:
Step 1 . Graphical tool for algorithmic risk management. Fig. 13 shows the infogram, which identifies two admissible features for the loan decision: Income (annual income in $000) and CCAvg (average monthly credit-card spending). However, the two highly predictive variables education (education level: undergraduate, graduate, or advanced) and family (family size of the customer) turn out to be inadmissible, even though they look completely 'reasonable' on the surface. Consequently, including these variables in a model can do more harm than good by discriminating against minority applicants.
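For readers who wish to reproduce Step 1: an implementation of the infogram is available in the open-source h2o package. The following R sketch assumes h2o version 3.34 or later (which ships an infogram implementation), a hypothetical file name, and the column names Age , ZIP.Code , and Personal.Loan for the Thera Bank data; these names are assumptions about the data layout, not part of the original analysis.

```r
library(h2o)
h2o.init()

# Sketch, assuming h2o >= 3.34 and these (hypothetical) column names.
bank <- h2o.importFile("thera_bank.csv")              # hypothetical path
bank$Personal.Loan <- as.factor(bank$Personal.Loan)   # binary response Y

ig <- h2o.infogram(y = "Personal.Loan",
                   training_frame = bank,
                   protected_columns = c("Age", "ZIP.Code"))
plot(ig)  # relevance vs. safety; admissible features sit in the top-right
```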
Remark 16. It is evident that the infogram can be used as an algorithmic risk-management tool to quickly identify and combat unwanted hidden bias. Financial regulators can use it to spot and remediate issues of historic discrimination; see Appendix A.3.
Remark 17. The infogram runs a 'combing operation' that distills a large, complex problem down to the core that holds the bulk of the 'admissible information.' In our problem, the useful information is mostly concentrated in two variables, Income and CCAvg, as seen in the scatter diagram.
Step 2 . FINE-Logistic model: We train a logistic regression model based on the two admissible features, leading to the following model:
$$\operatorname{logit}\{\mu(x)\} \,=\, -6.13 \,+\, .04\,\text{Income} \,+\, .06\,\text{CCAvg},$$
where $\mu(x) = \Pr(Y = 1 \mid X = x)$. This simple model achieves 91% accuracy, and it provides a clear understanding of the 'core' factors that are driving the model's recommendations.
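The fit itself is a one-line exercise in R; a minimal sketch, assuming the data frame bank_df uses the column names Personal.Loan , Income , and CCAvg (hypothetical names for illustration):

```r
# FINE-Logistic: logistic regression on the two admissible features only.
fine_glm <- glm(Personal.Loan ~ Income + CCAvg,
                family = binomial, data = bank_df)
coef(fine_glm)      # should be close to (-6.13, .04, .06) reported above
mu_hat <- predict(fine_glm, type = "response")   # estimated Pr(Y = 1 | x)
```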
Remark 18 (Trustworthy algorithmic decision-making) . FINEml models provide a transparent and self-explainable algorithmic decision-making system that comes with protection against unfair discrimination, which is essential for earning the trust and confidence of customers. The financial services industry can benefit immensely from this tool.
Step 3 . FINElasso . A natural question is: how can we extend this idea to high-dimensional GLMs? In particular, we are interested in the following: is there a way to directly embed 'admissibility' into the lasso regression model? The key idea is as follows: use adaptive regularization, choosing the weights to be the inverse of the safety-indices computed in formula (3.6) of Definition 4. Estimate the FINElasso model by solving the following adaptive version:
$$\hat{\beta}_{FINE} \,=\, \arg\min_{\beta} \, \sum_{i=1}^{n} \Big[ -y_i (x_i^T \beta) \,+\, \log\big( 1 + e^{x_i^T \beta} \big) \Big] \;+\; \lambda \sum_{j=1}^{p} w_j \, |\beta_j|,$$
where the weights are defined as
$$w_j^{-1} \,=\, MI\big( Y, X_j \mid \{S_1, \ldots, S_q\} \big).$$
The adaptive penalization in (3.12) acts as a bias-mitigation mechanism by dropping (that is, heavily penalizing) the variables with very low safety-indices. The whole procedure can be implemented easily using the penalty.factor argument of the glmnet R-package (Friedman et al., 2010), as sketched below. A similar strategy can no doubt be adopted for other regularized methods, such as ridge or elastic-net; for an excellent review of different kinds of regularization procedures, see Hastie (2020).
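A minimal sketch of the FINElasso step; the predictor matrix X, response y, and the vector cmi of conditional mutual informations $MI(Y, X_j \mid \{S_1,\ldots,S_q\})$ (computed, e.g., via the estimation algorithm of Section 2.4) are assumed to be in the workspace:

```r
library(glmnet)

# Adaptive (FINE) lasso: per-variable weights w_j = 1 / safety-index.
# A small floor guards against division by zero for fully unsafe features.
w <- 1 / pmax(cmi, 1e-8)

fine_lasso <- cv.glmnet(x = as.matrix(X), y = y,
                        family = "binomial",
                        alpha = 1,             # lasso penalty
                        penalty.factor = w)    # the weights of eq. (3.12)
coef(fine_lasso, s = "lambda.min")  # unsafe proxies are shrunk to zero
```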
Remark 19. A full lasso on X selects the strong surrogates (the variables family and education ) among its top features due to their high predictive power, and hence carries an enhanced risk of being discriminatory. An infogram-guided FINElasso, on the other hand, provides an automatic defense mechanism for combating bias without significantly compromising accuracy.
Remark 20 (Towards a Systematic Recipe) . This idea of data-adaptive 're-weighting' as a bias-mitigation strategy can easily be translated to other types of machine learning models. For example, to incorporate fairness into the traditional random forest method, choose the splitting variables at each node by weighted random sampling, with selection probability determined by
$$\Pr\big( \text{selecting variable } X_j \big) \,=\, \frac{F_j}{\sum_{j} F_j},$$
where the F-values $F_j$ are defined in equation (3.6). This can be operationalized easily using the mtry.select.prob argument of the randomForest() function in the iRF R-package, as sketched below. Following this line of thought, one can (re)design a variety of less-discriminatory ML techniques without changing the architecture of the original algorithms.
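A minimal sketch of this fairness-aware forest, assuming the vector F_vals holds the pre-computed F-values of equation (3.6) and X, y are in the workspace:

```r
library(iRF)   # provides randomForest() with weighted variable sampling

# Splitting variables are drawn at each node with probability
# proportional to their F-values, per the recipe above.
sel_prob <- F_vals / sum(F_vals)

fair_rf <- randomForest(x = X, y = as.factor(y),
                        mtry.select.prob = sel_prob)
```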
## 4 Conclusion
Faced with the profound changes that AI technologies can produce, pressure for 'more' and 'tougher' regulation is probably inevitable. (Stone et al., 2019).
Over the last 60 years or so, since the early 1960s, there has been an explosion of powerful ML algorithms with ever-increasing predictive performance. The challenge for the next few decades, however, will be to develop sound theoretical principles and computational mechanisms that transform those conventional ML methods into safer, more reliable, and more trustworthy ones.
The fact of the matter is that doing machine learning in a 'responsible' way is much harder than developing another complex ML technique. A highly accurate algorithm that does not comply with regulations is (or will soon be) unfit for deployment, especially in safety-critical areas that directly affect human lives. For example, the Algorithmic Accountability Act 14 (see Appx. A.2), introduced in April 2019, requires large corporations (including tech companies as well as banks, insurers, retailers, and many other consumer businesses) to be
14 Also see the EU's 'Artificial Intelligence Act,' released on April 21, 2021, whose key points are summarized in Appendix A.8.
cognizant of the potential for biased decision-making due to algorithmic methods; otherwise, civil lawsuits can be filed against those firms. As a result, it is becoming necessary to develop tools and methods that enhance the interpretability and efficiency of classical ML models while guarding against bias. With this goal in mind, this paper introduces a new kind of statistical learning technology, along with information-theoretic automated monitoring tools, that can guide a modeler to quickly build 'better' algorithms: less-biased, more-interpretable, and sufficiently accurate.
One thing is clear: rather than being passive recipients of complex automated ML technologies, we need more general-purpose statistical risk-management tools for algorithmic accountability and oversight. This is critical to the responsible adoption of regulatory-compliant AI systems. This paper has taken some important steps towards this goal by introducing the concepts and principles of 'Admissible Machine Learning.'
## Acknowledgement
The author thanks the editor, associate editor, and four anonymous reviewers for their helpful suggestions. Special thanks go to Erin LeDell for bringing this problem to my attention. The author benefited from many useful discussions with Michael Guerzhoy, Hany Farid, Julia Dressel, Beau Coker, and Hanchen Wang on demystifying some aspects of the COMPAS data, and with Daniel Osei on the data pre-processing steps of the Lending Club loan data. This research was supported by H2O.ai .
## References
- Allen, B., S. Agarwal, L. Coombs, C. Wald, and K. Dreyer (2021). 2020 ACR Data Science Institute Artificial Intelligence Survey. Journal of the American College of Radiology .
- Berrett, T. B., Y. Wang, R. F. Barber, and R. J. Samworth (2019). The conditional permutation test for independence while controlling for confounders. Journal of the Royal Statistical Society: Series B (Statistical Methodology) .
- Blattner, L. and S. Nelson (2021). How costly is noise? Data and disparities in consumer credit. arXiv preprint:2105.07554 .
- Breiman, L. et al. (2004). Population theory for boosting ensembles. The Annals of Statistics 32 (1), 1-11.
- Brennan, T., W. Dieterich, and B. Ehret (2009). Evaluating the predictive validity of the compas risk and needs assessment system. Criminal Justice and Behavior 36 (1), 21-40.
- Candes, E., Y. Fan, L. Janson, and J. Lv (2018). Panning for gold: 'model-x' knockoffs for high dimensional controlled variable selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 80 (3), 551-577.
- Chouldechova, A. and A. Roth (2020). A snapshot of the frontiers of fairness in machine learning. Communications of the ACM 63 (5), 82-89.
- Fahner, G. (2018). Developing transparent credit risk scorecards more effectively: An explainable artificial intelligence approach. Data Anal 2018 , 17.
- Friedman, J., T. Hastie, and R. Tibshirani (2010). Regularization paths for generalized linear models via coordinate descent. Journal of statistical software 33 (1), 1.
- Friedman, J. H. (2001). Greedy function approximation: a gradient boosting machine. Annals of statistics , 1189-1232.
- Hastie, T. (2020). Ridge regularization: An essential concept in data science. Technometrics 62 (4), 426-433.
- Hastie, T., R. Tibshirani, and M. Wainwright (2015). Statistical learning with sparsity: the lasso and generalizations . CRC press.
- Hellman, D. (2020). Measuring algorithmic fairness. Va. L. Rev. 106 , 811.
- Kleinberg, J. (2018). Inherent trade-offs in algorithmic fairness. In Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems , pp. 40-40.
- Lakkaraju, H. and O. Bastani (2020). 'How do I fool you?' manipulating user trust via misleading black box explanations. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society , pp. 79-85.
- Mukhopadhyay, S. and K. Wang (2020). Breiman's 'Two Cultures' revisited and reconciled. arXiv:2005.13596 , 1-51.
- Narayanan, A. (2018). Translation tutorial: 21 fairness definitions and their politics. In Proc. Conf. Fairness Accountability Transp., New York, USA , Volume 1170.
- Obermeyer, Z., B. Powers, C. Vogeli, and S. Mullainathan (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science 366 (6464), 447-453.
- Reardon, S. (2019). Rise of robot radiologists. Nature 576 (7787), S54.
- Rosenbaum, P. R. (1984). Conditional permutation tests and the propensity score in observational studies. Journal of the American Statistical Association 79 (387), 565-574.
- Stone, P., R. Brooks, E. Brynjolfsson, et al. (2019). One hundred year study on artificial intelligence. Stanford University; https://ai100.stanford.edu .
- Thrun, S. B., J. Bala, E. Bloedorn, I. Bratko, B. Cestnik, J. Cheng, K. De Jong, S. Dzeroski, S. E. Fahlman, D. Fisher, et al. (1991). The MONK's problems: A performance comparison of different learning algorithms. Technical report, Carnegie Mellon University.
- Wall, L. D. (2018). Some financial regulatory implications of artificial intelligence. Journal of Economics and Business 100 , 55-63.
- Wyner, A. D. (1978). A definition of conditional mutual information for arbitrary ensembles. Information and Control 38 (1), 51-59.
- Yeh, I.-C. and C.-h. Lien (2009). The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients. Expert Systems with Applications 36 (2), 2473-2480.
- Zech, J. R., M. A. Badgeley, M. Liu, A. B. Costa, J. J. Titano, and E. K. Oermann (2018). Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLoS medicine 15 (11), e1002683.
- Zhang, T. (2004). Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics , 56-85.
## 5 Appendix
## A.1 Proof of Theorem 1
The conditional entropy $H(Y \mid X, S)$ can be expressed as
$$\begin{aligned} H(Y \mid X, S) \;&=\; \iint H(Y \mid X = x, S = s)\, dF_{x,s} \\ &=\; \iint \Big\{ -\int_y f_{Y|X,S}(y \mid x, s) \, \log \big( f_{Y|X,S}(y \mid x, s) \big)\, dy \Big\}\, dF_{x,s} \\ &=\; -\iiint \log \big( f_{Y|X,S}(y \mid x, s) \big)\, dF_{x,s,y}. \end{aligned} \quad (6.1)$$
Similarly,
$$\begin{aligned} H(Y \mid S) \;&=\; \int_s H(Y \mid S = s)\, dF_s \\ &=\; \int_s \Big\{ -\int_y f_{Y|S}(y \mid s) \, \log \big( f_{Y|S}(y \mid s) \big)\, dy \Big\}\, dF_s \\ &=\; -\iiint \log \big( f_{Y|S}(y \mid s) \big)\, dF_{x,s,y}. \end{aligned} \quad (6.2)$$
Take the difference $H(Y \mid S) - H(Y \mid X, S)$ by substituting (6.2) and (6.1) to complete the proof.
## A.2 The Algorithmic Accountability Act
This bill 15 was introduced by Senators Cory Booker (D-NJ) and Ron Wyden (D-OR) in the Senate, and by Rep. Yvette Clarke (D-NY) in the House, in April 2019. It requires large companies to conduct automated-decision-system impact assessments of their algorithms. Entities that develop, acquire, and/or utilize AI must be cognizant of the potential for biased decision-making and outcomes resulting from its use; otherwise, civil lawsuits can be filed against those firms. Interestingly, on Jan. 13, 2020, the Office of Management and Budget released a draft memorandum 16 to make sure the federal government does not over-regulate industry AI to the extent that it hampers innovation and development.
15 https://www.congress.gov/bill/116th-congress/house-bill/2231/all-info
16 The draft memo is available at: whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf
## A.3 Fair Housing Act's Disparate Impact Standard
Detecting inadmissible (proxy) variables can serve as a first defense against algorithmic bias. Consider the Fair Housing Act's Disparate Impact Standard 17 (U.S., Aug. 19, 2019). According to § 100.500(c)(2)(i) of the Act, a defendant can rebut a claim of discrimination by showing that 'none of the factors used in the algorithm rely in any material part on factors which are substitutes or close proxies for protected classes under the Fair Housing Act.' Therefore, regulators, judges, and model developers can use the infogram as a statistical diagnostic tool to keep a check on the algorithmic disparity of automated decision systems.
## A.4 Beware of The 'Spurious Bias' Problem
Using a real-data example, we here alert practitioners to some flaws of current fairness criteria and discuss their remedies. Consider the admission data shown in Table 1. We want to know: is there gender bias in the admission process?
Marginal analysis: the overall acceptance rate across the two departments is 37% for female applicants, whereas for male applicants it is roughly 50%. The disparity can be quantified using the adverse impact ratio (AIR), also known as disparate impact:
$$AIR(Y, G) \,=\, \frac{\Pr(Y = 1 \mid G = \text{female})}{\Pr(Y = 1 \mid G = \text{male})} \,=\, \frac{.37}{.50} \,=\, 0.74 \,<\, 0.80. \quad (6.3)$$
The conventional '80% rule' 18 indicates that the admission process is biased.
The bias-reversal phenomenon: the admission rates within Department I are 63% for males and 68% for females; within Department II, they are 33% for males and 35% for females. Thus, when we investigate the admissions by department, the discrimination against women vanishes; in fact, the bias gets reversed (in favor of women)!
Department-specific 'subgroup' analysis: Here we investigate the adverse impact ratio (AIR) within each department.
For Dept I (no bias):
$$AIR(Y, G \mid D = I) \,=\, \frac{\Pr(Y = 1 \mid G = \text{male})}{\Pr(Y = 1 \mid G = \text{female})} \,=\, .63/.68 \,=\, 0.92 \,>\, 0.80. \quad (6.4)$$
17 https://www.govinfo.gov/content/pkg/FR-2019-08-19/pdf/2019-17542.pdf
18 The US Equal Employment Opportunity Commission states that fair employment should abide by the 80% rule: the acceptance rate for any group should be no less than 80% of that of the highest-accepted group.
Table 1: Admission data classified by gender and department. This is part of the 1973 UC Berkeley graduate admission data; for simplicity, we have taken the data of Departments B and D.
| Dept (D) | Gender (G) | Admitted ($y = 1$) | Rejected ($y = 0$) |
|------------|--------------|-----------------------------|-----------------------------|
| I | Male | 353 | 207 |
| I | Female | 17 | 8 |
| II | Male | 138 | 279 |
| II | Female | 131 | 244 |
For Dept II (no bias):
$$AIR(Y, G \mid D = II) \,=\, \frac{\Pr(Y = 1 \mid G = \text{male})}{\Pr(Y = 1 \mid G = \text{female})} \,=\, .33/.35 \,=\, 0.94 \,>\, 0.80. \quad (6.5)$$
Eqs. (6.3)-(6.5) present us with a paradoxical situation. What should our final conclusion be about the fairness of the admission process? How can the paradox be resolved in a principled way?
A resolution: compute a measure of overall (university-wide) discrimination via the ALFA-statistic (see Definition 5 for more details):
$$\alpha_Y \,:=\, MI(Y, G \mid D) \;=\; \sum_{d} \Pr(D = d)\, MI(Y, G \mid D = d), \quad (6.6)$$
where the α-inadmissibility statistic measures the discrimination (how predictive gender G is of the admission variable Y ) within a particular department's admissions. Applying formula (2.6), we get
$$\widehat{\alpha}_Y \,=\, \widehat{MI}(Y, G \mid D) \,=\, 0.000285, \quad \text{with } p\text{-value } 0.715.$$
This suggests $Y \perp\!\!\!\perp G \mid D$; i.e., gender contains no additional predictive information for admission beyond what is already captured by the department variable. The apparent gender bias can be 'explained away' by the choice of department. Graphically, this can be represented as a Markov chain:
$$Y \;\longrightarrow\; D \;\longrightarrow\; G$$
Note that there is no direct link between gender (G) and admission (Y). Conclusion: there is no evidence of any direct sex discrimination in the admission process.
Improved AIR measure: one can generalize the (marginal) adverse impact ratio (6.3) to the following conditional version, similar in spirit to eq. (6.6):
$$\mathrm{CAIR}(Y, G \mid D) \;=\; \int AIR(Y, G \mid D = d)\, dF_D, \quad (6.7)$$
which, in this case, can be decomposed as
$$\mathrm{CAIR}(Y, G \mid D) \,=\, \Pr(D = I)\, AIR(Y, G \mid D = I) \,+\, \Pr(D = II)\, AIR(Y, G \mid D = II). \quad (6.8)$$
Applying (6.8) to our Berkeley example data yields the following estimate:
$$\widehat{\mathrm{CAIR}}(Y, G \mid D) \;=\; 0.43 \times 0.92 \,+\, 0.57 \times 0.94 \;=\; 0.93 \;>\; 0.80.$$
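Every number in this subsection can be reproduced directly from the counts in Table 1; a short R sketch (values rounded as in the text):

```r
# Counts from Table 1 (Dept x Gender): admitted and rejected.
adm <- c(I.m = 353, I.f = 17, II.m = 138, II.f = 131)
rej <- c(I.m = 207, I.f = 8,  II.m = 279, II.f = 244)
tot <- adm + rej
rate <- adm / tot                               # acceptance rates

# Marginal AIR, eq. (6.3): female rate over male rate.
p.f <- sum(adm[c("I.f", "II.f")]) / sum(tot[c("I.f", "II.f")])  # ~ .37
p.m <- sum(adm[c("I.m", "II.m")]) / sum(tot[c("I.m", "II.m")])  # ~ .50
p.f / p.m                                       # ~ 0.74 < 0.80

# Department-specific AIRs, eqs. (6.4)-(6.5): lower rate over higher.
rate["I.m"]  / rate["I.f"]                      # ~ 0.92
rate["II.m"] / rate["II.f"]                     # ~ 0.94

# CAIR, eq. (6.8): weight each department by its share of applicants.
w.I  <- sum(tot[c("I.m",  "I.f")])  / sum(tot)  # ~ 0.43
w.II <- sum(tot[c("II.m", "II.f")]) / sum(tot)  # ~ 0.57
w.I * (rate["I.m"] / rate["I.f"]) + w.II * (rate["II.m"] / rate["II.f"])
                                                # ~ 0.93 > 0.80
```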
This shows no evidence of sex bias in graduate admissions! The moral: beware of spurious bias, and be aware of the two types of errors that can arise from an incorrect fairness metric: falsely rejecting a fair algorithm as unfair (Type-I fairness error), and falsely accepting an unfair algorithm as fair (Type-II fairness error).
## A.5 Revisiting COMPAS Data
There is another version of the COMPAS data 19 (with binarized features) that researchers have used for evaluating the accuracy of their algorithms. This dataset contains a list of p = 22 hand-picked features over n = 10,747 criminal records. The goal is to build an interpretable and accurate recidivism prediction model. The infogram-selected COREtree is displayed below.
10-fold cross-validation shows (72 ± 1.50)% classification accuracy for our model, which is close to the best known performance on this version of the COMPAS data.
## A.6 Two Cultures of Machine Learning
The black-box ML culture builds large, complex models with only predictive accuracy in mind. The white-box ML culture directly builds interpretable models, often by enforcing domain-knowledge-based constraints on traditional ML algorithms such as decision trees or neural nets. Orthodox 'black-or-white' thinkers in each camp have been at loggerheads for some time. This raises the question: is there any way to get the best of both worlds? If so, how?
19 https://raw.githubusercontent.com/Jimmy-Lin/TreeBenchmark/master/datasets/compas/data.csv
Figure 14: Infogram-selected COREtree.
An Integrated (third?) culture : In this paper, we have taken the middle path between two extremes. We leverage (instead of boycotting) the power (scalability and flexibility) of modern machine learning methods by viewing them as a heavy-duty 'toolkit' that can efficiently drill through big complex datasets to systematically search for the hidden admissible models.
## A.7 COREtree: Iris Data
The dataset includes three kinds of iris flowers (setosa, versicolor, and virginica), with 50 samples from each class. The task is to develop a model (preferably a compact one based on only the important features) that accurately classifies iris flowers by the length and width of their sepals and petals ( p = 4). Before we start our analysis, it is important to be aware of the highly correlated nature of the four features; the estimated 4 × 4 correlation matrix is displayed below:
$$\hat{\Sigma}_{\rho} \;=\; \begin{bmatrix} 1.000 & -0.118 & 0.872 & 0.818 \\ -0.118 & 1.000 & -0.428 & -0.366 \\ 0.872 & -0.428 & 1.000 & 0.963 \\ 0.818 & -0.366 & 0.963 & 1.000 \end{bmatrix}$$
The infogram for the iris data, constructed using the recipe given in Section 3.1, is shown at the top-left corner of Fig. 15; it clearly identifies petal.length and petal.width as the core relevant features. Since we have reduced the problem to a bivariate one (variables: petal.length and petal.width ), we can now simply plot the data, as done in the top-right of Fig. 15. We can even visually draw linear decision surfaces to separate the three classes; see the red and blue lines in the scatter plot. Finally, we train a decision tree classifier based on the selected core features. The estimated COREtree, shown in the bottom panel, gives a beautifully crisp (readily interpretable) decision rule for classifying iris flowers.
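A minimal R sketch of this final COREtree fit; the iris data ships with base R (where the core features are named Petal.Length and Petal.Width ):

```r
library(rpart)
data(iris)

# COREtree: decision tree restricted to the two infogram-selected
# core features.
core_tree <- rpart(Species ~ Petal.Length + Petal.Width, data = iris)
print(core_tree)                   # crisp two-variable decision rules
plot(core_tree); text(core_tree)   # cf. the bottom panel of Fig. 15
```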
## A.8 EU's Artificial Intelligence Act
On 21 April 2021, the European Union (EU) unveiled strict regulations 20 to govern high-risk AI systems, providing one of the first formal and comprehensive regulatory frameworks for AI. A few key takeaways from the report:
- A risk management system shall be established, implemented, documented, and maintained in relation to high-risk AI systems.
- In identifying the most appropriate risk management measures, the following shall be ensured: elimination or reduction of risks as far as possible through adequate design and development.
- Bias monitoring, detection, and correction mechanisms should be in place for high-risk AI systems, in the pre- as well as the post-deployment stages.
- High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent, enabling users to interpret the system's output and use it appropriately.
- High-risk AI systems should be equipped with appropriate human-machine interface tools that allow the system to be effectively overseen by natural persons during the period in which the AI system is in use.
- Providers of high-risk AI technology shall ensure that their systems undergo the required conformity assessments. If an AI system does not conform to the requirements, the provider must take the necessary corrective actions before putting it into service. Companies that fail to do so could face fines of up to 6% of their global sales.
20 The full report is available online at https://bit.ly/EUAI act. Also see the New York Times article https://www.nytimes.com/2021/04/16/business/artificial-intelligence-regulation.html
Figure 15: Iris data analysis. Top left: infogram; top right: the scatter plot of the data based on the selected core features; three different classes are indicated by red, green, and blue colors; bottom: the estimated decision tree classifier using the variables petal-length and petal-width.