arXiv:2202.08081
# Reasoning with fuzzy and uncertain evidence using epistemic random fuzzy sets: general framework and practical models

*This paper was published in Fuzzy Sets and Systems, 453:1–36, 2023. This version corrects an error in Equation (23).*
**Authors**: Thierry Denœux
> Thierry.Denoeux@utc.fr, Université de technologie de Compiègne, CNRS, UMR 7253 Heudiasyc, Compiègne, France; Institut universitaire de France, Paris, France
Abstract
We introduce a general theory of epistemic random fuzzy sets for reasoning with fuzzy or crisp evidence. This framework generalizes both the Dempster-Shafer theory of belief functions, and possibility theory. Independent epistemic random fuzzy sets are combined by the generalized product-intersection rule, which extends both Dempster's rule for combining belief functions, and the product conjunctive combination of possibility distributions. We introduce Gaussian random fuzzy numbers and their multi-dimensional extensions, Gaussian random fuzzy vectors, as practical models for quantifying uncertainty about scalar or vector quantities. Closed-form expressions for the combination, projection and vacuous extension of Gaussian random fuzzy numbers and vectors are derived.
Keywords: Belief functions, evidence theory, possibility theory, random sets, uncertainty. Journal: Fuzzy Sets and Systems
1 Introduction
The Dempster-Shafer (DS) theory of belief functions [29] and possibility theory [38] were introduced independently in the late 1970s as non-probabilistic frameworks for reasoning with uncertainty [11, 10]. The former approach is based on the idea of representing elementary pieces of evidence as completely monotone capacities, or belief functions, and combining them using an operator known as the product-intersection rule or Dempster's rule. As probability measures are special belief functions, and Dempster's rule extends Bayesian conditioning, DS theory can be seen as an extension of Bayesian probability theory, particularly suitable for reasoning under severe uncertainty. There is also a strong relation between DS theory and the theory of random sets [23]: specifically, any random set induces a belief function and, conversely, any belief function can be seen as being induced by some random set [26]. In DS theory, a random set underlying a belief function does not represent a random mechanism for generating sets of outcomes, but the imprecise meanings of a piece of evidence under different interpretations with known probabilities [30]. To avoid confusion, we use the term epistemic random set for random sets representing evidence in DS theory.
In contrast, possibility theory originates from the theory of fuzzy sets [36]. In this approach, a fuzzy statement about the variable of interest, seen as a flexible constraint on its precise but unknown value in some domain $\Theta$ , induces a possibility measure and a dual necessity measure on $\Theta$ . Interestingly, a necessity measure is a belief function, and the dual possibility measure is the corresponding plausibility function, but the converse is not true (a belief function is not, in general, a necessity measure). For this reason, possibility theory has sometimes been presented as "a special branch of evidence theory" (another name for DS theory) [21, page 187]. However, combining two necessity measures by Dempster's rule yields a belief function that is no longer a necessity measure: this combination rule is, thus, not compatible with possibilistic reasoning. In contrast, possibility theory has its own conjunctive combination operators based on triangular norms (or t-norms) [16]. Possibility and DS theory are, thus, two distinct models of uncertain reasoning based on related knowledge representation languages but different information processing mechanisms.
In a companion paper [9], we have revisited Zadeh's notion of "evidence of the second kind", defined as a pair $(X,\Pi_{(Y\mid X)})$ in which $X$ is a discrete random variable on a set $\Omega$ and $\Pi_{(Y\mid X)}$ a collection of conditional possibility distributions of a variable $Y$ given $X=x$ , for all $x\in\Omega$ . If random variable $X$ is constant, we get a unique possibility distribution for variable $Y$ ; if the conditional possibility distributions $\Pi_{(Y\mid X)}$ take values in $\{0,1\}$ , then the pair $(X,\Pi_{(Y\mid X)})$ defines a random set equivalent to a DS mass function. The mappings associating, to each event, its expected necessity and its expected possibility are, respectively, belief and plausibility functions. In this framework, a possibility distribution thus represents certain but fuzzy evidence, while a DS mass function is a model of uncertain and crisp evidence. In general, a pair $(X,\Pi_{(Y\mid X)})$ defines an epistemic random fuzzy set, allowing us to describe evidence that is both uncertain and fuzzy. (The term "epistemic" emphasizes the distinction between this interpretation and that of random fuzzy sets as mechanisms for generating fuzzy data considered, for instance, in [28, 17].) In [9], we have proposed a family of combination rules for epistemic random fuzzy sets in the finite setting, generalizing both Dempster's rule and the conjunctive combination rules of possibility theory. One of these rules, based on the product t-norm, is associative and arguably well suited for combining independent evidence. Equipped with this combination rule (called here the generalized product-intersection rule), the theory of epistemic random fuzzy sets can be seen as an extension of both DS theory and possibility theory, making it possible to combine evidence of various types, including expert assessments (possibly expressed in natural language), sensor information, and statistical evidence about a model parameter.
In this paper, drawing from mathematical results presented by Couso and Sánchez in [2], we give a more general exposition of the theory of epistemic random fuzzy sets, considering arbitrary probability and measurable spaces. We define combination, marginalization and vacuous extension operations of random fuzzy sets in this general setting, laying the foundations of a wide-ranging theory of uncertainty encompassing DS and possibility theories as special cases. Finally, for the important case where the frame of discernment is $\mathbb{R}^{p}$ , we propose Gaussian random fuzzy numbers and vectors as a practical model, generalizing both Gaussian random variables and vectors on the one hand, and Gaussian possibility distributions on the other hand.
The rest of this paper is organized as follows. Classical models (including random sets, fuzzy sets and possibility theory) are first recalled in Section 2. Epistemic random fuzzy sets are then introduced in a general setting in Section 3. Finally, Gaussian random fuzzy numbers and vectors are studied, respectively, in Sections 4 and 5, and Section 6 concludes the paper.
2 Classical models
In this section, we recall the main definitions and results pertaining to the two models of uncertainty generalized in this paper: random sets and belief functions on the one hand (Section 2.1), fuzzy sets and possibility theory on the other hand (Section 2.2).
2.1 Random sets and belief functions
Whereas belief functions in the finite setting can be introduced without any reference to random sets [29], the mathematical framework of random sets is useful to analyze belief functions in more general spaces, and to define the practical models needed, e.g., in statistical applications. Important references about the link between random sets and belief functions include [26] and [2].
Let $(\Omega,\sigma_{\Omega},P)$ be a probability space, $(\Theta,\sigma_{\Theta})$ a measurable space, and ${\overline{X}}$ a mapping from $\Omega$ to $2^{\Theta}$ . The upper and lower inverses of ${\overline{X}}$ are defined, respectively, as follows:
$$
{\overline{X}}^{*}(B)=B^{*}=\{\omega\in\Omega:{\overline{X}}(\omega)\cap B\neq\emptyset\}\quad\text{and}\quad{\overline{X}}_{*}(B)=B_{*}=\{\omega\in\Omega:\emptyset\neq{\overline{X}}(\omega)\subseteq B\},
$$
for all $B\subseteq\Theta$ . It is easy to check that
$$
B^{*}\cap(B^{c})_{*}=\emptyset
$$
and
$$
B^{*}\cup(B^{c})_{*}=\{\omega\in\Omega:{\overline{X}}(\omega)\neq\emptyset\}=\Theta^{*},
$$
where $B^{c}$ denotes the complement of $B$ in $\Theta$ .
The mapping ${\overline{X}}$ is said to be $\sigma_{\Omega}-\sigma_{\Theta}$ strongly measurable [26] if, for all $B\in\sigma_{\Theta}$ , $B^{*}\in\sigma_{\Omega}$ (or, equivalently, if for all $B\in\sigma_{\Theta}$ , $B_{*}\in\sigma_{\Omega}$ ). The tuple $(\Omega,\sigma_{\Omega},P,\Theta,\sigma_{\Theta},{\overline{X}})$ is called a random set. When there is no confusion about the domain and co-domain, we will call the $\sigma_{\Omega}-\sigma_{\Theta}$ strongly measurable mapping ${\overline{X}}$ itself a random set.
In the special case where $|{\overline{X}}(\omega)|=1$ for all $\omega\in\Omega$ , we can define the mapping $X:\Omega\to\Theta$ such that ${\overline{X}}(\omega)=\{X(\omega)\}$ for all $\omega\in\Omega$ . We then have $B^{*}=B_{*}=X^{-1}(B)$ for all $B\subseteq\Theta$ , and $X$ is $\sigma_{\Omega}-\sigma_{\Theta}$ measurable. The notion of random set thus extends that of random variable.
Belief and plausibility functions
From now on, we will assume, for simplicity, that $P(\Theta^{*})=1$ . (If this condition does not hold, it can be enforced by conditioning $P$ on $\Theta^{*}$ .) Let $P_{*}$ and $P^{*}$ be the lower and upper probability measures associated with random set ${\overline{X}}$ , defined as the mappings from $\sigma_{\Theta}$ to $[0,1]$ such that
$$
P_{*}(B)=P(B_{*}) \tag{2}
$$
and
$$
P^{*}(B)=P(B^{*})=1-P_{*}(B^{c}), \tag{3}
$$
for all $B\in\sigma_{\Theta}$ . Mapping $P_{*}$ is a completely monotone capacity, i.e., a belief function, and $P^{*}$ is the dual plausibility function [26, Proposition 1]. In the following, they will be denoted, respectively, as $Bel_{\overline{X}}$ and $Pl_{\overline{X}}$ . The corresponding contour function is defined as the mapping $pl_{\overline{X}}$ from $\Theta$ to $[0,1]$ such that
$$
pl_{\overline{X}}(\theta)=Pl_{\overline{X}}(\{\theta\})
$$
for all $\theta\in\Theta$ . The subsets ${\overline{X}}(\omega)\subseteq\Theta$ , for all $\omega\in\Omega$ , are called the focal sets of $Bel_{\overline{X}}$ .
Interpretation
In DS theory, $\Omega$ represents a set of interpretations of a piece of evidence about a variable $\boldsymbol{\theta}$ taking values in set $\Theta$ (called the frame of discernment). If interpretation $\omega\in\Omega$ holds, we know that $\boldsymbol{\theta}\in{\overline{X}}(\omega)$ , and nothing more. For any $A\in\sigma_{\Omega}$ , $P(A)$ is the (subjective) probability that the true interpretation lies in $A$ . For any $B\in\sigma_{\Theta}$ , the degree of belief $Bel_{\overline{X}}(B)$ is then a measure of support of the proposition "$\boldsymbol{\theta}\in B$" given the evidence, while the degree of plausibility $Pl_{\overline{X}}(B)$ is a measure of lack of support for the proposition "$\boldsymbol{\theta}\not\in B$". Under this interpretation, the random set ${\overline{X}}$ represents a state of knowledge: it can be said to be epistemic.
Vacuous random set
A constant random set $(\Omega,\sigma_{\Omega},P,\Theta,\sigma_{\Theta},{\overline{X}})$ such that ${\overline{X}}(\omega)=\Theta$ for all $\omega\in\Omega$ is said to be vacuous. For such a random set, we have $Bel_{\overline{X}}(A)=0$ for all $A\in\sigma_{\Theta}\setminus\{\Theta\}$ and $Pl_{\overline{X}}(A)=1$ for all $A\in\sigma_{\Theta}\setminus\{\emptyset\}$ . A vacuous random set represents complete ignorance about $\boldsymbol{\theta}$ .
Finite case
Assume that $\Theta$ is finite, and $\sigma_{\Theta}=2^{\Theta}$ . The Möbius inverse of $Bel_{\overline{X}}$ is the mapping $m_{\overline{X}}$ from $2^{\Theta}$ to $[0,1]$ such that
$$
m_{\overline{X}}(B)=\sum_{C\subseteq B}(-1)^{|B|-|C|}Bel_{\overline{X}}(C),
$$
for all $B\subseteq\Theta$ . It satisfies $m_{\overline{X}}(B)\geq 0$ for all $B\subseteq\Theta$ , $\sum_{B\subseteq\Theta}m_{\overline{X}}(B)=1$ and $m_{\overline{X}}(\emptyset)=0$ . The belief and plausibility functions can be computed from $m_{\overline{X}}$ , respectively, as
$$
Bel_{\overline{X}}(B)=\sum_{C\subseteq B}m_{\overline{X}}(C)\quad\text{and}\quad Pl_{\overline{X}}(B)=\sum_{C\cap B\neq\emptyset}m_{\overline{X}}(C),
$$
for all $B\subseteq\Theta$ .
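The mass-function formulas above are easy to implement on a small frame. The following sketch (plain Python; the frame and mass values are hypothetical) computes $Bel$ and $Pl$ from a mass function by direct enumeration of subsets.

```python
from itertools import chain, combinations

def powerset(frame):
    """All subsets of a finite frame, as frozensets."""
    s = list(frame)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def bel_pl(m, frame):
    """Belief and plausibility functions from a mass function m,
    given as a dict {frozenset: mass} with masses summing to 1
    and no mass on the empty set."""
    bel, pl = {}, {}
    for B in powerset(frame):
        bel[B] = sum(v for C, v in m.items() if C and C <= B)
        pl[B] = sum(v for C, v in m.items() if C & B)
    return bel, pl

# Hypothetical mass function on the frame {a, b, c}
frame = {"a", "b", "c"}
m = {frozenset({"a"}): 0.5,
     frozenset({"a", "b"}): 0.3,
     frozenset(frame): 0.2}
bel, pl = bel_pl(m, frame)
B = frozenset({"a", "b"})
print(round(bel[B], 6), round(pl[B], 6))  # → 0.8 1.0
```

The duality $Pl(B)=1-Bel(B^{c})$ holds for every subset, since the total mass on nonempty focal sets is one.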
Random closed intervals
Random closed intervals are particularly simple models allowing us to define belief functions on the real line [4, 34, 6]. Let $(\Omega,\sigma_{\Omega},P)$ be a probability space and $X,Y$ two random variables from $\Omega$ to $\mathbb{R}$ such that $P(\{\omega\in\Omega:X(\omega)\leq Y(\omega)\})=1$ . Then, the mapping ${\overline{X}}:\Omega\to 2^{\mathbb{R}}$ defined by ${\overline{X}}(\omega)=[X(\omega),Y(\omega)]$ is $\sigma_{\Omega}-\beta_{\mathbb{R}}$ strongly measurable, where $\beta_{\mathbb{R}}$ is the Borel $\sigma$ -algebra on $\mathbb{R}$ (see a formal proof in [22]). This mapping defines a random closed interval. For a random closed interval ${\overline{X}}=[X,Y]$ , we have [4]
$$
Bel_{\overline{X}}([x,y])=P([X,Y]\subseteq[x,y])=P(X\geq x,Y\leq y),\qquad Pl_{\overline{X}}([x,y])=P([X,Y]\cap[x,y]\neq\emptyset)=1-P(X>y)-P(Y<x), \tag{4}
$$
for all $(x,y)\in\mathbb{R}^{2}$ such that $x\leq y$ . In particular, by letting $x$ tend to $-\infty$ in (4), we obtain the lower and upper cumulative distribution functions (cdf's) of ${\overline{X}}$ as
$$
F_{*}(y)=Bel_{\overline{X}}((-\infty,y])=P(Y\leq y)=F_{Y}(y),\qquad F^{*}(y)=Pl_{\overline{X}}((-\infty,y])=P(X\leq y)=F_{X}(y). \tag{5}
$$
Lower and upper expectation
Let ${\overline{X}}$ be a random set from $(\Omega,\sigma_{\Omega},P)$ to $(\mathbb{R},\beta_{\mathbb{R}})$ . Following Dempster [3], we can define its lower and upper expectations, respectively, as
$$
\mathbb{E}_{*}({\overline{X}})=\int_{-\infty}^{+\infty}x\,dF^{*}(x)
$$
and
$$
\mathbb{E}^{*}({\overline{X}})=\int_{-\infty}^{+\infty}x\,dF_{*}(x),
$$
where $F_{*}(x)=Bel_{\overline{X}}((-\infty,x])$ and $F^{*}(x)=Pl_{\overline{X}}((-\infty,x])$ are the lower and upper cdf's of ${\overline{X}}$ . When ${\overline{X}}$ is a random closed interval $[X,Y]$ , it follows from (5) that $\mathbb{E}_{*}({\overline{X}})=\mathbb{E}(X)$ and $\mathbb{E}^{*}({\overline{X}})=\mathbb{E}(Y)$ .
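These relations are easy to check by simulation. The sketch below (plain Python; the distributions are hypothetical, with $X\sim N(0,1)$ and $Y=X+E$ for an independent exponential $E$ with unit mean, so that $X\leq Y$ almost surely) estimates the lower and upper expectations of the random closed interval $[X,Y]$, which should approximate $\mathbb{E}(X)=0$ and $\mathbb{E}(Y)=1$.

```python
import random

# [X, Y] with X ~ N(0, 1) and Y = X + E, E ~ Exp(1) independent of X,
# so X <= Y almost surely; E_*([X,Y]) = E(X) = 0, E^*([X,Y]) = E(Y) = 1.
random.seed(0)
n = 200_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [x + random.expovariate(1.0) for x in xs]

e_lower = sum(xs) / n  # estimates the lower expectation E_*
e_upper = sum(ys) / n  # estimates the upper expectation E^*
print(round(e_lower, 2), round(e_upper, 2))
```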
Dempster's rule
Consider two pieces of evidence represented by random sets
$$
(\Omega_{1},\sigma_{1},P_{1},\Theta,\sigma_{\Theta},{\overline{X}}_{1})\quad\text{and}\quad(\Omega_{2},\sigma_{2},P_{2},\Theta,\sigma_{\Theta},{\overline{X}}_{2}),
$$
and the mapping ${\overline{X}}_{\cap}$ from $\Omega_{1}\times\Omega_{2}$ to $2^{\Theta}$ defined by ${\overline{X}}_{\cap}(\omega_{1},\omega_{2})={\overline{X}}_{1}(\omega_{1})\cap{\overline{X}}_{2}(\omega_{2})$ . If interpretations $\omega_{1}\in\Omega_{1}$ and $\omega_{2}\in\Omega_{2}$ both hold, we know that $\boldsymbol{\theta}\in{\overline{X}}_{\cap}(\omega_{1},\omega_{2})$ , provided that ${\overline{X}}_{1}(\omega_{1})\cap{\overline{X}}_{2}(\omega_{2})\neq\emptyset$ . Assume that ${\overline{X}}_{\cap}$ is $(\sigma_{1}\otimes\sigma_{2})-\sigma_{\Theta}$ strongly measurable, where $\sigma_{1}\otimes\sigma_{2}$ is the tensor product $\sigma$ -algebra over the Cartesian product $\Omega_{1}\times\Omega_{2}$ . The two pieces of evidence are said to be independent if, for any $A\in\sigma_{1}\otimes\sigma_{2}$ , the probability that $A$ contains the true interpretations of the two pieces of evidence is the conditional probability
$$
P_{12}(A)=(P_{1}\times P_{2})(A\mid\Theta^{*})=\frac{(P_{1}\times P_{2})(A\cap\Theta^{*})}{(P_{1}\times P_{2})(\Theta^{*})}, \tag{6}
$$
where $P_{1}\times P_{2}$ is the product measure satisfying $(P_{1}\times P_{2})(A_{1}\times A_{2})=P_{1}(A_{1})P_{2}(A_{2})$ for all $A_{1}\in\sigma_{1}$ , $A_{2}\in\sigma_{2}$ , and
$$
\Theta^{*}=\{(\omega_{1},\omega_{2})\in\Omega_{1}\times\Omega_{2}:{\overline{X}}_{\cap}(\omega_{1},\omega_{2})\neq\emptyset\}
$$
is the set of noncontradictory pairs of interpretations. The quantity
$$
\kappa=1-(P_{1}\times P_{2})(\Theta^{*})=(P_{1}\times P_{2})(\{(\omega_{1},\omega_{2})\in\Omega_{1}\times\Omega_{2}:{\overline{X}}_{\cap}(\omega_{1},\omega_{2})=\emptyset\})
$$
is called the degree of conflict between the two pieces of evidence. The combined random set
$$
(\Omega_{1}\times\Omega_{2},\sigma_{1}\otimes\sigma_{2},P_{12},\Theta,\sigma_{\Theta},{\overline{X}}_{\cap})
$$
is called the orthogonal sum of the two pieces of evidence, and is denoted by ${\overline{X}}_{1}\oplus{\overline{X}}_{2}$ . This combination rule, first introduced by Dempster in [3], is called the product-intersection rule, or Dempster's rule of combination.
We can remark that Dempster's rule is usually viewed as an operation to combine belief functions, whereas it is defined here as an operation to combine random sets. This distinction is immaterial in the standard setting, as the orthogonal sum of two belief functions does not depend on their particular random set representations and can be defined without reference to the random set framework [32]. However, it becomes crucial when considering random fuzzy sets as a model for generating belief functions, as done in this paper. We will come back to this important point in Section 3.2.
Any vacuous random set is obviously a neutral element for Dempster's rule. The following important proposition states that pieces of evidence can be combined by Dempster's rule in any order.
**Proposition 1**
*Dempster's rule is commutative and associative.*
*Proof*
See Appendix A. ∎
**Example 1**
*Let $X_{1}\sim N(\mu_{1},\sigma_{1}^{2})$ and $X_{2}\sim N(\mu_{2},\sigma_{2}^{2})$ be two independent normal random variables and consider the random intervals ${\overline{X}}_{1}=[X_{1},+\infty)$ and ${\overline{X}}_{2}=(-\infty,X_{2}]$ . The degree of conflict between ${\overline{X}}_{1}$ and ${\overline{X}}_{2}$ is
$$
\kappa=P(X_{1}>X_{2})=P(X_{2}-X_{1}<0)=\Phi\left(\frac{\mu_{1}-\mu_{2}}{\sqrt{\sigma_{1}^{2}+\sigma_{2}^{2}}}\right),
$$
where $\Phi$ is the standard normal cdf. The orthogonal sum of ${\overline{X}}_{1}$ and ${\overline{X}}_{2}$ is the random closed interval $[X^{\prime}_{1},X^{\prime}_{2}]$ , where $(X^{\prime}_{1},X^{\prime}_{2})$ is the two-dimensional random vector with distribution equal to the conditional distribution of $(X_{1},X_{2})$ given $X_{1}\leq X_{2}$ . Its density is
$$
f_{X^{\prime}_{1},X^{\prime}_{2}}(x_{1},x_{2})=\frac{\sigma_{1}^{-1}\sigma_{2}^{-1}\phi\left(\frac{x_{1}-\mu_{1}}{\sigma_{1}}\right)\phi\left(\frac{x_{2}-\mu_{2}}{\sigma_{2}}\right)I(x_{1}\leq x_{2})}{\Phi\left(\frac{\mu_{2}-\mu_{1}}{\sqrt{\sigma_{1}^{2}+\sigma_{2}^{2}}}\right)},
$$
where $\phi$ is the standard normal probability density function (pdf) and $I(\cdot)$ is the indicator function.*
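The closed-form degree of conflict in Example 1 can be checked by simulation. The following sketch (plain Python; the parameter values $\mu_{1}=0$, $\mu_{2}=1$, $\sigma_{1}=\sigma_{2}=1$ are hypothetical) compares $\kappa=\Phi\bigl((\mu_{1}-\mu_{2})/\sqrt{\sigma_{1}^{2}+\sigma_{2}^{2}}\bigr)$ with a Monte Carlo estimate of $P(X_{1}>X_{2})$.

```python
import math, random

def phi_cdf(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def conflict(mu1, s1, mu2, s2):
    """kappa = P(X1 > X2) for X1bar = [X1, +oo), X2bar = (-oo, X2]."""
    return phi_cdf((mu1 - mu2) / math.hypot(s1, s2))

kappa = conflict(0.0, 1.0, 1.0, 1.0)  # hypothetical parameters

# Monte Carlo check of the closed form
random.seed(1)
n = 100_000
hits = sum(random.gauss(0.0, 1.0) > random.gauss(1.0, 1.0)
           for _ in range(n))
print(round(kappa, 3), round(hits / n, 3))
```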
The following proposition states that the contour function of the orthogonal sum of two independent random sets ${\overline{X}}_{1}$ and ${\overline{X}}_{2}$ is proportional to the product of the contour functions of ${\overline{X}}_{1}$ and ${\overline{X}}_{2}$ .
**Proposition 2**
*Let ${\overline{X}}_{1}$ and ${\overline{X}}_{2}$ be two independent random sets on the same domain $\Theta$ , with contour functions $pl_{{\overline{X}}_{1}}$ and $pl_{{\overline{X}}_{2}}$ . For any $\theta\in\Theta$ ,
$$
pl_{{\overline{X}}_{1}\oplus{\overline{X}}_{2}}(\theta)=\frac{pl_{{\overline{X}}_{1}}(\theta)\,pl_{{\overline{X}}_{2}}(\theta)}{1-\kappa}, \tag{7}
$$
where $\kappa$ is the degree of conflict between ${\overline{X}}_{1}$ and ${\overline{X}}_{2}$ .*
*Proof*
We have
$$
pl_{{\overline{X}}_{1}\oplus{\overline{X}}_{2}}(\theta)=\frac{(P_{1}\times P_{2})(\{(\omega_{1},\omega_{2})\in\Omega_{1}\times\Omega_{2}:\theta\in{\overline{X}}_{\cap}(\omega_{1},\omega_{2})\})}{1-\kappa}.
$$
Now, $\theta\in{\overline{X}}_{\cap}(\omega_{1},\omega_{2})$ if and only if $\theta\in{\overline{X}}_{1}(\omega_{1})$ and $\theta\in{\overline{X}}_{2}(\omega_{2})$ , so the event in the numerator is the Cartesian product of $\{\omega_{1}\in\Omega_{1}:\theta\in{\overline{X}}_{1}(\omega_{1})\}$ and $\{\omega_{2}\in\Omega_{2}:\theta\in{\overline{X}}_{2}(\omega_{2})\}$ , whose probability under the product measure is $pl_{{\overline{X}}_{1}}(\theta)\,pl_{{\overline{X}}_{2}}(\theta)$ . ∎
**Example 2**
*Let us consider again the two random intervals of Example 1. The contour functions of ${\overline{X}}_{1}$ and ${\overline{X}}_{2}$ are, respectively,
$$
pl_{{\overline{X}}_{1}}(x)=P(X_{1}\leq x)=\Phi\left(\frac{x-\mu_{1}}{\sigma_{1}}\right)
$$
and
$$
pl_{{\overline{X}}_{2}}(x)=P(X_{2}\geq x)=1-\Phi\left(\frac{x-\mu_{2}}{\sigma_{2}}\right).
$$
Now, the contour function of ${\overline{X}}_{1}\oplus{\overline{X}}_{2}$ is
$$
pl_{{\overline{X}}_{1}\oplus{\overline{X}}_{2}}(x)=P(X^{\prime}_{1}\leq x\leq X^{\prime}_{2})=\frac{\Phi\left(\frac{x-\mu_{1}}{\sigma_{1}}\right)\left[1-\Phi\left(\frac{x-\mu_{2}}{\sigma_{2}}\right)\right]}{1-\kappa},
$$
in agreement with Proposition 2.*
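By Proposition 2, the contour function of ${\overline{X}}_{1}\oplus{\overline{X}}_{2}$ is the product of the two contour functions divided by $1-\kappa$. The sketch below (plain Python, hypothetical parameter values) evaluates this expression and checks it against a Monte Carlo estimate of $P(X^{\prime}_{1}\leq x\leq X^{\prime}_{2})$ obtained by rejection sampling from the conditional distribution.

```python
import math, random

def phi_cdf(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu1, s1, mu2, s2 = 0.0, 1.0, 1.0, 1.0  # hypothetical parameters
kappa = phi_cdf((mu1 - mu2) / math.hypot(s1, s2))

def pl_combined(x):
    """Contour function of the orthogonal sum, by Proposition 2."""
    pl1 = phi_cdf((x - mu1) / s1)        # contour of [X1, +oo)
    pl2 = 1.0 - phi_cdf((x - mu2) / s2)  # contour of (-oo, X2]
    return pl1 * pl2 / (1.0 - kappa)

# Monte Carlo estimate of P(X1' <= x <= X2'), where (X1', X2') is
# (X1, X2) conditioned on X1 <= X2 (rejection sampling).
random.seed(2)
n, x0, hits, kept = 200_000, 0.5, 0, 0
for _ in range(n):
    x1, x2 = random.gauss(mu1, s1), random.gauss(mu2, s2)
    if x1 <= x2:
        kept += 1
        hits += (x1 <= x0 <= x2)
print(round(pl_combined(x0), 3), round(hits / kept, 3))
```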
Marginalization and vacuous extension
Let us now consider the case where we have two variables $\boldsymbol{\theta}_{1}$ and $\boldsymbol{\theta}_{2}$ with domains $\Theta_{1}$ and $\Theta_{2}$ . (The case of $n$ variables is not more difficult conceptually, but it requires heavier notation.) Let $\sigma_{\Theta_{1}}$ and $\sigma_{\Theta_{2}}$ be $\sigma$ -algebras defined, respectively, on $\Theta_{1}$ and $\Theta_{2}$ . Let $\Theta_{12}=\Theta_{1}\times\Theta_{2}$ and $\sigma_{\Theta_{12}}=\sigma_{\Theta_{1}}\otimes\sigma_{\Theta_{2}}$ . Let ${\overline{X}}_{12}$ be a random set from $(\Omega,\sigma_{\Omega},P)$ to $(\Theta_{12},\sigma_{\Theta_{12}})$ , and ${\overline{X}}_{1}$ the mapping from $\Omega$ to $2^{\Theta_{1}}$ that maps each $\omega\in\Omega$ to the projection of ${\overline{X}}_{12}(\omega)$ onto $\Theta_{1}$ :
$$
{\overline{X}}_{1}(\omega)={\overline{X}}_{12}(\omega)\downarrow\Theta_{1}=\{\theta_{1}\in\Theta_{1}:\exists\theta_{2}\in\Theta_{2},(\theta_{1},\theta_{2})\in{\overline{X}}_{12}(\omega)\}.
$$
It is easy to see that ${\overline{X}}_{1}$ is $\sigma_{\Omega}-\sigma_{\Theta_{1}}$ strongly measurable: for any $B\in\sigma_{\Theta_{1}}$ ,
$$
{\overline{X}}_{1}^{*}(B)=\{\omega\in\Omega:{\overline{X}}_{1}(\omega)\cap B\neq\emptyset\}=\{\omega\in\Omega:{\overline{X}}_{12}(\omega)\cap(B\times\Theta_{2})\neq\emptyset\}={\overline{X}}_{12}^{*}(B\times\Theta_{2}).
$$
As $B\times\Theta_{2}\in\sigma_{\Theta_{12}}$ and ${\overline{X}}_{12}$ is $\sigma_{\Omega}-\sigma_{\Theta_{12}}$ strongly measurable, it follows that ${\overline{X}}_{1}^{*}(B)\in\sigma_{\Omega}$ . The random set ${\overline{X}}_{1}$ will be called the marginal of ${\overline{X}}_{12}$ on $\Theta_{1}$ .
Conversely, let ${\overline{X}}_{1}$ be a random set from $(\Omega,\sigma_{\Omega})$ to $(\Theta_{1},\sigma_{\Theta_{1}})$ and let ${\overline{X}}_{1\uparrow(1,2)}$ be the mapping from $\Omega$ to $2^{\Theta_{12}}$ defined by
$$
{\overline{X}}_{1\uparrow(1,2)}(\omega)={\overline{X}}_{1}(\omega)\times\Theta_{2}.
$$
For any $B\in\sigma_{\Theta_{12}}$ ,
$$
{\overline{X}}_{1\uparrow(1,2)}^{*}(B)=\{\omega\in\Omega:({\overline{X}}_{1}(\omega)\times\Theta_{2})\cap B\neq\emptyset\}=\{\omega\in\Omega:{\overline{X}}_{1}(\omega)\cap(B\downarrow\Theta_{1})\neq\emptyset\}={\overline{X}}_{1}^{*}(B\downarrow\Theta_{1}).
$$
If for all $B\in\sigma_{\Theta_{12}}$ , ${\overline{X}}_{1}^{*}(B\downarrow\Theta_{1})\in\sigma_{\Omega}$ , then ${\overline{X}}_{1\uparrow(1,2)}$ is $\sigma_{\Omega}-\sigma_{\Theta_{12}}$ strongly measurable. It is said to be the vacuous extension of ${\overline{X}}_{1}$ in $\Theta_{1}\times\Theta_{2}$ .
We say that a random set ${\overline{X}}_{12}$ from $(\Omega,\sigma_{\Omega},P)$ to $(\Theta_{12},\sigma_{\Theta_{12}})$ with marginals ${\overline{X}}_{1}$ and ${\overline{X}}_{2}$ is noninteractive if it is equal to the orthogonal sum of its marginals, i.e.,
$$
{\overline{X}}_{12}={\overline{X}}_{1\uparrow(1,2)}\oplus{\overline{X}}_{2\uparrow(1,2)},\quad\text{denoted by}\quad{\overline{X}}_{1}\oplus{\overline{X}}_{2}.
$$
**Example 3**
*Let $(X_{1},X_{2})$ be a two-dimensional random vector from $(\Omega,\sigma_{\Omega},P)$ to $(\mathbb{R}^{2},\beta_{\mathbb{R}^{2}})$ and consider the mapping ${\overline{X}}_{12}:\Omega\to 2^{\mathbb{R}^{2}}$ defined as
$$
{\overline{X}}_{12}(\omega)=(-\infty,X_{1}(\omega)]\times(-\infty,X_{2}(\omega)].
$$
This mapping defines a random set [23, page 3]. Its marginals are the random closed intervals $(-\infty,X_{1}]$ and $(-\infty,X_{2}]$ . If $X_{1}$ and $X_{2}$ are independent, then ${\overline{X}}_{12}=(-\infty,X_{1}]\oplus(-\infty,X_{2}]$ and ${\overline{X}}_{12}$ is noninteractive.*
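For the random set of Example 3, ${\overline{X}}_{12}(\omega)\subseteq(-\infty,x]\times(-\infty,y]$ holds iff $X_{1}\leq x$ and $X_{2}\leq y$, so for independent $X_{1},X_{2}$ the belief of such a box factorizes into the product of the marginal beliefs. The sketch below (plain Python; standard normal marginals and the corner $(0.5,-0.2)$ are hypothetical choices) checks this by simulation.

```python
import random

# Bel of a box B = (-inf, x] x (-inf, y]: the focal set
# (-inf, X1] x (-inf, X2] is included in B iff X1 <= x and X2 <= y.
random.seed(3)
n, x, y = 100_000, 0.5, -0.2
joint = marg1 = marg2 = 0
for _ in range(n):
    x1, x2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    joint += (x1 <= x and x2 <= y)
    marg1 += (x1 <= x)
    marg2 += (x2 <= y)
bel_joint = joint / n  # estimates Bel of the box
print(round(bel_joint, 3), round((marg1 / n) * (marg2 / n), 3))
```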
2.2 Fuzzy sets and possibility theory
A fuzzy subset of a set $\Theta$ is a pair ${\widetilde{F}}=(\Theta,\mu_{{\widetilde{F}}})$ , where $\mu_{{\widetilde{F}}}$ is a mapping from $\Theta$ to $[0,1]$ , called the membership function of ${\widetilde{F}}$ . Each number $\mu_{{\widetilde{F}}}(\theta)$ is interpreted as a degree of membership of element $\theta$ to the fuzzy set ${\widetilde{F}}$ . In the following, to simplify the notation, we will identify fuzzy sets with their membership functions and write ${\widetilde{F}}(\theta)$ for $\mu_{{\widetilde{F}}}(\theta)$ . The height of fuzzy set ${\widetilde{F}}$ is defined as
$$
\mathrm{hgt}({\widetilde{F}})=\sup_{\theta\in\Theta}{\widetilde{F}}(\theta).
$$
If $\mathrm{hgt}({\widetilde{F}})=1$ , ${\widetilde{F}}$ is said to be normal. For any $\alpha\in[0,1]$ , the (weak) $\alpha$ -cut of ${\widetilde{F}}$ is the set
$$
{}^{\alpha}{\widetilde{F}}=\{\theta\in\Theta:{\widetilde{F}}(\theta)\geq\alpha\}.
$$
Possibility and necessity measures
Let $\boldsymbol{\theta}$ be a variable taking values in $\Theta$ . Assume that we receive a piece of evidence telling us that "$\boldsymbol{\theta}$ is ${\widetilde{F}}$", where ${\widetilde{F}}$ is a normal fuzzy subset of $\Theta$ . This evidence induces a possibility measure $\Pi_{\widetilde{F}}$ from $2^{\Theta}$ to $[0,1]$ defined by
$$
\Pi_{\widetilde{F}}(B)=\sup_{\theta\in B}{\widetilde{F}}(\theta), \tag{8}
$$
for all $B\subseteq\Theta$ . The number $\Pi_{\widetilde{F}}(B)$ is interpreted as the degree of possibility that $\boldsymbol{\theta}\in B$ , given that $\boldsymbol{\theta}$ is ${\widetilde{F}}$ [38]. The corresponding possibility distribution is the mapping from $\Theta$ to $[0,1]$ defined by
$$
\pi_{\widetilde{F}}(\theta)=\Pi_{\widetilde{F}}(\{\theta\})={\widetilde{F}}(\theta),
$$
i.e., it is identical to the membership function ${\widetilde{F}}$ . The dual necessity measure is defined as
$$
N_{\widetilde{F}}(B)=1-\Pi_{\widetilde{F}}(B^{c})=\inf_{\theta\not\in B}\left[1-{\widetilde{F}}(\theta)\right]. \tag{9}
$$
It can easily be shown that mapping $N_{\widetilde{F}}:2^{\Theta}\to[0,1]$ is completely monotone, i.e., it is a belief function, and $\Pi_{\widetilde{F}}$ is the dual plausibility function [15]. These belief and plausibility functions are formally induced by the random set $([0,1],\beta_{[0,1]},\lambda,\Theta,2^{\Theta},{\overline{X}})$ , where $\beta_{[0,1]}$ is the Borel $\sigma$ -algebra on $[0,1]$ , $\lambda$ is the uniform probability measure, and ${\overline{X}}$ is the mapping from $[0,1]$ to $2^{\Theta}$ defined by ${\overline{X}}(\alpha)={}^{\alpha}{\widetilde{F}}$ . However, as we will see in Section 3.2, it is important, when combining evidence, to distinguish between possibility distributions induced by fuzzy sets, and consonant belief functions induced by random sets.
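Definitions (8) and (9) can be illustrated on a small finite frame. The sketch below (plain Python; the frame and membership values are hypothetical) computes the possibility and necessity of an event from a normal fuzzy set.

```python
def possibility(F, B):
    """Pi_F(B): supremum of the membership function over B (eq. 8)."""
    return max((F[t] for t in B), default=0.0)

def necessity(F, B, frame):
    """N_F(B) = 1 - Pi_F(complement of B) (eq. 9)."""
    return 1.0 - possibility(F, frame - B)

# Hypothetical normal fuzzy set on a three-element frame
frame = {"a", "b", "c"}
F = {"a": 1.0, "b": 0.6, "c": 0.2}
B = {"a", "b"}
print(possibility(F, B), round(necessity(F, B, frame), 6))  # → 1.0 0.8
```

Since ${\widetilde{F}}$ is normal, the necessity of an event never exceeds its possibility.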
Conjunctive combination of possibility distributions
Assume that we receive two independent pieces of information telling us that "$\boldsymbol{\theta}$ is ${\widetilde{F}}$" and "$\boldsymbol{\theta}$ is ${\widetilde{G}}$", where ${\widetilde{F}}$ and ${\widetilde{G}}$ are two fuzzy subsets of $\Theta$ . The conjunctive combination of these two pieces of evidence requires some notion of intersection between fuzzy sets. As reviewed in [13], the intersection operation can be extended to fuzzy sets using triangular norms (or t-norms for short). Given a t-norm $\top$ , the $\top$ -intersection of two fuzzy subsets ${\widetilde{F}}$ and ${\widetilde{G}}$ of the same domain $\Theta$ can be defined as
$$
({\widetilde{F}}\cap_{\top}{\widetilde{G}})(\theta)={\widetilde{F}}(\theta)\top{\widetilde{G}}(\theta)
$$
for all $\theta\in\Theta$ . The most common choices for $\top$ are the minimum and product t-norms, as originally proposed by Zadeh [36]; the corresponding operations are called, respectively, the minimum and product intersections. However, the intersection of two normal fuzzy sets is generally not normal. To obtain a normal fuzzy set, as needed for the definitions of possibility and necessity measures in (8)-(9), we define the normalized $\top$ -intersection as
$$
({\widetilde{F}}\cap^{*}_{\top}{\widetilde{G}})(\theta)=\begin{cases}\displaystyle\frac{{\widetilde{F}}(\theta)\top{\widetilde{G}}(\theta)}{\mathrm{hgt}({\widetilde{F}}\cap_{\top}{\widetilde{G}})}&\text{if }\mathrm{hgt}({\widetilde{F}}\cap_{\top}{\widetilde{G}})>0,\\
0&\text{otherwise.}\end{cases}
$$
The fuzzy set ${\widetilde{F}}\cap^{*}_{\top}{\widetilde{G}}$ is normal provided that $\mathrm{hgt}({\widetilde{F}}\cap_{\top}{\widetilde{G}})>0$ . In general, the normalized intersection $\cap^{*}_{\top}$ associated with a t-norm $\top$ is not associative. A notable exception is the case where $\top$ is the product t-norm: the normalized product intersection, denoted by $\varodot$ , is associative (see [16], and a simple proof in [9]). By abuse of notation, we can use the same symbol to denote the conjunctive combination of possibility measures and the normalized product intersection of fuzzy sets, and write
$$
\Pi_{\widetilde{F}}\varodot\Pi_{\widetilde{G}}=\Pi_{{\widetilde{F}}\varodot{\widetilde{G}}}.
$$
As noted by Dubois and Prade [16, page 352], product intersection has a reinforcement effect that is appropriate when the information sources are assumed to be independent. The choice of the normalized product intersection for combining possibility distributions makes possibility theory fit in the framework of valuation-based systems [33] and allows for possibilistic reasoning with a large number of variables. The normalized product intersection operator also has an interesting property with respect to Gaussian fuzzy numbers, as recalled in the next paragraph.
Gaussian fuzzy numbers
A fuzzy number (or fuzzy interval) can be defined as a normal and convex fuzzy subset of the real line. In particular, a Gaussian fuzzy number (GFN) is a normal fuzzy subset of $\mathbb{R}$ with membership function
$$
\varphi(x;m,h)=\exp\left(-\frac{h}{2}(x-m)^{2}\right),
$$
where $m\in\mathbb{R}$ is the mode and $h\in[0,+\infty]$ is the precision. Such a fuzzy number will be denoted by $\textsf{GFN}(m,h)$ . If $h=0$ , $\varphi(x;m,h)=1$ for all $x\in\mathbb{R}$ : $\textsf{GFN}(m,0)$ is then maximally imprecise and identical to the whole real line, whatever the value of $m$ . If $h=+\infty$ , $\varphi(x;m,h)=I(x=m)$ , where $I(\cdot)$ is the indicator function; the fuzzy number $\textsf{GFN}(m,+\infty)$ is then maximally precise and equivalent to the real number $m$ .
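The two limiting cases can be made concrete in code. The sketch below (plain Python; the numeric values are hypothetical) evaluates the GFN membership function $\varphi(x;m,h)$, including the vacuous case $h=0$ and the crisp case $h=+\infty$.

```python
import math

def gfn(x, m, h):
    """Membership degree of x in GFN(m, h); h = 0 is vacuous
    (whole real line), h = +inf is the crisp number m."""
    if math.isinf(h):
        return 1.0 if x == m else 0.0
    return math.exp(-0.5 * h * (x - m) ** 2)

print(gfn(2.0, 2.0, 4.0))       # membership 1.0 at the mode
print(gfn(3.0, 2.0, 0.0))       # h = 0: always 1.0
print(gfn(3.0, 2.0, math.inf))  # h = +inf: 0.0 off the mode
```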
It can easily be shown that the family of GFNs is closed under the normalized product intersection (see, e.g., [1]). More precisely, we have the following proposition, proved in [1].
**Proposition 3**
*For any $x\in\mathbb{R}$ ,
$$
\varphi(x;m_{1},h_{1})\cdot\varphi(x;m_{2},h_{2})=\exp\left(-\frac{h_{1}h_{2}(m_{1}-m_{2})^{2}}{2(h_{1}+h_{2})}\right)\varphi(x;m_{12},h_{12}),
$$
with
$$
m_{12}=\frac{h_{1}m_{1}+h_{2}m_{2}}{h_{1}+h_{2}}\quad\text{and}\quad h_{12}=h_{1}+h_{2}.
$$
Consequently,
$$
\textsf{GFN}(m_{1},h_{1})\varodot\textsf{GFN}(m_{2},h_{2})=\textsf{GFN}(m_{12},h_{12}),
$$
and
$$
\mathrm{hgt}\left[\textsf{GFN}(m_{1},h_{1})\cdot\textsf{GFN}(m_{2},h_{2})\right]=\exp\left(-\frac{h_{1}h_{2}(m_{1}-m_{2})^{2}}{2(h_{1}+h_{2})}\right). \tag{10}
$$*
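Proposition 3 can be verified numerically. The sketch below (plain Python; the parameters $m_1, h_1, m_2, h_2$ are hypothetical values) checks pointwise that the product of two GFN membership functions equals the stated height times $\varphi(x;m_{12},h_{12})$.

```python
import math

def gfn(x, m, h):
    """GFN(m, h) membership function (h > 0 assumed here)."""
    return math.exp(-0.5 * h * (x - m) ** 2)

m1, h1, m2, h2 = 1.0, 2.0, 3.0, 0.5  # hypothetical parameters
m12 = (h1 * m1 + h2 * m2) / (h1 + h2)
h12 = h1 + h2
height = math.exp(-h1 * h2 * (m1 - m2) ** 2 / (2.0 * (h1 + h2)))

# Pointwise check of the product identity in Proposition 3
for x in [-1.0, 0.0, 1.7, 4.2]:
    lhs = gfn(x, m1, h1) * gfn(x, m2, h2)
    rhs = height * gfn(x, m12, h12)
    assert abs(lhs - rhs) < 1e-12
print(m12, h12, round(height, 4))  # → 1.4 2.5 0.4493
```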
Marginalization and cylindrical extension
Let us now assume that we have two variables $\boldsymbol{\theta}_{1}$ and $\boldsymbol{\theta}_{2}$ jointly constrained by a possibility distribution $\pi_{\widetilde{F}}$ , where ${\widetilde{F}}$ is a fuzzy subset of $\Theta_{12}=\Theta_{1}\times\Theta_{2}$ . As a result of (8), variable $\boldsymbol{\theta}_{1}$ alone is constrained by the possibility distribution
$$
\pi_{1}(\theta_{1})=\Pi_{\widetilde{F}}(\{\theta_{1}\}\times\Theta_{2})=\sup_{\theta_{2}\in\Theta_{2}}\pi_{\widetilde{F}}(\theta_{1},\theta_{2})=\sup_{\theta_{2}\in\Theta_{2}}{\widetilde{F}}(\theta_{1},\theta_{2})=({\widetilde{F}}\downarrow\Theta_{1})(\theta_{1}),
$$
where ${\widetilde{F}}\downarrow\Theta_{1}$ is the projection of ${\widetilde{F}}$ on $\Theta_{1}$ . We say that $\pi_{1}$ is the marginal of $\pi_{\widetilde{F}}$ on $\Theta_{1}$ . Conversely, given a possibility distribution $\pi_{{\widetilde{F}}_{1}}$ , where ${\widetilde{F}}_{1}$ is a fuzzy subset of $\Theta_{1}$ , its cylindrical extension in $\Theta_{1}\times\Theta_{2}$ is the possibility distribution $\pi_{{\widetilde{F}}_{1}\times\Theta_{2}}$ defined as
$$
\pi_{{\widetilde{F}}_{1}\times\Theta_{2}}(\theta_{1},\theta_{2})=\pi_{{\widetilde{F}}_{1}}(\theta_{1})
$$
for all $(\theta_{1},\theta_{2})\in\Theta_{1}\times\Theta_{2}$. We say that the joint possibility distribution $\pi_{{\widetilde{F}}}$ on $\Theta_{12}$ is noninteractive with respect to the product intersection if it is the product of its marginals:
$$
\pi_{{\widetilde{F}}}(\theta_{1},\theta_{2})=\pi_{{\widetilde{F}}\downarrow\Theta_{1}}(\theta_{1})\cdot\pi_{{\widetilde{F}}\downarrow\Theta_{2}}(\theta_{2}).
$$
**Example 4**
*Let $\pi_{12}$ be the possibility distribution on $\mathbb{R}^{2}$ defined by
$$
\pi_{12}(x_{1},x_{2})=\exp\left(-\frac{h_{1}}{2}(x_{1}-m_{1})^{2}-\frac{h_{2}}{2}(x_{2}-m_{2})^{2}\right).
$$
Its marginals are
$$
\pi_{1}(x_{1})=\max_{x_{2}}\pi_{12}(x_{1},x_{2})=\exp\left(-\frac{h_{1}}{2}(x_{1}-m_{1})^{2}\right)
$$
and
$$
\pi_{2}(x_{2})=\max_{x_{1}}\pi_{12}(x_{1},x_{2})=\exp\left(-\frac{h_{2}}{2}(x_{2}-m_{2})^{2}\right).
$$
Consequently, $\pi_{12}$ is noninteractive with respect to the product intersection.*
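The marginalization in Example 4 can be replicated numerically. The sketch below (our illustration; the modes and precisions are hypothetical values) approximates the supremum over $x_2$ on a grid and recovers the one-dimensional Gaussian possibility distribution:

```python
import math

M1, H1, M2, H2 = 0.0, 2.0, 1.0, 0.5  # hypothetical modes and precisions

def joint_poss(x1, x2):
    """Noninteractive joint possibility distribution of Example 4."""
    return math.exp(-0.5 * H1 * (x1 - M1) ** 2 - 0.5 * H2 * (x2 - M2) ** 2)

def marginal_1(x1):
    """Marginal on the first coordinate: supremum over x2, taken on a grid."""
    grid = [i / 100.0 for i in range(-400, 401)]
    return max(joint_poss(x1, x2) for x2 in grid)

# The supremum is attained at x2 = M2, so the marginal equals
# exp(-H1 (x1 - M1)^2 / 2), as stated in the example.
for x1 in (-0.3, 0.7):
    assert abs(marginal_1(x1) - math.exp(-0.5 * H1 * (x1 - M1) ** 2)) < 1e-9
```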
3 Epistemic random fuzzy sets
The proposed epistemic random fuzzy set model is introduced in this section. The main definitions are first given in Section 3.1, and the generalized product-intersection rule is introduced in Section 3.2. Marginalization and vacuous extension are then addressed in Section 3.3, and an application to statistical inference is briefly discussed in Section 3.4.
3.1 General definitions
As before, let $(\Omega,\sigma_{\Omega},P)$ be a probability space and let $(\Theta,\sigma_{\Theta})$ be a measurable space. Let ${\widetilde{X}}$ be a mapping from $\Omega$ to the set $[0,1]^{\Theta}$ of fuzzy subsets of $\Theta$. For any $\alpha\in[0,1]$, let ${}^{\alpha}{\widetilde{X}}$ be the mapping from $\Omega$ to $2^{\Theta}$ defined as
$$
{}^{\alpha}{\widetilde{X}}(\omega)={}^{\alpha}[{\widetilde{X}}(\omega)],
$$
where ${}^{\alpha}[{\widetilde{X}}(\omega)]$ is the weak $\alpha$-cut of ${\widetilde{X}}(\omega)$. If for any $\alpha\in[0,1]$, ${}^{\alpha}{\widetilde{X}}$ is $\sigma_{\Omega}-\sigma_{\Theta}$ strongly measurable, the tuple $(\Omega,\sigma_{\Omega},P,\Theta,\sigma_{\Theta},{\widetilde{X}})$ is said to be a random fuzzy set (also called a fuzzy random variable) [2]. It is clear that the class of random fuzzy sets includes that of random sets, just as the class of fuzzy sets includes that of classical (crisp) sets.
**Example 5**
*Let $M$ be a Gaussian random variable from $(\Omega,\sigma_{\Omega},P)$ to $(\mathbb{R},\beta_{\mathbb{R}})$, with mean $\mu$ and standard deviation $\sigma$, and let ${\widetilde{X}}$ be the mapping from $\Omega$ to $[0,1]^{\mathbb{R}}$ that maps each $\omega\in\Omega$ to the triangular fuzzy number with mode $M(\omega)$ and support $[M(\omega)-a,M(\omega)+a]$:
$$
{\widetilde{X}}(\omega)(x)=\begin{cases}\frac{a-|x-M(\omega)|}{a}&\text{if }|x-M(\omega)|\leq a\\
0&\text{otherwise,}\end{cases}
$$
for some $a>0$. For any $\alpha\in[0,1]$, the $\alpha$-cut of ${\widetilde{X}}(\omega)$ is
$$
{}^{\alpha}{\widetilde{X}}(\omega)=\left[M(\omega)-a(1-\alpha),M(\omega)+a(1-\alpha)\right].
$$
The random set ${}^{\alpha}{\widetilde{X}}:\omega\mapsto{}^{\alpha}{\widetilde{X}}(\omega)$ is $\sigma_{\Omega}-\beta_{\mathbb{R}}$ strongly measurable (it is a random closed interval). Consequently, ${\widetilde{X}}$ is a random fuzzy set. In the following, such random fuzzy sets with domain $[0,1]^{\mathbb{R}}$ will be called random fuzzy numbers.*
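To make Example 5 concrete, here is a small illustration of ours (the numerical values are arbitrary) of the triangular membership function and its $\alpha$-cuts:

```python
def membership(x, mode, a):
    """Triangular fuzzy number of Example 5: given mode, support [mode-a, mode+a]."""
    return max(0.0, (a - abs(x - mode)) / a)

def alpha_cut(mode, a, alpha):
    """Alpha-cut [mode - a(1-alpha), mode + a(1-alpha)]."""
    return (mode - a * (1.0 - alpha), mode + a * (1.0 - alpha))

# The alpha-cut endpoints have membership exactly alpha, and the mode has
# membership 1, consistent with the closed interval given in the example.
mode, a = 2.0, 1.5
lo, hi = alpha_cut(mode, a, 0.4)
assert abs(membership(lo, mode, a) - 0.4) < 1e-12
assert abs(membership(hi, mode, a) - 0.4) < 1e-12
assert membership(mode, mode, a) == 1.0
```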
Interpretation
Here, as in [9], we use random fuzzy sets as a model of unreliable and fuzzy evidence. In this model, we see $\Omega$ as a set of interpretations of a piece of evidence about a variable $\boldsymbol{\theta}$ taking values in $\Theta$. If interpretation $\omega\in\Omega$ holds, we know that “$\boldsymbol{\theta}\text{ is }{\widetilde{X}}(\omega)$”, i.e., $\boldsymbol{\theta}$ is constrained by the possibility distribution $\pi_{{\widetilde{X}}(\omega)}$. We qualify such random fuzzy sets as epistemic, because they encode a state of knowledge about some variable $\boldsymbol{\theta}$. It should be noted that this semantics of random fuzzy sets is different from those reviewed in [2]. The conditional possibility interpretation developed in [2] is the closest to ours, since we also see the fuzzy sets ${\widetilde{X}}(\omega)$ as defining conditional possibility measures. However, in [2], the authors use the random fuzzy set formalism to model a situation in which we have two random experiments, one of which is completely determined; the family of possibility distributions $\{\pi_{{\widetilde{X}}(\omega)}:\omega\in\Omega\}$ then models our knowledge about the relationship between the outcomes $\omega$ of the first experiment and the possible outcomes of the second one. This formalism allows the authors of [2] to compute lower and upper bounds on the probability of any event related to the second experiment. In contrast, our model does not rely on the notion of random experiment. In particular, we do not postulate the existence of an objective probability measure on $\Theta$, and the belief and plausibility measures introduced below are not interpreted as lower and upper bounds on “true” probabilities.
Belief and plausibility
We say that random fuzzy set ${\widetilde{X}}$ is normalized if it verifies the following conditions:
1. For all $\omega\in\Omega$, ${\widetilde{X}}(\omega)$ is either the empty set, or a normal fuzzy set, i.e., $\mathrm{hgt}({\widetilde{X}}(\omega))\in\{0,1\}$.
2. $P(\{\omega\in\Omega:{\widetilde{X}}(\omega)=\emptyset\})=0$.
These conditions will be assumed in the rest of this section. For any $\omega\in\Omega$, let $\Pi_{\widetilde{X}}(·\mid\omega)$ be the possibility measure on $\Theta$ induced by ${\widetilde{X}}(\omega)$:
$$
\Pi_{\widetilde{X}}(B\mid\omega)=\sup_{\theta\in B}{\widetilde{X}}(\omega)(\theta), \tag{11}
$$
and let $N_{\widetilde{X}}(·\mid\omega)$ be the dual necessity measure:
$$
N_{\widetilde{X}}(B\mid\omega)=\begin{cases}1-\Pi_{\widetilde{X}}(B^{c}\mid\omega)&\text{if }{\widetilde{X}}(\omega)\neq\emptyset\\
0&\text{otherwise.}\end{cases}
$$
Let $Bel_{\widetilde{X}}$ and $Pl_{\widetilde{X}}$ be the mappings from $\sigma_{\Theta}$ to $[0,1]$ defined as
$$
Bel_{\widetilde{X}}(B)=\int_{\Omega}N(B\mid\omega)dP(\omega) \tag{12}
$$
and
$$
Pl_{\widetilde{X}}(B)=\int_{\Omega}\Pi(B\mid\omega)dP(\omega). \tag{13}
$$
Function $Bel_{\widetilde{X}}$ is a belief function, and $Pl_{\widetilde{X}}$ is the dual plausibility function. As shown in [2, Lemma 6.2], they are induced by the random set $(\Omega\times[0,1],\sigma_{\Omega}\otimes\beta_{[0,1]},P\otimes\lambda,\Theta,\sigma_{\Theta},{\overline{X}})$, where ${\overline{X}}:\Omega\times[0,1]\rightarrow 2^{\Theta}$ is the multi-valued mapping defined as
$$
{\overline{X}}(\omega,\alpha)={}^{\alpha}{\widetilde{X}}(\omega). \tag{14}
$$
As a consequence, $Bel_{\widetilde{X}}(B)$ and $Pl_{\widetilde{X}}(B)$ can also be written as follows:
$$
Bel_{\widetilde{X}}(B)=\int_{0}^{1}Bel_{{}^{\alpha}{\widetilde{X}}}(B)\,d\alpha \tag{15a}
$$
and
$$
Pl_{\widetilde{X}}(B)=\int_{0}^{1}Pl_{{}^{\alpha}{\widetilde{X}}}(B)\,d\alpha. \tag{15b}
$$
Lower and upper expectations of a random fuzzy number
Let ${\widetilde{X}}$ be a random fuzzy number (i.e., a random fuzzy set with domain $[0,1]^{\mathbb{R}}$ ), and let ${\overline{X}}$ be the corresponding random set defined by (14). We define the lower and upper expectations of ${\widetilde{X}}$ as the lower and upper expectations of ${\overline{X}}$ , i.e., $\mathbb{E}_{*}({\widetilde{X}})=\mathbb{E}_{*}({\overline{X}})$ and $\mathbb{E}^{*}({\widetilde{X}})=\mathbb{E}^{*}({\overline{X}})$ . It follows from (15) that
$$
\mathbb{E}_{*}({\widetilde{X}})=\int_{0}^{1}\mathbb{E}_{*}({}^{\alpha}{\widetilde{X}})\,d\alpha\quad\text{and}\quad\mathbb{E}^{*}({\widetilde{X}})=\int_{0}^{1}\mathbb{E}^{*}({}^{\alpha}{\widetilde{X}})\,d\alpha. \tag{16}
$$
**Example 6**
*Let us consider again the random fuzzy number of Example 5. Its lower and upper cdfs are, respectively, the mappings $x\mapsto Bel_{\widetilde{X}}((-\infty,x])$ and $x\mapsto Pl_{\widetilde{X}}((-\infty,x])$. Let us illustrate the calculation of the upper cdf first, using two methods.*
Method 1
From (11),
$$
\Pi\left((-\infty,x]\mid\omega\right)=\sup_{x^{\prime}\leq x}{\widetilde{X}}(\omega)(x^{\prime})=\begin{cases}1&\text{if }M(\omega)\leq x\\
\frac{x-M(\omega)+a}{a}&\text{if }x<M(\omega)\leq x+a\\
0&\text{otherwise.}\end{cases}
$$
Using (13), we get
$$
Pl_{\widetilde{X}}((-\infty,x])=P(M\leq x)\times 1+P(x<M\leq x+a)\,\mathbb{E}\left[\frac{x-M+a}{a}\,\middle|\,x<M\leq x+a\right].
$$
Now, using a well-known result about the truncated normal distribution,
$$
\mathbb{E}\left[M\mid x<M\leq x+a\right]=\mu+\sigma\frac{\phi\left(\frac{x-\mu}{\sigma}\right)-\phi\left(\frac{x+a-\mu}{\sigma}\right)}{\Phi\left(\frac{x+a-\mu}{\sigma}\right)-\Phi\left(\frac{x-\mu}{\sigma}\right)}.
$$
After rearranging the terms, we finally obtain
$$
Pl_{\widetilde{X}}((-\infty,x])=\left(\frac{x+a-\mu}{a}\right)\Phi\left(\frac{x+a-\mu}{\sigma}\right)-\left(\frac{x-\mu}{a}\right)\Phi\left(\frac{x-\mu}{\sigma}\right)+\\
\frac{\sigma}{a}\left[\phi\left(\frac{x+a-\mu}{\sigma}\right)-\phi\left(\frac{x-\mu}{\sigma}\right)\right]. \tag{17}
$$
Method 2
Let us now use (15b). We have
$$
Pl_{\widetilde{X}}((-\infty,x])=\int_{0}^{1}P(M-a(1-\alpha)\leq x)\,d\alpha=\int_{0}^{1}\Phi\left(\frac{x+a(1-\alpha)-\mu}{\sigma}\right)d\alpha.
$$
Using the formula
$$
\int\Phi(u+vx)dx=\frac{1}{v}\left[(u+vx)\Phi(u+vx)+\phi(u+vx)\right]+C,
$$
we get the same result as (17). Using either of the two methods demonstrated above, we obtain the following expression for the lower cdf:
$$
Bel_{\widetilde{X}}((-\infty,x])=\left(\frac{x-\mu}{a}\right)\Phi\left(\frac{x-\mu}{\sigma}\right)-\left(\frac{x-a-\mu}{a}\right)\Phi\left(\frac{x-a-\mu}{\sigma}\right)+\\
\frac{\sigma}{a}\left[\phi\left(\frac{x-\mu}{\sigma}\right)-\phi\left(\frac{x-a-\mu}{\sigma}\right)\right]. \tag{18}
$$
It can easily be checked that, in the limit $a\to 0$,
$$
Bel_{\widetilde{X}}((-\infty,x])=Pl_{\widetilde{X}}((-\infty,x])=\Phi\left(\frac{x-\mu}{\sigma}\right).
$$
Examples of functions $Bel_{\widetilde{X}}((-\infty,x])$ and $Pl_{\widetilde{X}}((-\infty,x])$ for different values of $a$ are shown in Figure 1.
Figure 1: Lower and upper cdfs for the random fuzzy numbers studied in Examples 5 and 6, with $\mu=0$, $\sigma=1$, and $a=0.5$ (blue curves) or $a=1.5$ (red curves). The Gaussian cdf corresponding to $a=0$ is shown as a broken line.
Now, the lower and upper expectations of ${\widetilde{X}}$ can be computed from (16) as
$$
\mathbb{E}_{*}({\widetilde{X}})=\int_{0}^{1}\mathbb{E}_{*}({}^{\alpha}{\widetilde{X}})\,d\alpha=\int_{0}^{1}[\mu-a(1-\alpha)]\,d\alpha=\mu-\frac{a}{2},
$$
and
$$
\mathbb{E}^{*}({\widetilde{X}})=\int_{0}^{1}\mathbb{E}^{*}({}^{\alpha}{\widetilde{X}})\,d\alpha=\int_{0}^{1}[\mu+a(1-\alpha)]\,d\alpha=\mu+\frac{a}{2}.
$$
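The closed-form expression (17) and the $\alpha$-integration of Method 2 can be cross-checked numerically. The following sketch (our illustration; the parameter values are arbitrary) implements Eq. (17) and compares it with a midpoint-rule approximation of $\int_0^1 P(M-a(1-\alpha)\leq x)\,d\alpha$:

```python
import math

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def pl_cdf(x, mu, sigma, a):
    """Upper cdf of Example 6, Eq. (17)."""
    return ((x + a - mu) / a) * Phi((x + a - mu) / sigma) \
        - ((x - mu) / a) * Phi((x - mu) / sigma) \
        + (sigma / a) * (phi((x + a - mu) / sigma) - phi((x - mu) / sigma))

def pl_cdf_numeric(x, mu, sigma, a, n=20000):
    """Method 2: integrate P(M - a(1-alpha) <= x) over alpha (midpoint rule)."""
    total = 0.0
    for i in range(n):
        alpha = (i + 0.5) / n
        total += Phi((x + a * (1.0 - alpha) - mu) / sigma)
    return total / n

mu, sigma, a = 0.0, 1.0, 1.5
for x in (-0.5, 0.3, 1.2):
    assert abs(pl_cdf(x, mu, sigma, a) - pl_cdf_numeric(x, mu, sigma, a)) < 1e-4
```

The lower and upper expectations $\mu\mp a/2$ can be checked the same way, by averaging the $\alpha$-cut bounds $\mu\mp a(1-\alpha)$ over $\alpha$ as in (16).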
3.2 Generalized product-intersection rule
Dempster's rule and the possibilistic product intersection rule recalled, respectively, in Sections 2.1 and 2.2 can be generalized to combine epistemic random fuzzy sets. Consider two epistemic random fuzzy sets $(\Omega_{1},\sigma_{1},P_{1},\Theta,\sigma_{\Theta},{\widetilde{X}}_{1})$ and $(\Omega_{2},\sigma_{2},P_{2},\Theta,\sigma_{\Theta},{\widetilde{X}}_{2})$ encoding independent pieces of evidence. The independence assumption means here that the relevant probability measure on the joint measurable space $(\Omega_{1}\times\Omega_{2},\sigma_{1}\otimes\sigma_{2})$ is the product measure $P_{1}\times P_{2}$.
If interpretations $\omega_{1}\in\Omega_{1}$ and $\omega_{2}\in\Omega_{2}$ both hold, we know that “$\boldsymbol{\theta}\text{ is }{\widetilde{X}}_{1}(\omega_{1})$” and “$\boldsymbol{\theta}\text{ is }{\widetilde{X}}_{2}(\omega_{2})$”. It is then natural to combine the fuzzy sets ${\widetilde{X}}_{1}(\omega_{1})$ and ${\widetilde{X}}_{2}(\omega_{2})$ by an intersection operator. As discussed in Section 2.2, normalized product intersection is a good candidate, as it is suitable for combining fuzzy information from independent sources and it is associative. We will thus consider the mapping ${\widetilde{X}}_{\varodot}(\omega_{1},\omega_{2})={\widetilde{X}}_{1}(\omega_{1})\varodot{\widetilde{X}}_{2}(\omega_{2})$, which we will assume to be $\sigma_{1}\otimes\sigma_{2}$-$\sigma_{\Theta}$ strongly measurable.
As in the crisp case recalled in Section 2.1, if $\mathrm{hgt}({\widetilde{X}}_{1}(\omega_{1})\cdot{\widetilde{X}}_{2}(\omega_{2}))=0$, the two interpretations $\omega_{1}$ and $\omega_{2}$ are inconsistent and they must be discarded. If $\mathrm{hgt}({\widetilde{X}}_{1}(\omega_{1})\cdot{\widetilde{X}}_{2}(\omega_{2}))=1$, the two interpretations are fully consistent. If $0<\mathrm{hgt}({\widetilde{X}}_{1}(\omega_{1})\cdot{\widetilde{X}}_{2}(\omega_{2}))<1$, $\omega_{1}$ and $\omega_{2}$ are partially consistent. As proposed in [9], instead of simply discarding only fully inconsistent pairs $(\omega_{1},\omega_{2})$, it makes sense to give all pairs $(\omega_{1},\omega_{2})$ a weight proportional to the degree of consistency between ${\widetilde{X}}_{1}(\omega_{1})$ and ${\widetilde{X}}_{2}(\omega_{2})$. This can be achieved by conditioning $P_{1}\times P_{2}$ on the fuzzy set ${\widetilde{\Theta}}^{*}$ of consistent pairs of interpretations, with membership function
$$
{\widetilde{\Theta}}^{*}(\omega_{1},\omega_{2})=\mathrm{hgt}\left({\widetilde{X}}_{1}(\omega_{1})\cdot{\widetilde{X}}_{2}(\omega_{2})\right).
$$
Using Zadeh's definition of a fuzzy event [37], we get the following expression for the conditional probability measure ${\widetilde{P}}_{12}=(P_{1}\times P_{2})(·\mid{\widetilde{\Theta}}^{*})$, for any $B\in\sigma_{1}\otimes\sigma_{2}$:
$$
{\widetilde{P}}_{12}(B)=\frac{(P_{1}\times P_{2})(B\cap{\widetilde{\Theta}}^{*})}{(P_{1}\times P_{2})({\widetilde{\Theta}}^{*})}=\frac{\int_{\Omega_{1}}\int_{\Omega_{2}}B(\omega_{1},\omega_{2})\,\mathrm{hgt}\left({\widetilde{X}}_{1}(\omega_{1})\cdot{\widetilde{X}}_{2}(\omega_{2})\right)dP_{2}(\omega_{2})\,dP_{1}(\omega_{1})}{\int_{\Omega_{1}}\int_{\Omega_{2}}\mathrm{hgt}\left({\widetilde{X}}_{1}(\omega_{1})\cdot{\widetilde{X}}_{2}(\omega_{2})\right)dP_{2}(\omega_{2})\,dP_{1}(\omega_{1})},
$$
where $B(·,·)$ denotes the indicator function of $B$. This conditioning operation, called soft normalization, was first proposed in [35] in the finite case and with a different justification.
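On finite interpretation sets, soft normalization reduces to weighting each pair of interpretations by its degree of consistency. The sketch below is a hypothetical numerical example of ours (the probabilities and consistency degrees are made up) computing $\widetilde{P}_{12}$ and the degree of conflict:

```python
# Hypothetical example: two pieces of evidence, two interpretations each.
P1 = {"w1a": 0.7, "w1b": 0.3}                     # probabilities on Omega_1
P2 = {"w2a": 0.6, "w2b": 0.4}                     # probabilities on Omega_2
hgt = {("w1a", "w2a"): 1.0, ("w1a", "w2b"): 0.5,  # consistency degrees
       ("w1b", "w2a"): 0.2, ("w1b", "w2b"): 0.0}  # hgt(X1(w1) . X2(w2))

# Soft normalization: condition P1 x P2 on the fuzzy set of consistent pairs.
norm = sum(P1[u] * P2[v] * hgt[u, v] for u in P1 for v in P2)
P12 = {(u, v): P1[u] * P2[v] * hgt[u, v] / norm for u in P1 for v in P2}
kappa = 1.0 - norm                                # degree of conflict, Eq. (19)

assert abs(sum(P12.values()) - 1.0) < 1e-12
assert P12["w1b", "w2b"] == 0.0   # a fully inconsistent pair gets zero mass
```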
The combined random fuzzy set
$$
(\Omega_{1}\times\Omega_{2},\sigma_{1}\otimes\sigma_{2},{\widetilde{P}}_{12},\Theta,\sigma_{\Theta},{\widetilde{X}}_{\varodot})
$$
is called the orthogonal sum of the two pieces of evidence. This operation generalizes both Dempster's rule and the normalized product of possibility distributions. We will refer to it as the generalized product-intersection rule, and it will be denoted by the same symbol $\oplus$ as Dempster's rule. It is clear that ${\widetilde{X}}\oplus{\overline{X}}_{0}={\widetilde{X}}$ for any random fuzzy set ${\widetilde{X}}$ and any vacuous random set ${\overline{X}}_{0}$ on the same domain $\Theta$. The degree of conflict between two random fuzzy sets ${\widetilde{X}}_{1}$ and ${\widetilde{X}}_{2}$ is naturally defined as
$$
\kappa=1-(P_{1}\times P_{2})({\widetilde{\Theta}}^{*})=1-\int_{\Omega_{1}}\int_{\Omega_{2}}\mathrm{hgt}\left({\widetilde{X}}_{1}(\omega_{1})\cdot{\widetilde{X}}_{2}(\omega_{2})\right)dP_{2}(\omega_{2})\,dP_{1}(\omega_{1}). \tag{19}
$$
The associativity of $\oplus$ was proved in [9] in the finite case; we give a similar proof in the general case.
**Proposition 4**
*The generalized product-intersection rule $\oplus$ for random fuzzy sets is commutative and associative.*
* Proof*
See B. ∎
The following proposition states that a counterpart of Proposition 2 is still valid when combining independent random fuzzy sets, i.e., the combined contour function is still proportional to the product of the contour functions.
**Proposition 5**
*Let ${\widetilde{X}}_{1}$ and ${\widetilde{X}}_{2}$ be two random fuzzy sets on the same domain $\Theta$ , with contour functions $pl_{{\widetilde{X}}_{1}}$ and $pl_{{\widetilde{X}}_{2}}$ and with degree of conflict $\kappa$ defined by (19). The contour function $pl_{{\widetilde{X}}_{1}\oplus{\widetilde{X}}_{2}}$ of ${\widetilde{X}}_{1}\oplus{\widetilde{X}}_{2}$ verifies
$$
(pl_{{\widetilde{X}}_{1}\oplus{\widetilde{X}}_{2}})(\theta)=\frac{pl_{{\widetilde{X}}_{1}}(\theta)pl_{{\widetilde{X}}_{2}}(\theta)}{1-\kappa}, \tag{20}
$$
for all $\thetaâ\Theta$ .*
* Proof*
We have
$$
(pl_{{\widetilde{X}}_{1}\oplus{\widetilde{X}}_{2}})(\theta)=\frac{\int_{\Omega_{1}}\int_{\Omega_{2}}\mathrm{hgt}\left({\widetilde{X}}_{1}(\omega_{1})\cdot{\widetilde{X}}_{2}(\omega_{2})\right){\widetilde{X}}_{\varodot}(\omega_{1},\omega_{2})(\theta)\,dP_{2}(\omega_{2})\,dP_{1}(\omega_{1})}{1-\kappa}.
$$
Since, by definition of the normalized product intersection, $\mathrm{hgt}\left({\widetilde{X}}_{1}(\omega_{1})\cdot{\widetilde{X}}_{2}(\omega_{2})\right){\widetilde{X}}_{\varodot}(\omega_{1},\omega_{2})(\theta)={\widetilde{X}}_{1}(\omega_{1})(\theta)\,{\widetilde{X}}_{2}(\omega_{2})(\theta)$, Fubini's theorem yields
$$
(pl_{{\widetilde{X}}_{1}\oplus{\widetilde{X}}_{2}})(\theta)=\frac{\left(\int_{\Omega_{1}}{\widetilde{X}}_{1}(\omega_{1})(\theta)\,dP_{1}(\omega_{1})\right)\left(\int_{\Omega_{2}}{\widetilde{X}}_{2}(\omega_{2})(\theta)\,dP_{2}(\omega_{2})\right)}{1-\kappa}=\frac{pl_{{\widetilde{X}}_{1}}(\theta)\,pl_{{\widetilde{X}}_{2}}(\theta)}{1-\kappa}.
$$
∎
As remarked in Section 2.2, a belief function induced by a random fuzzy set is also induced by a random (crisp) set. However, combining random fuzzy sets or random crisp sets does not result in the same belief function in general. In particular, it is well known that Dempster's rule does not preserve consonance. To combine two belief functions, we must, therefore, examine the evidence on which they are based, not only to determine whether the bodies of evidence are independent or not, but also to determine whether the evidence is fuzzy or crisp. This point is illustrated by the following example.
**Example 7**
*Consider the following two mappings from $\mathbb{R}$ to $[0,1]$ represented in Figure 2a:
$$
\pi_{1}=\textsf{GFN}(0,0.3),\quad\pi_{2}=\textsf{GFN}\left(1,0.5\right).
$$
If these two mappings are possibility distributions encoding fully reliable but fuzzy evidence, they correspond to “constant random fuzzy sets”, i.e., mappings ${\widetilde{X}}_{1}(\omega)=\pi_{1}$ and ${\widetilde{X}}_{2}(\omega)=\pi_{2}$ with $P(\{\omega\})=1$. The combined random fuzzy set ${\widetilde{X}}_{1}\oplus{\widetilde{X}}_{2}$ is then defined by $({\widetilde{X}}_{1}\oplus{\widetilde{X}}_{2})(\omega)=\pi_{1}\varodot\pi_{2}$. From Proposition 3, the normalized product of two GFNs is a GFN. Here, we get the combined possibility distribution plotted as a red broken curve in Figure 2a:
$$
\pi_{1}\varodot\pi_{2}=\textsf{GFN}(0.625,0.8).
$$
Figure 2: (a): Two Gaussian possibility distributions (black solid curves) with their normalized product intersection (red broken curve) and the contour function of the combined random set (blue solid curve). (b): Lower and upper cdfs of the combined possibility distribution (red broken curves) and of the combined random set (blue solid curves).
The corresponding lower and upper cumulative distribution functions (cdfs) are, respectively
$$
Bel_{{\widetilde{X}}_{1}\oplus{\widetilde{X}}_{2}}((-\infty,x])=\begin{cases}0&\text{if }x\leq 0.625\\
1-\exp\left(-0.4(x-0.625)^{2}\right)&\text{if }x>0.625\end{cases}
$$
and
$$
Pl_{{\widetilde{X}}_{1}\oplus{\widetilde{X}}_{2}}((-\infty,x])=\begin{cases}\exp\left(-0.4(x-0.625)^{2}\right)&\text{if }x\leq 0.625\\
1&\text{if }x>0.625.\end{cases}
$$
These two functions are plotted as red broken curves in Figure 2b. Alternatively, as explained in Section 2.1, we may see $\pi_{1}$ and $\pi_{2}$ as encoding crisp but partially reliable evidence, in which case they define two independent consonant random intervals ${\overline{X}}_{1}(\alpha_{1})={}^{\alpha_{1}}\pi_{1}$ and ${\overline{X}}_{2}(\alpha_{2})={}^{\alpha_{2}}\pi_{2}$, where $(\alpha_{1},\alpha_{2})$ has a uniform distribution on $[0,1]^{2}$. These two random intervals can be combined numerically using Monte-Carlo simulation, as explained in [20]. The contour function and the lower and upper cdfs are plotted as solid blue lines in Figures 2a and 2b, respectively. We notice that the contour functions are proportional, as a consequence of Proposition 2.*
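The Monte-Carlo combination described in this example can be sketched as follows (our illustrative code, following the description above rather than the exact algorithm of [20]). Each draw picks $(\alpha_1,\alpha_2)$, intersects the two $\alpha$-cuts, discards empty intersections, and estimates the combined contour function, which should be proportional to $\pi_1\pi_2$ by Proposition 2:

```python
import math
import random

def cut_radius(alpha, h):
    """Half-width of the alpha-cut of GFN(m, h): solve exp(-h r^2 / 2) = alpha."""
    return math.sqrt(2.0 * math.log(1.0 / alpha) / h)

def mc_combine(xs, m1=0.0, h1=0.3, m2=1.0, h2=0.5, n=200_000, seed=42):
    """Dempster combination of the consonant random intervals induced by
    GFN(m1, h1) and GFN(m2, h2); returns contour estimates and conflict."""
    rng = random.Random(seed)
    hits, nonempty = [0] * len(xs), 0
    for _ in range(n):
        a1, a2 = 1.0 - rng.random(), 1.0 - rng.random()  # uniform on (0, 1]
        lo = max(m1 - cut_radius(a1, h1), m2 - cut_radius(a2, h2))
        hi = min(m1 + cut_radius(a1, h1), m2 + cut_radius(a2, h2))
        if lo <= hi:                      # keep only non-empty intersections
            nonempty += 1
            for i, x in enumerate(xs):
                if lo <= x <= hi:
                    hits[i] += 1
    return [c / nonempty for c in hits], 1.0 - nonempty / n

# Proposition 2: the combined contour function is pi1 * pi2 / (1 - kappa).
pl, kappa = mc_combine([0.0, 1.0])
for x, p in zip([0.0, 1.0], pl):
    expected = math.exp(-0.15 * x**2) * math.exp(-0.25 * (x - 1.0)**2) / (1.0 - kappa)
    assert abs(p - expected) < 0.02
```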
3.3 Marginalization and vacuous extension
Let us now consider again the case where we have two variables $\boldsymbol{\theta}_{1}$ and $\boldsymbol{\theta}_{2}$ with respective domains $\Theta_{1}$ and $\Theta_{2}$. Let ${\widetilde{X}}_{12}$ be a random fuzzy set from a probability space $(\Omega,\sigma_{\Omega},P)$ to the measurable space $(\Theta_{12},\sigma_{\Theta_{12}})$ with $\Theta_{12}=\Theta_{1}\times\Theta_{2}$ and $\sigma_{\Theta_{12}}=\sigma_{\Theta_{1}}\otimes\sigma_{\Theta_{2}}$, where $\sigma_{\Theta_{1}}$ and $\sigma_{\Theta_{2}}$ are $\sigma$-algebras on $\Theta_{1}$ and $\Theta_{2}$, respectively. Let ${\widetilde{X}}_{1}$ be the mapping from $\Omega$ to $[0,1]^{\Theta_{1}}$ defined by
$$
{\widetilde{X}}_{1}(\omega)={\widetilde{X}}_{12}(\omega)\downarrow\Theta_{1},
$$
where, as before, $\downarrow$ denotes fuzzy set projection. If, for all $\alpha\in[0,1]$, the mapping ${}^{\alpha}{\widetilde{X}}_{1}$ is $\sigma_{\Omega}-\sigma_{\Theta_{1}}$ strongly measurable, then the random fuzzy set ${\widetilde{X}}_{1}$ is called the marginal of ${\widetilde{X}}_{12}$ on $\Theta_{1}$.
Conversely, given a random fuzzy set ${\widetilde{X}}_{1}$ from $(\Omega,\sigma_{\Omega},P)$ to $(\Theta_{1},\sigma_{\Theta_{1}})$, let ${\widetilde{X}}_{1\uparrow(1,2)}$ be the mapping from $\Omega$ to $[0,1]^{\Theta_{12}}$ that maps each $\omega\in\Omega$ to the cylindrical extension of ${\widetilde{X}}_{1}(\omega)$ in $\Theta_{12}$:
$$
{\widetilde{X}}_{1\uparrow(1,2)}(\omega)={\widetilde{X}}_{1}(\omega)\times\Theta_{2},
$$
i.e., for all $(\theta_{1},\theta_{2})\in\Theta_{12}$,
$$
{\widetilde{X}}_{1\uparrow(1,2)}(\omega)(\theta_{1},\theta_{2})={\widetilde{X}}_{1}(\omega)(\theta_{1}).
$$
If the mapping ${\widetilde{X}}_{1\uparrow(1,2)}$ is $\sigma_{\Omega}-\sigma_{\Theta_{12}}$ strongly measurable, then the random fuzzy set ${\widetilde{X}}_{1\uparrow(1,2)}$ is called the vacuous extension of ${\widetilde{X}}_{1}$ in $\Theta_{12}$ .
We say that a joint random fuzzy set is noninteractive if it is equal to the orthogonal sum of the vacuous extensions of its projections:
$$
{\widetilde{X}}_{12}={\widetilde{X}}_{1\uparrow(1,2)}\oplus{\widetilde{X}}_{2\uparrow(1,2)}\quad\text{denoted as}\quad{\widetilde{X}}_{1}\oplus{\widetilde{X}}_{2}.
$$
A particular kind of noninteractive random fuzzy sets will be studied in Section 5.3.
3.4 Application to statistical inference
Epistemic random fuzzy sets naturally arise in the context of statistical inference. As proposed by Shafer [29] and formally justified in [7, 8], the information conveyed by the likelihood function in statistical inference problems can be represented by a consonant belief function, whose contour function is equal to the relative likelihood function. For a statistical model $f({\boldsymbol{x}};\theta)$, where ${\boldsymbol{x}}\in{\cal X}$ is the observation and $\theta\in\Theta$ is the unknown parameter, the likelihood-based belief function $Bel(·;{\boldsymbol{x}})$ on $\Theta$ after observing ${\boldsymbol{x}}$ is, thus, consonant and defined by the contour function
$$
pl(\theta;{\boldsymbol{x}})=\frac{L(\theta;{\boldsymbol{x}})}{\sup_{\theta^{\prime}\in\Theta}L(\theta^{\prime};{\boldsymbol{x}})}, \tag{21}
$$
where $L(·;{\boldsymbol{x}}):\theta\mapsto f({\boldsymbol{x}};\theta)$ is the likelihood function, and it is assumed that the denominator in (21) is finite. The corresponding plausibility function is, thus, defined by
$$
Pl(A;{\boldsymbol{x}})=\sup_{\theta\in A}pl(\theta;{\boldsymbol{x}})
$$
for any $A\subseteq\Theta$, i.e., it is a possibility measure. However, as noticed by Shafer in [29] and [31], and also discussed in [7], this construction is not compatible with Dempster's rule: if we consider two independent observations ${\boldsymbol{x}}$ and ${\boldsymbol{x}}^{\prime}$, the belief function $Bel(·;{\boldsymbol{x}},{\boldsymbol{x}}^{\prime})$ is not equal to the orthogonal sum $Bel(·;{\boldsymbol{x}})\oplus Bel(·;{\boldsymbol{x}}^{\prime})$, which is not consonant. As argued in [9], this problem disappears if we do not consider the likelihood-based belief function to be induced by a consonant random crisp set, but by a constant random fuzzy set ${\widetilde{\theta}}_{\boldsymbol{x}}$ with membership function ${\widetilde{\theta}}_{\boldsymbol{x}}(\theta)=pl(\theta;{\boldsymbol{x}})$. We can interpret ${\widetilde{\theta}}_{\boldsymbol{x}}$ as the fuzzy set of likely values of $\theta$ after observing ${\boldsymbol{x}}$. Combining the contour functions (21) by the normalized product intersection rule then yields the correct result, i.e., the constant random fuzzy set ${\widetilde{\theta}}_{{\boldsymbol{x}},{\boldsymbol{x}}^{\prime}}$ with membership function ${\widetilde{\theta}}_{{\boldsymbol{x}},{\boldsymbol{x}}^{\prime}}(\theta)=({\widetilde{\theta}}_{\boldsymbol{x}}\varodot{\widetilde{\theta}}_{{\boldsymbol{x}}^{\prime}})(\theta)$.
Now, consider a prediction problem, where we want to predict the value of a random variable $Y$ whose distribution also depends on $\theta$ . We can always write $Y$ in the form $Y=\varphi(\theta,U)$ , where $U$ is a pivotal random variable with known distribution [19, 20]. After observing the data ${\boldsymbol{x}}$ , our knowledge of $\theta$ is represented by the fuzzy set ${\widetilde{\theta}}_{\boldsymbol{x}}$ . Conditionally on $U=u$ , our knowledge of $Y$ is, thus, represented by the fuzzy set ${\widetilde{Y}}(u)=\varphi({\widetilde{\theta}}_{\boldsymbol{x}},u)$ , with membership function
$$
{\widetilde{Y}}(u)(y)=\sup_{\theta:\varphi(\theta,u)=y}{\widetilde{\theta}}_{\boldsymbol{x}}(\theta).
$$
The mapping ${\widetilde{Y}}:u\mapsto{\widetilde{Y}}(u)$ is, then, a random fuzzy set representing statistical evidence about $Y$.
**Example 8**
*Let ${\boldsymbol{X}}=(X_{1},...,X_{n})$ be an independent and identically distributed (iid) Gaussian sample with parent distribution $N(\theta,1)$ , and let $Y\sim N(\theta,1)$ . After observing a realization ${\boldsymbol{x}}$ of ${\boldsymbol{X}}$ , the likelihood function is
$$
L(\theta;{\boldsymbol{x}})=(2\pi)^{-n/2}\exp\left(-\frac{1}{2}\sum_{i=1}^{n}(x_{i}-\theta)^{2}\right).
$$
Denoting by $\widehat{\theta}$ the sample mean, the fuzzy set ${\widetilde{\theta}}_{\boldsymbol{x}}$ of likely values of $\theta$ after observing ${\boldsymbol{x}}$ is the relative likelihood
$$
{\widetilde{\theta}}_{\boldsymbol{x}}(\theta)=\frac{L(\theta;{\boldsymbol{x}})}{L(\widehat{\theta};{\boldsymbol{x}})}=\exp\left(-\frac{n}{2}(\theta-\widehat{\theta})^{2}\right).
$$
It is the Gaussian fuzzy number $\textsf{GFN}(\widehat{\theta},n)$ with mode $\widehat{\theta}$ and precision $n$. Now, $Y$ can be written as $Y=\theta+U$, with $U\sim N(0,1)$. Consequently, the conditional possibility distribution on $Y$ given $U=u$ is the Gaussian fuzzy number ${\widetilde{\theta}}_{\boldsymbol{x}}+u=\textsf{GFN}(\widehat{\theta}+u,n)$, and our knowledge of $Y$ is described by the random fuzzy set $U\mapsto\textsf{GFN}(\widehat{\theta}+U,n)$, with $U\sim N(0,1)$. This is a Gaussian fuzzy number with fixed precision $h=n$ and normal random mode $M=\widehat{\theta}+U\sim N(\widehat{\theta},1)$. This important class of random fuzzy sets will be studied in the next section.*
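The identification of the relative likelihood with $\textsf{GFN}(\widehat{\theta},n)$ in Example 8 is easy to verify numerically (an illustrative sketch of ours; the sample values are arbitrary):

```python
import math

def relative_likelihood(theta, sample):
    """Relative likelihood (21) for an iid N(theta, 1) sample."""
    theta_hat = sum(sample) / len(sample)
    loglik = -0.5 * sum((x - theta) ** 2 for x in sample)
    loglik_max = -0.5 * sum((x - theta_hat) ** 2 for x in sample)
    return math.exp(loglik - loglik_max)

sample = [0.2, -0.5, 1.1, 0.4]
n, theta_hat = len(sample), sum(sample) / len(sample)
# Example 8: the relative likelihood is the membership function of GFN(theta_hat, n).
for theta in (-1.0, 0.0, 0.8):
    gfn_value = math.exp(-0.5 * n * (theta - theta_hat) ** 2)
    assert abs(relative_likelihood(theta, sample) - gfn_value) < 1e-12
```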
4 Gaussian random fuzzy numbers
In this section, we introduce Gaussian random fuzzy numbers (GRFNs) as a practical model for representing uncertainty on a real variable. As we will see, this model encompasses Gaussian random variables and Gaussian fuzzy numbers as special cases. A GRFN can be seen, equivalently, as a Gaussian random variable with fuzzy mean, or as a Gaussian fuzzy number with random mode. The definition and main properties will first be presented in Section 4.1. The expression of the orthogonal sum of two GRFNs will then be derived in Section 4.2. Finally, arithmetic operations on GRFNs will be addressed in Section 4.3.
4.1 Definition and main properties
**Definition 1**
*Let $(\Omega,\sigma_{\Omega},P)$ be a probability space and let $M:\Omega\rightarrow\mathbb{R}$ be a Gaussian random variable with mean $\mu$ and variance $\sigma^{2}$. The random fuzzy set ${\widetilde{X}}:\Omega\rightarrow[0,1]^{\mathbb{R}}$ defined as
$$
{\widetilde{X}}(\omega)=\textsf{GFN}(M(\omega),h)
$$
is called a Gaussian random fuzzy number (GRFN) with mean $\mu$ , variance $\sigma^{2}$ and precision $h$ , which we write ${\widetilde{X}}\sim{\widetilde{N}}(\mu,\sigma^{2},h)$ .*
In the definition of a GRFN, $\mu$ is a location parameter, while parameters $h$ and $\sigma^{2}$ correspond, respectively, to possibilistic and probabilistic uncertainty. If $h=0$, imprecision is maximal whatever the values of $\mu$ and $\sigma^{2}$: the GRFN ${\widetilde{X}}$ then induces the vacuous belief function on $\mathbb{R}$, in which case $Bel_{\widetilde{X}}(A)=0$ for all $A\neq\mathbb{R}$, and $Pl_{\widetilde{X}}(A)=1$ for all $A\subseteq\mathbb{R}$ such that $A\neq\emptyset$; such a GRFN will be said to be vacuous and will be denoted by ${\widetilde{X}}\sim{\widetilde{N}}(0,1,0)$. If $h=+\infty$, each fuzzy number $\textsf{GFN}(M(\omega),h)$ is reduced to a point: the GRFN ${\widetilde{X}}$ is then equivalent to a Gaussian random variable with mean $\mu$ and variance $\sigma^{2}$, which we can write ${\widetilde{N}}(\mu,\sigma^{2},+\infty)=N(\mu,\sigma^{2})$. Another special case of interest is that where $\sigma^{2}=0$, in which case $M$ is a constant random variable taking value $\mu$, and ${\widetilde{X}}$ is a possibilistic variable with possibility distribution $\textsf{GFN}(\mu,h)$.
The following proposition gives the expression of the contour function $pl_{\widetilde{X}}$ associated with ${\widetilde{X}}$.
**Proposition 6**
*The contour function of GRFN ${\widetilde{X}}\sim{\widetilde{N}}(\mu,\sigma^{2},h)$ is
$$
pl_{\widetilde{X}}(x)=\frac{1}{\sqrt{1+h\sigma^{2}}}\exp\left(-\frac{h(x-\mu)^{2}}{2(1+h\sigma^{2})}\right). \tag{22}
$$*
* Proof*
See C. ∎
As shown by Proposition 6, the contour function $pl_{\widetilde{X}}$ is constant in two cases: if $h=0$, ${\widetilde{X}}$ is vacuous, and $pl_{\widetilde{X}}(x)=1$ for all $x\in\mathbb{R}$; if $h=+\infty$, ${\widetilde{X}}$ is a random variable, and $pl_{\widetilde{X}}(x)=0$ for all $x\in\mathbb{R}$. We also note that, if $\sigma^{2}=0$, $pl_{\widetilde{X}}$ is equal to the possibility distribution $\textsf{GFN}(\mu,h)$. When $\sigma^{2}\to+\infty$ and $h>0$, $pl_{\widetilde{X}}(x)\to 0$ for all $x$. The next proposition gives the expressions of the belief and plausibility of any real interval.
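Equation (22) says that the expectation $pl_{\widetilde{X}}(x)=\mathbb{E}_M[\exp(-h(x-M)^2/2)]$ has a closed form. The sketch below (our check; parameter values are arbitrary) compares the closed form with a direct quadrature of that expectation:

```python
import math

def contour(x, mu, sigma2, h):
    """Closed-form contour function of a GRFN, Eq. (22)."""
    return math.exp(-h * (x - mu) ** 2 / (2.0 * (1.0 + h * sigma2))) \
        / math.sqrt(1.0 + h * sigma2)

def contour_numeric(x, mu, sigma2, h, n=20000):
    """pl(x) = E_M[GFN(M, h)(x)], M ~ N(mu, sigma2), by midpoint quadrature."""
    sigma = math.sqrt(sigma2)
    lo, width = mu - 10.0 * sigma, 20.0 * sigma
    total = 0.0
    for i in range(n):
        m = lo + (i + 0.5) / n * width
        density = math.exp(-(m - mu) ** 2 / (2.0 * sigma2)) / (sigma * math.sqrt(2.0 * math.pi))
        total += density * math.exp(-0.5 * h * (x - m) ** 2)
    return total * width / n

for x in (-1.0, 0.7):
    assert abs(contour(x, 0.0, 1.0, 2.0) - contour_numeric(x, 0.0, 1.0, 2.0)) < 1e-5
```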
**Proposition 7**
*For any real interval $[x,y]$ , the degrees of belief and plausibility of $[x,y]$ induced by the GRFN ${\widetilde{X}}\sim{\widetilde{N}}(\mu,\sigma^{2},h)$ are, respectively,
$$
Bel_{\widetilde{X}}([x,y])=\Phi\left(\frac{y-\mu}{\sigma}\right)-\Phi\left(\frac{x-\mu}{\sigma}\right)-\\
pl_{\widetilde{X}}(x)\left[\Phi\left(\frac{(x+y)/2-\mu+(y-x)h\sigma^{2}/2}{\sigma\sqrt{h\sigma^{2}+1}}\right)-\Phi\left(\frac{x-\mu}{\sigma\sqrt{h\sigma^{2}+1}}\right)\right]-\\
pl_{\widetilde{X}}(y)\left[\Phi\left(\frac{y-\mu}{\sigma\sqrt{h\sigma^{2}+1}}\right)-\Phi\left(\frac{(x+y)/2-\mu-(y-x)h\sigma^{2}/2}{\sigma\sqrt{h\sigma^{2}+1}}\right)\right], \tag{23}
$$
and
$$
Pl_{\widetilde{X}}([x,y])=\Phi\left(\frac{y-\mu}{\sigma}\right)-\Phi\left(\frac{x-\mu}{\sigma}\right)+pl_{\widetilde{X}}(x)\Phi\left(\frac{x-\mu}{\sigma\sqrt{h\sigma^{2}+1}}\right)+\\
pl_{\widetilde{X}}(y)\left[1-\Phi\left(\frac{y-\mu}{\sigma\sqrt{h\sigma^{2}+1}}\right)\right]. \tag{24}
$$*
* Proof*
See D. ∎
**Corollary 1**
*The lower and upper cdfs of the GRFN ${\widetilde{X}}\sim{\widetilde{N}}(\mu,\sigma^{2},h)$ are, respectively,
$$
Bel_{\widetilde{X}}((-\infty,y])=\Phi\left(\frac{y-\mu}{\sigma}\right)-pl_{\widetilde{X}}(y)\Phi\left(\frac{y-\mu}{\sigma\sqrt{h\sigma^{2}+1}}\right) \tag{25}
$$
and
$$
Pl_{\widetilde{X}}((-\infty,y])=\Phi\left(\frac{y-\mu}{\sigma}\right)+pl_{\widetilde{X}}(y)\left[1-\Phi\left(\frac{y-\mu}{\sigma\sqrt{h\sigma^{2}+1}}\right)\right]. \tag{26}
$$*
* Proof*
Immediate from Proposition 7 by letting $x$ tend to $-\infty$ in (23) and (24). ∎
We can easily check from (23) and (24) that $Bel_{\widetilde{X}}([x,y])$ and $Pl_{\widetilde{X}}([x,y])$ both tend to $\Phi\left(\frac{y-\mu}{\sigma}\right)-\Phi\left(\frac{x-\mu}{\sigma}\right)$ when $h\to\infty$, which is consistent with the fact that a GRFN with infinite precision is equivalent to a Gaussian random variable. Finally, the following proposition gives the expressions of the lower and upper expectations of a GRFN.
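Corollary 1 can be exercised directly. The following sketch (ours; parameter values are arbitrary) implements (25) and (26) and checks that the lower and upper cdfs bracket the Gaussian cdf and collapse onto it for large $h$:

```python
import math

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def pl_contour(x, mu, sigma2, h):
    """Contour function, Eq. (22)."""
    return math.exp(-h * (x - mu) ** 2 / (2.0 * (1.0 + h * sigma2))) \
        / math.sqrt(1.0 + h * sigma2)

def lower_cdf(y, mu, sigma2, h):
    """Bel((-inf, y]), Eq. (25)."""
    s = math.sqrt(sigma2)
    return Phi((y - mu) / s) \
        - pl_contour(y, mu, sigma2, h) * Phi((y - mu) / (s * math.sqrt(h * sigma2 + 1.0)))

def upper_cdf(y, mu, sigma2, h):
    """Pl((-inf, y]), Eq. (26)."""
    s = math.sqrt(sigma2)
    return Phi((y - mu) / s) \
        + pl_contour(y, mu, sigma2, h) * (1.0 - Phi((y - mu) / (s * math.sqrt(h * sigma2 + 1.0))))

mu, sigma2 = 0.5, 2.0
for y in (-1.0, 0.5, 2.0):
    gauss = Phi((y - mu) / math.sqrt(sigma2))
    assert lower_cdf(y, mu, sigma2, 1.0) <= gauss <= upper_cdf(y, mu, sigma2, 1.0)
    # With very high precision, the GRFN behaves like N(mu, sigma2):
    assert abs(upper_cdf(y, mu, sigma2, 1e8) - gauss) < 1e-3
```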
**Proposition 8**
*Let ${\widetilde{X}}\sim{\widetilde{N}}(\mu,\sigma^{2},h)$ be a GRFN with $h>0$ . Its lower and upper expectations are, respectively,
$$
\mathbb{E}_{*}({\widetilde{X}})=\mu-\sqrt{\frac{\pi}{2h}}\quad\text{and}\quad\mathbb{E}^{*}({\widetilde{X}})=\mu+\sqrt{\frac{\pi}{2h}}. \tag{27}
$$*
*Proof.* See Appendix E. ∎
As expected, we can see from (27) that the lower and upper expectations boil down to the usual expectation $\mu$ when $h=+\infty$.
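To illustrate, the quantities in Proposition 7 and Corollary 1 are straightforward to evaluate numerically. The following minimal Python sketch (assuming NumPy and SciPy are available; the helper names `pl_contour`, `bel_interval` and `pl_interval` are ours, introduced only for illustration) implements the contour function of a GRFN together with $Bel$ and $Pl$ of an interval from (23) and (24):

```python
import numpy as np
from scipy.stats import norm

def pl_contour(x, mu, sigma2, h):
    """Contour function of a GRFN N~(mu, sigma2, h), scalar case of Prop. 12."""
    return np.exp(-0.5 * (x - mu)**2 / (sigma2 + 1.0/h)) / np.sqrt(1 + h*sigma2)

def bel_interval(x, y, mu, sigma2, h):
    """Degree of belief of the interval [x, y], Eq. (23)."""
    s, t = np.sqrt(sigma2), np.sqrt(h*sigma2 + 1)
    plx, ply = pl_contour(x, mu, sigma2, h), pl_contour(y, mu, sigma2, h)
    return (norm.cdf((y-mu)/s) - norm.cdf((x-mu)/s)
            - plx*(norm.cdf(((x+y)/2 - mu + (y-x)*h*sigma2/2)/(s*t))
                   - norm.cdf((x-mu)/(s*t)))
            - ply*(norm.cdf((y-mu)/(s*t))
                   - norm.cdf(((x+y)/2 - mu - (y-x)*h*sigma2/2)/(s*t))))

def pl_interval(x, y, mu, sigma2, h):
    """Degree of plausibility of the interval [x, y], Eq. (24)."""
    s, t = np.sqrt(sigma2), np.sqrt(h*sigma2 + 1)
    plx, ply = pl_contour(x, mu, sigma2, h), pl_contour(y, mu, sigma2, h)
    return (norm.cdf((y-mu)/s) - norm.cdf((x-mu)/s)
            + plx*norm.cdf((x-mu)/(s*t)) + ply*(1 - norm.cdf((y-mu)/(s*t))))
```

For moderate $h$ the interval $[Bel,Pl]$ is wide; as $h$ grows, both degrees should collapse to the Gaussian probability $\Phi\left(\frac{y-\mu}{\sigma}\right)-\Phi\left(\frac{x-\mu}{\sigma}\right)$, consistent with the limit discussed above.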
4.2 Orthogonal sum of Gaussian random fuzzy numbers
In this section, we derive the expression of the orthogonal sum ${\widetilde{X}}_{1}\oplus{\widetilde{X}}_{2}$ of two GRFNs ${\widetilde{X}}_{1}$ and ${\widetilde{X}}_{2}$. We start with the following lemma.
**Lemma 1**
*Let $M_{1}\sim N(\mu_{1},\sigma_{1}^{2})$ and $M_{2}\sim N(\mu_{2},\sigma_{2}^{2})$ be two independent Gaussian random variables, and let ${\widetilde{F}}$ be the fuzzy subset of $\mathbb{R}^{2}$ with membership function
$$
{\widetilde{F}}(m_{1},m_{2})=\text{hgt}\left(\textsf{GFN}(m_{1},h_{1})\cdot\textsf{GFN}(m_{2},h_{2})\right)=\exp\left(-\frac{h_{1}h_{2}(m_{1}-m_{2})^{2}}{2(h_{1}+h_{2})}\right).
$$
The conditional probability distribution of $(M_{1},M_{2})$ given ${\widetilde{F}}$ is two-dimensional Gaussian with mean vector ${\widetilde{\boldsymbol{\mu}}}=({\widetilde{\mu}}_{1},{\widetilde{\mu}}_{2})^{T}$ and covariance matrix
$$
{\widetilde{\boldsymbol{\Sigma}}}=\begin{pmatrix}{\widetilde{\sigma}}_{1}^{2}&\rho{\widetilde{\sigma}}_{1}{\widetilde{\sigma}}_{2}\\ \rho{\widetilde{\sigma}}_{1}{\widetilde{\sigma}}_{2}&{\widetilde{\sigma}}_{2}^{2}\end{pmatrix},
$$
with
$$
\begin{aligned}
{\widetilde{\mu}}_{1}&=\frac{\mu_{1}(1+{\overline{h}}\sigma_{2}^{2})+\mu_{2}{\overline{h}}\sigma_{1}^{2}}{1+{\overline{h}}(\sigma_{1}^{2}+\sigma_{2}^{2})}, &
{\widetilde{\mu}}_{2}&=\frac{\mu_{2}(1+{\overline{h}}\sigma_{1}^{2})+\mu_{1}{\overline{h}}\sigma_{2}^{2}}{1+{\overline{h}}(\sigma_{1}^{2}+\sigma_{2}^{2})}, \\
{\widetilde{\sigma}}_{1}^{2}&=\frac{\sigma_{1}^{2}(1+{\overline{h}}\sigma_{2}^{2})}{1+{\overline{h}}(\sigma_{1}^{2}+\sigma_{2}^{2})}, &
{\widetilde{\sigma}}_{2}^{2}&=\frac{\sigma_{2}^{2}(1+{\overline{h}}\sigma_{1}^{2})}{1+{\overline{h}}(\sigma_{1}^{2}+\sigma_{2}^{2})}, \\
\rho&=\frac{{\overline{h}}\sigma_{1}\sigma_{2}}{\sqrt{(1+{\overline{h}}\sigma_{1}^{2})(1+{\overline{h}}\sigma_{2}^{2})}}, &
{\overline{h}}&=\frac{h_{1}h_{2}}{h_{1}+h_{2}}.
\end{aligned} \tag{28}
$$
Furthermore, the degree of conflict between two independent GRFNs ${\widetilde{X}}_{1}\sim{\widetilde{N}}(\mu_{1},\sigma_{1}^{2},h_{1})$ and ${\widetilde{X}}_{2}\sim{\widetilde{N}}(\mu_{2},\sigma_{2}^{2},h_{2})$ is
$$
\kappa=1-\iint f(m_{1},m_{2}){\widetilde{F}}(m_{1},m_{2})\,dm_{1}\,dm_{2}=\\
1-\frac{{\widetilde{\sigma}}_{1}{\widetilde{\sigma}}_{2}}{\sigma_{1}\sigma_{2}}\sqrt{1-\rho^{2}}\exp\left\{-\frac{1}{2}\left[\frac{\mu_{1}^{2}}{\sigma_{1}^{2}}+\frac{\mu_{2}^{2}}{\sigma_{2}^{2}}\right]+\frac{1}{2(1-\rho^{2})}\left[\frac{{\widetilde{\mu}}_{1}^{2}}{{\widetilde{\sigma}}_{1}^{2}}+\frac{{\widetilde{\mu}}_{2}^{2}}{{\widetilde{\sigma}}_{2}^{2}}-2\rho\frac{{\widetilde{\mu}}_{1}{\widetilde{\mu}}_{2}}{{\widetilde{\sigma}}_{1}{\widetilde{\sigma}}_{2}}\right]\right\},
$$
where $f(m_{1},m_{2})$ is the pdf of random vector $(M_{1},M_{2})$ .*
*Proof.* See Appendix F. ∎
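Because ${\widetilde{F}}$ depends on $(m_{1},m_{2})$ only through $m_{1}-m_{2}$, and $M_{1}-M_{2}\sim N(\mu_{1}-\mu_{2},\sigma_{1}^{2}+\sigma_{2}^{2})$, the integral defining $\kappa$ reduces to a one-dimensional Gaussian integral. The following sketch (our own helper, not part of the paper) evaluates $\kappa$ in this equivalent form and checks it by Monte Carlo simulation:

```python
import numpy as np

def conflict_grfn(mu1, v1, h1, mu2, v2, h2):
    """Degree of conflict between independent GRFNs N~(mu1, v1, h1) and N~(mu2, v2, h2).
    v1, v2 are variances. Uses kappa = 1 - E[exp(-hbar*(M1-M2)^2/2)], with
    M1 - M2 ~ N(mu1 - mu2, v1 + v2) and hbar = h1*h2/(h1+h2)."""
    hbar = h1 * h2 / (h1 + h2)
    s = v1 + v2  # variance of M1 - M2
    return 1.0 - np.exp(-hbar * (mu1 - mu2)**2 / (2 * (1 + hbar * s))) / np.sqrt(1 + hbar * s)

# Monte Carlo check of the closed form (hbar = 1*1/(1+1) = 0.5 here)
rng = np.random.default_rng(0)
d = rng.normal(0.0, 1.0, 200_000) - rng.normal(2.0, 1.0, 200_000)  # samples of M1 - M2
kappa_mc = 1.0 - np.mean(np.exp(-0.5 * d**2 / 2))
```

As expected, $\kappa$ grows with the distance between the two modes and vanishes only in the limit of fully overlapping, vacuous-precision evidence.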
**Proposition 9**
*Let ${\widetilde{X}}_{1}\sim{\widetilde{N}}(\mu_{1},\sigma_{1}^{2},h_{1})$ and ${\widetilde{X}}_{2}\sim{\widetilde{N}}(\mu_{2},\sigma_{2}^{2},h_{2})$ be two independent GRFNs, and assume that $h_{1}>0$ or $h_{2}>0$. We have
$$
{\widetilde{X}}_{1}\oplus{\widetilde{X}}_{2}\sim{\widetilde{N}}({\widetilde{\mu}}_{12},{\widetilde{\sigma}}_{12}^{2},h_{12}),
$$
with
$$
h_{12}=h_{1}+h_{2}, \tag{29}
$$
$$
{\widetilde{\mu}}_{12}=\frac{h_{1}{\widetilde{\mu}}_{1}+h_{2}{\widetilde{\mu}}_{2}}{h_{1}+h_{2}}, \tag{30}
$$
and
$$
{\widetilde{\sigma}}_{12}^{2}=\frac{h^{2}_{1}{\widetilde{\sigma}}^{2}_{1}+h^{2}_{2}{\widetilde{\sigma}}^{2}_{2}+2\rho h_{1}h_{2}{\widetilde{\sigma}}_{1}{\widetilde{\sigma}}_{2}}{(h_{1}+h_{2})^{2}}, \tag{31}
$$
where ${\widetilde{\mu}}_{1}$ , ${\widetilde{\mu}}_{2}$ , ${\widetilde{\sigma}}_{1}$ , ${\widetilde{\sigma}}_{2}$ and $\rho$ are given by (28) in Lemma 1.*
*Proof.* Let $M_{1}$ and $M_{2}$ be the Gaussian random variables from $(\Omega_{1},\sigma_{1},P_{1})$ and $(\Omega_{2},\sigma_{2},P_{2})$ to $(\mathbb{R},\beta_{\mathbb{R}})$ corresponding, respectively, to GRFNs ${\widetilde{X}}_{1}\sim{\widetilde{N}}(\mu_{1},\sigma_{1}^{2},h_{1})$ and ${\widetilde{X}}_{2}\sim{\widetilde{N}}(\mu_{2},\sigma_{2}^{2},h_{2})$. The orthogonal sum of ${\widetilde{X}}_{1}$ and ${\widetilde{X}}_{2}$ is the random fuzzy set $(\Omega_{1}\times\Omega_{2},\sigma_{1}\otimes\sigma_{2},{\widetilde{P}}_{12},\mathbb{R},\beta_{\mathbb{R}},{\widetilde{X}}_{\varodot})$, where ${\widetilde{X}}_{\varodot}$ is the mapping
$$
{\widetilde{X}}_{\varodot}:(\omega_{1},\omega_{2})\rightarrow\textsf{GFN}(M_{12}(\omega_{1},\omega_{2}),h_{1}+h_{2}),
$$
with
$$
M_{12}(\omega_{1},\omega_{2})=\frac{h_{1}M_{1}(\omega_{1})+h_{2}M_{2}(\omega_{2})}{h_{1}+h_{2}},
$$
and ${\widetilde{P}}_{12}$ is the probability measure on $\Omega_{1}\times\Omega_{2}$ obtained by conditioning $P_{1}\times P_{2}$ on the fuzzy set ${\widetilde{\Theta}}^{*}(\omega_{1},\omega_{2})=\text{hgt}\left(\textsf{GFN}(M_{1}(\omega_{1}),h_{1}),\textsf{GFN}(M_{2}(\omega_{2}),h_{2})\right)$. From Lemma 1, the pushforward measure of ${\widetilde{P}}_{12}$ by the random vector $(M_{1},M_{2})$ is the two-dimensional Gaussian distribution with parameters $({\widetilde{\mu}}_{1},{\widetilde{\mu}}_{2},{\widetilde{\sigma}}_{1},{\widetilde{\sigma}}_{2},\rho)$. Consequently, $M_{12}$ is a Gaussian random variable with mean
$$
\mathbb{E}(M_{12})=\frac{h_{1}\mathbb{E}(M_{1})+h_{2}\mathbb{E}(M_{2})}{h_{1}+h_{2}}=\frac{h_{1}{\widetilde{\mu}}_{1}+h_{2}{\widetilde{\mu}}_{2}}{h_{1}+h_{2}},
$$
and variance
$$
\text{Var}(M_{12})=\frac{h_{1}^{2}\text{Var}(M_{1})+h_{2}^{2}\text{Var}(M_{2})+2h_{1}h_{2}\text{Cov}(M_{1},M_{2})}{(h_{1}+h_{2})^{2}}=\frac{h_{1}^{2}{\widetilde{\sigma}}_{1}^{2}+h_{2}^{2}{\widetilde{\sigma}}_{2}^{2}+2\rho h_{1}h_{2}{\widetilde{\sigma}}_{1}{\widetilde{\sigma}}_{2}}{(h_{1}+h_{2})^{2}},
$$
since, under ${\widetilde{P}}_{12}$, $\text{Var}(M_{1})={\widetilde{\sigma}}_{1}^{2}$, $\text{Var}(M_{2})={\widetilde{\sigma}}_{2}^{2}$ and $\text{Cov}(M_{1},M_{2})=\rho{\widetilde{\sigma}}_{1}{\widetilde{\sigma}}_{2}$. ∎
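The combination (28)-(31) can be sketched in a few lines of Python (the helper name `combine_grfn` is ours; variances rather than standard deviations are passed):

```python
import numpy as np

def combine_grfn(mu1, v1, h1, mu2, v2, h2):
    """Orthogonal sum of two independent GRFNs N~(mu1, v1, h1) and N~(mu2, v2, h2),
    following Lemma 1 (Eq. 28) and Proposition 9 (Eqs. 29-31).
    v1, v2 are the variances sigma_1^2, sigma_2^2; returns (mu12, var12, h12)."""
    hbar = h1 * h2 / (h1 + h2)
    d = 1 + hbar * (v1 + v2)
    mu1t = (mu1 * (1 + hbar * v2) + mu2 * hbar * v1) / d   # tilde mu_1
    mu2t = (mu2 * (1 + hbar * v1) + mu1 * hbar * v2) / d   # tilde mu_2
    v1t = v1 * (1 + hbar * v2) / d                          # tilde sigma_1^2
    v2t = v2 * (1 + hbar * v1) / d                          # tilde sigma_2^2
    rho = hbar * np.sqrt(v1 * v2) / np.sqrt((1 + hbar * v1) * (1 + hbar * v2))
    h12 = h1 + h2                                           # Eq. (29)
    mu12 = (h1 * mu1t + h2 * mu2t) / h12                    # Eq. (30)
    v12 = (h1**2 * v1t + h2**2 * v2t
           + 2 * rho * h1 * h2 * np.sqrt(v1t * v2t)) / h12**2  # Eq. (31)
    return mu12, v12, h12
```

Combining two pieces of evidence with a common mode leaves the mode unchanged while the precision $h_{12}=h_{1}+h_{2}$ increases, which matches the intuition that agreeing evidence sharpens the fuzzy focal sets.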
Let us now consider some special cases in which one of the two GRFNs is a Gaussian random variable. The next proposition states that the orthogonal sum of a Gaussian random variable and an arbitrary GRFN with finite precision is a Gaussian random variable.
**Proposition 10**
*Let $X_{1}\sim N(\mu_{1},\sigma_{1}^{2})$ be a Gaussian random variable and ${\widetilde{X}}_{2}\sim{\widetilde{N}}(\mu_{2},\sigma_{2}^{2},h_{2})$ a GRFN with finite precision $h_{2}<+\infty$. Their orthogonal sum is a Gaussian random variable $X_{1}\oplus{\widetilde{X}}_{2}\sim N({\widetilde{\mu}}_{12},{\widetilde{\sigma}}_{12}^{2})$ with
$$
{\widetilde{\mu}}_{12}=\frac{\mu_{1}(1+h_{2}\sigma_{2}^{2})+\mu_{2}h_{2}\sigma_{1}^{2}}{1+h_{2}(\sigma_{1}^{2}+\sigma_{2}^{2})}, \tag{32}
$$
$$
{\widetilde{\sigma}}_{12}^{2}=\frac{\sigma_{1}^{2}(1+h_{2}\sigma_{2}^{2})}{1+h_{2}(\sigma_{1}^{2}+\sigma_{2}^{2})}, \tag{33}
$$
and the probability density of $X_{1}\oplus{\widetilde{X}}_{2}$ is proportional to the product of the pdf of $X_{1}$ and the contour function of ${\widetilde{X}}_{2}$ .*
*Proof.* See Appendix G. ∎
The following corollary addresses the special case where ${\widetilde{X}}_{2}$ is a possibilistic GRFN.
**Corollary 2**
*Let $X_{1}\sim N(\mu_{1},\sigma_{1}^{2})$ be a Gaussian random variable and ${\widetilde{X}}_{2}\sim{\widetilde{N}}(\mu_{2},0,h_{2})$ a possibilistic GRFN. Their orthogonal sum $X_{1}\oplus{\widetilde{X}}_{2}$ is a Gaussian random variable and its distribution is the conditional distribution of $X_{1}$ given the fuzzy event $\textsf{GFN}(\mu_{2},h_{2})$ .*
*Proof.* From Proposition 10, $X_{1}\oplus{\widetilde{X}}_{2}\sim N({\widetilde{\mu}}_{12},{\widetilde{\sigma}}_{12}^{2})$ with
$$
{\widetilde{\mu}}_{12}=\frac{\mu_{1}+\mu_{2}h_{2}\sigma_{1}^{2}}{1+h_{2}\sigma_{1}^{2}}\quad\text{and}\quad{\widetilde{\sigma}}_{12}^{2}=\frac{\sigma_{1}^{2}}{1+h_{2}\sigma_{1}^{2}}.
$$
Now, we know from Proposition 10 that the density of $X_{1}\oplus{\widetilde{X}}_{2}$ is proportional to the product of the density of $X_{1}$ and the contour function of ${\widetilde{X}}_{2}$ , which is $\varphi(x;\mu_{2},h_{2})$ . Consequently, we have
$$
f_{X_{1}\oplus{\widetilde{X}}_{2}}(x)=\frac{\frac{1}{\sigma_{1}}\exp\left(-\frac{(x-\mu_{1})^{2}}{2\sigma_{1}^{2}}\right)\exp\left(-\frac{h_{2}(x-\mu_{2})^{2}}{2}\right)}{\int\frac{1}{\sigma_{1}}\exp\left(-\frac{(x-\mu_{1})^{2}}{2\sigma_{1}^{2}}\right)\exp\left(-\frac{h_{2}(x-\mu_{2})^{2}}{2}\right)dx},
$$
which is the conditional density $f_{X_{1}}(x|\textsf{GFN}(\mu_{2},h_{2}))$. ∎
Finally, another special case of interest is when both GRFNs are Gaussian random variables. This case is addressed by the following corollary.
**Corollary 3**
*Let $X_{1}\sim N(\mu_{1},\sigma_{1}^{2})$ and $X_{2}\sim N(\mu_{2},\sigma_{2}^{2})$ be two Gaussian random variables. We have $X_{1}\oplus X_{2}\sim N({\widetilde{\mu}}_{12},{\widetilde{\sigma}}_{12}^{2})$ with
$$
{\widetilde{\mu}}_{12}=\frac{\mu_{1}\sigma_{2}^{2}+\mu_{2}\sigma_{1}^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}}\quad\text{and}\quad{\widetilde{\sigma}}_{12}^{2}=\frac{\sigma_{1}^{2}\sigma_{2}^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}}.
$$*
*Proof.* Immediate from Proposition 10 by letting $h_{2}$ tend to $+\infty$ in (32) and (33). ∎
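Equations (32) and (33) are easy to check numerically: letting $h_{2}$ grow large should recover the classical precision-weighted Gaussian fusion of Corollary 3. A small sketch (the helper name `fuse_gauss_grfn` is ours):

```python
def fuse_gauss_grfn(mu1, v1, mu2, v2, h2):
    """Orthogonal sum of X1 ~ N(mu1, v1) with the GRFN N~(mu2, v2, h2),
    per Eqs. (32)-(33); returns the mean and variance of the Gaussian result.
    v1, v2 are variances sigma_1^2, sigma_2^2."""
    d = 1 + h2 * (v1 + v2)
    mu12 = (mu1 * (1 + h2 * v2) + mu2 * h2 * v1) / d   # Eq. (32)
    v12 = v1 * (1 + h2 * v2) / d                        # Eq. (33)
    return mu12, v12
```

For very large $h_{2}$ the result approaches $\left(\frac{\mu_{1}\sigma_{2}^{2}+\mu_{2}\sigma_{1}^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}},\ \frac{\sigma_{1}^{2}\sigma_{2}^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}}\right)$, i.e., the Bayesian combination of two Gaussian densities.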
4.3 Arithmetic operations on GRFNs
Arithmetic operations can be extended to fuzzy numbers using Zadeh's extension principle [14, 12]. More precisely, let ${\widetilde{A}}$ and ${\widetilde{B}}$ be two fuzzy numbers, and let $*$ be a binary operation on reals. Then the fuzzy number ${\widetilde{C}}={\widetilde{A}}*{\widetilde{B}}$ is defined as
$$
{\widetilde{C}}(c)=\sup_{c=a*b}\min({\widetilde{A}}(a),{\widetilde{B}}(b)).
$$
The membership function ${\widetilde{C}}$ is equal to the possibility distribution of $c=a*b$ when $a$ and $b$ are constrained, respectively, by possibility distributions ${\widetilde{A}}$ and ${\widetilde{B}}$. Unary or $n$-ary operations can be extended from real to fuzzy numbers in the same way. For a certain class of fuzzy numbers called LR-fuzzy numbers [14, page 54], closed-form expressions exist for the addition, subtraction and scalar multiplication of fuzzy numbers. In particular, Gaussian fuzzy numbers with positive precision are LR fuzzy numbers and they verify the following equalities [25]:
$$
\textsf{GFN}(m_{1},h_{1})+\textsf{GFN}(m_{2},h_{2})=\textsf{GFN}\left(m_{1}+m_{2},\left(h_{1}^{-1/2}+h_{2}^{-1/2}\right)^{-2}\right)
$$
and, for $\lambda\neq 0$,
$$
\lambda\cdot\textsf{GFN}(m,h)=\textsf{GFN}\left(\lambda m,\lambda^{-2}h\right).
$$
As addition of fuzzy numbers is associative, we can express the linear combination of $n$ GFNs as
$$
\sum_{i=1}^{n}\lambda_{i}\cdot\textsf{GFN}(m_{i},h_{i})=\textsf{GFN}\left(\sum_{i=1}^{n}\lambda_{i}m_{i},\left(\sum_{i=1}^{n}|\lambda_{i}|h_{i}^{-1/2}\right)^{-2}\right). \tag{34}
$$
Now, let us consider $n$ independent GRFNs ${\widetilde{X}}_{i}$ from probability spaces $(\Omega_{i},\sigma_{i},P_{i})$ to $[0,1]^{\mathbb{R}}$ defined by
$$
{\widetilde{X}}_{i}(\omega)=\textsf{GFN}(M_{i}(\omega),h_{i})
$$
for all $\omega\in\Omega_{i}$, where $M_{i}$ is a Gaussian random variable with mean $\mu_{i}$ and standard deviation $\sigma_{i}$, and $h_{i}>0$. Let
$$
{\widetilde{X}}=\sum_{i=1}^{n}\lambda_{i}{\widetilde{X}}_{i}
$$
be the random fuzzy set from $(\Omega_{1}\times\cdots\times\Omega_{n},\sigma_{1}\otimes\cdots\otimes\sigma_{n},P_{1}\times\cdots\times P_{n})$ to $[0,1]^{\mathbb{R}}$ defined by
$$
{\widetilde{X}}(\omega_{1},\ldots,\omega_{n})=\sum_{i=1}^{n}\lambda_{i}\cdot\textsf{GFN}(M_{i}(\omega_{i}),h_{i}).
$$
If each GRFN ${\widetilde{X}}_{i}$ represents our knowledge about the value of some quantity $X_{i}$, then ${\widetilde{X}}$ represents our knowledge about $X=\sum_{i=1}^{n}\lambda_{i}X_{i}$. From (34), ${\widetilde{X}}\sim{\widetilde{N}}(\mu,\sigma^{2},h)$ with
$$
\mu=\sum_{i=1}^{n}\lambda_{i}\mu_{i},\quad\sigma^{2}=\sum_{i=1}^{n}\lambda_{i}^{2}\sigma_{i}^{2},\quad\text{and}\quad h=\left(\sum_{i=1}^{n}|\lambda_{i}|h_{i}^{-1/2}\right)^{-2}.
$$
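These formulas translate directly into code. The sketch below (the helper name `linear_combination` is ours) computes the parameters of the GRFN representing knowledge about $\sum_{i}\lambda_{i}X_{i}$:

```python
import numpy as np

def linear_combination(lams, mus, variances, hs):
    """Parameters (mu, sigma^2, h) of the GRFN sum_i lam_i * X_i for
    independent GRFNs X_i ~ N~(mu_i, var_i, h_i), following Eq. (34)."""
    lams, mus, variances, hs = map(np.asarray, (lams, mus, variances, hs))
    mu = np.sum(lams * mus)
    var = np.sum(lams**2 * variances)
    h = np.sum(np.abs(lams) / np.sqrt(hs))**(-2)
    return mu, var, h
```

Note that precision decreases rapidly with the number of terms: the precisions combine through their inverse square roots, so the sum of many imprecise GRFNs is much less precise than each summand.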
5 Gaussian random fuzzy vectors
In this section, we introduce Gaussian random fuzzy vectors (GRFVs), an extension of the model presented in Section 4 allowing us to describe knowledge about multidimensional quantities. The main definitions and properties are first introduced in Section 5.1. The expression of the orthogonal sum of two GRFVs is then given in Section 5.2, after which the marginalization and vacuous extension of GRFVs are described in Section 5.3. Finally, our model is compared to Dempster's normal belief function model in Section 5.4.
5.1 Definition and main properties
We consider a $p$ -dimensional variable $\boldsymbol{\theta}$ taking values in $\mathbb{R}^{p}$ . Knowledge about $\boldsymbol{\theta}$ may be encoded as a $p$ -dimensional Gaussian fuzzy vector, defined as follows.
**Definition 2**
*We define the $p$-dimensional Gaussian fuzzy vector (GFV) with center ${\boldsymbol{m}}\in\mathbb{R}^{p}$ and $p\times p$ symmetric and positive semidefinite precision matrix ${\boldsymbol{H}}$ as the normalized fuzzy subset of $\mathbb{R}^{p}$ with membership function
$$
\varphi({\boldsymbol{x}};{\boldsymbol{m}},{\boldsymbol{H}})=\exp\left(-\frac{1}{2}({\boldsymbol{x}}-{\boldsymbol{m}})^{T}{\boldsymbol{H}}({\boldsymbol{x}}-{\boldsymbol{m}})\right),
$$
denoted as $\textsf{GFV}({\boldsymbol{m}},{\boldsymbol{H}})$ .*
As shown in [27], the normalized product of two GFVs is still a GFV. The following proposition generalizes Proposition 3.
**Proposition 11**
*Let $\textsf{GFV}({\boldsymbol{m}}_{1},{\boldsymbol{H}}_{1})$ and $\textsf{GFV}({\boldsymbol{m}}_{2},{\boldsymbol{H}}_{2})$ be two $p$-dimensional GFVs with positive definite precision matrices ${\boldsymbol{H}}_{1}$ and ${\boldsymbol{H}}_{2}$. We have
$$
\varphi({\boldsymbol{x}};{\boldsymbol{m}}_{1},{\boldsymbol{H}}_{1})\cdot\varphi({\boldsymbol{x}};{\boldsymbol{m}}_{2},{\boldsymbol{H}}_{2})=\varphi({\boldsymbol{x}};{\boldsymbol{m}}_{12},{\boldsymbol{H}}_{12})\times\\
\exp\left(-\frac{1}{2}({\boldsymbol{m}}_{1}-{\boldsymbol{m}}_{2})^{T}({\boldsymbol{H}}_{1}^{-1}+{\boldsymbol{H}}_{2}^{-1})^{-1}({\boldsymbol{m}}_{1}-{\boldsymbol{m}}_{2})\right),
$$
with
$$
{\boldsymbol{m}}_{12}=({\boldsymbol{H}}_{1}+{\boldsymbol{H}}_{2})^{-1}({\boldsymbol{H}}_{1}{\boldsymbol{m}}_{1}+{\boldsymbol{H}}_{2}{\boldsymbol{m}}_{2})\quad\text{and}\quad{\boldsymbol{H}}_{12}={\boldsymbol{H}}_{1}+{\boldsymbol{H}}_{2}.
$$
Consequently, the following equation holds:
$$
\textsf{GFV}({\boldsymbol{m}}_{1},{\boldsymbol{H}}_{1})\varodot\textsf{GFV}({\boldsymbol{m}}_{2},{\boldsymbol{H}}_{2})=\textsf{GFV}({\boldsymbol{m}}_{12},{\boldsymbol{H}}_{12}),
$$
and the height of the product intersection between $\textsf{GFV}({\boldsymbol{m}}_{1},{\boldsymbol{H}}_{1})$ and $\textsf{GFV}({\boldsymbol{m}}_{2},{\boldsymbol{H}}_{2})$ is
$$
\begin{aligned}
\text{hgt}\left(\textsf{GFV}({\boldsymbol{m}}_{1},{\boldsymbol{H}}_{1}),\textsf{GFV}({\boldsymbol{m}}_{2},{\boldsymbol{H}}_{2})\right)&=\max_{\boldsymbol{x}}\varphi({\boldsymbol{x}};{\boldsymbol{m}}_{1},{\boldsymbol{H}}_{1})\varphi({\boldsymbol{x}};{\boldsymbol{m}}_{2},{\boldsymbol{H}}_{2})\\
&=\exp\left(-\frac{1}{2}({\boldsymbol{m}}_{1}-{\boldsymbol{m}}_{2})^{T}({\boldsymbol{H}}_{1}^{-1}+{\boldsymbol{H}}_{2}^{-1})^{-1}({\boldsymbol{m}}_{1}-{\boldsymbol{m}}_{2})\right).
\end{aligned}
$$*
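Proposition 11 translates directly into a few lines of linear algebra. The sketch below (our own helper `gfv_product`, assuming NumPy) returns the parameters of the normalized product and the height of the product intersection:

```python
import numpy as np

def gfv_product(m1, H1, m2, H2):
    """Normalized product of two Gaussian fuzzy vectors (Prop. 11).
    Returns the center m12 and precision H12 of the product GFV,
    together with the height of the product intersection."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    H12 = H1 + H2
    m12 = np.linalg.solve(H12, H1 @ m1 + H2 @ m2)
    d = m1 - m2
    # hgt = exp(-1/2 d^T (H1^-1 + H2^-1)^-1 d)
    hgt = np.exp(-0.5 * d @ np.linalg.solve(np.linalg.inv(H1) + np.linalg.inv(H2), d))
    return m12, H12, hgt
```

When the two centers coincide the height is 1 (no conflict between the fuzzy focal sets), and it decays with the Mahalanobis-type distance between the centers.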
Equipped with the notion of GFV, we can now introduce a model of random fuzzy set that can be seen as a GFV whose mode is a multidimensional Gaussian random variable. This model is defined formally as follows.
**Definition 3**
*Let $(\Omega,\sigma_{\Omega},P)$ be a probability space, ${\boldsymbol{M}}:\Omega\to\mathbb{R}^{p}$ a $p$-dimensional Gaussian random vector with mean $\boldsymbol{\mu}$ and variance matrix $\boldsymbol{\Sigma}$, and ${\boldsymbol{H}}$ a $p\times p$ symmetric and positive semidefinite real matrix. The random fuzzy set ${\widetilde{X}}:\Omega\to[0,1]^{\mathbb{R}^{p}}$ defined as
$$
{\widetilde{X}}(\omega)=\textsf{GFV}({\boldsymbol{M}}(\omega),{\boldsymbol{H}})
$$
is called a Gaussian random fuzzy vector (GRFV), which we denote as ${\widetilde{X}}\sim{\widetilde{N}}(\boldsymbol{\mu},\boldsymbol{\Sigma},{\boldsymbol{H}})$.*
The following proposition generalizes Proposition 6.
**Proposition 12**
*The contour function of GRFV ${\widetilde{X}}\sim{\widetilde{N}}(\boldsymbol{\mu},\boldsymbol{\Sigma},{\boldsymbol{H}})$ with positive definite precision matrix ${\boldsymbol{H}}$ is
$$
pl_{\widetilde{X}}({\boldsymbol{x}})=\frac{1}{|{\boldsymbol{I}}_{p}+\boldsymbol{\Sigma}{\boldsymbol{H}}|^{1/2}}\exp\left(-\frac{1}{2}({\boldsymbol{x}}-\boldsymbol{\mu})^{T}({\boldsymbol{H}}^{-1}+\boldsymbol{\Sigma})^{-1}({\boldsymbol{x}}-\boldsymbol{\mu})\right),
$$
where ${\boldsymbol{I}}_{p}$ is the $p$ -dimensional identity matrix.*
*Proof.* See Appendix H. ∎
5.2 Orthogonal sum of Gaussian random fuzzy vectors
The practical interest of GRFVs arises from the fact that they can be easily combined by the generalized product-intersection rule. The following lemma and proposition, which generalize, respectively, Lemma 1 and Proposition 9, give us the expression of the orthogonal sum of two GRFVs.
**Lemma 2**
*Let ${\boldsymbol{M}}_{1}\sim{\cal N}(\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{1})$ and ${\boldsymbol{M}}_{2}\sim{\cal N}(\boldsymbol{\mu}_{2},\boldsymbol{\Sigma}_{2})$ be two independent Gaussian $p$-dimensional random vectors and let ${\boldsymbol{H}}_{1}$ and ${\boldsymbol{H}}_{2}$ be two symmetric and positive definite $p\times p$ matrices. Let ${\widetilde{F}}$ be the fuzzy subset of $\mathbb{R}^{2p}$ with membership function
$$
{\widetilde{F}}({\boldsymbol{m}}_{1},{\boldsymbol{m}}_{2})=\text{hgt}\left(\textsf{GFV}({\boldsymbol{m}}_{1},{\boldsymbol{H}}_{1})\cdot\textsf{GFV}({\boldsymbol{m}}_{2},{\boldsymbol{H}}_{2})\right),
$$
and let ${\boldsymbol{M}}$ be the $2p$ -dimensional vector $({\boldsymbol{M}}_{1},{\boldsymbol{M}}_{2})$ . The conditional probability distribution of ${\boldsymbol{M}}$ given ${\widetilde{F}}$ is $2p$ -dimensional Gaussian with mean vector ${\widetilde{\boldsymbol{\mu}}}$ and covariance matrix ${\widetilde{\boldsymbol{\Sigma}}}$ defined as follows:
$$
{\widetilde{\boldsymbol{\Sigma}}}=\begin{pmatrix}\boldsymbol{\Sigma}_{1}^{-1}+{\overline{{\boldsymbol{H}}}}&-{\overline{{\boldsymbol{H}}}}\\ -{\overline{{\boldsymbol{H}}}}&\boldsymbol{\Sigma}_{2}^{-1}+{\overline{{\boldsymbol{H}}}}\end{pmatrix}^{-1},
$$
$$
{\widetilde{\boldsymbol{\mu}}}=\begin{pmatrix}{\overline{{\boldsymbol{H}}}}^{-1}\boldsymbol{\Sigma}_{1}^{-1}+{\boldsymbol{I}}_{p}&-{\boldsymbol{I}}_{p}\\ -{\boldsymbol{I}}_{p}&{\overline{{\boldsymbol{H}}}}^{-1}\boldsymbol{\Sigma}_{2}^{-1}+{\boldsymbol{I}}_{p}\end{pmatrix}^{-1}\begin{pmatrix}{\overline{{\boldsymbol{H}}}}^{-1}\boldsymbol{\Sigma}_{1}^{-1}&\boldsymbol{0}\\ \boldsymbol{0}&{\overline{{\boldsymbol{H}}}}^{-1}\boldsymbol{\Sigma}_{2}^{-1}\end{pmatrix}\begin{pmatrix}\boldsymbol{\mu}_{1}\\ \boldsymbol{\mu}_{2}\end{pmatrix}, \tag{36}
$$
with
$$
{\overline{{\boldsymbol{H}}}}=({\boldsymbol{H}}_{1}^{-1}+{\boldsymbol{H}}_{2}^{-1})^{-1}.
$$
Furthermore, the degree of conflict between two GRFVs ${\widetilde{X}}_{1}\sim{\widetilde{N}}(\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{1},{\boldsymbol{H}}_{1})$ and ${\widetilde{X}}_{2}\sim{\widetilde{N}}(\boldsymbol{\mu}_{2},\boldsymbol{\Sigma}_{2},{\boldsymbol{H}}_{2})$ is
$$
\kappa=1-\int_{\mathbb{R}^{2p}}f({\boldsymbol{m}}_{1},{\boldsymbol{m}}_{2}){\widetilde{F}}({\boldsymbol{m}}_{1},{\boldsymbol{m}}_{2})\,d{\boldsymbol{m}}_{1}\,d{\boldsymbol{m}}_{2}=\\
1-\sqrt{\frac{|{\widetilde{\boldsymbol{\Sigma}}}|}{|\boldsymbol{\Sigma}_{1}||\boldsymbol{\Sigma}_{2}|}}\exp\left\{-\frac{1}{2}\left[\boldsymbol{\mu}_{1}^{T}\boldsymbol{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}+\boldsymbol{\mu}_{2}^{T}\boldsymbol{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}-{\widetilde{\boldsymbol{\mu}}}^{T}{\widetilde{\boldsymbol{\Sigma}}}^{-1}{\widetilde{\boldsymbol{\mu}}}\right]\right\}.
$$*
*Proof.* See Appendix I. ∎
**Proposition 13**
*Let ${\widetilde{X}}_{1}\sim{\widetilde{N}}(\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{1},{\boldsymbol{H}}_{1})$ and ${\widetilde{X}}_{2}\sim{\widetilde{N}}(\boldsymbol{\mu}_{2},\boldsymbol{\Sigma}_{2},{\boldsymbol{H}}_{2})$ be two independent GRFVs. We have
$$
{\widetilde{X}}_{1}\oplus{\widetilde{X}}_{2}\sim{\widetilde{N}}({\widetilde{\boldsymbol{\mu}}}_{12},{\widetilde{\boldsymbol{\Sigma}}}_{12},{\boldsymbol{H}}_{12})
$$
with
$$
{\boldsymbol{H}}_{12}={\boldsymbol{H}}_{1}+{\boldsymbol{H}}_{2},
$$
$$
{\widetilde{\boldsymbol{\mu}}}_{12}={\boldsymbol{A}}{\widetilde{\boldsymbol{\mu}}},
$$
and
$$
{\widetilde{\boldsymbol{\Sigma}}}_{12}={\boldsymbol{A}}{\widetilde{\boldsymbol{\Sigma}}}{\boldsymbol{A}}^{T},
$$
where ${\boldsymbol{A}}$ is the constant $p\times 2p$ matrix defined as
$$
{\boldsymbol{A}}={\boldsymbol{H}}_{12}^{-1}\begin{pmatrix}{\boldsymbol{H}}_{1}&{\boldsymbol{H}}_{2}\end{pmatrix}.
$$*
*Proof.* Let ${\boldsymbol{M}}_{1}$ and ${\boldsymbol{M}}_{2}$ be the Gaussian random vectors from $(\Omega_{1},\sigma_{1},P_{1})$ and $(\Omega_{2},\sigma_{2},P_{2})$ to $(\mathbb{R}^{p},\beta_{\mathbb{R}^{p}})$ corresponding, respectively, to GRFVs ${\widetilde{X}}_{1}\sim{\widetilde{N}}(\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{1},{\boldsymbol{H}}_{1})$ and ${\widetilde{X}}_{2}\sim{\widetilde{N}}(\boldsymbol{\mu}_{2},\boldsymbol{\Sigma}_{2},{\boldsymbol{H}}_{2})$. The orthogonal sum of ${\widetilde{X}}_{1}$ and ${\widetilde{X}}_{2}$ is defined by the mapping
$$
{\widetilde{X}}_{\varodot}:(\omega_{1},\omega_{2})\rightarrow\textsf{GFV}({\boldsymbol{M}}_{12}(\omega_{1},\omega_{2}),{\boldsymbol{H}}_{1}+{\boldsymbol{H}}_{2})
$$
with
$$
{\boldsymbol{M}}_{12}=({\boldsymbol{H}}_{1}+{\boldsymbol{H}}_{2})^{-1}({\boldsymbol{H}}_{1}{\boldsymbol{M}}_{1}+{\boldsymbol{H}}_{2}{\boldsymbol{M}}_{2})={\boldsymbol{A}}\begin{pmatrix}{\boldsymbol{M}}_{1}\\ {\boldsymbol{M}}_{2}\end{pmatrix},
$$
where ${\boldsymbol{A}}$ is the $p\times 2p$ matrix
$$
{\boldsymbol{A}}=({\boldsymbol{H}}_{1}+{\boldsymbol{H}}_{2})^{-1}\begin{pmatrix}{\boldsymbol{H}}_{1}&{\boldsymbol{H}}_{2}\end{pmatrix},
$$
and the probability measure ${\widetilde{P}}_{12}$ on $\Omega_{1}\times\Omega_{2}$ is obtained by conditioning $P_{1}\times P_{2}$ on the fuzzy set ${\widetilde{\Theta}}^{*}(\omega_{1},\omega_{2})=\text{hgt}\left(\textsf{GFV}({\boldsymbol{M}}_{1}(\omega_{1}),{\boldsymbol{H}}_{1}),\textsf{GFV}({\boldsymbol{M}}_{2}(\omega_{2}),{\boldsymbol{H}}_{2})\right)$. From Lemma 2, the pushforward measure of ${\widetilde{P}}_{12}$ by the random vector $({\boldsymbol{M}}_{1},{\boldsymbol{M}}_{2})$ is the $2p$-dimensional Gaussian distribution with parameters $({\widetilde{\boldsymbol{\mu}}},{\widetilde{\boldsymbol{\Sigma}}})$. Consequently, ${\boldsymbol{M}}_{12}$ is a Gaussian random vector with mean
$$
\mathbb{E}({\boldsymbol{M}}_{12})={\boldsymbol{A}}{\widetilde{\boldsymbol{\mu}}}
$$
and variance
$$
\text{Var}({\boldsymbol{M}}_{12})={\boldsymbol{A}}{\widetilde{\boldsymbol{\Sigma}}}{\boldsymbol{A}}^{T}.
$$
∎
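The combination in Lemma 2 and Proposition 13 amounts to block-matrix algebra. A minimal NumPy sketch (the helper name `combine_grfv` is ours; `S1`, `S2` denote $\boldsymbol{\Sigma}_{1}$, $\boldsymbol{\Sigma}_{2}$) is:

```python
import numpy as np

def combine_grfv(mu1, S1, H1, mu2, S2, H2):
    """Orthogonal sum of two independent GRFVs (Lemma 2 + Prop. 13).
    Returns the parameters (mu12, Sigma12, H12) of the combined GRFV."""
    p = len(mu1)
    Hbar = np.linalg.inv(np.linalg.inv(H1) + np.linalg.inv(H2))
    S1i, S2i = np.linalg.inv(S1), np.linalg.inv(S2)
    # conditional covariance of (M1, M2) given F~
    St = np.linalg.inv(np.block([[S1i + Hbar, -Hbar], [-Hbar, S2i + Hbar]]))
    # conditional mean of (M1, M2) given F~, Eq. (36)
    Hbi = np.linalg.inv(Hbar)
    A_ = np.block([[Hbi @ S1i + np.eye(p), -np.eye(p)],
                   [-np.eye(p), Hbi @ S2i + np.eye(p)]])
    B_ = np.block([[Hbi @ S1i, np.zeros((p, p))],
                   [np.zeros((p, p)), Hbi @ S2i]])
    mut = np.linalg.solve(A_, B_ @ np.concatenate([mu1, mu2]))
    # Proposition 13: push forward through A = H12^-1 (H1 H2)
    H12 = H1 + H2
    A = np.linalg.solve(H12, np.hstack([H1, H2]))
    return A @ mut, A @ St @ A.T, H12
```

When $\boldsymbol{\mu}_{1}=\boldsymbol{\mu}_{2}$, the combined mean equals the common mean, mirroring the scalar case of Proposition 9.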
5.3 Marginalization and vacuous extension
In this section, we consider the marginalization and vacuous extension (defined in Section 3.3) of a GRFV. We assume that variable $\boldsymbol{\theta}$ taking values in $\mathbb{R}^{p}$ is decomposed as $\boldsymbol{\theta}=(\boldsymbol{\theta}_{1},\boldsymbol{\theta}_{2})$ with $\boldsymbol{\theta}_{1}\in\Theta_{1}=\mathbb{R}^{p-k}$ and $\boldsymbol{\theta}_{2}\in\Theta_{2}=\mathbb{R}^{k}$ for $0<k<p$.
Marginalization
We start with the following lemma.
**Lemma 3**
*Let ${\widetilde{F}}=\textsf{GFV}({\boldsymbol{m}},{\boldsymbol{H}})$ be a $p$-dimensional Gaussian fuzzy vector with mode ${\boldsymbol{m}}=({\boldsymbol{m}}_{1},{\boldsymbol{m}}_{2})$, where ${\boldsymbol{m}}_{1}\in\Theta_{1}=\mathbb{R}^{p-k}$ and ${\boldsymbol{m}}_{2}\in\Theta_{2}=\mathbb{R}^{k}$ for $0<k<p$, and precision matrix ${\boldsymbol{H}}$ with block decomposition
$$
{\boldsymbol{H}}=\begin{pmatrix}{\boldsymbol{H}}_{11}&{\boldsymbol{H}}_{12}\\
{\boldsymbol{H}}_{21}&{\boldsymbol{H}}_{22}\end{pmatrix}.
$$
Assume that ${\boldsymbol{H}}_{22}$ is nonsingular. The projection of ${\widetilde{F}}$ on $\Theta_{1}$, denoted as ${\widetilde{F}}\downarrow\Theta_{1}$, is the Gaussian fuzzy vector $\textsf{GFV}({\boldsymbol{m}}_{1},{\boldsymbol{H}}^{\prime}_{11})$ with
$$
{\boldsymbol{H}}^{\prime}_{11}={\boldsymbol{H}}_{11}-{\boldsymbol{H}}_{12}{\boldsymbol{H}}_{22}^{-1}{\boldsymbol{H}}_{21}.
$$*
*Proof.* See Appendix J. ∎
Let us now consider a $p$-dimensional GRFV ${\widetilde{X}}\sim{\widetilde{N}}(\boldsymbol{\mu},\boldsymbol{\Sigma},{\boldsymbol{H}})$ representing partial knowledge about $\boldsymbol{\theta}=(\boldsymbol{\theta}_{1},\boldsymbol{\theta}_{2})$. The marginal RFS for $\boldsymbol{\theta}_{1}$ is given by the following proposition, which follows directly from Lemma 3.
**Proposition 14**
*Let ${\widetilde{X}}\sim{\widetilde{N}}(\boldsymbol{\mu},\boldsymbol{\Sigma},{\boldsymbol{H}})$ be a $p$-dimensional GRFV taking values in $[0,1]^{\Theta}$, with $\Theta=\Theta_{1}\times\Theta_{2}$, where $\Theta_{1}=\mathbb{R}^{p-k}$ and $\Theta_{2}=\mathbb{R}^{k}$ for $0<k<p$. Let $\boldsymbol{\mu}=(\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2})$ with $\boldsymbol{\mu}_{1}\in\Theta_{1}$ and $\boldsymbol{\mu}_{2}\in\Theta_{2}$, and consider the block decompositions
$$
\boldsymbol{\Sigma}=\begin{pmatrix}\boldsymbol{\Sigma}_{11}&\boldsymbol{\Sigma}_{12}\\ \boldsymbol{\Sigma}_{21}&\boldsymbol{\Sigma}_{22}\end{pmatrix}\quad\text{and}\quad{\boldsymbol{H}}=\begin{pmatrix}{\boldsymbol{H}}_{11}&{\boldsymbol{H}}_{12}\\ {\boldsymbol{H}}_{21}&{\boldsymbol{H}}_{22}\end{pmatrix}.
$$
Assume that ${\boldsymbol{H}}_{22}$ is nonsingular. The marginal of ${\widetilde{X}}$ on $\Theta_{1}$ is the GRFV ${\widetilde{X}}_{1}\sim{\widetilde{N}}(\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{11},{\boldsymbol{H}}^{\prime}_{11})$ with
$$
{\boldsymbol{H}}^{\prime}_{11}={\boldsymbol{H}}_{11}-{\boldsymbol{H}}_{12}{\boldsymbol{H}}_{22}^{-1}{\boldsymbol{H}}_{21}.
$$*
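Proposition 14 reduces marginalization to a Schur complement of the precision matrix. A minimal sketch (the helper name `marginal_grfv` is ours):

```python
import numpy as np

def marginal_grfv(mu, Sigma, H, k):
    """Marginal of a p-dimensional GRFV on its first p-k coordinates (Prop. 14).
    k is the number of trailing coordinates marginalized out; H22 must be nonsingular."""
    p = len(mu)
    q = p - k
    H11, H12 = H[:q, :q], H[:q, q:]
    H21, H22 = H[q:, :q], H[q:, q:]
    H11p = H11 - H12 @ np.linalg.solve(H22, H21)   # Schur complement H'_11
    return mu[:q], Sigma[:q, :q], H11p
```

Note that while the mean and covariance are simply truncated, the marginal precision is not the corresponding block of ${\boldsymbol{H}}$ unless the off-diagonal blocks vanish.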
Vacuous extension
We now consider a Gaussian fuzzy vector $\textsf{GFV}({\boldsymbol{m}}_{1},{\boldsymbol{H}}_{11})$ in $\Theta_{1}=\mathbb{R}^{p-k}$ for $0<k<p$. Its cylindrical extension in $\Theta=\Theta_{1}\times\Theta_{2}$, with $\Theta_{2}=\mathbb{R}^{k}$, has the following membership function:
$$
\varphi({\boldsymbol{x}})=\exp\left(-\frac{1}{2}({\boldsymbol{x}}_{1}-{\boldsymbol{m}}_{1})^{T}{\boldsymbol{H}}_{11}({\boldsymbol{x}}_{1}-{\boldsymbol{m}}_{1})\right),
$$
which can be written as
$$
\varphi({\boldsymbol{x}})=\exp\left(-\frac{1}{2}({\boldsymbol{x}}-{\boldsymbol{m}})^{T}{\boldsymbol{H}}({\boldsymbol{x}}-{\boldsymbol{m}})\right),
$$
where ${\boldsymbol{m}}$ is the $p$ -dimensional vector
$$
{\boldsymbol{m}}=\begin{pmatrix}{\boldsymbol{m}}_{1}\\
\boldsymbol{0}\end{pmatrix}
$$
and ${\boldsymbol{H}}$ is the $p\times p$ matrix
$$
{\boldsymbol{H}}=\begin{pmatrix}{\boldsymbol{H}}_{11}&\boldsymbol{0}\\
\boldsymbol{0}&\boldsymbol{0}\end{pmatrix}. \tag{37}
$$
Given a GRFV ${\widetilde{X}}_{1}\sim{\widetilde{N}}(\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{11},{\boldsymbol{H}}_{11})$ taking values in $[0,1]^{\Theta_{1}}$, it follows immediately that its vacuous extension in $\Theta=\Theta_{1}\times\Theta_{2}$ is the GRFV
$$
{\widetilde{X}}_{1\uparrow(1,2)}\sim{\widetilde{N}}(\boldsymbol{\mu},\boldsymbol{\Sigma},{\boldsymbol{H}})
$$
with
$$
\boldsymbol{\mu}=\begin{pmatrix}\boldsymbol{\mu}_{1}\\ \boldsymbol{0}\end{pmatrix},\quad\boldsymbol{\Sigma}=\begin{pmatrix}\boldsymbol{\Sigma}_{11}&\boldsymbol{0}\\ \boldsymbol{0}&{\boldsymbol{I}}_{k}\end{pmatrix},
$$
where ${\boldsymbol{I}}_{k}$ is the $k\times k$ identity matrix, and ${\boldsymbol{H}}$ is given by (37).
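Vacuous extension thus amounts to padding the parameters as above. A minimal sketch (the helper name `vacuous_extension` is ours; per the construction, the extra coordinates receive an identity covariance block and a zero precision block):

```python
import numpy as np

def vacuous_extension(mu1, Sigma11, H11, k):
    """Vacuous extension of a GRFV from R^(p-k) to R^p, per Eq. (37).
    The added coordinates get zero mean, identity covariance and zero precision,
    so the extension carries no information about them."""
    q = len(mu1)
    mu = np.concatenate([mu1, np.zeros(k)])
    Sigma = np.block([[Sigma11, np.zeros((q, k))],
                      [np.zeros((k, q)), np.eye(k)]])
    H = np.block([[H11, np.zeros((q, k))],
                  [np.zeros((k, q)), np.zeros((k, k))]])
    return mu, Sigma, H
```

Marginalizing the extension back onto $\Theta_{1}$ recovers the original parameters, since the off-diagonal precision blocks are zero.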
Noninteractivity
In Section 3.3, we defined the notion of noninteractive random fuzzy vector. The following proposition gives a necessary and sufficient condition for a GRFV to be noninteractive.
**Proposition 15**
*A $p$-dimensional GRFV ${\widetilde{X}}\sim{\widetilde{N}}(\boldsymbol{\mu},\boldsymbol{\Sigma},{\boldsymbol{H}})$ is noninteractive iff matrices $\boldsymbol{\Sigma}$ and ${\boldsymbol{H}}$ are diagonal.*
*Proof.* Let ${\widetilde{X}}_{1},\ldots,{\widetilde{X}}_{p}$ be the marginals of ${\widetilde{X}}$ on each of the $p$ coordinates. Let $\sigma^{2}_{1},\ldots,\sigma^{2}_{p}$ and $h_{1},\ldots,h_{p}$ be the diagonal elements of, respectively, $\boldsymbol{\Sigma}$ and ${\boldsymbol{H}}$, and let $\Omega$ be the domain of ${\widetilde{X}}$. Let ${\widetilde{X}}_{i\uparrow(1:p)}$ denote the vacuous extension of ${\widetilde{X}}_{i}$ to $\mathbb{R}^{p}$, defined by
$$
{\widetilde{X}}_{i\uparrow(1:p)}(\omega)({\boldsymbol{x}})=\exp\left(-\frac{h_{i}}{2}(x_{i}-M_{i}(\omega))^{2}\right)
$$
with $M_{i}\sim N(\mu_{i},\sigma^{2}_{i})$ . The orthogonal sum
$$
{\widetilde{X}}^{\prime}={\widetilde{X}}_{1\uparrow(1:p)}\oplus\cdots\oplus{\widetilde{X}}_{p\uparrow(1:p)}
$$
is given by
$$
{\widetilde{X}}^{\prime}(\omega)({\boldsymbol{x}})=\prod_{i=1}^{p}\exp\left(-\frac{h_{i}}{2}(x_{i}-M_{i}(\omega))^{2}\right)=\exp\left(-\frac{1}{2}({\boldsymbol{x}}-{\boldsymbol{M}}^{\prime}(\omega))^{T}{\boldsymbol{H}}^{\prime}({\boldsymbol{x}}-{\boldsymbol{M}}^{\prime}(\omega))\right),
$$
where ${\boldsymbol{H}}^{\prime}$ is the diagonal matrix with diagonal elements $h_{1},\ldots,h_{p}$, and ${\boldsymbol{M}}^{\prime}$ is a random vector with mean $\boldsymbol{\mu}$ and diagonal covariance matrix $\boldsymbol{\Sigma}^{\prime}$ with diagonal elements $\sigma^{2}_{1},\ldots,\sigma^{2}_{p}$. We have ${\widetilde{X}}={\widetilde{X}}^{\prime}$ iff ${\boldsymbol{H}}={\boldsymbol{H}}^{\prime}$ and $\boldsymbol{\Sigma}=\boldsymbol{\Sigma}^{\prime}$, i.e., iff both ${\boldsymbol{H}}$ and $\boldsymbol{\Sigma}$ are diagonal. ∎
5.4 Comparison with Dempster's normal belief functions
In [5], Dempster introduced another class of continuous belief functions in $\mathbb{R}^{p}$, called normal belief functions. (Ref. [5] was actually available as a working paper from the Statistical Department of Harvard University since 1990, but it only appeared as a book chapter in 2001.) It is interesting to compare Dempster's model with ours, as both models generalize the multivariate Gaussian distribution. A normal belief function $Bel$ on $\mathbb{R}^{p}$ as defined in [5] is specified by the following components:
- An $n$ -dimensional subspace ${\cal S}$ of $\mathbb{R}^{p}$ ;
- A $q$-dimensional partition $\Pi$ of ${\cal S}$ into parallel $(n-q)$-dimensional subspaces (if $q=0$, $\Pi=\{{\cal S}\}$);
- A full-rank $q$ -dimensional Gaussian distribution $N(\mu,\Sigma)$ on $\Pi$ if $q>0$ , or the discrete probability measure with mass function $m({\cal S})=1$ if $q=0$ .
Belief function $Bel$ is then induced by a random set from $\Pi$ , equipped with the normal distribution $N(\mu,\Sigma)$ if $q>0$ or probability mass function $m$ if $q=0$ , to the corresponding family of parallel $n-q$ dimensional subspaces of ${\cal S}$ . The following special cases are of interest:
1. If $p=n=q$ , $Bel$ is a Gaussian probability distribution on $\mathbb{R}^{p}$ ;
1. If $p>n=q$ , $Bel$ is a Gaussian probability distribution limited to an $n$ -dimensional subspace of $\mathbb{R}^{p}$ ;
1. If $p=n$ and $q=0$ , $Bel$ is vacuous;
1. If $q=0$ while $p>n>0$ , $Bel$ is logical with ${\cal S}$ as its only focal set; it is then equivalent to specifying $p-n$ linear equations;
1. If $n=q=0$ , the true point in $\mathbb{R}^{p}$ is known with certainty.
Like GRFVs, Dempster's normal belief functions thus include the vacuous belief function, Gaussian probability distributions, as well as vacuous extensions of marginal Gaussian distributions. However, the two models are clearly distinct. Dempster's model is based on the combination of Gaussian probability distributions and linear equations, and is especially useful in relation with linear statistical models such as the Kalman filter [5] or linear regression [24]. In contrast, in the GRFV model, focal sets are fuzzy subsets of $\mathbb{R}^{n}$ ($n\leq p$) with Gaussian membership functions, or cylindrical extensions of such fuzzy subsets. This model allows us to represent not only probabilistic and logical evidence, but also fuzzy information. In particular, it includes Gaussian probability distributions and Gaussian possibility distributions as special cases. We could attempt to design an even more general model that would contain both Dempster's normal belief functions and belief functions induced by GRFVs as special cases. Such a model would allow us to reason with Gaussian probability and possibility distributions as well as with linear equations. The rigorous development of such a model is left for further research.
6 Conclusions
In this paper, continuing a study started in [9] with the finite case, we have introduced a theory of epistemic random fuzzy sets in a general setting. An epistemic random fuzzy set represents a piece of evidence, which may be crisp or fuzzy. This framework generalizes both epistemic random sets, as considered in the Dempster-Shafer theory of belief functions, and possibility distributions, as considered in possibility theory. Independent epistemic random fuzzy sets are combined by the generalized product-intersection rule, which extends both Dempster's rule for combining belief functions and the product intersection rule for combining possibility distributions.
In addition, we have introduced Gaussian random fuzzy numbers (GRFNs) and their multidimensional extensions, Gaussian random fuzzy vectors (GRFVs), as practical models of random fuzzy subsets of, respectively, $\mathbb{R}$ and $\mathbb{R}^{p}$ with $p\geq 2$. A GRFN is described by three parameters: its mode $m$, its variance $\sigma^{2}$ and its precision $h$. In this setting, a Gaussian random variable can be seen as an infinitely precise GRFN ($h=+\infty$), while a Gaussian possibility distribution is a constant GRFN ($\sigma^{2}=0$). A maximally imprecise GRFN such that $h=0$ is said to be vacuous: it represents complete ignorance. In GRFVs, the mode becomes a $p$-dimensional vector, while the variance and precision become positive semi-definite $p\times p$ matrices. The practical convenience of GRFNs and GRFVs arises from the fact that they can easily be combined by the generalized product-intersection rule. Also, formulas for the projection and vacuous extension of GRFVs have been derived.
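The contour function of a GRFN, derived in closed form as Eq. (47) in Appendix C, makes these special cases easy to check numerically. The following Python sketch (the function name is ours, purely for illustration) evaluates it and verifies the vacuous ($h=0$) and possibilistic ($\sigma^{2}=0$) limits:

```python
import math

def grfn_contour(x, mu, sigma2, h):
    # Contour function pl(x) of a GRFN with mode distributed as N(mu, sigma2)
    # and precision h, following Eq. (47):
    # pl(x) = exp(-h(x-mu)^2 / (2(1+h*sigma2))) / sqrt(1+h*sigma2)
    denom = 1.0 + h * sigma2
    return math.exp(-h * (x - mu) ** 2 / (2.0 * denom)) / math.sqrt(denom)

# vacuous GRFN (h = 0): the contour function is identically 1 (complete ignorance)
assert grfn_contour(3.7, 0.0, 2.0, 0.0) == 1.0

# constant GRFN (sigma2 = 0): pl reduces to the Gaussian possibility
# distribution exp(-h(x-mu)^2/2)
x, mu, h = 1.5, 1.0, 4.0
assert abs(grfn_contour(x, mu, 0.0, h) - math.exp(-h * (x - mu) ** 2 / 2)) < 1e-12
```

As $h$ grows with $\sigma^{2}>0$ fixed, the contour function concentrates around $\mu$, reflecting increasingly precise (ultimately probabilistic) evidence.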
This work opens up several perspectives. Using random fuzzy sets and, in particular, GRFNs to represent expert knowledge about numerical quantities will require the development of adequate elicitation procedures. We also consider using this framework in machine learning, to quantify prediction uncertainty in regression problems. Finally, the extension of the model introduced in this paper to take into account linear equations, as well as the development of computational procedures for reasoning with GRFVs over many variables, are promising avenues for further research.
References
- [1] P. A. Bromiley. Products and convolutions of Gaussian probability density functions. Technical Report 2003-003, TINA, 2014. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.583.3007&rep=rep1&type=pdf.
- [2] I. Couso and L. Sánchez. Upper and lower probabilities induced by a fuzzy random variable. Fuzzy Sets and Systems, 165(1):1–23, 2011.
- [3] A. P. Dempster. Upper and lower probabilities induced by a multivalued mapping. Annals of Mathematical Statistics, 38:325–339, 1967.
- [4] A. P. Dempster. Upper and lower probabilities generated by a random closed interval. Annals of Mathematical Statistics, 39(3):957–966, 1968.
- [5] A. P. Dempster. Normal belief functions and the Kalman filter. In A. K. M. E. Saleh, editor, Data Analysis from Statistical Foundations, pages 65–68. Nova Science Publishers, 2001.
- [6] T. Denœux. Extending stochastic ordering to belief functions on the real line. Information Sciences, 179:1362–1376, 2009.
- [7] T. Denœux. Likelihood-based belief function: justification and some extensions to low-quality data. International Journal of Approximate Reasoning, 55(7):1535–1547, 2014.
- [8] T. Denœux. Rejoinder on "Likelihood-based belief function: justification and some extensions to low-quality data". International Journal of Approximate Reasoning, 55(7):1614–1617, 2014.
- [9] T. Denœux. Belief functions induced by random fuzzy sets: A general framework for representing uncertain and fuzzy evidence. Fuzzy Sets and Systems, 424:63–91, 2021.
- [10] T. Denœux, D. Dubois, and H. Prade. Representations of uncertainty in artificial intelligence: Beyond probability and possibility. In P. Marquis, O. Papini, and H. Prade, editors, A Guided Tour of Artificial Intelligence Research, volume 1, chapter 4, pages 119–150. Springer Verlag, 2020.
- [11] T. Denœux, D. Dubois, and H. Prade. Representations of uncertainty in artificial intelligence: Probability and possibility. In P. Marquis, O. Papini, and H. Prade, editors, A Guided Tour of Artificial Intelligence Research, volume 1, chapter 3, pages 69–117. Springer Verlag, 2020.
- [12] D. Dubois, E. Kerre, R. Mesiar, and H. Prade. Fuzzy interval analysis. In D. Dubois and H. Prade, editors, Fundamentals of Fuzzy Sets, pages 483–581. Kluwer Academic Publishers, Boston, 2000.
- [13] D. Dubois, H. T. Nguyen, and H. Prade. Possibility theory, probability and fuzzy sets: Misunderstandings, bridges and gaps. In D. Dubois and H. Prade, editors, Fundamentals of Fuzzy Sets, pages 343–438. Kluwer Academic Publishers, Boston, 2000.
- [14] D. Dubois and H. Prade. Fuzzy Sets and Systems: Theory and Applications. Academic Press, New York, 1980.
- [15] D. Dubois and H. Prade. Possibility theory: qualitative and quantitative aspects. In D. M. Gabbay and P. Smets, editors, Handbook of Defeasible Reasoning and Uncertainty Management Systems, volume 1, pages 169–226. Kluwer Academic Publishers, Dordrecht, 1998.
- [16] D. Dubois, H. Prade, and R. Yager. Merging fuzzy information. In J. C. Bezdek, D. Dubois, and H. Prade, editors, Fuzzy Sets in Approximate Reasoning and Information Systems, pages 335–401. Kluwer Academic Publishers, Boston, 1999.
- [17] M. A. Gil, M. López-Díaz, and D. A. Ralescu. Overview on the development of fuzzy random variables. Fuzzy Sets and Systems, 157(19):2546–2557, 2006.
- [18] P. R. Halmos. Measure Theory. Springer Science, New York, 1950.
- [19] O. Kanjanatarakul, S. Sriboonchitta, and T. Denœux. Forecasting using belief functions: an application to marketing econometrics. International Journal of Approximate Reasoning, 55(5):1113–1128, 2014.
- [20] O. Kanjanatarakul, S. Sriboonchitta, and T. Denœux. Prediction of future observations using belief functions: A likelihood-based approach. International Journal of Approximate Reasoning, 72:71–94, 2016.
- [21] G. J. Klir and B. Yuan. Fuzzy Sets and Fuzzy Logic: Theory and Applications. Prentice-Hall, Upper Saddle River, NJ, 1995.
- [22] E. Miranda, I. Couso, and P. Gil. Random intervals as a model for imprecise information. Fuzzy Sets and Systems, 154(3):386–412, 2005.
- [23] I. Molchanov. Theory of Random Sets. Springer, New York, 2005.
- [24] P.-A. Monney. A Mathematical Theory of Arguments for Statistical Evidence. Contributions to Statistics. Physica-Verlag, Heidelberg, 2003.
- [25] S. Nahmias. Fuzzy variables. Fuzzy Sets and Systems, 1(2):97–110, 1978.
- [26] H. T. Nguyen. On random sets and belief functions. Journal of Mathematical Analysis and Applications, 65:531–542, 1978.
- [27] K. B. Petersen and M. S. Pedersen. The Matrix Cookbook, November 2012. http://www2.compute.dtu.dk/pubdb/pubs/3274-full.html.
- [28] M. L. Puri and D. A. Ralescu. Fuzzy random variables. Journal of Mathematical Analysis and Applications, 114(2):409–422, 1986.
- [29] G. Shafer. A Mathematical Theory of Evidence. Princeton University Press, Princeton, N.J., 1976.
- [30] G. Shafer. Constructive probability. Synthese, 48(1):1–60, 1981.
- [31] G. Shafer. Belief functions and parametric models (with discussion). Journal of the Royal Statistical Society, Series B, 44:322–352, 1982.
- [32] G. Shafer. Dempster's rule of combination. International Journal of Approximate Reasoning, 79:26–40, 2016.
- [33] P. P. Shenoy. Using possibility theory in expert systems. Fuzzy Sets and Systems, 52(2):129–142, 1992.
- [34] P. Smets. Belief functions on real numbers. International Journal of Approximate Reasoning, 40(3):181–223, 2005.
- [35] R. R. Yager. On the normalization of fuzzy belief structures. International Journal of Approximate Reasoning, 14:127–153, 1996.
- [36] L. A. Zadeh. Fuzzy sets. Information and Control, 8:338–353, 1965.
- [37] L. A. Zadeh. Probability measures of fuzzy events. Journal of Mathematical Analysis and Applications, 10:421–427, 1968.
- [38] L. A. Zadeh. Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets and Systems, 1:3–28, 1978.
Appendix A Proof of Proposition 1
Commutativity is obvious. To prove associativity, let us consider three random sets $(\Omega_{i},\sigma_{i},P_{i},\Theta,\sigma_{\Theta},{\overline{X}}_{i})$, $i=1,2,3$. Consider the combined random set
$$
(\Omega_{1}\times\Omega_{2}\times\Omega_{3},\sigma_{1}\otimes\sigma_{2}\otimes\sigma_{3},P_{123},\Theta,\sigma_{\Theta},{\overline{X}}_{1\cap 2\cap 3}), \tag{38}
$$
where
$$
{\overline{X}}_{1\cap 2\cap 3}(\omega_{1},\omega_{2},\omega_{3})={\overline{X}}_{1}(\omega_{1})\cap{\overline{X}}_{2}(\omega_{2})\cap{\overline{X}}_{3}(\omega_{3}),
$$
$$
P_{123}=(P_{1}\times P_{2}\times P_{3})(\cdot\mid\Theta^{*}_{123}),
$$
and
$$
\Theta^{*}_{123}=\{(\omega_{1},\omega_{2},\omega_{3})\in\Omega_{1}\times\Omega_{2}\times\Omega_{3}:{\overline{X}}_{1\cap 2\cap 3}(\omega_{1},\omega_{2},\omega_{3})\neq\emptyset\}.
$$
We will show that we get the same result by combining ${\overline{X}}_{1}$ with ${\overline{X}}_{2}$ first, and then combining the result with ${\overline{X}}_{3}$. Combining the first two random sets, we get
$$
(\Omega_{1}\times\Omega_{2},\sigma_{1}\otimes\sigma_{2},P_{12},\Theta,\sigma_{\Theta},{\overline{X}}_{1\cap 2}),
$$
with ${\overline{X}}_{1\cap 2}(\omega_{1},\omega_{2})={\overline{X}}_{1}(\omega_{1})\cap{\overline{X}}_{2}(\omega_{2})$, $P_{12}=(P_{1}\times P_{2})(\cdot\mid\Theta^{*}_{12})$ and
$$
\Theta^{*}_{12}=\{(\omega_{1},\omega_{2})\in\Omega_{1}\times\Omega_{2}:{\overline{X}}_{1\cap 2}(\omega_{1},\omega_{2})\neq\emptyset\}.
$$
Combining it with ${\overline{X}}_{3}$, we get
$$
(\Omega_{1}\times\Omega_{2}\times\Omega_{3},\sigma_{1}\otimes\sigma_{2}\otimes\sigma_{3},P_{(12)3},\Theta,\sigma_{\Theta},{\overline{X}}_{1\cap 2\cap 3}), \tag{39}
$$
with $P_{(12)3}=(P_{12}\times P_{3})(\cdot\mid\Theta^{*}_{123})$. Comparing (38) and (39), we see that we only need to show that $P_{(12)3}=P_{123}$. For any event $C\subseteq\Theta^{*}_{123}$ and any $\omega_{3}\in\Omega_{3}$, let $C_{\omega_{3}}=\{(\omega_{1},\omega_{2})\in\Omega_{1}\times\Omega_{2}:(\omega_{1},\omega_{2},\omega_{3})\in C\}$. By definition of the product measure $P_{12}\times P_{3}$ (see [18, page 144]), we have
$$
P_{(12)3}(C)=\frac{(P_{12}\times P_{3})(C)}{(P_{12}\times P_{3})(\Theta^{*}_{123})}=\frac{1}{(P_{12}\times P_{3})(\Theta^{*}_{123})}\int P_{12}(C_{\omega_{3}})dP_{3}(\omega_{3}). \tag{40}
$$
Now, as $C\subseteq\Theta^{*}_{123}$, for any $(\omega_{1},\omega_{2})\in C_{\omega_{3}}$, ${\overline{X}}_{1}(\omega_{1})\cap{\overline{X}}_{2}(\omega_{2})\neq\emptyset$. Consequently, $C_{\omega_{3}}\subseteq\Theta^{*}_{12}$, so
$$
P_{12}(C_{\omega_{3}})=\frac{(P_{1}\times P_{2})(C_{\omega_{3}})}{(P_{1}\times P_{2})(\Theta^{*}_{12})}. \tag{41}
$$
From (40) and (41), we get
$$
\begin{aligned}
P_{(12)3}(C) &=\frac{1}{(P_{12}\times P_{3})(\Theta^{*}_{123})(P_{1}\times P_{2})(\Theta^{*}_{12})}\int(P_{1}\times P_{2})(C_{\omega_{3}})dP_{3}(\omega_{3})\\
&=\frac{(P_{1}\times P_{2}\times P_{3})(C)}{(P_{12}\times P_{3})(\Theta^{*}_{123})(P_{1}\times P_{2})(\Theta^{*}_{12})}.
\end{aligned} \tag{42}
$$
Now,
$$
P_{123}(C)=\frac{(P_{1}\times P_{2}\times P_{3})(C)}{(P_{1}\times P_{2}\times P_{3})(\Theta^{*}_{123})}. \tag{43}
$$
As $P_{(12)3}(\Theta^{*}_{123})=P_{123}(\Theta^{*}_{123})=1$, the denominators in (42) and (43) are equal, and $P_{(12)3}=P_{123}$.
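Proposition 1 can also be checked on a small finite instance: conditioning the product measure once on $\Theta^{*}_{123}$ must agree with combining the first two random sets and then the third. The Python sketch below uses a toy frame and probabilities of our own choosing, purely for illustration:

```python
from itertools import product

# Toy epistemic random sets on the frame {'a','b','c'}: each X_i maps an
# interpretation in Omega_i = {0, 1} to a focal set, with probabilities P_i.
X1 = {0: {'a', 'b'}, 1: {'b', 'c'}}
X2 = {0: {'b'},      1: {'a', 'c'}}
X3 = {0: {'a', 'c'}, 1: {'b', 'c'}}
P1 = {0: 0.6, 1: 0.4}; P2 = {0: 0.5, 1: 0.5}; P3 = {0: 0.7, 1: 0.3}

# One-shot combination: condition P1 x P2 x P3 on X1 ∩ X2 ∩ X3 ≠ ∅.
joint = {(a, b, c): P1[a] * P2[b] * P3[c]
         for a, b, c in product(P1, P2, P3) if X1[a] & X2[b] & X3[c]}
Z = sum(joint.values())
P123 = {k: v / Z for k, v in joint.items()}

# Sequential combination: condition on X1 ∩ X2 ≠ ∅ first, then on the triple.
pair = {(a, b): P1[a] * P2[b] for a, b in product(P1, P2) if X1[a] & X2[b]}
Z12 = sum(pair.values())
P12 = {k: v / Z12 for k, v in pair.items()}
trip = {(a, b, c): P12[(a, b)] * P3[c]
        for (a, b), c in product(P12, P3) if X1[a] & X2[b] & X3[c]}
Z3 = sum(trip.values())
P123_seq = {k: v / Z3 for k, v in trip.items()}

assert all(abs(P123[k] - P123_seq[k]) < 1e-12 for k in P123)
```

The two conditionings agree on every interpretation triple, as the proof asserts.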
Appendix B Proof of Proposition 4
Commutativity is obvious. To prove associativity, consider three random fuzzy sets
$$
(\Omega_{i},\sigma_{i},P_{i},\Theta,\sigma_{\Theta},{\widetilde{X}}_{i}),\quad i=1,2,3.
$$
Let ${\widetilde{\Theta}}^{*}_{12}$ be the fuzzy subset of $\Omega_{1}\times\Omega_{2}$ with membership function
$$
{\widetilde{\Theta}}_{12}^{*}(\omega_{1},\omega_{2})=\mathrm{hgt}\left({\widetilde{X}}_{1}(\omega_{1}){\widetilde{X}}_{2}(\omega_{2})\right),
$$
where $\mathrm{hgt}(\cdot)$ denotes the height of a fuzzy set, and let ${\widetilde{\Theta}}^{*}_{(12)3}$ and ${\widetilde{\Theta}}^{*}_{123}$ be the fuzzy subsets of $\Omega_{1}\times\Omega_{2}\times\Omega_{3}$ defined, respectively, as
$$
{\widetilde{\Theta}}_{(12)3}^{*}(\omega_{1},\omega_{2},\omega_{3})=\mathrm{hgt}\left(\left[{\widetilde{X}}_{1}(\omega_{1})\varodot{\widetilde{X}}_{2}(\omega_{2})\right]{\widetilde{X}}_{3}(\omega_{3})\right)
$$
and
$$
{\widetilde{\Theta}}_{123}^{*}(\omega_{1},\omega_{2},\omega_{3})=\mathrm{hgt}\left({\widetilde{X}}_{1}(\omega_{1}){\widetilde{X}}_{2}(\omega_{2}){\widetilde{X}}_{3}(\omega_{3})\right).
$$
Let ${\widetilde{P}}_{12}=(P_{1}\times P_{2})(\cdot\mid{\widetilde{\Theta}}_{12}^{*})$, ${\widetilde{P}}_{(12)3}=({\widetilde{P}}_{12}\times P_{3})(\cdot\mid{\widetilde{\Theta}}_{(12)3}^{*})$, and ${\widetilde{P}}_{123}=(P_{1}\times P_{2}\times P_{3})(\cdot\mid{\widetilde{\Theta}}_{123}^{*})$. We only need to show that ${\widetilde{P}}_{(12)3}={\widetilde{P}}_{123}$. For any $B\in\sigma_{1}\otimes\sigma_{2}\otimes\sigma_{3}$, we have
$$
\begin{aligned}
{\widetilde{P}}_{(12)3}(B) &\propto\int_{\Omega_{1}\times\Omega_{2}}\int_{\Omega_{3}}B(\omega_{1},\omega_{2},\omega_{3})\,\mathrm{hgt}\left(\left[{\widetilde{X}}_{1}(\omega_{1})\varodot{\widetilde{X}}_{2}(\omega_{2})\right]{\widetilde{X}}_{3}(\omega_{3})\right)dP_{3}(\omega_{3})\,d{\widetilde{P}}_{12}(\omega_{1},\omega_{2})\\
&\propto\int_{\Omega_{1}}\int_{\Omega_{2}}\int_{\Omega_{3}}B(\omega_{1},\omega_{2},\omega_{3})\,\mathrm{hgt}\left(\left[{\widetilde{X}}_{1}(\omega_{1})\varodot{\widetilde{X}}_{2}(\omega_{2})\right]{\widetilde{X}}_{3}(\omega_{3})\right)\mathrm{hgt}\left({\widetilde{X}}_{1}(\omega_{1}){\widetilde{X}}_{2}(\omega_{2})\right)dP_{3}(\omega_{3})\,dP_{2}(\omega_{2})\,dP_{1}(\omega_{1}).
\end{aligned}
$$
Now,
$$
\mathrm{hgt}\left(\left[{\widetilde{X}}_{1}(\omega_{1})\varodot{\widetilde{X}}_{2}(\omega_{2})\right]{\widetilde{X}}_{3}(\omega_{3})\right)=\mathrm{hgt}\left(\frac{{\widetilde{X}}_{1}(\omega_{1}){\widetilde{X}}_{2}(\omega_{2})}{\mathrm{hgt}\left({\widetilde{X}}_{1}(\omega_{1}){\widetilde{X}}_{2}(\omega_{2})\right)}{\widetilde{X}}_{3}(\omega_{3})\right)=\frac{\mathrm{hgt}\left({\widetilde{X}}_{1}(\omega_{1}){\widetilde{X}}_{2}(\omega_{2}){\widetilde{X}}_{3}(\omega_{3})\right)}{\mathrm{hgt}\left({\widetilde{X}}_{1}(\omega_{1}){\widetilde{X}}_{2}(\omega_{2})\right)}.
$$
Hence,
$$
{\widetilde{P}}_{(12)3}(B)\propto\int_{\Omega_{1}}\int_{\Omega_{2}}\int_{\Omega_{3}}B(\omega_{1},\omega_{2},\omega_{3})\,\mathrm{hgt}\left({\widetilde{X}}_{1}(\omega_{1}){\widetilde{X}}_{2}(\omega_{2}){\widetilde{X}}_{3}(\omega_{3})\right)dP_{3}(\omega_{3})\,dP_{2}(\omega_{2})\,dP_{1}(\omega_{1}),
$$
which proves that ${\widetilde{P}}_{(12)3}={\widetilde{P}}_{123}$, and the associativity of $\oplus$.
Appendix C Proof of Proposition 6
We have
$$
\begin{aligned}
pl_{\widetilde{X}}(x) &=\mathbb{E}_{M}[\varphi(x;M,h)]\\
&=\int_{-\infty}^{+\infty}\varphi(x;m,h)\phi(m;\mu,\sigma)dm\\
&=\frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{+\infty}\exp\left(-\frac{h}{2}(x-m)^{2}\right)\exp\left(-\frac{(m-\mu)^{2}}{2\sigma^{2}}\right)dm.
\end{aligned} \tag{44}
$$
From Proposition 3, the integrand can be written as
$$
\exp\left(-\frac{(m-\mu_{0})^{2}}{2\sigma_{0}^{2}}\right)\exp\left(-\frac{h(x-\mu)^{2}}{2(1+h\sigma^{2})}\right),
$$
with
$$
\mu_{0}=\frac{xh+\mu/\sigma^{2}}{h+1/\sigma^{2}}=\frac{xh\sigma^{2}+\mu}{h\sigma^{2}+1}
$$
and
$$
\sigma_{0}=\sqrt{\frac{1}{h+1/\sigma^{2}}}=\frac{\sigma}{\sqrt{1+h\sigma^{2}}}.
$$
Consequently,
$$
\begin{aligned}
pl_{\widetilde{X}}(x) &=\frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{h(x-\mu)^{2}}{2(1+h\sigma^{2})}\right)\underbrace{\int_{-\infty}^{+\infty}\exp\left(-\frac{(m-\mu_{0})^{2}}{2\sigma_{0}^{2}}\right)dm}_{\sigma_{0}\sqrt{2\pi}}\\
&=\frac{1}{\sqrt{1+h\sigma^{2}}}\exp\left(-\frac{h(x-\mu)^{2}}{2(1+h\sigma^{2})}\right).
\end{aligned} \tag{47}
$$
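As a numerical sanity check, the defining integral (44) can be approximated by a midpoint rule and compared with the closed form (47); the helper names below are ours, purely illustrative:

```python
import math

def pl_integral(x, mu, sigma, h, n=100000, span=10.0):
    # Midpoint-rule approximation of Eq. (44): the expectation of the Gaussian
    # fuzzy-number membership exp(-h(x-m)^2/2) under M ~ N(mu, sigma^2).
    lo = mu - span * sigma
    dm = 2.0 * span * sigma / n
    total = 0.0
    for i in range(n):
        m = lo + (i + 0.5) * dm
        total += (math.exp(-h * (x - m) ** 2 / 2)
                  * math.exp(-(m - mu) ** 2 / (2 * sigma ** 2)))
    return total * dm / (sigma * math.sqrt(2 * math.pi))

def pl_closed_form(x, mu, sigma, h):
    # Closed form (47)
    d = 1 + h * sigma ** 2
    return math.exp(-h * (x - mu) ** 2 / (2 * d)) / math.sqrt(d)

x, mu, sigma, h = 0.8, -0.3, 1.4, 2.5
assert abs(pl_integral(x, mu, sigma, h) - pl_closed_form(x, mu, sigma, h)) < 1e-6
```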
Appendix D Proof of Proposition 7
If $h=0$, we have, trivially, $Bel_{\widetilde{X}}([x,y])=0$ and $Pl_{\widetilde{X}}([x,y])=1$ for all $x\leq y$. Let us assume that $h>0$. We have
$$
Pl_{\widetilde{X}}([x,y])=\mathbb{P}(M\leq x)\mathbb{E}[\varphi(x;M,h)\mid M\leq x]+\mathbb{P}(x<M\leq y)\times 1+\mathbb{P}(M>y)\mathbb{E}[\varphi(y;M,h)\mid M>y], \tag{49}
$$
which can be written as
$$
Pl_{\widetilde{X}}([x,y])=\Phi\left(\frac{x-\mu}{\sigma}\right)\mathbb{E}[\varphi(x;M,h)\mid M\leq x]+\Phi\left(\frac{y-\mu}{\sigma}\right)-\Phi\left(\frac{x-\mu}{\sigma}\right)+\left[1-\Phi\left(\frac{y-\mu}{\sigma}\right)\right]\mathbb{E}[\varphi(y;M,h)\mid M>y]. \tag{50}
$$
Conditionally on $M\leq x$, $M$ has a truncated normal distribution on $(-\infty,x]$ with pdf
$$
f(m)=\frac{1}{\sigma\sqrt{2\pi}}\frac{\exp\left(\frac{-(m-\mu)^{2}}{2\sigma^{2}}\right)}{\Phi\left(\frac{x-\mu}{\sigma}\right)}\mathbf{1}_{(-\infty,x]}(m).
$$
Consequently,
$$
\mathbb{E}[\varphi(x;M,h)\mid M\leq x]=\frac{1}{\sigma\sqrt{2\pi}}\frac{1}{\Phi\left(\frac{x-\mu}{\sigma}\right)}\underbrace{\int_{-\infty}^{x}\exp\left(-\frac{h}{2}(x-m)^{2}\right)\exp\left(-\frac{(m-\mu)^{2}}{2\sigma^{2}}\right)dm}_{I}. \tag{51}
$$
From Proposition 3, integral $I$ in (51) can be written as
$$
I=\sigma_{0}\sqrt{2\pi}\,\Phi\left(\frac{x-\mu_{0}}{\sigma_{0}}\right)\exp\left(-\frac{(x-\mu)^{2}}{2(h^{-1}+\sigma^{2})}\right),
$$
with
$$
\mu_{0}=\frac{xh\sigma^{2}+\mu}{h\sigma^{2}+1}\quad\text{and}\quad\sigma_{0}=\frac{\sigma}{\sqrt{h\sigma^{2}+1}}.
$$
Consequently,
$$
\mathbb{E}[\varphi(x;M,h)\mid M\leq x]=\frac{1}{\Phi\left(\frac{x-\mu}{\sigma}\right)}pl_{\widetilde{X}}(x)\Phi\left(\frac{x-\mu}{\sigma\sqrt{h\sigma^{2}+1}}\right).
$$
Using similar calculations, we find
$$
\mathbb{E}[\varphi(y;M,h)\mid M>y]=\frac{1}{1-\Phi\left(\frac{y-\mu}{\sigma}\right)}pl_{\widetilde{X}}(y)\left[1-\Phi\left(\frac{y-\mu}{\sigma\sqrt{h\sigma^{2}+1}}\right)\right],
$$
which concludes the proof of (24).
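Assembling (50) with the two conditional expectations above gives (24), which can be verified against a direct numerical evaluation of the expectation of the maximal membership degree over $[x,y]$. In the sketch below (helper names ours), `Phi` is the standard normal cdf:

```python
import math

def Phi(z):
    # Standard normal cdf via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def pl(x, mu, sigma, h):
    # Contour function (Eq. (47))
    d = 1 + h * sigma ** 2
    return math.exp(-h * (x - mu) ** 2 / (2 * d)) / math.sqrt(d)

def Pl_interval(x, y, mu, sigma, h):
    # Closed form (24), as assembled from (50) and the conditional expectations
    s = sigma * math.sqrt(h * sigma ** 2 + 1)
    return (Phi((y - mu) / sigma) - Phi((x - mu) / sigma)
            + pl(x, mu, sigma, h) * Phi((x - mu) / s)
            + pl(y, mu, sigma, h) * (1 - Phi((y - mu) / s)))

def Pl_numeric(x, y, mu, sigma, h, n=100000, span=10.0):
    # Midpoint-rule evaluation of E_M[ sup_{u in [x,y]} membership(u) ]:
    # the supremum is phi(x;m,h) for m <= x, 1 on (x,y], phi(y;m,h) for m > y.
    lo = mu - span * sigma
    dm = 2.0 * span * sigma / n
    total = 0.0
    for i in range(n):
        m = lo + (i + 0.5) * dm
        if m <= x:
            g = math.exp(-h * (x - m) ** 2 / 2)
        elif m <= y:
            g = 1.0
        else:
            g = math.exp(-h * (y - m) ** 2 / 2)
        total += g * math.exp(-(m - mu) ** 2 / (2 * sigma ** 2))
    return total * dm / (sigma * math.sqrt(2 * math.pi))

x, y, mu, sigma, h = -0.5, 1.2, 0.3, 0.9, 1.7
assert abs(Pl_interval(x, y, mu, sigma, h) - Pl_numeric(x, y, mu, sigma, h)) < 1e-5
```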
Now, let us consider (23). We have
$$
Bel_{\widetilde{X}}([x,y])=1-Pl_{\widetilde{X}}((-\infty,x]\cup[y,+\infty)),
$$
and
$$
Pl_{\widetilde{X}}((-\infty,x]\cup[y,+\infty))=\mathbb{P}(M\leq x)\times 1+\mathbb{P}(x<M\leq(x+y)/2)\mathbb{E}[\varphi(x;M,h)\mid x<M\leq(x+y)/2]+\mathbb{P}((x+y)/2<M\leq y)\mathbb{E}[\varphi(y;M,h)\mid(x+y)/2<M\leq y]+\mathbb{P}(M>y)\times 1, \tag{52}
$$
which can be written as
$$
Pl_{\widetilde{X}}((-\infty,x]\cup[y,+\infty))=\Phi\left(\frac{x-\mu}{\sigma}\right)+\left[\Phi\left(\frac{(x+y)/2-\mu}{\sigma}\right)-\Phi\left(\frac{x-\mu}{\sigma}\right)\right]\mathbb{E}[\varphi(x;M,h)\mid x<M\leq(x+y)/2]+\left[\Phi\left(\frac{y-\mu}{\sigma}\right)-\Phi\left(\frac{(x+y)/2-\mu}{\sigma}\right)\right]\mathbb{E}[\varphi(y;M,h)\mid(x+y)/2<M\leq y]+1-\Phi\left(\frac{y-\mu}{\sigma}\right). \tag{53}
$$
Conditionally on $x<M\leq(x+y)/2$, $M$ has a truncated normal distribution on $(x,(x+y)/2]$ with pdf
$$
f(m)=\frac{1}{\sigma\sqrt{2\pi}}\frac{\exp\left(\frac{-(m-\mu)^{2}}{2\sigma^{2}}\right)}{\Phi\left(\frac{(x+y)/2-\mu}{\sigma}\right)-\Phi\left(\frac{x-\mu}{\sigma}\right)}\mathbf{1}_{(x,(x+y)/2]}(m).
$$
Consequently,
$$
\mathbb{E}[\varphi(x;M,h)\mid x<M\leq(x+y)/2]=\frac{1}{\sigma\sqrt{2\pi}}\frac{1}{\Phi\left(\frac{(x+y)/2-\mu}{\sigma}\right)-\Phi\left(\frac{x-\mu}{\sigma}\right)}\underbrace{\int_{x}^{(x+y)/2}\exp\left(-\frac{h}{2}(x-m)^{2}\right)\exp\left(-\frac{(m-\mu)^{2}}{2\sigma^{2}}\right)dm}_{I^{\prime}}. \tag{54}
$$
The integral in (54) is, with the same notations as before,
$$
I^{\prime}=\sigma_{0}\sqrt{2\pi}\left[\Phi\left(\frac{(x+y)/2-\mu_{0}}{\sigma_{0}}\right)-\Phi\left(\frac{x-\mu_{0}}{\sigma_{0}}\right)\right]\exp\left(-\frac{(x-\mu)^{2}}{2(h^{-1}+\sigma^{2})}\right).
$$
Consequently,
$$
\mathbb{E}[\varphi(x;M,h)\mid x<M\leq(x+y)/2]=\frac{1}{\Phi\left(\frac{(x+y)/2-\mu}{\sigma}\right)-\Phi\left(\frac{x-\mu}{\sigma}\right)}pl_{\widetilde{X}}(x)\left[\Phi\left(\frac{(x+y)/2-\mu+h\sigma^{2}(y-x)/2}{\sigma\sqrt{h\sigma^{2}+1}}\right)-\Phi\left(\frac{x-\mu}{\sigma\sqrt{h\sigma^{2}+1}}\right)\right]. \tag{55}
$$
Similarly, we find
$$
\mathbb{E}[\varphi(y;M,h)\mid(x+y)/2<M\leq y]=\frac{1}{\Phi\left(\frac{y-\mu}{\sigma}\right)-\Phi\left(\frac{(x+y)/2-\mu}{\sigma}\right)}pl_{\widetilde{X}}(y)\left[\Phi\left(\frac{y-\mu}{\sigma\sqrt{h\sigma^{2}+1}}\right)-\Phi\left(\frac{(x+y)/2-\mu-(y-x)h\sigma^{2}/2}{\sigma\sqrt{h\sigma^{2}+1}}\right)\right]. \tag{56}
$$
The expressions of $Pl_{\widetilde{X}}((-\infty,x]\cup[y,+\infty))$ and $Bel_{\widetilde{X}}([x,y])$ follow.
Appendix E Proof of Proposition 8
Let ${\widetilde{X}}(\omega)=\textsf{GFN}(M(\omega),h)$ be the image of $\omega\in\Omega$ by ${\widetilde{X}}$, with $M\sim N(\mu,\sigma^{2})$. For any $\alpha\in(0,1]$, its $\alpha$-cut is the random interval
$$
{}^{\alpha}{\widetilde{X}}(\omega)=\left[M(\omega)-\sqrt{\frac{-2\ln\alpha}{h}},M(\omega)+\sqrt{\frac{-2\ln\alpha}{h}}\right].
$$
Consequently, from (16), the lower and upper expectations of ${\widetilde{X}}$ are
$$
\mathbb{E}_{*}({\widetilde{X}})=\mu-\int_{0}^{1}\sqrt{\frac{-2\ln\alpha}{h}}\,d\alpha
$$
and
$$
\mathbb{E}^{*}({\widetilde{X}})=\mu+\int_{0}^{1}\sqrt{\frac{-2\ln\alpha}{h}}\,d\alpha.
$$
By the change of variable $\beta=\sqrt{-2(\ln\alpha)/h}$, we get
$$
\int_{0}^{1}\sqrt{\frac{-2\ln\alpha}{h}}\,d\alpha=h\int_{0}^{+\infty}\beta^{2}\exp\left(-\frac{h\beta^{2}}{2}\right)d\beta.
$$
Now, the second-order moment of the normal distribution $N(0,1/h)$ is
$$
\sqrt{\frac{h}{2\pi}}\int_{-\infty}^{+\infty}\beta^{2}\exp\left(-\frac{h\beta^{2}}{2}\right)d\beta=\frac{1}{h},
$$
so $\int_{0}^{+\infty}\beta^{2}\exp(-h\beta^{2}/2)\,d\beta=\frac{1}{h}\sqrt{\frac{\pi}{2h}}$, from which we get
$$
h\int_{0}^{+\infty}\beta^{2}\exp\left(-\frac{h\beta^{2}}{2}\right)d\beta=h\cdot\frac{1}{h}\sqrt{\frac{\pi}{2h}}=\sqrt{\frac{\pi}{2h}}.
$$
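The resulting value $\int_{0}^{1}\sqrt{-2\ln\alpha/h}\,d\alpha=\sqrt{\pi/(2h)}$ is easy to confirm numerically with a midpoint rule (the integrand has an integrable singularity at $\alpha=0$); the function name below is ours:

```python
import math

def half_width_integral(h, n=200000):
    # Midpoint-rule approximation of  \int_0^1 sqrt(-2 ln(a) / h) da
    da = 1.0 / n
    return sum(math.sqrt(-2.0 * math.log((i + 0.5) * da) / h)
               for i in range(n)) * da

h = 3.0
assert abs(half_width_integral(h) - math.sqrt(math.pi / (2 * h))) < 1e-4
```

This is the half-width of the interval $[\mathbb{E}_{*}({\widetilde{X}}),\mathbb{E}^{*}({\widetilde{X}})]$ around $\mu$; it shrinks as the precision $h$ increases.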
Appendix F Proof of Lemma 1
The conditional density of $(M_{1},M_{2})$ is
$$
f(m_{1},m_{2}\mid{\widetilde{F}})=\frac{f(m_{1},m_{2}){\widetilde{F}}(m_{1},m_{2})}{\iint f(m_{1},m_{2}){\widetilde{F}}(m_{1},m_{2})\,dm_{1}dm_{2}}. \tag{57}
$$
The numerator on the right-hand side of (57) is
$$
\begin{aligned}
&\frac{1}{2\pi\sigma_{1}\sigma_{2}}\exp\left\{-\frac{1}{2}\left[\left(\frac{m_{1}-\mu_{1}}{\sigma_{1}}\right)^{2}+\left(\frac{m_{2}-\mu_{2}}{\sigma_{2}}\right)^{2}\right]\right\}\exp\left\{-\frac{{\overline{h}}(m_{1}-m_{2})^{2}}{2}\right\}\\
&\quad=\frac{1}{2\pi\sigma_{1}\sigma_{2}}\exp\left\{-\frac{1}{2}\left[m_{1}^{2}\left(\frac{1}{\sigma_{1}^{2}}+{\overline{h}}\right)-\frac{2m_{1}\mu_{1}}{\sigma_{1}^{2}}+\frac{\mu_{1}^{2}}{\sigma_{1}^{2}}+m_{2}^{2}\left(\frac{1}{\sigma_{2}^{2}}+{\overline{h}}\right)-\frac{2m_{2}\mu_{2}}{\sigma_{2}^{2}}+\frac{\mu_{2}^{2}}{\sigma_{2}^{2}}-2{\overline{h}}m_{1}m_{2}\right]\right\}.
\end{aligned} \tag{58}
$$
Now, the two-dimensional Gaussian density with parameters $({\widetilde{\mu}}_{1},{\widetilde{\mu}}_{2},{\widetilde{\sigma}}_{1},{\widetilde{\sigma}}_{2},\rho)$ equals
$$
\frac{1}{2\pi{\widetilde{\sigma}}_{1}{\widetilde{\sigma}}_{2}\sqrt{1-\rho^{2}}}\exp\left\{-\frac{1}{2(1-\rho^{2})}\left[\left(\frac{m_{1}-{\widetilde{\mu}}_{1}}{{\widetilde{\sigma}}_{1}}\right)^{2}-2\rho\left(\frac{m_{1}-{\widetilde{\mu}}_{1}}{{\widetilde{\sigma}}_{1}}\right)\left(\frac{m_{2}-{\widetilde{\mu}}_{2}}{{\widetilde{\sigma}}_{2}}\right)+\left(\frac{m_{2}-{\widetilde{\mu}}_{2}}{{\widetilde{\sigma}}_{2}}\right)^{2}\right]\right\}. \tag{59}
$$
Equating the second- and first-order terms inside the exponentials in (58) and (59) gives us
$$
{\widetilde{\sigma}}_{1}^{2}=\frac{1}{1-\rho^{2}}\left(\frac{1}{\sigma_{1}^{2}}+{\overline{h}}\right)^{-1}, \tag{60a}
$$
$$
{\widetilde{\sigma}}_{2}^{2}=\frac{1}{1-\rho^{2}}\left(\frac{1}{\sigma_{2}^{2}}+{\overline{h}}\right)^{-1}, \tag{60b}
$$
$$
\rho=\frac{{\overline{h}}\sigma_{1}\sigma_{2}}{\sqrt{(1+{\overline{h}}\sigma_{1}^{2})(1+{\overline{h}}\sigma_{2}^{2})}}, \tag{60c}
$$
$$
{\widetilde{\mu}}_{1}=\frac{\mu_{1}{\widetilde{\sigma}}_{1}^{2}}{\sigma_{1}^{2}}+\rho\mu_{2}\frac{{\widetilde{\sigma}}_{1}{\widetilde{\sigma}}_{2}}{\sigma_{2}^{2}}, \tag{60d}
$$
$$
{\widetilde{\mu}}_{2}=\frac{\mu_{2}{\widetilde{\sigma}}_{2}^{2}}{\sigma_{2}^{2}}+\rho\mu_{1}\frac{{\widetilde{\sigma}}_{1}{\widetilde{\sigma}}_{2}}{\sigma_{1}^{2}}. \tag{60e}
$$
Replacing $\rho$ by its expression (60c) in (60a) and (60b) yields (28c) and (28d). Replacing $\rho$, ${\widetilde{\sigma}}_{1}$ and ${\widetilde{\sigma}}_{2}$ by their expressions in (60d) and (60e) gives (28a) and (28b).
Finally, the degree of conflict between GRFNs ${\widetilde{X}}_{1}\sim{\widetilde{N}}(\mu_{1},\sigma_{1}^{2},h_{1})$ and ${\widetilde{X}}_{2}\sim{\widetilde{N}}(\mu_{2},\sigma_{2}^{2},h_{2})$ is
$$
\kappa=1-(P_{1}\times P_{2})({\widetilde{\Theta}}^{*}),
$$
with
$$
(P_{1}\times P_{2})({\widetilde{\Theta}}^{*})=\iint f(m_{1},m_{2}){\widetilde{F}}(m_{1},m_{2})\,dm_{1}dm_{2}.
$$
Taking the ratio of (58) to (59), we get
$$
\iint f(m_{1},m_{2}){\widetilde{F}}(m_{1},m_{2})\,dm_{1}dm_{2}=\frac{{\widetilde{\sigma}}_{1}{\widetilde{\sigma}}_{2}}{\sigma_{1}\sigma_{2}}\sqrt{1-\rho^{2}}\exp\left\{-\frac{1}{2}\left[\frac{\mu_{1}^{2}}{\sigma_{1}^{2}}+\frac{\mu_{2}^{2}}{\sigma_{2}^{2}}\right]+\frac{1}{2(1-\rho^{2})}\left[\frac{{\widetilde{\mu}}_{1}^{2}}{{\widetilde{\sigma}}_{1}^{2}}+\frac{{\widetilde{\mu}}_{2}^{2}}{{\widetilde{\sigma}}_{2}^{2}}-2\rho\frac{{\widetilde{\mu}}_{1}{\widetilde{\mu}}_{2}}{{\widetilde{\sigma}}_{1}{\widetilde{\sigma}}_{2}}\right]\right\}.
$$
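Expressions (60a)-(60e) and the closed-form value of the normalizing integral can be cross-checked against an independent identity: since $M_1$ and $M_2$ are independent, $\iint f{\widetilde{F}}\,dm_{1}dm_{2}=\mathbb{E}[\exp(-{\overline{h}}(M_{1}-M_{2})^{2}/2)]$ with $M_{1}-M_{2}\sim N(\mu_{1}-\mu_{2},\sigma_{1}^{2}+\sigma_{2}^{2})$, which has a simple Gaussian closed form. A Python sketch (variable names ours):

```python
import math

mu1, mu2, s1, s2, hbar = 0.5, -0.8, 1.1, 0.7, 1.3

# Parameters (60a)-(60e)
rho = hbar * s1 * s2 / math.sqrt((1 + hbar * s1 ** 2) * (1 + hbar * s2 ** 2))
st1sq = (1 / (1 - rho ** 2)) / (1 / s1 ** 2 + hbar)
st2sq = (1 / (1 - rho ** 2)) / (1 / s2 ** 2 + hbar)
st1, st2 = math.sqrt(st1sq), math.sqrt(st2sq)
mt1 = mu1 * st1sq / s1 ** 2 + rho * mu2 * st1 * st2 / s2 ** 2
mt2 = mu2 * st2sq / s2 ** 2 + rho * mu1 * st1 * st2 / s1 ** 2

# Closed-form value of the integral, i.e. the ratio of (58) to (59)
val = (st1 * st2 / (s1 * s2)) * math.sqrt(1 - rho ** 2) * math.exp(
    -0.5 * (mu1 ** 2 / s1 ** 2 + mu2 ** 2 / s2 ** 2)
    + (0.5 / (1 - rho ** 2)) * (mt1 ** 2 / st1sq + mt2 ** 2 / st2sq
                                - 2 * rho * mt1 * mt2 / (st1 * st2)))

# Independent check: E[exp(-hbar (M1-M2)^2 / 2)] with M1-M2 ~ N(mu1-mu2, s1^2+s2^2)
ssq = s1 ** 2 + s2 ** 2
ref = math.exp(-hbar * (mu1 - mu2) ** 2 / (2 * (1 + hbar * ssq))) / math.sqrt(1 + hbar * ssq)
assert abs(val - ref) < 1e-10
```

The degree of conflict is then $\kappa = 1 - $ this value.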
Appendix G Proof of Proposition 10
From (29), $h_{12}=+\infty$ and the combined GRFN ${\widetilde{N}}({\widetilde{\mu}}_{12},{\widetilde{\sigma}}_{12}^{2},h_{12})$ is probabilistic. From (30) and (31),
$$
{\widetilde{\mu}}_{12}=\lim_{h_{1}\rightarrow+\infty}\frac{{\widetilde{\mu}}_{1}+\frac{h_{2}}{h_{1}}{\widetilde{\mu}}_{2}}{1+\frac{h_{2}}{h_{1}}}={\widetilde{\mu}}_{1},
$$
and
$$
{\widetilde{\sigma}}_{12}^{2}=\lim_{h_{1}\rightarrow+\infty}\frac{{\widetilde{\sigma}}^{2}_{1}+\frac{h^{2}_{2}}{h^{2}_{1}}{\widetilde{\sigma}}^{2}_{2}+2\rho\frac{h_{2}}{h_{1}}{\widetilde{\sigma}}_{1}{\widetilde{\sigma}}_{2}}{\left(1+\frac{h_{2}}{h_{1}}\right)^{2}}={\widetilde{\sigma}}^{2}_{1}.
$$
From (28f),
$$
{\overline{h}}=\lim_{h_{1}\rightarrow+\infty}\frac{h_{2}}{1+\frac{h_{2}}{h_{1}}}=h_{2}.
$$
From (28a) and (28c),
$$
{\widetilde{\mu}}_{1}=\frac{\mu_{1}(1+h_{2}\sigma_{2}^{2})+\mu_{2}h_{2}\sigma_{1}^{2}}{1+h_{2}(\sigma_{1}^{2}+\sigma_{2}^{2})}
$$
and
$$
{\widetilde{\sigma}}_{1}^{2}=\frac{\sigma_{1}^{2}(1+h_{2}\sigma_{2}^{2})}{1+h_{2}(\sigma_{1}^{2}+\sigma_{2}^{2})}.
$$
Now, using Proposition 3, the product of the probability density of $X_{1}$ and the contour function of ${\widetilde{X}}_{2}$ can be written as
$$
f_{X_{1}}(x)pl_{{\widetilde{X}}_{2}}(x)\propto\exp\left(-\frac{1}{2}\frac{(x-\mu_{1})^{2}}{\sigma_{1}^{2}}\right)\exp\left(-\frac{h_{2}(x-\mu_{2})^{2}}{2(1+h_{2}\sigma_{2}^{2})}\right)\propto\exp\left(-\frac{(x-\mu_{12})^{2}}{2\sigma_{12}^{2}}\right),
$$
with
$$
\frac{1}{\sigma_{12}^{2}}=\frac{1}{\sigma_{1}^{2}}+\frac{h_{2}}{1+h_{2}\sigma_{2}^{2}}=\frac{1+h_{2}(\sigma_{1}^{2}+\sigma_{2}^{2})}{\sigma_{1}^{2}(1+h_{2}\sigma_{2}^{2})}
$$
and
$$
\mu_{12}=\frac{\frac{1}{\sigma_{1}^{2}}\mu_{1}+\frac{h_{2}}{1+h_{2}\sigma_{2}^{2}}\mu_{2}}{\frac{1}{\sigma_{1}^{2}}+\frac{h_{2}}{1+h_{2}\sigma_{2}^{2}}}=\frac{\mu_{1}(1+h_{2}\sigma_{2}^{2})+\mu_{2}h_{2}\sigma_{1}^{2}}{1+h_{2}(\sigma_{1}^{2}+\sigma_{2}^{2})}.
$$
We can check that $\mu_{12}={\widetilde{\mu}}_{1}$ and $\sigma_{12}^{2}={\widetilde{\sigma}}_{1}^{2}$.
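That final check amounts to simple algebra, which can be carried out mechanically; the sketch below (variable names ours, sample parameters arbitrary) verifies the identification numerically:

```python
mu1, mu2, sig1sq, sig2sq, h2 = 0.4, -1.1, 0.8, 1.9, 2.3

# mu~1 and sigma~1^2 in the limit h1 -> infinity (so that hbar = h2)
den = 1 + h2 * (sig1sq + sig2sq)
mu_t1 = (mu1 * (1 + h2 * sig2sq) + mu2 * h2 * sig1sq) / den
sig_t1sq = sig1sq * (1 + h2 * sig2sq) / den

# Product of the density of X1 with the contour function of X~2 (Proposition 3)
inv_s12 = 1 / sig1sq + h2 / (1 + h2 * sig2sq)
sig12sq = 1 / inv_s12
mu12 = sig12sq * (mu1 / sig1sq + mu2 * h2 / (1 + h2 * sig2sq))

assert abs(mu12 - mu_t1) < 1e-12 and abs(sig12sq - sig_t1sq) < 1e-12
```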
Appendix H Proof of Proposition 12
We have
$$
\begin{aligned}
pl_{\widetilde{X}}({\boldsymbol{x}}) &=\mathbb{E}_{\boldsymbol{M}}[\varphi({\boldsymbol{x}};{\boldsymbol{M}},{\boldsymbol{H}})]\\
&=\int_{\mathbb{R}^{p}}\varphi({\boldsymbol{x}};{\boldsymbol{m}},{\boldsymbol{H}})\phi({\boldsymbol{m}};\boldsymbol{\mu},\boldsymbol{\Sigma})\,d{\boldsymbol{m}}\\
&=\frac{1}{(2\pi)^{p/2}|\boldsymbol{\Sigma}|^{1/2}}\int_{\mathbb{R}^{p}}\exp\left(-\frac{1}{2}({\boldsymbol{x}}-{\boldsymbol{m}})^{T}{\boldsymbol{H}}({\boldsymbol{x}}-{\boldsymbol{m}})\right)\exp\left(-\frac{1}{2}({\boldsymbol{m}}-\boldsymbol{\mu})^{T}\boldsymbol{\Sigma}^{-1}({\boldsymbol{m}}-\boldsymbol{\mu})\right)d{\boldsymbol{m}}.
\end{aligned} \tag{61}
$$
From Proposition 3, the integrand can be written as
$$
\exp\left(-\frac{1}{2}({\boldsymbol{m}}-\boldsymbol{\mu}_{0})^{T}\boldsymbol{\Sigma}_{0}^{-1}({\boldsymbol{m}}-\boldsymbol{\mu}_{0})\right)\exp\left(-\frac{1}{2}({\boldsymbol{x}}-\boldsymbol{\mu})^{T}({\boldsymbol{H}}^{-1}+\boldsymbol{\Sigma})^{-1}({\boldsymbol{x}}-\boldsymbol{\mu})\right),
$$
with
$$
\boldsymbol{\mu}_{0}=({\boldsymbol{H}}+\boldsymbol{\Sigma}^{-1})^{-1}({\boldsymbol{H}}{\boldsymbol{x}}+\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu})
$$
and
$$
\boldsymbol{\Sigma}_{0}=({\boldsymbol{H}}+\boldsymbol{\Sigma}^{-1})^{-1}.
$$
Consequently,
$$
\begin{aligned}
pl_{\widetilde{X}}({\boldsymbol{x}}) &=\frac{1}{(2\pi)^{p/2}|\boldsymbol{\Sigma}|^{1/2}}\exp\left(-\frac{1}{2}({\boldsymbol{x}}-\boldsymbol{\mu})^{T}({\boldsymbol{H}}^{-1}+\boldsymbol{\Sigma})^{-1}({\boldsymbol{x}}-\boldsymbol{\mu})\right)\underbrace{\int_{\mathbb{R}^{p}}\exp\left(-\frac{1}{2}({\boldsymbol{m}}-\boldsymbol{\mu}_{0})^{T}\boldsymbol{\Sigma}_{0}^{-1}({\boldsymbol{m}}-\boldsymbol{\mu}_{0})\right)d{\boldsymbol{m}}}_{(2\pi)^{p/2}|\boldsymbol{\Sigma}_{0}|^{1/2}}\\
&=\left(\frac{|\boldsymbol{\Sigma}_{0}|}{|\boldsymbol{\Sigma}|}\right)^{1/2}\exp\left(-\frac{1}{2}({\boldsymbol{x}}-\boldsymbol{\mu})^{T}({\boldsymbol{H}}^{-1}+\boldsymbol{\Sigma})^{-1}({\boldsymbol{x}}-\boldsymbol{\mu})\right)\\
&=\frac{1}{|{\boldsymbol{I}}_{p}+\boldsymbol{\Sigma}{\boldsymbol{H}}|^{1/2}}\exp\left(-\frac{1}{2}({\boldsymbol{x}}-\boldsymbol{\mu})^{T}({\boldsymbol{H}}^{-1}+\boldsymbol{\Sigma})^{-1}({\boldsymbol{x}}-\boldsymbol{\mu})\right).
\end{aligned} \tag{65}
$$
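The last step of (65) uses the determinant identity $|\boldsymbol{\Sigma}_{0}|/|\boldsymbol{\Sigma}|=1/|{\boldsymbol{I}}_{p}+\boldsymbol{\Sigma}{\boldsymbol{H}}|$, which follows from $|\boldsymbol{\Sigma}|\,|{\boldsymbol{H}}+\boldsymbol{\Sigma}^{-1}|=|\boldsymbol{\Sigma}{\boldsymbol{H}}+{\boldsymbol{I}}_{p}|$. A dependency-free numerical check for $p=2$ (helper functions ours):

```python
# Minimal 2x2 matrix helpers, to avoid external dependencies
def det2(A): return A[0][0] * A[1][1] - A[0][1] * A[1][0]
def inv2(A):
    d = det2(A)
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]
def add2(A, B): return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
def mul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Sigma = [[2.0, 0.3], [0.3, 1.0]]   # covariance matrix (positive definite)
H = [[1.5, -0.2], [-0.2, 0.7]]     # precision matrix
I2 = [[1.0, 0.0], [0.0, 1.0]]

Sigma0 = inv2(add2(H, inv2(Sigma)))          # Sigma_0 = (H + Sigma^{-1})^{-1}
lhs = det2(Sigma0) / det2(Sigma)             # |Sigma_0| / |Sigma|
rhs = 1.0 / det2(add2(I2, mul2(Sigma, H)))   # 1 / |I_p + Sigma H|
assert abs(lhs - rhs) < 1e-12
```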
Appendix I Proof of Lemma 2
The conditional density of ${\boldsymbol{M}}=({\boldsymbol{M}}_{1},{\boldsymbol{M}}_{2})$ is
$$
f({\boldsymbol{m}}_{1},{\boldsymbol{m}}_{2}\mid{\widetilde{F}})=\frac{f({%
\boldsymbol{m}}_{1},{\boldsymbol{m}}_{2}){\widetilde{F}}({\boldsymbol{m}}_{1},%
{\boldsymbol{m}}_{2})}{\int_{\mathbb{R}^{2p}}f({\boldsymbol{m}}_{1},{%
\boldsymbol{m}}_{2}){\widetilde{F}}({\boldsymbol{m}}_{1},{\boldsymbol{m}}_{2})%
d{\boldsymbol{m}}_{1}d{\boldsymbol{m}}_{2}}. \tag{69}
$$
The numerator on the right-hand side of (69) is
$$
f({\boldsymbol{m}}_{1},{\boldsymbol{m}}_{2}){\widetilde{F}}({\boldsymbol{m}}_{%
1},{\boldsymbol{m}}_{2})=\phi({\boldsymbol{m}}_{1};\boldsymbol{\mu}_{1},%
\boldsymbol{\Sigma}_{1})\phi({\boldsymbol{m}}_{2};\boldsymbol{\mu}_{2},%
\boldsymbol{\Sigma}_{2})\times\\
\exp\left\{-\frac{1}{2}({\boldsymbol{m}}_{1}-{\boldsymbol{m}}_{2})^{T}{%
\overline{{\boldsymbol{H}}}}({\boldsymbol{m}}_{1}-{\boldsymbol{m}}_{2})\right\}, \tag{70}
$$
which can be written as
$$
f({\boldsymbol{m}}_{1},{\boldsymbol{m}}_{2}){\widetilde{F}}({\boldsymbol{m}}_{%
1},{\boldsymbol{m}}_{2})=\frac{1}{(2\pi)^{p}|\boldsymbol{\Sigma}_{1}%
\boldsymbol{\Sigma}_{2}|^{1/2}}\exp\left(-\frac{Z}{2}\right)
$$
with
$$
Z={\boldsymbol{m}}_{1}^{T}(\boldsymbol{\Sigma}_{1}^{-1}+{\overline{{%
\boldsymbol{H}}}}){\boldsymbol{m}}_{1}+{\boldsymbol{m}}_{2}^{T}(\boldsymbol{%
\Sigma}_{2}^{-1}+{\overline{{\boldsymbol{H}}}}){\boldsymbol{m}}_{2}-2{%
\boldsymbol{m}}_{1}^{T}{\overline{{\boldsymbol{H}}}}{\boldsymbol{m}}_{2}-2{%
\boldsymbol{m}}_{1}^{T}\boldsymbol{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}-\\
2{\boldsymbol{m}}_{2}^{T}\boldsymbol{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}+%
\boldsymbol{\mu}_{1}^{T}\boldsymbol{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}+%
\boldsymbol{\mu}_{2}^{T}\boldsymbol{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}. \tag{71}
$$
Now, the $2p$ -dimensional Gaussian density with mean ${\widetilde{\boldsymbol{\mu}}}$ and covariance matrix ${\widetilde{\boldsymbol{\Sigma}}}$ equals
$$
\phi({\boldsymbol{m}};{\widetilde{\boldsymbol{\mu}}},{\widetilde{\boldsymbol{%
\Sigma}}})=\frac{1}{(2\pi)^{p}|{\widetilde{\boldsymbol{\Sigma}}}|^{1/2}}\exp%
\left\{-\frac{1}{2}({\boldsymbol{m}}-\boldsymbol{\mu})^{T}{\widetilde{%
\boldsymbol{\Sigma}}}^{-1}({\boldsymbol{m}}-\boldsymbol{\mu})\right\}. \tag{72}
$$
Decomposing vector ${\widetilde{\boldsymbol{\mu}}}$ as ${\widetilde{\boldsymbol{\mu}}}=({\widetilde{\boldsymbol{\mu}}}_{1},{\widetilde{\boldsymbol{\mu}}}_{2})$, with ${\widetilde{\boldsymbol{\mu}}}_{1},{\widetilde{\boldsymbol{\mu}}}_{2}\in\mathbb{R}^{p}$, and ${\widetilde{\boldsymbol{\Sigma}}}^{-1}$ as
$$
{\widetilde{\boldsymbol{\Sigma}}}^{-1}=\begin{pmatrix}{\boldsymbol{A}}&{\boldsymbol{B}}\\
{\boldsymbol{B}}&{\boldsymbol{C}}\end{pmatrix},
$$
where ${\boldsymbol{A}}$, ${\boldsymbol{B}}$ and ${\boldsymbol{C}}$ are $p\times p$ matrices, we can rewrite (72) as
$$
\phi({\boldsymbol{m}};{\widetilde{\boldsymbol{\mu}}},{\widetilde{\boldsymbol{\Sigma}}})=\frac{1}{(2\pi)^{p}|{\widetilde{\boldsymbol{\Sigma}}}|^{1/2}}\exp\left(-\frac{1}{2}Z^{\prime}\right)
$$
with
$$
Z^{\prime}={\boldsymbol{m}}_{1}^{T}{\boldsymbol{A}}{\boldsymbol{m}}_{1}-2{\boldsymbol{m}}_{1}^{T}{\boldsymbol{A}}{\widetilde{\boldsymbol{\mu}}}_{1}+{\widetilde{\boldsymbol{\mu}}}_{1}^{T}{\boldsymbol{A}}{\widetilde{\boldsymbol{\mu}}}_{1}+{\boldsymbol{m}}_{2}^{T}{\boldsymbol{C}}{\boldsymbol{m}}_{2}-2{\boldsymbol{m}}_{2}^{T}{\boldsymbol{C}}{\widetilde{\boldsymbol{\mu}}}_{2}+{\widetilde{\boldsymbol{\mu}}}_{2}^{T}{\boldsymbol{C}}{\widetilde{\boldsymbol{\mu}}}_{2}+2{\boldsymbol{m}}_{2}^{T}{\boldsymbol{B}}{\boldsymbol{m}}_{1}-2{\boldsymbol{m}}_{2}^{T}{\boldsymbol{B}}{\widetilde{\boldsymbol{\mu}}}_{1}-2{\boldsymbol{m}}_{1}^{T}{\boldsymbol{B}}{\widetilde{\boldsymbol{\mu}}}_{2}+2{\widetilde{\boldsymbol{\mu}}}_{2}^{T}{\boldsymbol{B}}{\widetilde{\boldsymbol{\mu}}}_{1}. \tag{73}
$$
Equating the second-order terms in (71) and (73), we get
$$
{\boldsymbol{A}}=\boldsymbol{\Sigma}_{1}^{-1}+{\overline{{\boldsymbol{H}}}},\quad{\boldsymbol{C}}=\boldsymbol{\Sigma}_{2}^{-1}+{\overline{{\boldsymbol{H}}}},\quad{\boldsymbol{B}}=-{\overline{{\boldsymbol{H}}}}.
$$
Equating the first-order terms, we get
$$
\boldsymbol{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}={\boldsymbol{A}}{\widetilde{\boldsymbol{\mu}}}_{1}+{\boldsymbol{B}}{\widetilde{\boldsymbol{\mu}}}_{2}=(\boldsymbol{\Sigma}_{1}^{-1}+{\overline{{\boldsymbol{H}}}}){\widetilde{\boldsymbol{\mu}}}_{1}-{\overline{{\boldsymbol{H}}}}{\widetilde{\boldsymbol{\mu}}}_{2}, \tag{74a}
$$
$$
\boldsymbol{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}={\boldsymbol{B}}{\widetilde{\boldsymbol{\mu}}}_{1}+{\boldsymbol{C}}{\widetilde{\boldsymbol{\mu}}}_{2}=-{\overline{{\boldsymbol{H}}}}{\widetilde{\boldsymbol{\mu}}}_{1}+(\boldsymbol{\Sigma}_{2}^{-1}+{\overline{{\boldsymbol{H}}}}){\widetilde{\boldsymbol{\mu}}}_{2}. \tag{74b}
$$
Multiplying both sides of (74a) and (74b) by ${\overline{{\boldsymbol{H}}}}^{-1}$, we get
$$
({\overline{{\boldsymbol{H}}}}^{-1}\boldsymbol{\Sigma}_{1}^{-1}+{\boldsymbol{I}}_{p}){\widetilde{\boldsymbol{\mu}}}_{1}-{\widetilde{\boldsymbol{\mu}}}_{2}={\overline{{\boldsymbol{H}}}}^{-1}\boldsymbol{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1},\qquad-{\widetilde{\boldsymbol{\mu}}}_{1}+({\overline{{\boldsymbol{H}}}}^{-1}\boldsymbol{\Sigma}_{2}^{-1}+{\boldsymbol{I}}_{p}){\widetilde{\boldsymbol{\mu}}}_{2}={\overline{{\boldsymbol{H}}}}^{-1}\boldsymbol{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}, \tag{75}
$$
which can be written in matrix form as
$$
\begin{pmatrix}{\overline{{\boldsymbol{H}}}}^{-1}\boldsymbol{\Sigma}_{1}^{-1}+{\boldsymbol{I}}_{p}&-{\boldsymbol{I}}_{p}\\
-{\boldsymbol{I}}_{p}&{\overline{{\boldsymbol{H}}}}^{-1}\boldsymbol{\Sigma}_{2}^{-1}+{\boldsymbol{I}}_{p}\end{pmatrix}\begin{pmatrix}{\widetilde{\boldsymbol{\mu}}}_{1}\\
{\widetilde{\boldsymbol{\mu}}}_{2}\end{pmatrix}=\begin{pmatrix}{\overline{{\boldsymbol{H}}}}^{-1}\boldsymbol{\Sigma}_{1}^{-1}&\boldsymbol{0}\\
\boldsymbol{0}&{\overline{{\boldsymbol{H}}}}^{-1}\boldsymbol{\Sigma}_{2}^{-1}\end{pmatrix}\begin{pmatrix}\boldsymbol{\mu}_{1}\\
\boldsymbol{\mu}_{2}\end{pmatrix},
$$
from which we obtain (36).
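The completing-the-square step behind (74a) and (74b) can also be checked numerically (a sketch with randomly generated parameters, not part of the proof): with ${\widetilde{\boldsymbol{\mu}}}$ solving the first-order conditions, $Z$ must equal $({\boldsymbol{m}}-{\widetilde{\boldsymbol{\mu}}})^{T}{\widetilde{\boldsymbol{\Sigma}}}^{-1}({\boldsymbol{m}}-{\widetilde{\boldsymbol{\mu}}})$ plus a constant, for every ${\boldsymbol{m}}$. Here `Prec` stands for ${\widetilde{\boldsymbol{\Sigma}}}^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 2

def random_spd(p):
    # Random symmetric positive-definite p x p matrix
    M = rng.standard_normal((p, p))
    return M @ M.T + p * np.eye(p)

S1, S2, Hbar = random_spd(p), random_spd(p), random_spd(p)
mu1, mu2 = rng.standard_normal(p), rng.standard_normal(p)
S1i, S2i = np.linalg.inv(S1), np.linalg.inv(S2)

# Combined precision matrix (blocks A, B, C) and mean mu_t
Prec = np.block([[S1i + Hbar, -Hbar], [-Hbar, S2i + Hbar]])
b = np.concatenate([S1i @ mu1, S2i @ mu2])
mu_t = np.linalg.solve(Prec, b)   # solves the first-order conditions

def Z(m1, m2):
    # Quadratic form (71)
    return (m1 @ (S1i + Hbar) @ m1 + m2 @ (S2i + Hbar) @ m2
            - 2 * m1 @ Hbar @ m2 - 2 * m1 @ S1i @ mu1
            - 2 * m2 @ S2i @ mu2 + mu1 @ S1i @ mu1 + mu2 @ S2i @ mu2)

# Completing the square: Z(m) = (m - mu_t)^T Prec (m - mu_t) + const
const = mu1 @ S1i @ mu1 + mu2 @ S2i @ mu2 - mu_t @ Prec @ mu_t
for _ in range(5):
    m = rng.standard_normal(2 * p)
    assert np.isclose(Z(m[:p], m[p:]),
                      (m - mu_t) @ Prec @ (m - mu_t) + const)
```

The constant term `const` is exactly the bracketed expression appearing in the exponent of the integral at the end of this appendix.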
Finally, the degree of conflict between GRFVs ${\widetilde{X}}_{1}\sim{\widetilde{N}}(\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{1},{\boldsymbol{H}}_{1})$ and ${\widetilde{X}}_{2}\sim{\widetilde{N}}(\boldsymbol{\mu}_{2},\boldsymbol{\Sigma}_{2},{\boldsymbol{H}}_{2})$ is
$$
\kappa=1-(P_{1}\times P_{2})({\widetilde{\Theta}}^{*})=1-\int_{\mathbb{R}^{2p}}f({\boldsymbol{m}}_{1},{\boldsymbol{m}}_{2}){\widetilde{F}}({\boldsymbol{m}}_{1},{\boldsymbol{m}}_{2})\,d{\boldsymbol{m}}_{1}\,d{\boldsymbol{m}}_{2}.
$$
Taking the ratio of (70) to (72), which is constant in $({\boldsymbol{m}}_{1},{\boldsymbol{m}}_{2})$ since the first- and second-order terms of (71) and (73) coincide, and integrating over $\mathbb{R}^{2p}$, we get
$$
\int_{\mathbb{R}^{2p}}f({\boldsymbol{m}}_{1},{\boldsymbol{m}}_{2}){\widetilde{F}}({\boldsymbol{m}}_{1},{\boldsymbol{m}}_{2})\,d{\boldsymbol{m}}_{1}\,d{\boldsymbol{m}}_{2}=\sqrt{\frac{|{\widetilde{\boldsymbol{\Sigma}}}|}{|\boldsymbol{\Sigma}_{1}||\boldsymbol{\Sigma}_{2}|}}\exp\left\{-\frac{1}{2}\left[\boldsymbol{\mu}_{1}^{T}\boldsymbol{\Sigma}_{1}^{-1}\boldsymbol{\mu}_{1}+\boldsymbol{\mu}_{2}^{T}\boldsymbol{\Sigma}_{2}^{-1}\boldsymbol{\mu}_{2}-{\widetilde{\boldsymbol{\mu}}}^{T}{\widetilde{\boldsymbol{\Sigma}}}^{-1}{\widetilde{\boldsymbol{\mu}}}\right]\right\}. \tag{76}
$$
Appendix J Proof of Lemma 3
The membership function of the projection of fuzzy vector $\textsf{GFV}({\boldsymbol{m}},{\boldsymbol{H}})$ on $\Theta_{1}$ is
$$
\varphi({\boldsymbol{x}}_{1})=\max_{{\boldsymbol{x}}_{2}}\exp\left(-\frac{1}{2}({\boldsymbol{x}}-{\boldsymbol{m}})^{T}{\boldsymbol{H}}({\boldsymbol{x}}-{\boldsymbol{m}})\right)=\exp\left(-\frac{1}{2}\min_{{\boldsymbol{x}}_{2}}Z\right), \tag{77}
$$
with $Z=({\boldsymbol{x}}-{\boldsymbol{m}})^{T}{\boldsymbol{H}}({\boldsymbol{x}}-{\boldsymbol{m}})$. Now,
$$
Z=({\boldsymbol{x}}_{1}-{\boldsymbol{m}}_{1},{\boldsymbol{x}}_{2}-{\boldsymbol{m}}_{2})\begin{pmatrix}{\boldsymbol{H}}_{11}&{\boldsymbol{H}}_{12}\\
{\boldsymbol{H}}_{21}&{\boldsymbol{H}}_{22}\end{pmatrix}\begin{pmatrix}{\boldsymbol{x}}_{1}-{\boldsymbol{m}}_{1}\\
{\boldsymbol{x}}_{2}-{\boldsymbol{m}}_{2}\end{pmatrix}=({\boldsymbol{x}}_{1}-{\boldsymbol{m}}_{1})^{T}{\boldsymbol{H}}_{11}({\boldsymbol{x}}_{1}-{\boldsymbol{m}}_{1})+({\boldsymbol{x}}_{2}-{\boldsymbol{m}}_{2})^{T}{\boldsymbol{H}}_{21}({\boldsymbol{x}}_{1}-{\boldsymbol{m}}_{1})+({\boldsymbol{x}}_{1}-{\boldsymbol{m}}_{1})^{T}{\boldsymbol{H}}_{12}({\boldsymbol{x}}_{2}-{\boldsymbol{m}}_{2})+({\boldsymbol{x}}_{2}-{\boldsymbol{m}}_{2})^{T}{\boldsymbol{H}}_{22}({\boldsymbol{x}}_{2}-{\boldsymbol{m}}_{2}). \tag{78}
$$
Using ${\boldsymbol{H}}_{21}={\boldsymbol{H}}_{12}^{T}$, the gradient of $Z$ with respect to ${\boldsymbol{x}}_{2}$ can be written as
$$
\frac{\partial Z}{\partial{\boldsymbol{x}}_{2}}=2{\boldsymbol{H}}_{21}({\boldsymbol{x}}_{1}-{\boldsymbol{m}}_{1})+2{\boldsymbol{H}}_{22}({\boldsymbol{x}}_{2}-{\boldsymbol{m}}_{2}).
$$
Setting $\frac{\partial Z}{\partial{\boldsymbol{x}}_{2}}=\boldsymbol{0}$, and assuming ${\boldsymbol{H}}_{22}$ to be nonsingular, we get
$$
{\boldsymbol{x}}_{2}-{\boldsymbol{m}}_{2}=-{\boldsymbol{H}}_{22}^{-1}{\boldsymbol{H}}_{21}({\boldsymbol{x}}_{1}-{\boldsymbol{m}}_{1}). \tag{79}
$$
Replacing ${\boldsymbol{x}}_{2}-{\boldsymbol{m}}_{2}$ by its expression (79) in (78) and using (77), we finally get
$$
\varphi({\boldsymbol{x}}_{1})=\exp\left(-\frac{1}{2}({\boldsymbol{x}}_{1}-{\boldsymbol{m}}_{1})^{T}{\boldsymbol{H}}_{11}^{\prime}({\boldsymbol{x}}_{1}-{\boldsymbol{m}}_{1})\right),
$$
with
$$
{\boldsymbol{H}}^{\prime}_{11}={\boldsymbol{H}}_{11}-{\boldsymbol{H}}_{12}{\boldsymbol{H}}_{22}^{-1}{\boldsymbol{H}}_{21}.
$$
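The Schur-complement result of Lemma 3 can be checked numerically (a sketch with a randomly generated positive-definite ${\boldsymbol{H}}$, not part of the proof): plugging the minimizer (79) into the quadratic form should reproduce $({\boldsymbol{x}}_{1}-{\boldsymbol{m}}_{1})^{T}{\boldsymbol{H}}^{\prime}_{11}({\boldsymbol{x}}_{1}-{\boldsymbol{m}}_{1})$, and perturbing ${\boldsymbol{x}}_{2}$ should only increase $Z$:

```python
import numpy as np

rng = np.random.default_rng(2)
p1, p2 = 2, 3  # dimensions of x1 and x2

def random_spd(n):
    # Random symmetric positive-definite n x n matrix
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

H = random_spd(p1 + p2)
H11, H12 = H[:p1, :p1], H[:p1, p1:]
H21, H22 = H[p1:, :p1], H[p1:, p1:]

m = rng.standard_normal(p1 + p2)
m1, m2 = m[:p1], m[p1:]
x1 = rng.standard_normal(p1)

# Minimizer (79): x2 - m2 = -H22^{-1} H21 (x1 - m1)
x2 = m2 - np.linalg.inv(H22) @ H21 @ (x1 - m1)
x = np.concatenate([x1, x2])
Zmin = (x - m) @ H @ (x - m)

# Schur complement H'_11 = H11 - H12 H22^{-1} H21
H11p = H11 - H12 @ np.linalg.inv(H22) @ H21
assert np.isclose(Zmin, (x1 - m1) @ H11p @ (x1 - m1))

# Perturbations of x2 can only increase Z (it is a minimum)
for _ in range(5):
    xp = np.concatenate([x1, x2 + 0.1 * rng.standard_normal(p2)])
    assert (xp - m) @ H @ (xp - m) >= Zmin
```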