Fuzzy set-based possibility theory was introduced by L. Zadeh in 1978 (see [12]) and provides an alternative nonclassical means, other than probability theory, of modeling and studying "uncertainty". Zadeh established in [12] the principle of consistency between possibility and probability, according to which "anything that is probable must be possible". This principle is expressed as "P(A) ≤ Π(A)", and the probability P is then said to be coherent with the possibility Π. The finite case was studied by M. Delgado and S. Moral in [4], where they characterise the probabilities that are coherent with a given possibility; also, in [2], Castiñeira et al. went deeper into that case, defining a distance between possibility and probability measures, finding the closest probability to a given possibility and proving that they are coherent. The case of continuous universes has been addressed by several authors, including Dubois et al., who, in [7], examined possibility/probability transformations taking into account the principle of insufficient reason from possibility to probability, and the principle of maximum specificity from probability to possibility. Although it deals with the same subject, the purpose of this paper is different. Since density functions are to probabilities what possibility distributions are to possibility measures and, taking into account that a density function which takes the value 1 at some point determines both a probability measure and a possibility measure, we set out to analyse the coherence between these probability and possibility measures.

This paper is organized as follows. After a background section, in section 2 we prove that a possibility generates a degenerate probability defined on a σ-algebra, as in the finite case, where the coincident probabilities and possibilities were degenerate. In section 3, some functions are obtained which are both possibility distributions and density functions; in particular, some classic distributions are considered, and we then address the problem of coherence between the possibilities and probabilities generated by the same function. Some counterexamples show that, even in these cases, the coherence between the measures cannot be guaranteed. Finally, in section 4, we deal with the coherence between some classical probability distributions and their respective possibility measures, stressing the case of the normal law, where coherence does hold.

1. Preliminaries

Let F(E) be the set of all fuzzy sets in a universe of discourse E, with the partial order defined by P ≤ Q if and only if P(x) ≤ Q(x) for all x ∈ E, where P, Q ∈ [0,1]^E are the membership functions of P and Q, respectively. A fuzzy measure in F(E) is any function M: F(E) → [0,1] such that: m1) M(∅) = 0; m2) M(E) = 1; m3) if P ≤ Q, then M(P) ≤ M(Q).

Considering the standard fuzzy set theories (F(E), ∪, ∩, c), where the operations ∪, ∩ are defined by the t-norm T = Min and the t-conorm S = Max, and the complement c by means of a strong negation [11], on the one hand, a possibility in F(E) (see [5] and [6]) is any mapping Π: F(E) → [0,1] satisfying: p1) Π(E) = 1; p2) Π(∅) = 0; and p3) Π(P ∪ Q) = Max(Π(P), Π(Q)) for any P, Q ∈ F(E). On the other hand, a necessity in F(E) is any mapping N: F(E) → [0,1] satisfying: n1) N(E) = 1; n2) N(∅) = 0; and n3) N(P ∩ Q) = Min(N(P), N(Q)) for any P, Q ∈ F(E).

It is easy to check that both any Π and any N verify the axioms m1, m2 and m3, and are, therefore, fuzzy measures. Note that, given a possibility Π, the function N(P) = 1 − Π(P^c) is a necessity measure, the bidual necessity associated with Π.

Furthermore, if μ ∈ [0,1]^E is such that sup{μ(x), x ∈ E} = 1, the function Π_μ: F(E) → [0,1] defined for all P ∈ F(E) by Π_μ(P) = sup{Min(μ(x), P(x)), x ∈ E} is a possibility measure. The function μ is called the possibility distribution of Π_μ. Note that for all A ∈ P(E), where P(E) is the set of parts of E, the possibility measure given by the possibility distribution μ is Π_μ(A) = sup{μ(x), x ∈ A}.

Let M1 and M2 be two fuzzy measures; M1 is coherent with M2 if M1(P) ≤ M2(P) for all P ∈ F(E). When M1 = N is a necessity measure and M2 = Π is a possibility measure, it is clear that, generally, there is no coherence between N and Π, that is, neither N ≤ Π nor Π ≤ N. Nevertheless, when N = N_Π is the necessity measure associated with the possibility measure Π, N_Π ≤ Π, because 1 = Π(P ∪ P^c) = Max(Π(P), Π(P^c)) ≤ Π(P) + Π(P^c), and thus N_Π(P) = 1 − Π(P^c) ≤ Π(P).
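On a finite universe, this inequality can be checked directly by brute force. The sketch below is a minimal numerical check, not part of the paper: the three-point universe, the distribution μ and the grid of membership values are illustrative choices, with the standard strong negation n(t) = 1 − t.

```python
# Brute-force check, on a finite universe E = {0, 1, 2}, that the bidual
# necessity N_Pi(P) = 1 - Pi(P^c) never exceeds Pi(P); mu and the grid of
# membership values are illustrative choices.
import itertools

mu = {0: 0.4, 1: 1.0, 2: 0.7}            # possibility distribution, sup mu = 1

def poss(P):                              # Pi(P) = sup_x Min(mu(x), P(x))
    return max(min(mu[x], P[x]) for x in mu)

def nec(P):                               # N_Pi(P) = 1 - Pi(P^c), with n(t) = 1 - t
    Pc = {x: 1.0 - P[x] for x in P}
    return 1.0 - poss(Pc)

grid = [0.0, 0.25, 0.5, 0.75, 1.0]
for vals in itertools.product(grid, repeat=3):
    P = dict(zip(mu, vals))
    assert nec(P) <= poss(P) + 1e-12      # coherence N_Pi <= Pi
```

Since μ attains the value 1, the inequality holds for every fuzzy set on the grid, not only for crisp ones.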

As the purpose of this paper is to compare possibility and probability measures, we will consider the possibilities as being restricted to classical sets, that is, to σ-algebras A ⊆ P(E). Recall that A is a σ-algebra if for any A ∈ A its complement A^c ∈ A, and for any countable family {An}n∈N ⊆ A it is ∪n An ∈ A. Moreover, P: A → [0,1] is a probability measure if P(E) = 1 and P is σ-additive, that is, for any {An}n∈N ⊆ A such that Ai ∩ Aj = ∅ if i ≠ j, P(∪n An) = Σn P(An) holds.

We will consider Borel's σ-algebra in R, that is, the smallest σ-algebra that contains the semi-ring {[a,b); a, b ∈ R with a < b} or, alternatively, the smallest σ-algebra that contains the open sets of R, which is usually denoted by B. It is well known that every probability measure P: B → [0,1] is univocally determined by a distribution function F: R → [0,1] ([10]), and if F'(x) = f(x) exists at "almost any" point, then P([a,b]) = ∫_a^b f(x)dx (*).

Generally, if f: R → R+ is such that ∫_{−∞}^{+∞} f(x)dx = 1 (that is, f is a density function), then f defines, as in (*), a probability measure on the Borel σ-algebra of R. Note that, pursuant to the measure extension theorems, every probability on (R, B) is determined by ascertaining its values on the intervals [a,b].

Fourth International Conference I.TECH 2006

2. Probability Generated by a Possibility Measure

Let μ ∈ [0,1]^R be such that sup{μ(x), x ∈ R} = 1, and let us consider the possibility measure generated by μ on the crisp sets of R, that is, Π: P(R) → [0,1] defined for each A ∈ P(R) by Π(A) = sup{μ(x), x ∈ A}, which verifies:

1) Π(∅) = 0; 2) Monotonicity: if A ⊆ B, then Π(A) ≤ Π(B); 3) Subadditivity: Π(∪n An) ≤ Σn Π(An).

That is, Π is an exterior measure in R which also verifies Π(R) = 1.
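As a quick sanity check of these three properties, the following sketch (not from the paper) verifies monotonicity and subadditivity numerically, with an illustrative μ and crisp subsets of R represented by finite samples of points:

```python
# Numerical check that Pi(A) = sup{mu(x), x in A} is monotone and
# subadditive; mu is an illustrative distribution with sup mu = 1, and
# crisp subsets of R are represented by finite samples of points.
import math
import random

def mu(x):
    return math.exp(-abs(x))              # mu(0) = 1, so sup mu = 1

def Pi(A):
    return max((mu(x) for x in A), default=0.0)

random.seed(1)
for _ in range(200):
    A = {random.uniform(-5.0, 5.0) for _ in range(4)}
    B = A | {random.uniform(-5.0, 5.0) for _ in range(4)}
    assert Pi(A) <= Pi(B)                 # monotonicity: A is a subset of B
    assert Pi(A | B) <= Pi(A) + Pi(B)     # subadditivity for two sets
```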

It is known that any exterior measure M generates a σ-additive measure on the σ-algebra of the M-measurable sets (see [9], [10]) according to:

Carathéodory's Theorem: If M: P(E) → R+ is an exterior measure on a set E, then the family A = {A ∈ P(E); M(X) = M(X ∩ A) + M(X ∩ A^c) for all X ∈ P(E)} is a σ-algebra and the restriction of M to A is a σ-additive measure.

Carathéodory's method applied to the exterior measure Π generates a degenerate probability as follows:

Theorem 2.1. Let μ ∈ [0,1]^R be such that sup{μ(x), x ∈ R} = 1; then the family of Π-measurable sets is A = {A ∈ P(R); supp(μ) ⊆ A or A ⊆ (supp(μ))^c}, where supp(μ) = {x ∈ R; μ(x) ≠ 0} is the support of μ, and the possibility measure Π restricted to the measurable sets is a degenerate probability defined for each A ∈ A by Π(A) = 0 if A ⊆ (supp(μ))^c, and Π(A) = 1 if supp(μ) ⊆ A.

Proof: Let us see that A is the σ-algebra constructed by Carathéodory's method.

A is trivially a σ-algebra. The elements of A are Π-measurable; indeed, if supp(μ) ⊆ A, then for each X ⊆ R, Π(X) = sup{μ(x), x ∈ X} = sup{μ(x), x ∈ (A ∩ X) ∪ (A^c ∩ X)} = Max{sup{μ(x), x ∈ A ∩ X}, sup{μ(x), x ∈ A^c ∩ X}} = sup{μ(x), x ∈ A ∩ X} = Π(A ∩ X) = Π(A ∩ X) + Π(A^c ∩ X) holds, as Π(A^c ∩ X) = 0. Similarly, if A ⊆ (supp(μ))^c, we can prove that A is Π-measurable.

Furthermore, we will prove that the only Π-measurable sets are the elements of A: if A ⊆ R is Π-measurable then, in particular, 1 = Π(R) = Π(A) + Π(A^c) (**) holds, and two cases can be studied:

1) There exists x0 ∈ R such that μ(x0) = 1. If, moreover, x0 ∈ A, it follows from (**) that Π(A^c) = 0, which means that A^c ⊆ (supp(μ))^c and, therefore, supp(μ) ⊆ A and A ∈ A. Similarly, if x0 ∈ A^c, we have that A ⊆ (supp(μ))^c and A ∈ A.

2) For all x ∈ R, μ(x) < 1. In this case, μ reaches its supremum at +∞ or −∞, and this point at infinity is an accumulation point of A or of A^c. Let us suppose it is an accumulation point of A; then Π(A) = 1, and it follows from (**) that Π(A^c) = 0, which means that, again, supp(μ) ⊆ A and A ∈ A. If the point at infinity at which μ reaches the supremum is an accumulation point of A^c, it follows, similarly, that A ⊆ (supp(μ))^c and A ∈ A.

Finally, the values of Π on the elements of A follow from the definition of Π.

3. Possibility and Probability Measures Generated by a Density Function and Their Coherence

We will address the coherence of measures in a continuous universe when the possibility and the probability are determined by the same function, that is, by a possibility distribution in the first instance and a density function in the second one. For this purpose, a first subsection analyses how this type of function can be derived from a given density function and, then, from a given possibility distribution. The second subsection deals with the coherence between a possibility and a probability both generated by a given density function.

3.1. Possibility Distributions and Density Functions

In this section, some conditions for a function to be a density function and a possibility distribution at the same time are stated; moreover, the cases of some notable distributions are analysed.

Lemma 3.1. If f: R → R+ is a bounded density function, then the function π_f: R → R defined for each x ∈ R by π_f(x) = k f(kx), where k = 1/sup{f(x), x ∈ R}, is a density function and a possibility distribution.

Additionally, if f is continuous, then there exists y0 ∈ R such that π_f(y0) = 1.

Proof: π_f is a density function. Indeed, ∫_{−∞}^{+∞} π_f(x)dx = ∫_{−∞}^{+∞} f(kx)d(kx) = 1. It is also a possibility distribution, since 0 ≤ π_f(x) ≤ sup{π_f(x), x ∈ R} = k·sup{f(kx), x ∈ R} = 1.

Finally, if f is continuous, there exists x0 ∈ R such that f(x0) = sup{f(x), x ∈ R} = 1/k; hence, it suffices to consider y0 = x0/k, since then π_f(x0/k) = k f(x0) = 1. π_f will be said to be the possibility distribution associated with f.
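The construction of the lemma can be illustrated numerically. The sketch below (not from the paper) uses an arbitrary bounded density f(x) = 2(1 − x) on [0,1], for which sup f = 2, so k = 1/2 and π_f(x) = f(x/2)/2 = 1 − x/2 on [0,2]; it checks that π_f integrates to 1 and has supremum 1.

```python
# Sketch of Lemma 3.1 for an illustrative bounded density f(x) = 2(1 - x)
# on [0, 1]: sup f = 2, so k = 1/2 and pi_f(x) = k*f(k*x) should again be
# a density, now with supremum 1.
def f(x):
    return 2.0 * (1.0 - x) if 0.0 <= x <= 1.0 else 0.0

k = 0.5                                   # k = 1/sup{f(x), x in R}
def pi_f(x):
    return k * f(k * x)

n, lo, hi = 300000, -0.5, 2.5             # midpoint Riemann sum over a fine grid
h = (hi - lo) / n
xs = [lo + (i + 0.5) * h for i in range(n)]
integral = sum(pi_f(x) for x in xs) * h
assert abs(integral - 1.0) < 1e-3         # pi_f is (numerically) a density
assert max(pi_f(x) for x in xs) <= 1.0 + 1e-12
assert abs(pi_f(0.0) - 1.0) < 1e-12       # the supremum 1 is attained at 0
```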

Some examples. The possibility distributions associated with some well-known probability distributions are listed below (for more details about these distributions, see [3]).

(a) Normal distribution with parameters μ, σ, N(μ,σ): Its density function is f(x) = (1/(σ√(2π))) e^{−(x−μ)²/(2σ²)}, with maximum f(μ) = 1/(σ√(2π)); then π_f(x) = σ√(2π) f(σ√(2π)x) = e^{−(σ√(2π)x−μ)²/(2σ²)} = e^{−π(x−μ/(σ√(2π)))²}. In particular, π_f = f when σ = 1/√(2π), and π_f is a density function for the normal distribution N(μ/(σ√(2π)), 1/√(2π)).
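The identity for case (a) can be verified pointwise; in this sketch (not from the paper) the values of μ and σ are illustrative:

```python
# Check of case (a): for the N(mu, sigma) density f, k = sigma*sqrt(2*pi)
# and k*f(k*x) should coincide with exp(-pi*(x - m)^2), the N(m, 1/sqrt(2*pi))
# density with m = mu/(sigma*sqrt(2*pi)); mu_ and sigma are illustrative.
import math

mu_, sigma = 1.3, 0.8

def f(x):
    return math.exp(-((x - mu_) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

k = sigma * math.sqrt(2.0 * math.pi)      # 1 / f(mu_), the maximum of f
m = mu_ / k

for x in [-2.0, -0.5, 0.0, 0.7, 1.5, 3.0]:
    assert abs(k * f(k * x) - math.exp(-math.pi * (x - m) ** 2)) < 1e-12
```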

(b) Cauchy distribution with parameters a, b: Its density function is f_{a,b}(x) = a/(π(a² + (x − b)²)), whose maximum, reached at b, is f(b) = 1/(aπ); hence, its associated possibility distribution is π_f(x) = aπ f(aπx) = a²/(a² + (aπx − b)²). If b = 0, then π_f(x) = 1/(1 + π²x²), and its probability distribution is a Cauchy distribution with a = 1/π.

(c) Gamma distribution with parameters p > 0, a > 0, γ(p,a): Its density function is f(x) = (a^p/Γ(p)) x^{p−1} e^{−ax} if x > 0, and f(x) = 0 if x ≤ 0, where Γ(p) = ∫_0^{+∞} e^{−x} x^{p−1} dx is the second-class Euler function.

Note, firstly, that if p ∈ (0,1), then f is not bounded and, therefore, there is no associated possibility distribution.

When p = 1, it is also a particular case of the exponential distribution that will be dealt with in the following example. If p > 1, the function is bounded, reaching its maximum value at x = (p − 1)/a, and its associated possibility distribution can be ascertained. It will be calculated for two particular cases so as to avoid tedious calculations.

If p = 2, then f(x) = a²x e^{−ax} if x > 0 and f(x) = 0 if x ≤ 0, and its maximum is f(1/a) = a/e; therefore, the associated possibility distribution is π_f(x) = e²x e^{−ex} if x > 0, and π_f(x) = 0 if x ≤ 0, which is also a density function for the distribution γ(2, e). If p = 3, we get the law γ(3, e²/2).
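The p = 2 computation is easy to confirm numerically; in this sketch (not from the paper) the value of a is illustrative:

```python
# Check of the gamma case p = 2: for f(x) = a^2 * x * exp(-a*x) the maximum
# is f(1/a) = a/e, so k = e/a and k*f(k*x) should equal e^2 * x * exp(-e*x),
# the gamma(2, e) density; a is an illustrative value.
import math

a = 1.7

def f(x):
    return a * a * x * math.exp(-a * x) if x > 0 else 0.0

k = math.e / a                            # k = 1 / f(1/a)

for x in [0.1, 0.5, 1.0, 2.0, 4.0]:
    target = math.e ** 2 * x * math.exp(-math.e * x)
    assert abs(k * f(k * x) - target) < 1e-12
```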

(d) Exponential distribution with parameter λ: Its density function is f(x) = λe^{−λx} if x ≥ 0, and f(x) = 0 if x < 0, whose maximum is f(0) = λ. Hence, the associated possibility distribution is π_f(x) = e^{−x} if x ≥ 0, and π_f(x) = 0 if x < 0, which is a density function for the exponential distribution with λ = 1, or also for γ(1,1).

The inverse problem of obtaining a density function that is also a possibility distribution from a given possibility distribution is easily solved if this distribution "encloses" a finite area, as shown in the following result.

Lemma 3.2. Let μ ∈ [0,1]^R be such that ∫_{−∞}^{+∞} μ(x)dx = A < +∞, and let us suppose that there exists x0 ∈ R such that μ(x0) = 1; then the function f defined for each x ∈ R by f(x) = μ(A(x − x0) + x0) is a density function and also a possibility distribution.

Proof: ∫_{−∞}^{+∞} f(x)dx = ∫_{−∞}^{+∞} μ(A(x − x0) + x0)dx = (1/A)∫_{−∞}^{+∞} μ(u)du = 1. Therefore, f is a density function. Additionally, f(x0) = μ(x0) = 1, which means that f is also a possibility distribution. f will be said to be a density function associated with μ.

Note that there are many density functions associated with a function μ under the above conditions. Indeed, f(x) = μ(Ax) and all its translations would also be density functions. The fact that we considered the translation to x0 is really a practical matter: if μ reaches the value 1 at a single point x0, then the graph of f is obtained by "squashing" the graph of μ while leaving the point (x0, 1) fixed, which means that it is "most like" the original.

Example: The function μ(x) = e^{−|x|}, with μ ∈ [0,1]^R, is a possibility distribution, since μ(x) ∈ [0,1] for all x ∈ R and μ(0) = 1, but it is not a density function, as ∫_{−∞}^{+∞} e^{−|x|}dx = 2. However, associated density functions can indeed be found: f(x) = e^{−2|x|} and its translations.
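This example can be checked with a simple Riemann sum; the sketch below (not from the paper) verifies that e^{−2|x|} integrates to 1 and still attains the value 1 at the origin:

```python
# Check of the example: mu(x) = exp(-|x|) has area A = 2 and mu(0) = 1,
# so f(x) = mu(2x) = exp(-2|x|) should be a density that still attains 1.
import math

def f(x):
    return math.exp(-2.0 * abs(x))

n, lo, hi = 400000, -20.0, 20.0           # midpoint Riemann sum; tails beyond
h = (hi - lo) / n                         # |x| = 20 are negligible
integral = sum(f(lo + (i + 0.5) * h) for i in range(n)) * h
assert abs(integral - 1.0) < 1e-3
assert f(0.0) == 1.0
```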

3.2. Coherence Between Possibility and Probability

Let μ ∈ [0,1]^R be such that ∫_{−∞}^{+∞} μ(x)dx = 1 and sup_{x∈R} μ(x) = 1. Let Π_μ be the possibility generated by μ and P_μ the probability with density function μ. Our aim is to study when P_μ ≤ Π_μ, that is, when P_μ is coherent with Π_μ. The following result shows that there is "local coherence" with the possibility for "small" subsets.

Theorem 3.3. Let A ∈ B be such that L(A) ≤ 1, where L designates the Lebesgue measure in R; then for any μ ∈ [0,1]^R such that ∫_{−∞}^{+∞} μ(x)dx = 1 and sup_{x∈R} μ(x) = 1, it is P_μ(A) ≤ Π_μ(A).

Proof: P_μ(A) = ∫_A μ(x)dx ≤ sup_{x∈A} μ(x) · L(A) ≤ Π_μ(A).
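The local coherence of Theorem 3.3 can be illustrated with μ(x) = e^{−x} on [0,+∞), which is both a density and a possibility distribution (case (d) above); the intervals tested are illustrative choices:

```python
# Sketch of Theorem 3.3 for mu(x) = exp(-x) on [0, +inf): on intervals
# A = [t, t+L] with Lebesgue measure L <= 1, P(A) never exceeds Pi(A).
import math

def P(t, L):                              # P([t, t+L]) = exp(-t) - exp(-(t+L))
    return math.exp(-t) - math.exp(-(t + L))

def Pi(t, L):                             # sup of the decreasing mu on [t, t+L]
    return math.exp(-t)

for t in [0.0, 0.3, 1.0, 2.5]:
    for L in [0.1, 0.5, 1.0]:
        assert P(t, L) <= Pi(t, L)        # local coherence on small intervals
```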

Generally, it cannot be guaranteed that P_μ(A) ≤ Π_μ(A) for any A ∈ B, as shown by the following examples.

Pareto distribution with parameters a, x0: Its density function is f(x) = a x0^a / x^{a+1} if x ≥ x0, and f(x) = 0 if x < x0; its associated possibility distribution, taking x0 = a, is π_f(x) = (a/x)^{a+1} if x ≥ a, and π_f(x) = 0 if x < a. Then, for each b > a, P_πf((b,+∞)) = ∫_b^{+∞} (a/x)^{a+1} dx = (a/b)^a > (a/b)^{a+1} = Π_πf((b,+∞)).

f f b x b b P (A)=area(R)>P (A) f f P (A) f R a b A=[b,+ ) Figure 1: Density function of the Pareto distribution.

Cauchy distribution: As discussed previously, in the family of Cauchy density functions, π(x) = 1/(1 + π²x²) is also a possibility distribution. If A = (−∞, −√3/π] ∪ [√3/π, +∞), then L(A) > 1, and P_π(A) = 2∫_{√3/π}^{+∞} dx/(1 + π²x²) = 1/3 > 1/4 = Π_π(A).
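The two values in the Cauchy counterexample follow from the arctangent antiderivative of the density; the sketch below (not from the paper) checks them:

```python
# Check of the Cauchy counterexample: for pi(x) = 1/(1 + pi^2 x^2) and
# A = (-inf, -sqrt(3)/pi] U [sqrt(3)/pi, +inf),
# P(A) = (2/pi)*(pi/2 - arctan(sqrt(3))) = 1/3 while Pi(A) = 1/4.
import math

t = math.sqrt(3.0) / math.pi              # the cut point sqrt(3)/pi
P_A = (2.0 / math.pi) * (math.pi / 2.0 - math.atan(math.pi * t))
Pi_A = 1.0 / (1.0 + (math.pi * t) ** 2)   # pi evaluated at the cut point
assert abs(P_A - 1.0 / 3.0) < 1e-12
assert abs(Pi_A - 0.25) < 1e-12
assert P_A > Pi_A                         # coherence fails although L(A) > 1
```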
