Hacettepe Journal of Mathematics and Statistics, Volume 48 (3) (2019), 897 – 930. RESEARCH ARTICLE

Method for generating distributions and classes of probability distributions: the univariate case

Cícero Ramos de Brito∗, Leandro Chaves Rêgo†, Wilson Rosa de Oliveira‡ and Frank Gomes-Silva§¶
Abstract
In this work, we present a method to generate probability distributions and classes of probability distributions, which broadens the process of probability distribution construction. In this method, distribution classes are built from predefined monotonic functions and from known distributions. With its use, we can obtain different classes of probability distributions described in the literature. Besides these results, we obtain results on the support and nature of the generated distributions.
Keywords: Probabilistic distributions generating method, distribution classes generating functions, probabilistic distributions classes, probabilistic distribution support.

Mathematics Subject Classification (2010): 60E05; 62N05.
Received: 15.12.2017. Accepted: 28.06.2018. Doi: 10.15672/HJMS.2018.619
∗ Federal Institute of Pernambuco, Recife, Brazil. Email: [email protected]
† Department of Statistics and Applied Mathematics, Federal University of Ceará, Fortaleza, Brazil. Email: [email protected]
‡ Department of Statistics and Informatics, Federal Rural University of Pernambuco, Recife, Brazil. Email: [email protected]
§ Department of Statistics and Informatics, Federal Rural University of Pernambuco, Recife, Brazil. Email: [email protected]
¶ Corresponding Author.
1. Introduction

The amount of data available for analysis is growing ever faster, requiring new probabilistic distributions to better describe each phenomenon or experiment studied. Computer-based tools allow the use of more complex distributions, with a larger number of parameters, to better study sizeable masses of data.
The literature in the field describes several generalizations and extensions of symmetric, asymmetric, discrete and continuous distributions. It is worth quoting Lee et al. (2013) regarding the main methods of generating distributions and classes of probability distributions:
Generally speaking, the methods developed prior to 1980s may be summarized into three categories: (1) method of differential equation, (2) method of transformation, and (3) quantile method. Techniques developed since 1980s may be categorized as methods of combination for the reason that these methods attempt to combine existing distributions into new distributions or adding new parameters to an existing distribution. (Lee et al., 2013, 219)
The relevance of these new models is that, according to the situation, each one of them can better fit the mass of data. Table 1 presents several classes of distributions described in the literature, their nomenclature and the work where they have been presented. For a comprehensive discussion about the classes of probability distributions, see the three excellent articles by Lee et al. (2013), Tahir and Nadarajah (2015) and Tahir and Cordeiro (2016).
Table 1. Some classes of distributions described in the literature (distribution class | nomenclature)

F(x) = G^a(x), where a > 0 | defined by Mudholkar et al. (1995)

F(x) = (1/B(a,b)) ∫_{0}^{G(x)} t^{a−1}(1−t)^{b−1} dt, where a > 0, b > 0 and 0 < t < 1 | beta-G type 1, defined by Eugene et al. (2002)

F(x) = (1/B(a,b)) ∫_{0}^{G(x)} t^{a−1}(1+t)^{−(a+b)} dt, where a > 0, b > 0 and t > 0 | beta-G type 3, defined by Tahir and Nadarajah (2015)

F(x) = (1/B(a,b)) ∫_{0}^{G^c(x)} t^{a−1}(1−t)^{b−1} dt, where a > 0, b > 0, c > 0 and 0 < t < 1 | Mc-G type 1, defined by McDonald (1984)

F(x) = (1/B(a,b)) ∫_{0}^{G^c(x)} t^{a−1}(1+t)^{−(a+b)} dt, where a > 0, b > 0, c > 0 and t > 0 | Mc-G type 3, defined by Tahir and Nadarajah (2015)

F(x) = 1 − (1 − G^a(x))^b, where a > 0 and b > 0 | Kumaraswamy-G, defined by Cordeiro and de Castro (2011)

F(x) = 1 − (1 − (1 − G(x))^a)^b, where a > 0 and b > 0 | Kumaraswamy-G type 2, defined by Tahir and Nadarajah (2015)

F(x) = 1 − b(1−G(x))/[G(x) + b(1−G(x))], where b > 0 | Marshall-Olkin-G, defined by Marshall and Olkin (1997)

F(x) = 1 − {b(1−G(x))/[G(x) + b(1−G(x))]}^θ, where b > 0 and θ > 0 | Marshall-Olkin-G, defined by Jayakumar and Mathew (2008)

F(x) = {G(x)/[G(x) + b(1−G(x))]}^θ, where b > 0 and θ > 0 | Marshall-Olkin-G, defined by Tahir and Nadarajah (2015)

F(x) = (β^α/Γ(α)) ∫_{0}^{−log[1−G(x)]} t^{α−1} e^{−βt} dt, where α > 0 and β > 0 | gamma-G, defined by Zografos and Balakrishnan (2009)

F(x) = 1 − (β^α/Γ(α)) ∫_{0}^{[1−G(x)]/G(x)} t^{α−1} e^{−βt} dt, where α > 0 and β > 0 | gamma-G, defined by Brito et al. (2017)

F(x) = 1 − (β^α/Γ(α)) ∫_{0}^{−log(G(x))} t^{α−1} e^{−βt} dt, where α > 0 and β > 0 | gamma-G, defined by Cordeiro et al. (2017)

F(x) = 1 − {[a + 1 − G(x)]/[(1+a)(1−G(x))]} exp{−a G(x)/(1−G(x))}, where a > 0 | Odd Lindley-G, defined by Gomes-Silva et al. (2017)

F(x) = 1 − C(θ e^{−αH(x)})/C(θ), where x > 0, θ > 0 and C(θ) = Σ_{n=1}^{∞} a_n θ^n | Extended Weibull distribution, defined by Silva et al. (2013)

F(x) = {1 − exp[−λG(x)]}/(1 − e^{−λ}) | Kumaraswamy-G Poisson, defined by Ramos et al. (2014)

F(x) = {1 − [1 − G^a(x)]^b}^c, where a > 0, b > 0 and c > 0 | exponentiated Kumaraswamy-G, defined by Silva (2019)

F(x) = (e^{λ e^{−βx^a}} − e^{λ})/(1 − e^{λ}), x > 0 | beta Weibull Poisson family, defined by Percontini (2013)

F(x) = ∫_{0}^{G(x)} K t^{a−1}(1−t)^{b−1} exp(−ct) dt, where a > 0, b > 0 and c ∈ ℝ | beta Kummer generalized, defined by Pescim et al. (2012)

F(x) = [e^{−λa W(−a e^{−a})} − e^{−λa W(ψ(x))}]/[e^{−λa W(−a e^{−a})} − 1], where W(x) = Σ_{n=1}^{∞} [(−1)^{n−1} n^{n−2}/(n−1)!] x^n and ψ(x) = −a e^{−a−bx^a} | Weibull Generalized Poisson distribution, defined by Percontini (2014)

F(x) = [(1−β)^{−s} − {1 − β[1−G(x)]}^{−s}]/[(1−β)^{−s} − 1], where β ∈ (0,1) and s > 0 | G-Negative Binomial family, defined by Percontini (2014)

F(x) = {ζ(s) − Li_s[1−G(x)]}/ζ(s), where Li_s(z) = Σ_{k=1}^{∞} z^k/k^s and ζ(s) = Σ_{k=1}^{∞} 1/k^s | Zeta-G, defined by Percontini (2014)

F(x) = Σ_{k=0}^{x} [C^{(k)}(a)/(k! C(λ))] (λ−a)^k | Power Series Distributions family, by Consul and Famoye (2006)

F(x) = Σ_{k=1}^{x} (1/k!) [(C(0))^k]^{(k−1)} | Basic Lagrangian, defined by Consul and Famoye (2006)

F(x) = Σ_{k=n}^{x} [n/((k−n)! k)] [(C(0))^k]^{(k−n)} | Lagrangian Delta, defined by Consul and Famoye (2006)

F(x) = (Σ_{k=0}^{x} P(X = k))^δ, where P(X = k) = w(0) for k = 0 and P(X = k) = [(C(0))^k w^{(1)}(0)]^{(k−1)} for k = 1, 2, … | Generalized Lagrangian, defined by Consul and Famoye (2006)

F(x) = ∫_{−∞}^{x} exp{∫ [(a_0 + a_1 t + ··· + a_s t^s)/(b_0 + b_1 t + ··· + b_r t^r)] dt} dt | Generalized Pearson in Ordinary Differential Equation form, defined by Shakil et al. (2010)

F(x) = ∫_{−∞}^{x} exp{∫ [(a_0 + a_1 t + ··· + a_s t^s)/(b_0 + b_1 t + ··· + b_r t^r)] (f(t))^β dt} dt, β ≥ 0 | Generalized Pearson in Ordinary Differential Equation form, defined by Shakil et al. (2010)

F(x) = ∫_{−∞}^{x} ∫_{−∞}^{y} [Σ_{i=1}^{2} a_i(t) f^{β_i}(t)] dt dy | Generalized Family in Ordinary Differential Equation form, defined by Voda (2009)

F(x) = ∫_{a}^{W[G(x)]} r(t) dt, where W[G(x)] ∈ [a, b], with W[G(x)] differentiable and monotonically non-decreasing | T-X class, by Alzaatreh et al. (2013)
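As a quick numerical sanity check of the beta-G type 1 entry of Table 1 (Eugene et al., 2002), the sketch below evaluates F(x) = (1/B(a,b)) ∫_{0}^{G(x)} t^{a−1}(1−t)^{b−1} dt for the illustrative choice G = standard normal cdf and a = 2, b = 3 (these choices are ours, not the paper's), and verifies the cdf behavior; the midpoint-rule integrator is only for illustration.

```python
import math

def norm_cdf(x):
    # Standard normal cdf used as the baseline distribution G.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def beta_g(x, a, b, G=norm_cdf, steps=20000):
    # beta-G type 1: F(x) = (1/B(a,b)) * integral_0^{G(x)} t^(a-1) (1-t)^(b-1) dt,
    # computed here with a simple midpoint rule (illustrative only).
    u = G(x)
    h = u / steps
    total = sum(((i + 0.5) * h) ** (a - 1) * (1.0 - (i + 0.5) * h) ** (b - 1)
                for i in range(steps)) * h
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return total / B

xs = [-4.0, -1.0, 0.0, 1.0, 4.0]
vals = [beta_g(x, a=2.0, b=3.0) for x in xs]
assert all(v1 <= v2 + 1e-12 for v1, v2 in zip(vals, vals[1:]))  # non-decreasing
assert vals[0] < 0.01 and vals[-1] > 0.99  # tends to 0 and 1 in the tails
```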
The aim of this work is to propose a method to create distributions and probabilistic distribution classes that could unify the various methods to generate distribution classes already described in the literature. The idea of this method is to generate classes from already known distributions, using monotonic functions and a cumulative distribution function.

We show that the proposed method has a high power of generality. The well-known T-X class generalizes most of the classes presented in Table 1. To get an idea of the method's power, we will show that the T-X class appears as a sub-case of a simple sub-model of the proposed method, which we denote by 3S1C1.2 (see Table 2). In addition to generalizing existing classes, the new method provides a source of new probability distribution classes.
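To make the T-X construction concrete, here is a small hedged illustration with ingredients of our own choosing (not taken from the paper): taking r as an Exp(λ) density and W[G(x)] = −log(1−G(x)) in F(x) = ∫_0^{W[G(x)]} r(t) dt gives F(x) = 1 − (1−G(x))^λ in closed form, which the snippet verifies against numerical integration with G the standard normal cdf.

```python
import math

def norm_cdf(x):
    # Standard normal cdf, used as the baseline G (our illustrative choice).
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def t_x_cdf(x, lam, G=norm_cdf, steps=5000):
    # T-X construction: F(x) = integral_0^{W[G(x)]} r(t) dt with
    # r(t) = lam * exp(-lam * t) and W[G] = -log(1 - G)  (midpoint rule).
    upper = -math.log(1.0 - G(x))
    h = upper / steps
    return sum(lam * math.exp(-lam * (i + 0.5) * h) for i in range(steps)) * h

x, lam = 0.7, 2.5
closed_form = 1.0 - (1.0 - norm_cdf(x)) ** lam  # Lehmann type II alternative
assert abs(t_x_cdf(x, lam) - closed_form) < 1e-6
```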
This paper is organized in the following way: in Section 2, we describe two methods to generate probability distributions, establishing the conditions that the monotonic functions and the probability distribution used must satisfy to guarantee that the proposed method indeed generates a probability distribution. In Section 3, we analyze a special case of the methods described in the previous section, where the monotonic functions are compositions of known probability distribution functions. Still in Section 3, we present several specific cases of these methods that may be easily used to obtain new probability distributions. At the end of that section, we demonstrate that all methods presented in Sections 2 and 3 are equivalent. In Section 4, we analyze the support and nature of the distributions generated by the methods proposed in Section 3. Section 5 presents our conclusions and directions for further works. As an application of the proposed methods, Appendix A contains a table (Table 4) showing how to obtain several classes of probability distributions described in the literature using our proposed methods.
2. The method

The method we suggest to create distribution classes uses monotonic functions υ: ℝ → ℝ, √: ℝ → ℝ, Uj: ℝ → ℝ ∪ {±∞}, Lj: ℝ → ℝ ∪ {±∞}, Mj: ℝ → ℝ ∪ {±∞} and Vj: ℝ → ℝ ∪ {±∞}, and a cumulative distribution function (cdf) F. The idea of this method is to generate a probability distribution by integrating F from Lj(x) to Uj(x) and from Mj(x) to Vj(x), for any x ∈ ℝ and j = 1, 2, 3, …, n. Theorem 1 below gives sufficient conditions that the functions υ(x), √(x), Lj(x), Uj(x), Mj(x) and Vj(x) must satisfy to guarantee that the method generates a probability distribution function.

1. Theorem (T1). Method to generate distributions and classes of probability distributions.
Let F: ℝ → ℝ, υ: ℝ → ℝ, √: ℝ → ℝ, Uj: ℝ → ℝ ∪ {±∞}, Lj: ℝ → ℝ ∪ {±∞}, Mj: ℝ → ℝ ∪ {±∞} and Vj: ℝ → ℝ ∪ {±∞}, for j = 1, 2, 3, …, n, be monotonic and right-continuous functions such that:

[c1] F is a cdf, and υ and √ are non-negative;

[c2] υ(x), Uj(x) and Mj(x) are non-decreasing and √(x), Vj(x) and Lj(x) are non-increasing, ∀ j = 1, 2, 3, …, n;

[c3] If lim_{x→−∞} υ(x) = lim_{x→−∞} √(x), then lim_{x→−∞} υ(x) = 0 or lim_{x→−∞} Uj(x) = lim_{x→−∞} Lj(x), ∀ j = 1, 2, 3, …, n, and lim_{x→−∞} √(x) = 0 or lim_{x→−∞} Mj(x) = lim_{x→−∞} Vj(x), ∀ j = 1, 2, 3, …, n;

[c4] If lim_{x→−∞} υ(x) = lim_{x→−∞} √(x) = 0, then lim_{x→−∞} Uj(x) = lim_{x→−∞} Vj(x) and lim_{x→−∞} Mj(x) = lim_{x→−∞} Lj(x), ∀ j = 1, 2, 3, …, n;

[c5] lim_{x→−∞} Lj(x) ≤ lim_{x→−∞} Uj(x) and, if lim_{x→−∞} √(x) = 0, then lim_{x→+∞} Mj(x) ≤ lim_{x→+∞} Vj(x), ∀ j = 1, 2, 3, …, n;

[c6] lim_{x→+∞} Un(x) ≥ sup{x ∈ ℝ : F(x) < 1} and lim_{x→+∞} L1(x) ≤ inf{x ∈ ℝ : F(x) > 0};

[c7] lim_{x→+∞} υ(x) = 1;

[c8] lim_{x→+∞} √(x) = 0 or lim_{x→+∞} Mj(x) = lim_{x→+∞} Vj(x), ∀ j = 1, 2, 3, …, n and n ≥ 1;

[c9] lim_{x→+∞} Uj(x) = lim_{x→+∞} L_{j+1}(x), ∀ j = 1, 2, 3, …, n − 1 and n ≥ 2;

[c10] F is a cdf without points of discontinuity, or all functions Lj(x) and Vj(x) are constant to the right of the neighborhood of points whose image are points of discontinuity of F, being also continuous at those points. Moreover, F does not have any point of discontinuity in the set {lim_{x→±∞} Lj(x), lim_{x→±∞} Uj(x), lim_{x→±∞} Mj(x), lim_{x→±∞} Vj(x), for some j = 1, 2, …, n}.

Then,

H(x) = υ(x) Σ_{j=1}^{n} ∫_{Lj(x)}^{Uj(x)} dF(t) − √(x) Σ_{j=1}^{n} ∫_{Mj(x)}^{Vj(x)} dF(t)    (2.1)

is a cdf.
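As an illustration of (2.1), consider the following toy instance of our own choosing (not from the paper): n = 1, υ(x) = Φ(x), √(x) = 0, L1(x) = −∞, U1(x) = x and F = Φ, where Φ is the standard normal cdf. Conditions [c1]–[c10] hold, and H(x) = Φ(x) ∫_{−∞}^{x} dΦ(t) = Φ²(x), the exponentiated normal with a = 2. The sketch checks the cdf properties numerically.

```python
import math

def phi(x):
    # Standard normal cdf.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def H(x):
    # Instance of (2.1): upsilon(x) * integral_{L1(x)}^{U1(x)} dF(t)
    # with upsilon = F = phi, L1 = -inf, U1(x) = x, and no second term,
    # which collapses to phi(x) * phi(x).
    return phi(x) * phi(x)

xs = [i / 10.0 for i in range(-80, 81)]
vals = [H(x) for x in xs]
assert all(a <= b for a, b in zip(vals, vals[1:]))   # (iii) non-decreasing
assert vals[0] < 1e-14 and 1.0 - vals[-1] < 1e-14    # (i), (ii): limits 0 and 1
```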
Proof. (i) lim_{x→−∞} H(x) = 0.

lim_{x→−∞} H(x) = lim_{x→−∞} (υ(x) Σ_{j=1}^{n} ∫_{Lj(x)}^{Uj(x)} dF(t)) − lim_{x→−∞} (√(x) Σ_{j=1}^{n} ∫_{Mj(x)}^{Vj(x)} dF(t))
= (lim_{x→−∞} υ(x)) Σ_{j=1}^{n} ∫_{lim_{x→−∞} Lj(x)}^{lim_{x→−∞} Uj(x)} dF(t) − (lim_{x→−∞} √(x)) Σ_{j=1}^{n} ∫_{lim_{x→−∞} Mj(x)}^{lim_{x→−∞} Vj(x)} dF(t),

where the last equality holds because F is continuous in {lim_{x→−∞} Uj(x), lim_{x→−∞} Lj(x), lim_{x→−∞} Vj(x), lim_{x→−∞} Mj(x)}.

Conditions [c3] and [c4] guarantee that

lim_{x→−∞} H(x) = (lim_{x→−∞} υ(x)) Σ_{j=1}^{n} ∫_{lim_{x→−∞} Lj(x)}^{lim_{x→−∞} Uj(x)} dF(t) − (lim_{x→−∞} √(x)) Σ_{j=1}^{n} ∫_{lim_{x→−∞} Mj(x)}^{lim_{x→−∞} Vj(x)} dF(t) = 0.

(ii) lim_{x→+∞} H(x) = 1.

lim_{x→+∞} H(x) = lim_{x→+∞} (υ(x) Σ_{j=1}^{n} ∫_{Lj(x)}^{Uj(x)} dF(t)) − lim_{x→+∞} (√(x) Σ_{j=1}^{n} ∫_{Mj(x)}^{Vj(x)} dF(t))
= (lim_{x→+∞} υ(x)) Σ_{j=1}^{n} ∫_{lim_{x→+∞} Lj(x)}^{lim_{x→+∞} Uj(x)} dF(t) − (lim_{x→+∞} √(x)) Σ_{j=1}^{n} ∫_{lim_{x→+∞} Mj(x)}^{lim_{x→+∞} Vj(x)} dF(t),

where the last equality holds because F is continuous in {lim_{x→+∞} Uj(x), lim_{x→+∞} Lj(x), lim_{x→+∞} Vj(x), lim_{x→+∞} Mj(x)}. Thus, conditions [c1], [c6], [c7], [c8] and [c9] guarantee that lim_{x→+∞} H(x) = 1.

(iii) If x1 ≤ x2, then H(x1) ≤ H(x2).

Let x1 ≤ x2; then [c2] implies that Uj(x1) ≤ Uj(x2), Lj(x1) ≥ Lj(x2), Mj(x1) ≤ Mj(x2), Vj(x1) ≥ Vj(x2), υ(x1) ≤ υ(x2) and √(x1) ≥ √(x2). Besides this, [c2] and [c5] imply that Σ_{j=1}^{n} ∫_{Lj(x1)}^{Uj(x1)} dF(t) ≥ 0, Σ_{j=1}^{n} ∫_{Mj(x1)}^{Vj(x1)} dF(t) ≥ 0, Σ_{j=1}^{n} ∫_{Lj(x2)}^{Uj(x2)} dF(t) ≥ 0 and Σ_{j=1}^{n} ∫_{Mj(x2)}^{Vj(x2)} dF(t) ≥ 0.

Thus, since by [c1] υ and √ are non-negative, we have

H(x1) = υ(x1) Σ_{j=1}^{n} ∫_{Lj(x1)}^{Uj(x1)} dF(t) − √(x1) Σ_{j=1}^{n} ∫_{Mj(x1)}^{Vj(x1)} dF(t) ≤ υ(x2) Σ_{j=1}^{n} ∫_{Lj(x2)}^{Uj(x2)} dF(t) − √(x2) Σ_{j=1}^{n} ∫_{Mj(x2)}^{Vj(x2)} dF(t) = H(x2).

(iv) lim_{x→x0+} H(x) = H(x0).

lim_{x→x0+} H(x) = lim_{x→x0+} υ(x) Σ_{j=1}^{n} ∫_{Lj(x)}^{Uj(x)} dF(t) − lim_{x→x0+} √(x) Σ_{j=1}^{n} ∫_{Mj(x)}^{Vj(x)} dF(t)
= (lim_{x→x0+} υ(x)) Σ_{j=1}^{n} ∫_{lim_{x→x0+} Lj(x)}^{lim_{x→x0+} Uj(x)} dF(t) − (lim_{x→x0+} √(x)) Σ_{j=1}^{n} ∫_{lim_{x→x0+} Mj(x)}^{lim_{x→x0+} Vj(x)} dF(t)
= υ(x0) Σ_{j=1}^{n} ∫_{Lj(x0)}^{Uj(x0)} dF(t) − √(x0) Σ_{j=1}^{n} ∫_{Mj(x0)}^{Vj(x0)} dF(t) = H(x0).

The above equalities hold due to [c10] and because υ(x), Uj(x), Mj(x), √(x), Vj(x) and Lj(x) are right-continuous.

From the facts (i), (ii), (iii) and (iv), we may conclude that (2.1) is a cdf. □
Corollary 1.1 presents an alternative method to generate distributions and classes of probabilitydistributions.
1.1. Corollary (C1.1). Complementary method to generate distributions and classes of probability distributions.
Let φ: ℝ → ℝ, 0: ℝ → ℝ, W: ℝ → ℝ, Uj: ℝ → ℝ ∪ {±∞}, Lj: ℝ → ℝ ∪ {±∞}, Mj: ℝ → ℝ ∪ {±∞} and Vj: ℝ → ℝ ∪ {±∞}, ∀ j = 1, 2, 3, …, n, be monotonic and right-continuous functions such that:

[cc1] φ is a cdf, and 0 and W are non-negative;

[cc2] 0(x), Uj(x) and Mj(x) are non-decreasing and W(x), Vj(x) and Lj(x) are non-increasing, ∀ j = 1, 2, 3, …, n;

[cc3] If lim_{x→+∞} W(x) = lim_{x→+∞} 0(x), then lim_{x→+∞} 0(x) = 0 or lim_{x→+∞} Lj(x) = lim_{x→+∞} Uj(x), ∀ j = 1, 2, 3, …, n, and lim_{x→+∞} W(x) = 0 or lim_{x→+∞} Mj(x) = lim_{x→+∞} Vj(x), ∀ j = 1, 2, 3, …, n;

[cc4] If lim_{x→+∞} W(x) = lim_{x→+∞} 0(x) = 0, then lim_{x→+∞} Uj(x) = lim_{x→+∞} Vj(x) and lim_{x→+∞} Mj(x) = lim_{x→+∞} Lj(x), ∀ j = 1, 2, 3, …, n;

[cc5] lim_{x→+∞} Mj(x) ≤ lim_{x→+∞} Vj(x) and, if lim_{x→+∞} 0(x) = 0, then lim_{x→−∞} Lj(x) ≤ lim_{x→−∞} Uj(x), ∀ j = 1, 2, 3, …, n;

[cc6] lim_{x→−∞} Vn(x) ≥ sup{x ∈ ℝ : φ(x) < 1} and lim_{x→−∞} M1(x) ≤ inf{x ∈ ℝ : φ(x) > 0};

[cc7] lim_{x→−∞} W(x) = 1;

[cc8] lim_{x→−∞} 0(x) = 0 or lim_{x→−∞} Lj(x) = lim_{x→−∞} Uj(x), ∀ j = 1, 2, 3, …, n and n ≥ 1;

[cc9] lim_{x→−∞} Vj(x) = lim_{x→−∞} M_{j+1}(x), ∀ j = 1, 2, 3, …, n − 1 and n ≥ 2;

[cc10] φ is a cdf without points of discontinuity, or all functions Lj(x) and Vj(x) are constant to the right of the neighborhood of points whose image are points of discontinuity of φ, being also continuous at those points. Moreover, φ does not have any point of discontinuity in the set {lim_{x→±∞} Lj(x), lim_{x→±∞} Uj(x), lim_{x→±∞} Mj(x), lim_{x→±∞} Vj(x), for some j = 1, 2, …, n}.

Then,

H(x) = 1 − W(x) Σ_{j=1}^{n} ∫_{Mj(x)}^{Vj(x)} dφ(t) + 0(x) Σ_{j=1}^{n} ∫_{Lj(x)}^{Uj(x)} dφ(t)

is a cdf.

Proof. In Theorem 1, consider n = 1, υ(x) = 1, √(x) = 0, U1(x) = 1 and

L1(x) = W(x) Σ_{j=1}^{n} ∫_{Mj(x)}^{Vj(x)} dφ(t) − 0(x) Σ_{j=1}^{n} ∫_{Lj(x)}^{Uj(x)} dφ(t), ∀ x ∈ ℝ,

and F the cdf of the uniform[0,1] distribution. Note that U1(x) and L1(x) satisfy the hypotheses of Theorem 1, since [cc1], [cc2] and [cc5] guarantee that L1(x) is non-increasing and U1(x) = 1 is non-decreasing, so that conditions [c2] and [c5] are satisfied. Moreover, conditions [cc3] and [cc4] guarantee that lim_{x→+∞} L1(x) = 0 = inf{x ∈ ℝ : F(x) > 0}, while conditions [cc6], [cc7], [cc8] and [cc9] guarantee that lim_{x→−∞} L1(x) = 1 = lim_{x→−∞} U1(x). Besides this, lim_{x→+∞} U1(x) = 1 = sup{x ∈ ℝ : F(x) < 1}, both L1(x) and U1(x) are right-continuous, and F is a cdf without points of discontinuity.

As all conditions of Theorem 1 are satisfied, it follows that

H(x) = ∫_{L1(x)}^{U1(x)} dF(s) = ∫_{L1(x)}^{1} ds = 1 − W(x) Σ_{j=1}^{n} ∫_{Mj(x)}^{Vj(x)} dφ(t) + 0(x) Σ_{j=1}^{n} ∫_{Lj(x)}^{Uj(x)} dφ(t)

is a cdf. □
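A minimal hedged instance of Corollary 1.1, with ingredients of our own choosing: take n = 1, 0(x) = 0, W(x) = (1 − G(x))^b for a baseline cdf G and b > 0, M1 = −∞ and V1 = +∞, so that the integral equals 1 and H(x) = 1 − (1 − G(x))^b, a Lehmann type II alternative to G. The snippet checks the cdf properties numerically with G = Φ, the standard normal cdf.

```python
import math

def phi(x):
    # Standard normal cdf, the illustrative baseline G.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def H(x, b):
    # Corollary 1.1 with W(x) = (1 - G(x))**b, 0(x) = 0, M1 = -inf, V1 = +inf:
    # the integral over the whole line is 1, so H(x) = 1 - W(x).
    return 1.0 - (1.0 - phi(x)) ** b

xs = [i / 10.0 for i in range(-80, 81)]
vals = [H(x, b=3.0) for x in xs]
assert all(v1 <= v2 for v1, v2 in zip(vals, vals[1:]))  # non-decreasing
assert vals[0] < 1e-12 and 1.0 - vals[-1] < 1e-12       # limits 0 and 1
```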
In the next section, we present some corollaries of Theorem 1 where the monotonic functions υ(x), √(x), Uj(x), Lj(x), Mj(x) and Vj(x) are compositions of monotonic functions with known probability distributions.

3. Monotonic functions involving probability distributions

In this section, we show how to generate classes of probability distributions using monotonic functions which are compositions with known probability distributions. Formally, consider that u: [0,1]^m → ℝ, ϑ: [0,1]^m → ℝ, µj: [0,1]^m → ℝ ∪ {±∞}, ℓj: [0,1]^m → ℝ ∪ {±∞}, vj: [0,1]^m → ℝ ∪ {±∞} and mj: [0,1]^m → ℝ ∪ {±∞} are monotonic and right-continuous functions. The results of this section are achieved considering that υ(x) = u(G1, …, Gm)(x), √(x) = ϑ(G1, …, Gm)(x), Uj(x) = µj(G1, …, Gm)(x), Lj(x) = ℓj(G1, …, Gm)(x), Mj(x) = mj(G1, …, Gm)(x) and Vj(x) = vj(G1, …, Gm)(x).

We use the abbreviation (·)(x) = (G1, …, Gm)(x) = (G1(x), …, Gm(x)) to represent the vector formed by the cdf's calculated at the same point x of the domain.
1.2. Corollary (C1.2). Method to generate classes of probability distributions.
Let F: ℝ → ℝ, µj: [0,1]^m → ℝ ∪ {±∞}, ℓj: [0,1]^m → ℝ ∪ {±∞}, u: [0,1]^m → ℝ, vj: [0,1]^m → ℝ ∪ {±∞}, mj: [0,1]^m → ℝ ∪ {±∞} and ϑ: [0,1]^m → ℝ, ∀ j = 1, 2, 3, …, n, be monotonic and right-continuous functions such that:

[d1] F is a cdf, and u and ϑ are non-negative;

[d2] µj, mj and u are non-decreasing and ℓj, vj and ϑ are non-increasing, ∀ j = 1, 2, 3, …, n, in all of their variables;

[d3] If u(0, …, 0) = ϑ(0, …, 0), then u(0, …, 0) = 0 or µj(0, …, 0) = ℓj(0, …, 0), ∀ j = 1, 2, 3, …, n, and ϑ(0, …, 0) = 0 or mj(0, …, 0) = vj(0, …, 0), ∀ j = 1, 2, 3, …, n;

[d4] If u(0, …, 0) = ϑ(0, …, 0) = 0, then µj(0, …, 0) = vj(0, …, 0) and mj(0, …, 0) = ℓj(0, …, 0), ∀ j = 1, 2, 3, …, n;

[d5] ℓj(0, …, 0) ≤ µj(0, …, 0) and, if ϑ(0, …, 0) = 0, then mj(1, …, 1) ≤ vj(1, …, 1), ∀ j = 1, 2, 3, …, n;

[d6] µn(1, …, 1) ≥ sup{x ∈ ℝ : F(x) < 1} and ℓ1(1, …, 1) ≤ inf{x ∈ ℝ : F(x) > 0};

[d7] u(1, …, 1) = 1;

[d8] ϑ(1, …, 1) = 0 or vj(1, …, 1) = mj(1, …, 1), ∀ j = 1, 2, 3, …, n and n ≥ 1;

[d9] µj(1, …, 1) = ℓ_{j+1}(1, …, 1), ∀ j = 1, 2, 3, …, n − 1 and n ≥ 2;

[d10] F is a cdf without points of discontinuity, or the functions ℓj(·)(x) and vj(·)(x) are constant to the right of the neighborhood of points whose image are points of discontinuity of F, being also continuous at those points. Moreover, F does not have any point of discontinuity in the set {ℓj(0, …, 0), µj(0, …, 0), mj(0, …, 0), vj(0, …, 0), ℓj(1, …, 1), µj(1, …, 1), mj(1, …, 1), vj(1, …, 1), for some j = 1, 2, …, n}.

Then,

H_{G1,…,Gm}(x) = u(·)(x) Σ_{j=1}^{n} ∫_{ℓj(·)(x)}^{µj(·)(x)} dF(t) − ϑ(·)(x) Σ_{j=1}^{n} ∫_{mj(·)(x)}^{vj(·)(x)} dF(t)

is a functional generator of classes of probability distributions, where (·)(x) = (G1, …, Gm)(x).

Proof. In Theorem 1, set υ(x) = u(·)(x), √(x) = ϑ(·)(x), Uj(x) = µj(·)(x), Lj(x) = ℓj(·)(x), Mj(x) = mj(·)(x) and Vj(x) = vj(·)(x), and observe that condition [di] implies condition [ci] of Theorem 1, for i = 1, 2, …, 10. □
Let us now consider a special case of Corollary 1.2 that is a functional constructor of classes of probability distributions that can be easily used.

1st special case of Corollary 1.2 (1C1.2). Easy-to-use method for the construction of classes of probability distributions.
Let ui: [0,1]^m → [0,1] and νi: [0,1]^m → [0,1] be monotonic and right-continuous functions such that the ui's are non-decreasing and the νi's are non-increasing in each one of their variables, with ui(0, …, 0) = 0, ui(1, …, 1) = 1, νi(0, …, 0) = 1 and νi(1, …, 1) = 0, for all i = 1, …, k. If, in Corollary 1.2, u(·)(x) = ∏_{i=1}^{k} ((1−θi)ui(·)(x) + θi)^{αi} and ϑ(·)(x) = ∏_{i=1}^{k} (θiνi(·)(x))^{αi}, with αi ≥ 0 and 0 ≤ θi ≤ 1, then

H_{G1,…,Gm}(x) = ∏_{i=1}^{k} ((1−θi)ui(·)(x) + θi)^{αi} Σ_{j=1}^{n} ∫_{ℓj(·)(x)}^{µj(·)(x)} dF(t) − ∏_{i=1}^{k} (θiνi(·)(x))^{αi} Σ_{j=1}^{n} ∫_{mj(·)(x)}^{vj(·)(x)} dF(t)    (3.1)

is a functional generator of classes of probability distributions, where (·)(x) = (G1, …, Gm)(x).
Table 2 shows some particular cases of the functional constructor of classes of probability distributions given by Equation (3.1) that may be more easily used for the generation of classes of distributions. Consider the following functions in the expressions from 15S1C1.2 to 20S1C1.2: µ: [0,1] → ℝ ∪ {±∞}, ℓ: [0,1] → ℝ ∪ {±∞}, v: [0,1] → ℝ ∪ {±∞} and m: [0,1] → ℝ ∪ {±∞}, such that µ and m are non-decreasing and right-continuous, and v and ℓ are non-increasing and right-continuous.
Table 2. Some functional constructors of classes of probability distributions obtained from 1C1.2 (sub-case; special conditions on the monotonic functions and parameters; functional constructor obtained)

1S1C1.2. Conditions: k = 1, α1 = 0 and vj(1,…,1) = mj(1,…,1).
Constructor: H_{G1,…,Gm}(x) = Σ_{j=1}^{n} ∫_{ℓj(·)(x)}^{µj(·)(x)} dF(t)

2S1C1.2. Conditions: n = 1, θi = 0 and vj(1,…,1) = mj(1,…,1).
Constructor: H_{G1,…,Gm}(x) = ∏_{i=1}^{k} ui^{αi}(·)(x) ∫_{ℓ1(·)(x)}^{µ1(·)(x)} dF(t)

3S1C1.2. Conditions: n = 1, k = 1, α1 = 0 and v1(1,…,1) = m1(1,…,1).
Constructor: H_{G1,…,Gm}(x) = ∫_{ℓ1(·)(x)}^{µ1(·)(x)} dF(t)

4S1C1.2. Conditions: n = 1, k = 1, α1 = 0, vj(1,…,1) = mj(1,…,1) and f(t) = 1/[µ1(1,…,1) − ℓ1(1,…,1)], for t in [ℓ1(1,…,1), µ1(1,…,1)].
Constructor: H_{G1,…,Gm}(x) = [µ1(·)(x) − ℓ1(·)(x)]/[µ1(1,…,1) − ℓ1(1,…,1)]

5S1C1.2. Conditions: n = 1, k = 1, α1 = 0, ℓ1(·)(x) = µ1(0,…,0), v1(1,…,1) = m1(1,…,1) and f(t) = 1/[µ1(1,…,1) − µ1(0,…,0)], for t in [µ1(0,…,0), µ1(1,…,1)].
Constructor: H_{G1,…,Gm}(x) = [µ1(·)(x) − µ1(0,…,0)]/[µ1(1,…,1) − µ1(0,…,0)]

6S1C1.2. Conditions: n = 1, k = 1, α1 = 0, µ1(·)(x) = ℓ1(0,…,0), v1(1,…,1) = m1(1,…,1) and f(t) = 1/[ℓ1(1,…,1) − ℓ1(0,…,0)], for t in [ℓ1(1,…,1), ℓ1(0,…,0)].
Constructor: H_{G1,…,Gm}(x) = [ℓ1(·)(x) − ℓ1(0,…,0)]/[ℓ1(1,…,1) − ℓ1(0,…,0)]

7S1C1.2. Conditions: k = 1, α1 = 0 and Σ_{j=1}^{n} ∫_{ℓj(·)(x)}^{µj(·)(x)} dF(t) = 1.
Constructor: H_{G1,…,Gm}(x) = 1 − Σ_{j=1}^{n} ∫_{mj(·)(x)}^{vj(·)(x)} dF(t)

8S1C1.2. Conditions: n = 1, θi = 1, µ1(·)(x) = +∞ and ℓ1(·)(x) = −∞.
Constructor: H_{G1,…,Gm}(x) = 1 − ∏_{i=1}^{k} (νi(·)(x))^{αi} ∫_{m1(·)(x)}^{v1(·)(x)} dF(t)

9S1C1.2. Conditions: n = 1, k = 1, α1 = 0, µ1(·)(x) = +∞ and ℓ1(·)(x) = −∞.
Constructor: H_{G1,…,Gm}(x) = 1 − ∫_{m1(·)(x)}^{v1(·)(x)} dF(t)

10S1C1.2. Conditions: n = 1, k = 1, α1 = 0, µ1(·)(x) = +∞, ℓ1(·)(x) = −∞ and f(t) = 1/[v1(0,…,0) − m1(0,…,0)], for t in [m1(0,…,0), v1(0,…,0)].
Constructor: H_{G1,…,Gm}(x) = 1 − [v1(·)(x) − m1(·)(x)]/[v1(0,…,0) − m1(0,…,0)]

11S1C1.2. Conditions: n = 1, k = 1, α1 = 0, m1(·)(x) = v1(1,…,1), µ1(·)(x) = +∞, ℓ1(·)(x) = −∞ and f(t) = 1/[v1(0,…,0) − v1(1,…,1)], for t in [v1(1,…,1), v1(0,…,0)].
Constructor: H_{G1,…,Gm}(x) = [v1(·)(x) − v1(0,…,0)]/[v1(1,…,1) − v1(0,…,0)]

12S1C1.2. Conditions: n = 1, k = 1, α1 = 0, v1(·)(x) = m1(1,…,1), µ1(·)(x) = +∞, ℓ1(·)(x) = −∞ and f(t) = 1/[m1(0,…,0) − m1(1,…,1)], for t in [m1(0,…,0), m1(1,…,1)].
Constructor: H_{G1,…,Gm}(x) = [m1(·)(x) − m1(0,…,0)]/[m1(1,…,1) − m1(0,…,0)]

13S1C1.2. Conditions: n = 1, k = 1, α1 = 0.
Constructor: H_{G1,…,Gm}(x) = ∫_{ℓ1(·)(x)}^{µ1(·)(x)} dF(t) − ∫_{m1(·)(x)}^{v1(·)(x)} dF(t)

14S1C1.2. Conditions: n = 1, k = 1, α1 = 0 and f(t) = 1/[µ1(1,…,1) − ℓ1(1,…,1) − v1(1,…,1) + m1(1,…,1)], for t in [ℓ1(1,…,1) + v1(1,…,1), m1(1,…,1) + µ1(1,…,1)].
Constructor: H_{G1,…,Gm}(x) = [µ1(·)(x) − ℓ1(·)(x) − v1(·)(x) + m1(·)(x)]/[µ1(1,…,1) − ℓ1(1,…,1) − v1(1,…,1) + m1(1,…,1)]

15S1C1.2. Conditions: µ1(·)(x) = µ((1−γ)u_{k+1}(·)(x) + γ), ℓ1(·)(x) = ℓ((1−γ)u_{k+1}(·)(x) + γ), v1(·)(x) = µ(γν_{k+1}(·)(x)), m1(·)(x) = ℓ(γν_{k+1}(·)(x)), n = 1, αi > 0 and 0 ≤ γ ≤ 1.
Constructor: H_{G1,…,Gm}(x) = ∏_{i=1}^{k} ((1−θi)ui(·)(x) + θi)^{αi} ∫_{ℓ1(·)(x)}^{µ1(·)(x)} dF(t) − ∏_{i=n+1}^{n+k} (θiνi(·)(x))^{αi} ∫_{m1(·)(x)}^{v1(·)(x)} dF(t)

16S1C1.2. Conditions: µ1(·)(x) = µ((1−γ)u_{k+1}(·)(x) + γ), ℓ1(·)(x) = −∞, v1(·)(x) = µ(γν1(·)(x)), m1(·)(x) = −∞, n = 1 and 0 ≤ γ ≤ 1.
Constructor: H_{G1,…,Gm}(x) = ∏_{i=1}^{k} ((1−θi)ui(·)(x) + θi)^{αi} ∫_{−∞}^{µ1(·)(x)} dF(t) − ∏_{i=1}^{k} (θiνi(·)(x))^{αi} ∫_{−∞}^{v1(·)(x)} dF(t)

17S1C1.2. Conditions: µ1(·)(x) = +∞, ℓ1(·)(x) = ℓ((1−γ)u_{k+1}(·)(x) + γ), v1(·)(x) = +∞, m1(·)(x) = ℓ(γν_{k+1}(·)(x)), n = 1 and 0 ≤ γ ≤ 1.
Constructor: H_{G1,…,Gm}(x) = ∏_{i=1}^{k} ((1−θi)ui(·)(x) + θi)^{αi} ∫_{ℓ1(·)(x)}^{+∞} dF(t) − ∏_{i=1}^{k} (θiνi(·)(x))^{αi} ∫_{m1(·)(x)}^{+∞} dF(t)

18S1C1.2. Conditions: µ1(·)(x) = v(γν_{k+1}(·)(x)), ℓ1(·)(x) = m(γν_{k+1}(·)(x)), v1(·)(x) = v((1−γ)u_{k+1}(·)(x) + γ), m1(·)(x) = m((1−γ)u_{k+1}(·)(x) + γ), n = 1, αi > 0 and 0 ≤ γ ≤ 1.
Constructor: H_{G1,…,Gm}(x) = ∏_{i=1}^{k} ((1−θi)ui(·)(x) + θi)^{αi} ∫_{µ1(·)(x)}^{ℓ1(·)(x)} dF(t) − ∏_{i=n+1}^{n+k} (θiνi(·)(x))^{αi} ∫_{m1(·)(x)}^{v1(·)(x)} dF(t)

19S1C1.2. Conditions: µ1(·)(x) = v(γν_{k+1}(·)(x)), ℓ1(·)(x) = −∞, v1(·)(x) = v((1−γ)u_{k+1}(·)(x) + γ), m1(·)(x) = −∞, n = 1 and 0 ≤ γ ≤ 1.
Constructor: H_{G1,…,Gm}(x) = ∏_{i=1}^{k} ((1−θi)ui(·)(x) + θi)^{αi} ∫_{−∞}^{µ1(·)(x)} dF(t) − ∏_{i=1}^{k} (θiνi(·)(x))^{αi} ∫_{m1(·)(x)}^{v1(·)(x)} dF(t)

20S1C1.2. Conditions: µ1(·)(x) = +∞, ℓ1(·)(x) = m(γν_{k+1}(·)(x)), v1(·)(x) = +∞, m1(·)(x) = m((1−γ)u_{k+1}(·)(x) + γ), n = 1 and 0 ≤ γ ≤ 1.
Constructor: H_{G1,…,Gm}(x) = ∏_{i=1}^{k} ((1−θi)ui(·)(x) + θi)^{αi} ∫_{ℓ1(·)(x)}^{+∞} dF(t) − ∏_{i=1}^{k} (θiνi(·)(x))^{αi} ∫_{m1(·)(x)}^{+∞} dF(t)

21S1C1.2. Conditions: n = 1.
Constructor: H_{G1,…,Gm}(x) = ∏_{i=1}^{k} ((1−θi)ui(·)(x) + θi)^{αi} ∫_{ℓ1(·)(x)}^{µ1(·)(x)} dF(t) − ∏_{i=1}^{k} (θiνi(·)(x))^{αi} ∫_{m1(·)(x)}^{v1(·)(x)} dF(t)

22S1C1.2. Conditions: µ1(·)(x) = +∞, ℓ1(·)(x) = −∞, v1(·)(x) = +∞, m1(·)(x) = −∞, n = 1 and αi > 0.
Constructor: H_{G1,…,Gm}(x) = ∏_{i=1}^{k} ((1−θi)ui(·)(x) + θi)^{αi} − ∏_{i=1}^{k} (θiνi(·)(x))^{αi}
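As one hedged illustration of how Table 2 recovers known classes, consider the following worked choice of ours (not spelled out in the paper): in 22S1C1.2 take k = 1, θ1 = 1, α1 = b, u1(G) = G and ν1(G) = 1 − G^a, which satisfy the endpoint conditions of 1C1.2. Then the first product equals 1 and H(x) = 1 − (1 − G^a(x))^b, the Kumaraswamy-G class of Cordeiro and de Castro (2011). The snippet checks the identity numerically with G = Φ.

```python
import math

def phi(x):
    # Standard normal cdf, the illustrative baseline G.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def h_22s1c12(x, a, b, theta1=1.0):
    # 22S1C1.2 with k = 1, theta1 = 1, alpha1 = b,
    # u1(G) = G and nu1(G) = 1 - G**a (monotonic, with the right endpoints).
    g = phi(x)
    u1, nu1 = g, 1.0 - g ** a
    return ((1.0 - theta1) * u1 + theta1) ** b - (theta1 * nu1) ** b

def kumaraswamy_g(x, a, b):
    # Kumaraswamy-G cdf from Table 1, for comparison.
    return 1.0 - (1.0 - phi(x) ** a) ** b

for x in [-2.0, -0.5, 0.0, 0.7, 3.0]:
    assert abs(h_22s1c12(x, 2.0, 3.0) - kumaraswamy_g(x, 2.0, 3.0)) < 1e-12
```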
Corollary 1.3 shows an alternative method to obtain classes of probability distributions from Corollary 1.1. It shows what hypotheses u, ϑ, µj, ℓj, vj and mj must satisfy so that the functions 0(x), W(x), Uj(x), Lj(x), Mj(x) and Vj(x) satisfy the conditions of Corollary 1.1 and classes of probability distributions can be obtained.
1.3. Corollary (C1.3). Complementary method to generate classes of probability distributions.
Let φ: ℝ → ℝ, µj: [0,1]^m → ℝ ∪ {±∞}, ℓj: [0,1]^m → ℝ ∪ {±∞}, u: [0,1]^m → ℝ, vj: [0,1]^m → ℝ ∪ {±∞}, mj: [0,1]^m → ℝ ∪ {±∞} and ϑ: [0,1]^m → ℝ, ∀ j = 1, 2, 3, …, η, be monotonic and right-continuous functions such that:

[cd1] φ is a cdf, and u and ϑ are non-negative;

[cd2] µj, mj and u are non-decreasing and ℓj, vj and ϑ are non-increasing, ∀ j = 1, 2, 3, …, η, in all of their variables;

[cd3] If u(1, …, 1) = ϑ(1, …, 1), then ϑ(1, …, 1) = 0 or mj(1, …, 1) = vj(1, …, 1), ∀ j = 1, 2, 3, …, η, and u(1, …, 1) = 0 or ℓj(1, …, 1) = µj(1, …, 1), ∀ j = 1, 2, 3, …, η;

[cd4] If u(1, …, 1) = ϑ(1, …, 1) = 0, then µj(1, …, 1) = vj(1, …, 1) and mj(1, …, 1) = ℓj(1, …, 1), ∀ j = 1, 2, 3, …, η;

[cd5] ℓj(0, …, 0) ≤ µj(0, …, 0) and, if ϑ(1, …, 1) = 0, then mj(1, …, 1) ≤ vj(1, …, 1), ∀ j = 1, 2, 3, …, η;

[cd6] vη(0, …, 0) ≥ sup{x ∈ ℝ : φ(x) < 1} and m1(0, …, 0) ≤ inf{x ∈ ℝ : φ(x) > 0};

[cd7] ϑ(0, …, 0) = 1;

[cd8] u(0, …, 0) = 0 or ℓj(0, …, 0) = µj(0, …, 0), ∀ j = 1, 2, 3, …, η and η ≥ 1;

[cd9] vj(0, …, 0) = m_{j+1}(0, …, 0), ∀ j = 1, 2, 3, …, η − 1 and η ≥ 2;

[cd10] φ is a cdf without points of discontinuity, or the functions ℓj(·)(x) and vj(·)(x) are constant to the right of the neighborhood of points whose image are points of discontinuity of φ, being also continuous at those points. Moreover, φ does not have any point of discontinuity in the set {ℓj(0, …, 0), µj(0, …, 0), mj(0, …, 0), vj(0, …, 0), ℓj(1, …, 1), µj(1, …, 1), mj(1, …, 1), vj(1, …, 1), for some j = 1, 2, …, η}.

Then,

H_{G1,…,Gm}(x) = 1 − ϑ(·)(x) Σ_{j=1}^{η} ∫_{mj(·)(x)}^{vj(·)(x)} dφ(t) + u(·)(x) Σ_{j=1}^{η} ∫_{ℓj(·)(x)}^{µj(·)(x)} dφ(t)

is a functional generator of classes of probability distributions, where (·)(x) = (G1, …, Gm)(x).

Proof. In Corollary 1.1, set 0(x) = u(·)(x), W(x) = ϑ(·)(x), Uj(x) = µj(·)(x), Lj(x) = ℓj(·)(x), Mj(x) = mj(·)(x) and Vj(x) = vj(·)(x), and observe that condition [cdi] implies condition [cci] of Corollary 1.1, for i = 1, 2, …, 10. □
Let us now consider a special case of Corollary 1.3 that is a functional constructor of classes ofprobability distributions that can be easily used.
1st special case of Corollary 1.3 (1C1.3). Easy-to-use complementary method for the construction of classes of probability distributions.
Let ui: [0,1]^m → [0,1] and νi: [0,1]^m → [0,1] be monotonic and right-continuous functions such that the ui's are non-decreasing and the νi's are non-increasing in each one of their variables, with ui(0, …, 0) = 0, ui(1, …, 1) = 1, νi(0, …, 0) = 1 and νi(1, …, 1) = 0, for all i = 1, …, k. If, in Corollary 1.3, ϑ(·)(x) = ∏_{i=1}^{k} ((1−θi)νi(·)(x) + θi)^{αi} and u(·)(x) = ∏_{i=1}^{k} (θiui(·)(x))^{αi}, with αi ≥ 0 and 0 ≤ θi ≤ 1, then

H_{G1,…,Gm}(x) = 1 − ∏_{i=1}^{k} ((1−θi)νi(·)(x) + θi)^{αi} Σ_{j=1}^{η} ∫_{mj(·)(x)}^{vj(·)(x)} dφ(t) + ∏_{i=1}^{k} (θiui(·)(x))^{αi} Σ_{j=1}^{η} ∫_{ℓj(·)(x)}^{µj(·)(x)} dφ(t)    (3.2)

is a functional generator of classes of probability distributions, where (·)(x) = (G1, …, Gm)(x).
Table 3 shows how to obtain some special cases of the function given by Equation (3.2) that may be more easily used to generate classes of distributions. It is important to emphasize that we can obtain the same constructors 1S1C1.2 to 12S1C1.2 using 1C1.3; we omit the details here, showing only how to obtain constructors different from those of Table 2. Consider the following functions in the expressions from 15S1C1.3 to 20S1C1.3: µ: [0,1] → ℝ ∪ {±∞}, ℓ: [0,1] → ℝ ∪ {±∞}, v: [0,1] → ℝ ∪ {±∞} and m: [0,1] → ℝ ∪ {±∞}, such that µ and m are non-decreasing and right-continuous, and v and ℓ are non-increasing and right-continuous.
Table 3. Some functional constructors of classes of probability distributions obtained from 1C1.3 (sub-case; special conditions over the monotonic functions and parameters; constructor functions obtained)

13S1C1.3. Conditions: η = 1, k = 1, α1 = 0.
Constructor: H_{G1,…,Gm}(x) = 1 − ∫_{m1(·)(x)}^{v1(·)(x)} dφ(t) + ∫_{ℓ1(·)(x)}^{µ1(·)(x)} dφ(t)

14S1C1.3. Conditions: η = 1, k = 1, α1 = 0 and f(t) = 1/[v1(0,…,0) − m1(0,…,0) − µ1(0,…,0) + ℓ1(0,…,0)], for t in [m1(0,…,0) + µ1(0,…,0), ℓ1(0,…,0) + v1(0,…,0)].
Constructor: H_{G1,…,Gm}(x) = 1 − [v1(·)(x) − m1(·)(x) − µ1(·)(x) + ℓ1(·)(x)]/[v1(0,…,0) − m1(0,…,0) − µ1(0,…,0) + ℓ1(0,…,0)]

15S1C1.3. Conditions: µ1(·)(x) = µ(γu_{k+1}(·)(x)), ℓ1(·)(x) = ℓ(γu_{k+1}(·)(x)), v1(·)(x) = µ((1−γ)ν_{k+1}(·)(x) + γ), m1(·)(x) = ℓ((1−γ)ν_{k+1}(·)(x) + γ), η = 1 and 0 ≤ γ ≤ 1.
Constructor: H_{G1,…,Gm}(x) = 1 − ∏_{i=1}^{k} ((1−θi)νi(·)(x) + θi)^{αi} ∫_{m1(·)(x)}^{v1(·)(x)} dφ(t) + ∏_{i=1}^{k} (θiui(·)(x))^{αi} ∫_{ℓ1(·)(x)}^{µ1(·)(x)} dφ(t)

16S1C1.3. Conditions: µ1(·)(x) = µ(γu_{k+1}(·)(x)), ℓ1(·)(x) = −∞, v1(·)(x) = µ((1−γ)ν_{k+1}(·)(x) + γ), m1(·)(x) = −∞, η = 1 and 0 ≤ γ ≤ 1.
Constructor: H_{G1,…,Gm}(x) = 1 − ∏_{i=1}^{k} ((1−θi)νi(·)(x) + θi)^{αi} ∫_{−∞}^{v1(·)(x)} dφ(t) + ∏_{i=1}^{k} (θiui(·)(x))^{αi} ∫_{−∞}^{µ1(·)(x)} dφ(t)

17S1C1.3. Conditions: µ1(·)(x) = +∞, ℓ1(·)(x) = ℓ(γu_{k+1}(·)(x)), v1(·)(x) = +∞, m1(·)(x) = ℓ((1−γ)ν_{k+1}(·)(x) + γ), η = 1 and 0 ≤ γ ≤ 1.
Constructor: H_{G1,…,Gm}(x) = 1 − ∏_{i=1}^{k} ((1−θi)νi(·)(x) + θi)^{αi} ∫_{m1(·)(x)}^{+∞} dφ(t) + ∏_{i=1}^{k} (θiui(·)(x))^{αi} ∫_{ℓ1(·)(x)}^{+∞} dφ(t)

18S1C1.3. Conditions: µ1(·)(x) = m(γu_{k+1}(·)(x)), ℓ1(·)(x) = v(γu_{k+1}(·)(x)), v1(·)(x) = m((1−γ)ν_{k+1}(·)(x) + γ), m1(·)(x) = v((1−γ)ν_{k+1}(·)(x) + γ), η = 1 and 0 ≤ γ ≤ 1.
Constructor: H_{G1,…,Gm}(x) = 1 − ∏_{i=1}^{k} ((1−θi)νi(·)(x) + θi)^{αi} ∫_{m1(·)(x)}^{v1(·)(x)} dφ(t) + ∏_{i=1}^{k} (θiui(·)(x))^{αi} ∫_{ℓ1(·)(x)}^{µ1(·)(x)} dφ(t)

19S1C1.3. Conditions: µ1(·)(x) = m(γu_{k+1}(·)(x)), ℓ1(·)(x) = −∞, v1(·)(x) = m((1−γ)ν_{k+1}(·)(x) + γ), m1(·)(x) = −∞, η = 1 and 0 ≤ γ ≤ 1.
Constructor: H_{G1,…,Gm}(x) = 1 − ∏_{i=1}^{k} ((1−θi)νi(·)(x) + θi)^{αi} ∫_{−∞}^{v1(·)(x)} dφ(t) + ∏_{i=1}^{k} (θiui(·)(x))^{αi} ∫_{−∞}^{µ1(·)(x)} dφ(t)

20S1C1.3. Conditions: µ1(·)(x) = +∞, ℓ1(·)(x) = v(γu_{k+1}(·)(x)), v1(·)(x) = +∞, m1(·)(x) = v((1−γ)ν_{k+1}(·)(x) + γ), η = 1 and 0 ≤ γ ≤ 1.
Constructor: H_{G1,…,Gm}(x) = 1 − ∏_{i=1}^{k} ((1−θi)νi(·)(x) + θi)^{αi} ∫_{m1(·)(x)}^{+∞} dφ(t) + ∏_{i=1}^{k} (θiui(·)(x))^{αi} ∫_{ℓ1(·)(x)}^{+∞} dφ(t)

21S1C1.3. Conditions: η = 1.
Constructor: H_{G1,…,Gm}(x) = 1 − ∏_{i=1}^{k} ((1−θi)νi(·)(x) + θi)^{αi} ∫_{m1(·)(x)}^{v1(·)(x)} dφ(t) + ∏_{i=1}^{k} (θiui(·)(x))^{αi} ∫_{ℓ1(·)(x)}^{µ1(·)(x)} dφ(t)

22S1C1.3. Conditions: µ1(·)(x) = +∞, ℓ1(·)(x) = −∞, v1(·)(x) = +∞, m1(·)(x) = −∞, η = 1 and αi > 0.
Constructor: H_{G1,…,Gm}(x) = 1 − ∏_{i=1}^{k} ((1−θi)νi(·)(x) + θi)^{αi} + ∏_{i=1}^{k} (θiui(·)(x))^{αi}
The following theorem shows that Theorem 1 and its corollaries are equivalent. In other words, Theorem 1 and all of its corollaries generate the same probability distributions.

2. Theorem (T2). Equivalence between Theorem 1 and its corollaries.
Theorem 1 and all of its corollaries generate exactly the same probability distributions.
Proof. To demonstrate Theorem 2, we show that C1.1 is a corollary of T1, that C1.2 is a corollary of C1.1, that C1.3 is a corollary of C1.2, and finally that T1 is a corollary of C1.3.
(1) C1.1 is a corollary of T1: immediate, as it has already been demonstrated.
(2) C1.2 is a corollary of C1.1: in Corollary 1.1, set W(x) = 1, η = 1, V1(x) = 1, M1(x) = u(·)(x) ∑_{j=1}^{n} ∫_{ℓj(·)(x)}^{µj(·)(x)} dF(t) − ϑ(·)(x) ∑_{j=1}^{n} ∫_{mj(·)(x)}^{vj(·)(x)} dF(t), 0(x) = 0, ∀x ∈ R, and φ(t) the cdf of the uniform[0,1].
(3) C1.3 is a corollary of C1.2: in Corollary 1.2, set n = 1, u(·)(x) = 1, µ1(·)(x) = 1, ℓ1(·)(x) = 0, ϑ(·)(x) = 1, m1(·)(x) = 0, v1(·)(x) = ϑ(·)(x) ∑_{j=1}^{η} ∫_{mj(·)(x)}^{vj(·)(x)} dφ(t) − u(·)(x) ∑_{j=1}^{η} ∫_{ℓj(·)(x)}^{µj(·)(x)} dφ(t), ∀x ∈ R, and F(t) as the cdf of the uniform[0,1].
(4) T1 is a corollary of C1.3: in Corollary 1.3, set η = 1, u(·)(x) = 1, ϑ(·)(x) = 1, v1(·)(x) = 1, m1(·)(x) = 0, ℓ1(·)(x) = 0, µ1(·)(x) = υ(x) ∑_{j=1}^{n} ∫_{Lj(x)}^{Uj(x)} dF(t) − ν(x) ∑_{j=1}^{n} ∫_{Mj(x)}^{Vj(x)} dF(t), ∀x ∈ R, and φ(t) the cdf of the uniform[0,1].
From (1) to (4), we conclude Theorem 2. □
Several classes of probability distributions existing in the literature can be obtained as special cases of the functional constructors of classes of probability distributions proposed here. Table 4, in Appendix A, shows how to obtain such classes using some corollaries of Theorem 1.
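As a concrete illustration of how a known class emerges from the constructors, the sketch below (not taken from the paper; the standard normal baseline and the parameter b = 2.5 are illustrative assumptions) builds the exponentiated-G class H(x) = G(x)^b by taking the generator cdf F(t) = t^b on [0, 1], i.e. f(t) = b t^(b−1), with upper integration limit µ1(G(x)) = G(x):

```python
import math

def G(x):
    """Illustrative baseline cdf: standard normal, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def F(t, b):
    """Generator cdf on [0, 1] with density f(t) = b * t**(b - 1)."""
    return t ** b

def H(x, b):
    """Generated cdf: integral of dF from 0 to G(x), i.e. F(G(x)) = G(x)**b."""
    return F(G(x), b)

# Sanity check that H behaves like a cdf: non-decreasing, 0 at -inf, 1 at +inf.
xs = [-4 + 0.1 * i for i in range(81)]
vals = [H(x, b=2.5) for x in xs]
assert all(v0 <= v1 for v0, v1 in zip(vals, vals[1:]))
assert vals[0] < 1e-4 and vals[-1] > 1 - 1e-4
```

Changing only the monotonic limit µ1 or the generator F reproduces other rows of Tables 2-4 in the same way.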
4. Support of the classes of probability distributions
In this section, we analyze the support and nature of the probability distributions generated through the methods described in Corollaries 1.2 and 1.3. These results are important to gain a deeper understanding of the proposed method, especially considering that there is little work on this theme in the literature.
In order to state the results, we remind the reader that, by definition, the support of a cumulative distribution function F is given by SF = {x ∈ R : F(x) − F(x − ε) > 0, ∀ε > 0}. Theorem 3 shows that the support of the generated distribution is contained in the union of the supports of the baseline distributions Gi's.
3. Theorem (T3). General theorem of the supports.
Let HG1,...,Gm(x) be a cumulative distribution function generated from Corollary 1.2 (respectively, 1.3). Then,
S_{HG1,...,Gm} ⊂ ∪_{j=1}^{m} S_{Gj}.
Proof. Consider that HG1,...,Gm(x) has the functional form of Corollary 1.2:

HG1,...,Gm(x) = u(·)(x) ∑_{j=1}^{n} ∫_{ℓj(·)(x)}^{µj(·)(x)} dF(t) − ϑ(·)(x) ∑_{j=1}^{n} ∫_{mj(·)(x)}^{vj(·)(x)} dF(t).

Thus, it follows that

HG1,...,Gm(x) − HG1,...,Gm(x − ε) = u(·)(x) ∑_{j=1}^{n} ∫_{ℓj(·)(x)}^{µj(·)(x)} dF(t) − ϑ(·)(x) ∑_{j=1}^{n} ∫_{mj(·)(x)}^{vj(·)(x)} dF(t) − u(·)(x − ε) ∑_{j=1}^{n} ∫_{ℓj(·)(x−ε)}^{µj(·)(x−ε)} dF(t) + ϑ(·)(x − ε) ∑_{j=1}^{n} ∫_{mj(·)(x−ε)}^{vj(·)(x−ε)} dF(t).

Suppose that x ∉ ∪_{j=1}^{m} S_{Gj}. Then, there exists ε > 0 such that Gj(x) − Gj(x − ε) = 0, for all j = 1, 2, . . . , m. Let us show that HG1,...,Gm(x) − HG1,...,Gm(x − ε) = 0.

First, note that

HG1,...,Gm(x) − HG1,...,Gm(x − ε) = (u(·)(x) − u(·)(x − ε)) ∑_{j=1}^{n} ∫_{ℓj(·)(x)}^{µj(·)(x)} dF(t) − (ϑ(·)(x) − ϑ(·)(x − ε)) ∑_{j=1}^{n} ∫_{mj(·)(x)}^{vj(·)(x)} dF(t) + u(·)(x − ε) ∑_{j=1}^{n} { ∫_{µj(·)(x−ε)}^{µj(·)(x)} dF(t) − ∫_{ℓj(·)(x−ε)}^{ℓj(·)(x)} dF(t) } − ϑ(·)(x − ε) ∑_{j=1}^{n} { ∫_{vj(·)(x−ε)}^{vj(·)(x)} dF(t) − ∫_{mj(·)(x−ε)}^{mj(·)(x)} dF(t) }.

Since the monotonic functions depend on x only through the values of G1(x), . . . , Gm(x), and Gj(x) = Gj(x − ε) for all j, we have u(·)(x) = u(·)(x − ε), ϑ(·)(x) = ϑ(·)(x − ε), µj(·)(x) = µj(·)(x − ε), ℓj(·)(x) = ℓj(·)(x − ε), mj(·)(x) = mj(·)(x − ε) and vj(·)(x) = vj(·)(x − ε), so it follows that

HG1,...,Gm(x) − HG1,...,Gm(x − ε) = 0.

Thus, we have x ∉ S_{HG1,...,Gm}. Therefore, S_{HG1,...,Gm} ⊂ ∪_{j=1}^{m} S_{Gj}. A similar argument works for the case where HG1,...,Gm(x) has the functional form of Corollary 1.3. □
Corollary 3.1 shows a special case where the distribution HG1,...,Gm(x) is discrete.

3.1. Corollary (C3.1). Discrete baselines generate discrete distributions.
If all Gj's are discrete in Corollary 1.2 (respectively, 1.3), then HG1,...,Gm(x) is discrete.

Proof. Since all Gj's are discrete, ∪_{j=1}^{m} S_{Gj} has a countable number of values. Since, by Theorem 3, S_{HG1,...,Gm} ⊂ ∪_{j=1}^{m} S_{Gj}, then S_{HG1,...,Gm} has a countable number of values and, for this reason, HG1,...,Gm(x) is a cdf of a discrete random variable. □
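A minimal numeric sketch of this corollary (the Bernoulli(1/2) baseline and the exponent 2 are illustrative assumptions, not from the paper): with a discrete baseline G, the generated cdf H = G^2 assumes only finitely many values, so it is the cdf of a discrete random variable.

```python
def G(x, p=0.5):
    """Discrete baseline: cdf of a Bernoulli(p) variable, with jumps at 0 and 1."""
    if x < 0:
        return 0.0
    if x < 1:
        return 1.0 - p
    return 1.0

def H(x):
    """Generated cdf H = G**2 (sub-case with generator F(t) = t**2 on [0, 1])."""
    return G(x) ** 2

# H assumes only countably many values, hence is a discrete cdf.
values = {H(-1 + 0.01 * i) for i in range(300)}
assert values == {0.0, 0.25, 1.0}
```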
Theorem 4 shows conditions under which S_{HG1,...,Gm} = ∪_{j=1}^{m} S_{Gj}.

4. Theorem (T4). The support of the generated distribution is the union of the supports of the baselines.
Assume, in Corollary 1.2 (respectively, 1.3), that:
[f1] SF is a convex set;
[f2] µn(1, . . . , 1) = sup{x ∈ R : F(x) < 1}, ℓ1(1, . . . , 1) = inf{x ∈ R : F(x) > 0}, u(·)(x) > 0, ∀x ∈ R, and µj(·)(x) or ℓj(·)(x), for some j = 1, 2, . . . , n, are strictly monotonic; or vn(0, . . . , 0) = sup{x ∈ R : F(x) < 1}, m1(0, . . . , 0) = inf{x ∈ R : F(x) > 0}, ϑ(·)(x) > 0, ∀x ∈ R, and vj(·)(x) or mj(·)(x), for some j = 1, 2, . . . , n, are strictly monotonic.
Then,

S_{HG1,...,Gm} = ∪_{j=1}^{m} S_{Gj}.

Proof. As a consequence of Theorem 3, we only need to show that

S_{HG1,...,Gm} ⊃ ∪_{j=1}^{m} S_{Gj}.
Consider that HG1,...,Gm(x) has the functional form of Corollary 1.2:

HG1,...,Gm(x) = u(·)(x) ∑_{j=1}^{n} ∫_{ℓj(·)(x)}^{µj(·)(x)} dF(t) − ϑ(·)(x) ∑_{j=1}^{n} ∫_{mj(·)(x)}^{vj(·)(x)} dF(t).

Thus, using an argument identical to that of the proof of Theorem 3, we have

HG1,...,Gm(x) − HG1,...,Gm(x − ε) = (u(·)(x) − u(·)(x − ε)) ∑_{j=1}^{n} ∫_{ℓj(·)(x)}^{µj(·)(x)} dF(t) − (ϑ(·)(x) − ϑ(·)(x − ε)) ∑_{j=1}^{n} ∫_{mj(·)(x)}^{vj(·)(x)} dF(t) + u(·)(x − ε) ∑_{j=1}^{n} { ∫_{µj(·)(x−ε)}^{µj(·)(x)} dF(t) − ∫_{ℓj(·)(x−ε)}^{ℓj(·)(x)} dF(t) } − ϑ(·)(x − ε) ∑_{j=1}^{n} { ∫_{vj(·)(x−ε)}^{vj(·)(x)} dF(t) − ∫_{mj(·)(x−ε)}^{mj(·)(x)} dF(t) }.

Suppose that x ∈ ∪_{j=1}^{m} S_{Gj}. Then, for every ε > 0, Gj(x) − Gj(x − ε) > 0 for some j = 1, 2, . . . , m. Conditions [f1] and [f2] imply that at least one of the integrals of the form ∫_{h(·)(x−ε)}^{h(·)(x)} dF(t) is different from zero, for h = µj, h = ℓj, h = vj or h = mj, for some j = 1, 2, . . . , n. This fact, together with the strict positivity of u or ϑ required in [f2], implies that HG1,...,Gm(x) − HG1,...,Gm(x − ε) > 0. Thus, x ∈ S_{HG1,...,Gm}, as desired. A similar argument works for the case where HG1,...,Gm(x) has the functional form of Corollary 1.3. □
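To see the equality of supports numerically in a simple sub-case (a sketch under illustrative assumptions: standard normal baseline, strictly increasing limit µ1(G(x)) = G(x), generator F(t) = t^3), every support point of G remains a support point of H:

```python
import math

def G(x):
    """Baseline cdf with support R: standard normal."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def H(x, b=3.0):
    # Upper integration limit mu1(G(x)) = G(x) is strictly increasing in G,
    # and the generator cdf F(t) = t**b is strictly increasing on [0, 1].
    return G(x) ** b

# Every point of S_G = R remains a support point of H: increments stay positive.
eps = 1e-6
for x in [-3.0, -1.0, 0.0, 2.0]:
    assert H(x) - H(x - eps) > 0.0
```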
Theorem 5 shows some conditions that guarantee that HG1,...,Gm(x) is a continuous cdf.

5. Theorem (T5). Generating continuous cumulative distribution functions.
Suppose that F, G1, . . . , Gm are continuous cdfs and that µj, ℓj, u, vj, mj and ϑ are continuous functions in Corollary 1.2 (respectively, 1.3). Then, HG1,...,Gm(x) is a continuous cdf.
Proof. Consider that HG1,...,Gm(x) has the functional form of Corollary 1.2:

HG1,...,Gm(x) = u(·)(x) ∑_{j=1}^{n} ∫_{ℓj(·)(x)}^{µj(·)(x)} dF(t) − ϑ(·)(x) ∑_{j=1}^{n} ∫_{mj(·)(x)}^{vj(·)(x)} dF(t).

Thus, it follows that

HG1,...,Gm(x) − HG1,...,Gm(x−) = u(·)(x) ∑_{j=1}^{n} ∫_{ℓj(·)(x)}^{µj(·)(x)} dF(t) − ϑ(·)(x) ∑_{j=1}^{n} ∫_{mj(·)(x)}^{vj(·)(x)} dF(t) − u(·)(x−) ∑_{j=1}^{n} ∫_{ℓj(·)(x−)}^{µj(·)(x−)} dF(t) + ϑ(·)(x−) ∑_{j=1}^{n} ∫_{mj(·)(x−)}^{vj(·)(x−)} dF(t).

Using a similar approach as that used in the proof of Theorem 3, we obtain:

HG1,...,Gm(x) − HG1,...,Gm(x−) = (u(·)(x) − u(·)(x−)) ∑_{j=1}^{n} ∫_{ℓj(·)(x)}^{µj(·)(x)} dF(t) − (ϑ(·)(x) − ϑ(·)(x−)) ∑_{j=1}^{n} ∫_{mj(·)(x)}^{vj(·)(x)} dF(t) + u(·)(x−) ∑_{j=1}^{n} { ∫_{µj(·)(x−)}^{µj(·)(x)} dF(t) − ∫_{ℓj(·)(x−)}^{ℓj(·)(x)} dF(t) } − ϑ(·)(x−) ∑_{j=1}^{n} { ∫_{vj(·)(x−)}^{vj(·)(x)} dF(t) − ∫_{mj(·)(x−)}^{mj(·)(x)} dF(t) }.

Since all functions included in the previous expression are continuous, we have:

HG1,...,Gm(x) − HG1,...,Gm(x−) = 0.

Therefore, we conclude that HG1,...,Gm(x) is a continuous function. A similar argument works for the case where HG1,...,Gm(x) has the functional form of Corollary 1.3. □
Theorem 6 gives conditions under which HG1,...,Gm(x) is the cdf of a continuous random variable.

6. Theorem (T6). Generating cumulative distribution functions of continuous random variables.
Suppose that F, G1, . . . , Gm are cdfs of continuous random variables and that µj, ℓj, u, vj, mj and ϑ are continuous and differentiable functions in Corollary 1.2 (respectively, 1.3). Then, HG1,...,Gm(x) is a cdf of a continuous random variable.
Proof. Consider that HG1,...,Gm(x) has the functional form of Corollary 1.2:

HG1,...,Gm(x) = u(·)(x) ∑_{j=1}^{n} ∫_{ℓj(·)(x)}^{µj(·)(x)} dF(t) − ϑ(·)(x) ∑_{j=1}^{n} ∫_{mj(·)(x)}^{vj(·)(x)} dF(t).

Since F, G1, . . . , Gm are cdfs of continuous random variables and µj, ℓj, u, vj, mj and ϑ are continuous and differentiable functions, HG1,...,Gm(x) is a cdf of a continuous random variable with density given by

hG1,...,Gm(x) = ( ∑_{z=1}^{m} ∂u(·)(x)/∂Gz · gz(x) ) ( ∑_{j=1}^{n} ∫_{ℓj(·)(x)}^{µj(·)(x)} dF(t) ) + u(·)(x) ∑_{j=1}^{n} { f(µj(·)(x)) ∑_{z=1}^{m} ∂µj(·)(x)/∂Gz · gz(x) − f(ℓj(·)(x)) ∑_{z=1}^{m} ∂ℓj(·)(x)/∂Gz · gz(x) } − ( ∑_{z=1}^{m} ∂ϑ(·)(x)/∂Gz · gz(x) ) ( ∑_{j=1}^{n} ∫_{mj(·)(x)}^{vj(·)(x)} dF(t) ) − ϑ(·)(x) ∑_{j=1}^{n} { f(vj(·)(x)) ∑_{z=1}^{m} ∂vj(·)(x)/∂Gz · gz(x) − f(mj(·)(x)) ∑_{z=1}^{m} ∂mj(·)(x)/∂Gz · gz(x) },

where f is the density of F, gz is the density of Gz, and (·)(x) denotes (G1(x), . . . , Gm(x)). A similar argument works for the case where HG1,...,Gm(x) has the functional form of Corollary 1.3. □
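The chain-rule density can be checked numerically in a simple sub-case. In the sketch below (illustrative assumptions: F the uniform[0,1] cdf so f = 1 on [0, 1], a single upper limit µ1(G(x)) = G(x)^2, standard normal baseline), the closed-form density 2 G(x) g(x) matches a central finite difference of H:

```python
import math

def G(x):
    """Standard normal cdf (illustrative baseline)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def g(x):
    """Standard normal density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def H(x):
    # Sub-case: F = uniform[0,1] cdf, upper limit mu1(G(x)) = G(x)**2,
    # so H(x) = F(mu1) = G(x)**2.
    return G(x) ** 2

def h(x):
    # Chain-rule density: f(mu1) * d(mu1)/dG * g(x), with f = 1 on [0, 1].
    return 2.0 * G(x) * g(x)

# The closed-form density matches a central finite difference of H.
for x in [-1.5, 0.0, 0.7]:
    num = (H(x + 1e-5) - H(x - 1e-5)) / 2e-5
    assert abs(num - h(x)) < 1e-6
```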
The next theorem shows an alternative way of generating discrete distributions.

7. Theorem (T7). Integrals with respect to discrete distributions generate discrete distributions.
Suppose that the probability distribution F is discrete and that u(·)(x) = ϑ(·)(x) = 1 in Corollary 1.2 (respectively, 1.3). Then, HG1,...,Gm(x) is discrete, regardless of the monotonic functions that are used as limits of integration.
Proof. We can write

HG1,...,Gm(x) = ∑_{j=1}^{n} ∫_{ℓj(·)(x)}^{µj(·)(x)} dF(t) − ∑_{j=1}^{n} ∫_{mj(·)(x)}^{vj(·)(x)} dF(t) = ∑_{j=1}^{n} [F(µj(·)(x)) − F(ℓj(·)(x))] − ∑_{j=1}^{n} [F(vj(·)(x)) − F(mj(·)(x))].

Since F is a cdf of a discrete random variable, F assumes a countable number of different values. Thus, as HG1,...,Gm(x) is given by a sum of differences of F evaluated at at most 4n distinct points, it can also assume only a countable number of different values and, therefore, it is a cdf of a discrete random variable. □
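A small numeric sketch of this mechanism (the discrete uniform generator on {1, 2, 3}, the normal baseline and the limit µ1(G(x)) = 4 G(x) are illustrative assumptions): even though the baseline G is continuous, composing a discrete F with monotonic limits yields a cdf taking at most countably many values.

```python
import math

def G(x):
    """Continuous baseline: standard normal cdf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def F(t):
    """Discrete generator cdf: uniform on {1, 2, 3}."""
    if t < 1:
        return 0.0
    if t < 2:
        return 1.0 / 3.0
    if t < 3:
        return 2.0 / 3.0
    return 1.0

def H(x):
    # Integral of dF from 0 to mu1(G(x)), with mu1(G(x)) = 4 * G(x):
    return F(4.0 * G(x)) - F(0.0)

# Although G is continuous, H assumes at most four distinct values.
values = {H(-4 + 0.01 * i) for i in range(801)}
assert values <= {0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0}
```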
5. Conclusions
The method to generate distributions and classes of probability distributions presented in this paper unifies several methods to generate classes of distributions that have already been described in the literature. Through this unification, we could draw conclusions about the supports of the generated classes. Using the proposed method, we can generate any probability distribution in different ways; one need only modify the monotonic functions involved in the method.
As a further step, we aim to explore several classes of distributions that may be generated using this method, developing their properties and applying them to model several datasets. In a parallel work, we are proposing a method for generating multivariate distributions.
It is important to stress that a model can better describe a phenomenon by increasing its number of parameters, providing higher flexibility. On the other hand, we should not forget that increasing the number of parameters may cause identifiability and computational problems in the estimation of the parameters. Moreover, a large number of parameters increases the chance of overfitting, which is a problem particularly in forecasting and prediction studies. Thus, the best approach is to choose a method that best describes the analyzed phenomenon or experiment with the lowest possible number of parameters.
Appendix A. How to obtain generalizations of already existing class models
In this appendix we show some applications of the functions generating classes of probability distributions, recovering probability distribution classes that have already been described in the literature.

Table 4 shows how to obtain classes of probability distributions already existing in the literature by the use of some corollaries of Theorem 1.
Table 4. Generalizations of already existing classes of probability distributions. Each entry lists the sub-case, the generator density f(t) used (a dash when no specific f(t) is required), the monotonic functions, the values of the parameters and the class obtained.

3S1C1.2; f(t) = t^{a−1}(1 − t)^{b−1}/B(a, b); ℓ1(·)(x) = θ ∏_{i=1}^{m} (1 − Gi^{αi}(x))^{δi}, µ1(·)(x) = (1 − θ) ∏_{j=1}^{m} Gj^{βj}(x) + θ. With θ = 0, m = 1 and β1 = 1: beta-G type 1 defined by Eugene et al. (2002); with θ = 0 and m = 1: Mc-G type 1 defined by McDonald (1984).

9S1C1.2; f(t) = t^{a−1}(1 − t)^{b−1}/B(a, b); m1(·)(x) = θ ∏_{j=1}^{m} Gj^{βj}(x), v1(·)(x) = (1 − θ) ∏_{i=1}^{m} (1 − Gi^{αi}(x))^{δi} + θ. With θ = 0, m = 1 and β1 = 1: beta-G type 1 (Eugene et al., 2002); with θ = 0 and m = 1: Mc-G type 1 (McDonald, 1984).

3S1C1.2; f(t) = b t^{b−1}; ℓ1(·)(x) = θ ∏_{i=1}^{m} (1 − Gi^{αi}(x))^{δi}, µ1(·)(x) = (1 − θ) ∏_{j=1}^{m} Gj^{βj}(x) + θ. With θ = 0, m = 1 and β1 = 1: exponentiated generalized defined by Mudholkar et al. (1995).

9S1C1.2; f(t) = b t^{b−1}; m1(·)(x) = θ ∏_{j=1}^{m} Gj^{βj}(x), v1(·)(x) = (1 − θ) ∏_{i=1}^{m} (1 − Gi^{αi}(x))^{δi} + θ. With θ = 0, m = 1 and β1 = 1: exponentiated generalized (Mudholkar et al., 1995).

5S1C1.2; ———; µ1(·)(x) = ∏_{i=1}^{m} (bi + Gi^{αi}(x))^{βi}. With θ = 0 and β = 1: exponentiated generalized (Mudholkar et al., 1995).

3S1C1.2; f(t) = a b t^{a−1}(1 − t^{a})^{b−1}; ℓ1(·)(x) = θ ∏_{i=1}^{m} (1 − Gi^{αi}(x))^{δi}, µ1(·)(x) = (1 − θ) ∏_{j=1}^{m} Gj^{βj}(x) + θ. With θ = 0, m = 1 and β1 = 1: Kumaraswamy-G defined by Cordeiro and de Castro (2011).

6S1C1.2; ———; µ1(·)(x) = ∏_{i=1}^{m} (bi − Gi^{αi}(x))^{βi}. With m = 1, b1 = 1, β1 = β and α1 = α: Kumaraswamy-G (Cordeiro and de Castro, 2011).

3S1C1.2; f(t) = t^{a−1}(1 − t)^{b−1}/[B(a, b)(1 + t)^{a+b}]; ℓ1(·)(x) = θ ∏_{i=1}^{m} (1 − Gi^{αi}(x))^{δi}, µ1(·)(x) = (1 − θ) ∏_{j=1}^{m} Gj^{βj}(x) + θ. With θ = 0, m = 1 and β1 = 1: beta-G type 3 defined by Tahir and Nadarajah (2015) [22]; with θ = 0 and m = 1: Mc-G type 3 (Tahir and Nadarajah, 2015).

9S1C1.2; f(t) = t^{a−1}(1 − t)^{b−1}/[B(a, b)(1 + t)^{a+b}]; m1(·)(x) = θ ∏_{j=1}^{m} Gj^{βj}(x), v1(·)(x) = (1 − θ) ∏_{i=1}^{m} (1 − Gi^{αi}(x))^{δi} + θ. With θ = 0, m = 1 and β1 = 1: beta-G type 1 (Eugene et al., 2002); with θ = 0 and m = 1: Mc-G type 3 (Tahir and Nadarajah, 2015).

3S1C1.2; f(t) = t^{a−1}(1 − t)^{b−1} exp(−ct)/B(a, b); ℓ1(·)(x) = θ ∏_{i=1}^{m} (1 − Gi^{αi}(x))^{δi}, µ1(·)(x) = (1 − θ) ∏_{j=1}^{m} Gj^{βj}(x) + θ. With θ = 0, m = 1 and β1 = 1: Kummer beta generalized defined by Pescim et al. (2012).

9S1C1.2; f(t) = t^{a−1}(1 − t)^{b−1} exp(−ct)/B(a, b); m1(·)(x) = θ ∏_{j=1}^{m} Gj^{βj}(x), v1(·)(x) = (1 − θ) ∏_{i=1}^{m} (1 − Gi^{αi}(x))^{δi} + θ. With θ = 1, m = 1 and β1 = 1: Kummer beta generalized (Pescim et al., 2012).

3S1C1.2; f(t) = (b^{a}/Γ(a)) t^{a−1} e^{−bt}; ℓ1(·)(x) = θ ∏_{j=1}^{m} (1 − Gj^{βj}(x))^{γj}, µ1(·)(x) = θ + {[−log(∏_{j=1}^{m} (1 − Gj^{βj}(x))^{δ1})]^{λ}}^{r}. With θ = 0, m = 1, α1 = 1, λ = 1, r = 1 and δ1 = 1: gamma-G defined by Zografos and Balakrishnan (2009).

3S1C1.2; f(t) = (b^{a}/Γ(a)) t^{a−1} e^{−bt}; ℓ1(·)(x) = ρ ∏_{j=1}^{m} (1 − Gj^{ωj}(x))^{sj}, µ1(·)(x) = ρ − log(1 − ∏_{l=1}^{m} Gl^{λl}(x)). With ρ = 0, m = 1, α = 0 and λ1 = 1: gamma-G (Zografos and Balakrishnan, 2009).

3S1C1.2; f(t) = (b^{a}/Γ(a)) t^{a−1} e^{−bt}; ℓ1(·)(x) = ρ ∏_{j=1}^{m} (1 − Gj^{λj}(x))^{sj}, µ1(·)(x) = ρ + {[−log(∏_{i=1}^{m} (1 − Gi^{αi}(x))^{ωi})]^{λ}}^{r}. With m = 1, α1 = 1, ρ = 0, ω1 = 1, λ = 1 and r = 1: gamma-G (Zografos and Balakrishnan, 2009).

9S1C1.2; f(t) = (b^{a}/Γ(a)) t^{a−1} e^{−bt}; m1(·)(x) = ρ ∏_{j=1}^{m} (1 − Gj^{ωj}(x))^{sj}, v1(·)(x) = ρ − log(1 − ∏_{l=1}^{m} Gl^{λl}(x)). With ρ = 0, m = 1 and λ1 = 1: gamma-G (Zografos and Balakrishnan, 2009).

9S1C1.2; f(t) = (b^{a}/Γ(a)) t^{a−1} e^{−bt}; m1(·)(x) = ρ ∏_{j=1}^{m} (1 − Gj^{λj}(x))^{sj}, v1(·)(x) = ρ + {[−log(∏_{i=1}^{m} (1 − Gi^{αi}(x))^{βi})]^{λ}}^{r}. With m = 1, α1 = 0, ρ = 0 and β1 = 1: gamma-G (Zografos and Balakrishnan, 2009).

9S1C1.2; f(t) = (b^{a}/Γ(a)) t^{a−1} e^{−bt}; m1(·)(x) = θ ∏_{j=1}^{m} (1 − Gj^{βj}(x))^{γj}, v1(·)(x) = θ + [−log(∏_{i=1}^{m} Gi^{αi}(x))]^{δ}. With θ = 0, m = 1, α1 = 1 and δ = 1: gamma-G defined by Cordeiro et al. (2017).

9S1C1.2; f(t) = (b^{a}/Γ(a)) t^{a−1} e^{−bt}; m1(·)(x) = θ(1 − ∏_{j=1}^{m} Gj^{αj}(x))^{λ}, v1(·)(x) = θ + {[−log(1 − ∏_{i=1}^{m} (1 − Gi^{βi}(x))^{γi})]^{r}}^{s}. With θ = 0, m = 1, β1 = 1, γ1 = 1, r = 1 and s = 1: gamma-G (Cordeiro et al., 2017).

3S1C1.2; f(t) = (b^{a}/Γ(a)) t^{a−1} e^{−bt}; ℓ1(·)(x) = ρ ∏_{i=1}^{m} Gi^{ωi}(x), µ1(·)(x) = ρ − log(∏_{l=1}^{m} Gl^{λl}(x)). With ρ = 0, m = 1 and λ1 = 1: gamma-G (Cordeiro et al., 2017).

3S1C1.2; f(t) = (b^{a}/Γ(a)) t^{a−1} e^{−bt}; ℓ1(·)(x) = ρ ∏_{i=1}^{m} Gi^{λi}(x), µ1(·)(x) = ρ + {[−log(1 − ∏_{i=1}^{m} (1 − Gi^{αi}(x))^{ωi})]^{λ}}^{r}. With m = 1, ρ = 0, α1 = 1, δ1 = 1, λ = 1 and r = 1: gamma-G (Cordeiro et al., 2017).

9S1C1.2; f(t) = (b^{a}/Γ(a)) t^{a−1} e^{−bt}; m1(·)(x) = ρ ∏_{i=1}^{m} Gi^{ωi}(x), v1(·)(x) = ρ − log(∏_{l=1}^{m} Gl^{λl}(x)). With ρ = 0, m = 1, α = 0 and λ1 = 1: gamma-G (Cordeiro et al., 2017).

9S1C1.2; f(t) = (b^{a}/Γ(a)) t^{a−1} e^{−bt}; m1(·)(x) = ρ ∏_{i=1}^{m} Gi^{λi}(x), v1(·)(x) = ρ + {[−log(1 − ∏_{i=1}^{m} (1 − Gi^{αi}(x))^{ωi})]^{λ}}^{r}. With m = 1, ρ = 0, α1 = 1, ω1 = 1, λ = 1 and r = 1: gamma-G (Cordeiro et al., 2017).

12S1C1.2; ———; m1(·)(x) = {b(1 − G2^{β}(x))^{γ}/[G1^{α}(x) + b(1 − G2^{β}(x))^{γ}]}^{θ}. With G1(x) = G2(x), α = 1, β = 1 and θ = 1: Marshall-Olkin-G defined by Marshall and Olkin (1997); with G1(x) = G2(x), α = 1 and β = 1: Marshall-Olkin-G defined by Jayakumar and Mathew (2008).

5S1C1.2; ———; µ1(·)(x) = {G1^{α}(x)/[G1^{α}(x) + b(1 − G2^{β}(x))^{γ}]}^{θ}. With G1(x) = G2(x), α = 1, β = 1 and θ = 1: Marshall-Olkin-G (Marshall and Olkin, 1997); with G1(x) = G2(x), α = 1 and β = 1: Marshall-Olkin-G defined by Tahir and Nadarajah (2015).

12S1C1.2; ———; m1(·)(x) = exp(−λ ∏_{l=1}^{m} Gl^{αl}(x)). With m = 1 and α1 = 1: Kumaraswamy-G Poisson defined by Ramos et al. (2014).

12S1C1.2; ———; m1(·)(x) = exp(−λ − λ ∏_{l=1}^{m} (1 − Gl^{αl}(x))^{βl}). With m = 1, α1 = 1 and β1 = 1: Kumaraswamy-G Poisson (Ramos et al., 2014).

12S1C1.2; ———; m1(·)(x) = exp(−λ − λ(1 − ∏_{l=1}^{m} Gl^{αl}(x))^{β}). With m = 1, α1 = 1 and β1 = 1: Kumaraswamy-G Poisson (Ramos et al., 2014).

u1(·)(x) = (e^{λ e^{−βx^{a}}} − e^{λ})/(1 − e^{λ}), u2(·)(x) = [e^{−(a/λ)W(−a e^{−a})} − e^{−(a/λ)W(ψ(x))}]/[e^{−(a/λ)W(−a e^{−a})} − 1], where W(x) = ∑_{n=1}^{∞} (−1)^{n−1} n^{n−2} x^{n}/(n − 1)! and ψ(x) = −α e^{−α−bx^{a}}. With k = 2, α1 = 1, α2 = 1, θ = 0 and δ = 0: Beta Weibull Poisson family defined by Percontini (2013); with k = 2, α1 = 0, α2 = 1, θ = 0 and δ = 0: Weibull Generalized Poisson defined by Percontini (2014).

2S1C1.2; f(t) = γ t^{γ−1}; ℓ1(·)(x) = θ(1 − [(1 − β)^{−s} − {1 − β[1 − G(x)]}^{−s}]/[(1 − β)^{−s} − 1]), µ1(·)(x) = (1 − θ)({ζ(s) − Li_s[1 − G(x)]}/ζ(s))^{δ} + θ, where Li_s(z) = ∑_{j=1}^{∞} z^{j}/j^{s} and ζ(s) = ∑_{j=1}^{∞} 1/j^{s}. With k = 2, α1 = 0, α2 = 0 and θ = 1: G-Negative Binomial defined by Percontini et al. (2013); with k = 2, α1 = 0, α2 = 0 and θ = 0: Zeta-G defined by Percontini (2014).

u1(·)(x) = ∑_{j=0}^{x} C^{(j)}(a)(λ − a)^{j}/[j! C(λ)]. With k = 2, α1 = 1, α2 = 0, θ = 0 and δ = 0: Power Series defined by Consul and Famoye (2006).

u2(·)(x) = ∑_{j=1}^{x} (1/j!)[(C(0))^{j}]^{(j−1)}. With k = 2, α1 = 0, α2 = 1, θ = 0 and δ = 0: Basic Lagrangian defined by Consul and Famoye (2006).

2S1C1.2; f(t) = b t^{b−1}; ℓ1(·)(x) = θ(1 − ∑_{j=n}^{x} n[(C(0))^{j}]^{(j−n)}/[(j − n)! j]). With k = 2, α1 = 0, α2 = 0 and θ = 1: Lagrangian Delta defined by Consul and Famoye (2006). µ1(·)(x) = (1 − θ)(∑_{j=0}^{x} P(X = j))^{δ} + θ, where P(X = 0) = w(0) and P(X = j) = [(C(0))^{j} w^{(1)}(0)]^{(j−1)} for j = 1, 2, 3, . . .. With k = 2, α1 = 0, α2 = 0 and θ = 0: Generalized Lagrangian family by Consul and Famoye (2006).

u1(·)(x) = ∫_{−∞}^{x} exp{∫ [(a0 + a1 t + · · · + as t^{s})/(b0 + b1 t + · · · + br t^{r})] dt} dt. With k = 1, α1 = 0, θ = 0 and δ = 0: Generalized Pearson in Ordinary Differential Equation form by Shakil et al. (2010).

2S1C1.2; f(t) = b t^{b−1}; ℓ1(·)(x) = θ(1 − ∫_{−∞}^{x} exp{∫ [(a0 + a1 t + · · · + as t^{s})/(b0 + b1 t + · · · + br t^{r})] dt} dt). With k = 1, α1 = 0 and δ = 1: Generalized Pearson in Ordinary Differential Equation form defined by Shakil et al. (2010).

µ1(·)(x) = (1 − θ)(∫_{−∞}^{x} ∫_{−∞}^{y} (∑_{i=1}^{2} αi(t) f^{βi}(t)) dt dy)^{δ}. With k = 1, α1 = 0, θ = 0 and δ = 1: Generalized Family in Ordinary Differential Equation form defined by Voda (2009).
References
[1] Alzaatreh, A., Lee, C. and Famoye, F. A new method for generating families of continuous distributions, Metron 71, 63-79, 2013.
[2] Brito, C.R., Gomes-Silva, F., Rego, L.C. and Oliveira, W.R. A new class of gamma distribution, Acta Sci-Technol 39, 79-89, 2017.
[3] Cordeiro, G.M., Alizadeh, M. and Silva, R.B. A New Wider Family of Continuous Models: The Extended Cordeiro and de Castro Family, Hacet J Math Stat, 2017, In press.
[4] Cordeiro, G.M. and de Castro, M. A new family of generalized distributions, J Stat Comput Simul 81, 883-898, 2011.
[5] Consul, P.C. and Famoye, F. Lagrangian Probability Distributions, Birkhauser Boston, Boston, 2006.
[6] Eugene, N., Lee, C. and Famoye, F. Beta-normal distribution and its application, Commun Stat Theor Meth 31, 497-512, 2002.
[7] Gomes-Silva, F., Percontini, A., Brito, E., Ramos, M.W.A., Silva, R.V. and Cordeiro, G.M. The odd Lindley-G family of distributions, Austrian J Stat 49, 57-79, 2017.
[8] Jayakumar, K. and Mathew, T. On a generalization to Marshall-Olkin scheme and its application to Burr type XII distribution, Stat Pap 49, 421-439, 2008.
[9] Lee, C., Famoye, F. and Alzaatreh, A. Methods for generating families of univariate continuous distributions in the recent decades, WIREs Comput Stat 5, 219-238, 2013.
[10] Marshall, A.W. and Olkin, I.A. A new method for adding a parameter to a family of distributions with application to the exponential and Weibull families, Biometrika 84, 641-652, 1997.
[11] McDonald, J.B. Some generalized functions for the size distribution of income, Econometrica 52, 647-663, 1984.
[12] Mudholkar, G.S., Srivastava, D.K. and Freimer, M. The exponentiated Weibull family: a reanalysis of the bus-motor-failure data, Technometrics 37, 436-445, 1995.
[13] Percontini, A. New Extended Lifetime Distributions, Ph.D. thesis in Computational Mathematics, Federal University of Pernambuco, Brazil, 2014.
[14] Percontini, A., Cordeiro, G.M. and Bourguignon, M. The G-Negative Binomial Family: General Properties and Applications, Adv Appl Stat 35, 127-160, 2013.
[15] Percontini, A., Blas, B. and Cordeiro, G.M. The beta Weibull Poisson distribution, Chil J Stat 4, 3-26, 2013.
[16] Pescim, R.R., Cordeiro, G.M., Demetrio, C.G.B. and Nadarajah, S. The new class of Kummer beta generalized distributions, SORT 36, 153-180, 2012.
[17] Ramos, M.W.A., Marinho, P.R.D., Cordeiro, G.M., Silva, R.V. and Hamedani, G. The Kumaraswamy-G Poisson family of distributions, J Stat Theory Appl 14, 222-239, 2015.
[18] Silva, R.B., Bourguignon, M., Dias, C.R.B. and Cordeiro, G.M. The compound class of extended Weibull power series distributions, CSDA 58, 352-367, 2013.
[19] Silva, R.V., Gomes-Silva, F., Ramos, M.W.A., Cordeiro, G.M., Marinho, P.R.D. and Andrade, T.A.N. The Exponentiated Kumaraswamy-G Class: General Properties and Application, Rev Colomb Estad, 2019, Forthcoming.
[20] Shakil, M., Kibria, B.M. and Singh, J.N. A new family of distributions based on the generalized Pearson differential equation with some applications, Aust J Stat 39, 259-258, 2010.
[21] Tahir, M.H. and Cordeiro, G.M. Compounding of distributions: a survey and new generalized classes, JSDA 3, 1-35, 2016.
[22] Tahir, M.H. and Nadarajah, S. Parameter induction in continuous univariate distributions: well established G families, An Acad Bras Cienc 87, 539-568, 2015.
[23] Voda, V.G. A method constructing density functions: the case of a generalized Rayleigh variable, Appl Math 54, 417-431, 2009.
[24] Zografos, K. and Balakrishnan, N. On families of beta- and generalized gamma-generated distributions and associated inference, Stat Met 6, 344-362, 2009.