Weighted exponential distribution: Properties and Different Methods of Estimation

Journal of Statistical Computation and Simulation
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/gscs20

To cite this article: Sanku Dey, Sajid Ali & Chanseok Park (2015): Weighted exponential distribution: properties and different methods of estimation, Journal of Statistical Computation and Simulation, DOI: 10.1080/00949655.2014.992346

To link to this article: http://dx.doi.org/10.1080/00949655.2014.992346



Journal of Statistical Computation and Simulation, 2015
http://dx.doi.org/10.1080/00949655.2014.992346

Weighted exponential distribution: properties and different methods of estimation

Sanku Dey^a*, Sajid Ali^b and Chanseok Park^c

^a Department of Statistics, St. Anthony's College, Shillong, Meghalaya, India; ^b Department of Decision Sciences, Bocconi University, Italy; ^c Department of Mathematical Sciences, Clemson University, Clemson, SC, USA

(Received 18 March 2014; accepted 24 November 2014)

In this article, we investigate various properties and methods of estimation of the weighted exponential distribution. Although our main focus is on estimation (from both the frequentist and the Bayesian points of view), the stochastic ordering, the Bonferroni and Lorenz curves, various entropies and the order statistics are derived for the first time for this distribution. Different types of loss functions are considered for Bayesian estimation, and the Bayes estimators and their respective posterior risks are computed and compared using Gibbs sampling. The different reliability characteristics, including the hazard function, stress-strength analysis and the mean residual life function, are also derived. Monte Carlo simulations are performed to compare the performances of the proposed methods of estimation, and two real data sets have been analysed for illustrative purposes.

Keywords: Bayes estimator; maximum likelihood estimators; moment estimators; percentile estimator; failure rate function; mean residual life function

AMS 2000 Subject Classifications: 62C10; 62F10; 62F15; 65C10

1. Introduction

Recently, Gupta and Kundu [1] introduced a new class of weighted exponential (WE) distributions as a generalization of the standard exponential distribution. They observed that the shape of the probability density function (PDF) of the WE distribution is very similar to the shapes of other well-known generalizations of the exponential distribution, for example the gamma, Weibull or generalized exponential distributions, so it can be used as an alternative to these distributions. They also established several interesting properties of this new WE distribution. Moreover, they observed that in many situations the two-parameter WE distribution may provide a better fit than the two-parameter Weibull, gamma or generalized exponential distributions. However, in spite of its versatility, it seems to have attracted comparatively little attention. Recently, Farahani and Khorram [2] obtained the Bayes estimators of the parameters, reliability function and hazard function of this distribution.

The appeal of the methods of estimation varies from user to user and with the area of application. For instance, one may prefer to use the uniformly minimum variance unbiased estimator even when it does not have a closed-form expression. The originality of this study comes from the fact

*Corresponding author. Email: [email protected]

© 2015 Taylor & Francis


that there has been no previous work comparing all of these estimators, along with the statistical properties, for the two-parameter weighted exponential (WE) distribution. Comparisons of estimation methods for other distributions have been performed in the literature: Kundu and Raqab [3] for generalized Rayleigh distributions, Alkasasbeh and Raqab [4] for generalized logistic distributions, Dey et al. [5] for the two-parameter Rayleigh distribution and Teimouri et al. [6] for the Weibull distribution.

The main aim of this paper is to consider different estimation procedures for the two-parameter WE distribution, from both the Bayesian and the frequentist points of view. We first consider the most natural frequentist estimators, namely the maximum likelihood estimators (MLEs), the method of moments estimators (MMEs), the weighted least-squares estimators (WLSEs) and the percentile estimators (PCEs). We further consider the Bayes estimators of the unknown parameters under the assumption of independent gamma priors on the scale and shape parameters, respectively. We also derive some statistical characteristics, such as stochastic ordering, the Bonferroni and Lorenz curves, various entropies and order statistics, and the different reliability characteristics, including the hazard function, stress-strength analysis and the mean residual life function. We compare the performances of the different methods using extensive computer simulations. Finally, we analyse two data sets for illustrative purposes.

The rest of the paper is organized as follows. In Section 2, we provide statistical and reliability properties of the WE distribution. The MLEs of the unknown parameters, along with approximate confidence intervals, the MMEs, PCEs and WLSEs are provided in Section 3. Bayes estimators are presented in Section 4. In Section 5, we present the Monte Carlo simulation results. The analysis of two real data sets is provided in Section 6, and finally, conclusions appear in Section 7.

The WE distribution is defined by the pdf

f(x; α, λ) = ((α + 1)/α) λ exp(−λx)(1 − exp(−αλx)), x > 0, λ, α > 0. (1)

The corresponding distribution function of X is

F(x; α, λ) = 1 + (1/α) exp{−λ(α + 1)x} − ((α + 1)/α) exp(−λx). (2)

Here, α is the shape parameter and λ is the scale parameter. For simplicity, we reparametrize by setting β = αλ; the pdf (1) and cdf (2) then become

f(x; β, λ) = ((β + λ)/β) λ exp(−λx)(1 − exp(−βx)), x > 0, λ, β > 0, (3)

and

F(x; β, λ) = 1 − (1/β) e^{−λx}(β + λ − λ e^{−βx}). (4)

Below, we state and prove a theorem which characterizes the distribution:

Theorem 1 The random variable X follows a WE distribution with parameters λ and α if and only if its density function f satisfies the homogeneous differential equation

(1 − exp(−αλx)) f′ + λ[1 − (α + 1) exp(−αλx)] f = 0, (5)

where the prime denotes first-order differentiation.


Proof Suppose X is a WE random variable, and let f(x; λ, α) and f′(x; λ, α) denote the pdf of the WE distribution and its first derivative. Substituting f(x) and f′(x) into the differential equation (5) shows that the equation is satisfied.

Conversely, assume that f satisfies Equation (5); then we have

∫ (f′/f) dx = λ(α + 1) ∫ [exp(−αλx)/(1 − exp(−αλx))] dx − λ ∫ dx/(1 − exp(−αλx)). (6)

After simplification we get

f = C exp(−λx)(1 − exp(−αλx)), x > 0, (7)

where C is the normalizing constant, with value C = (α + 1)λ/α. □

Application of the theorem: from the homogeneous differential equation (5), we get

x = −(1/(αλ)) log[(f′ + λf)/(f′ + λ(α + 1)f)], (8)

or equivalently,

x = −(1/(αλ)) log[(F″ + λF′)/(F″ + λ(α + 1)F′)], (9)

where F is the corresponding cdf of the WE distribution. The importance of this theorem lies in the linearizing transformations (8) and (9), which can be regarded as the WE-model analogues of Berkson's [7] logit transform for the ordinary logistic model and Ojo's [8] logit transform for the generalized logistic model. Hence, Equation (8) or (9) could be referred to as the WE logit transform. The theorem thus shows the flexibility of the WE distribution.
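The characterization in Theorem 1 is easy to verify numerically, approximating f′ by central differences. A sketch (tolerances and names are ours):

```python
import math

def we_pdf(x, alpha, lam):
    # pdf (1) of the WE distribution
    return (alpha + 1) / alpha * lam * math.exp(-lam * x) * (1 - math.exp(-alpha * lam * x))

def ode_residual(x, alpha, lam, h=1e-6):
    """Left-hand side of Equation (5); should vanish when f is the WE pdf."""
    fprime = (we_pdf(x + h, alpha, lam) - we_pdf(x - h, alpha, lam)) / (2 * h)
    f = we_pdf(x, alpha, lam)
    return ((1 - math.exp(-alpha * lam * x)) * fprime
            + lam * (1 - (alpha + 1) * math.exp(-alpha * lam * x)) * f)
```

The residual is zero up to finite-difference error for any x > 0.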

2. Statistical and reliability properties

2.1. Stochastic ordering

Stochastic ordering of positive continuous random variables is an important tool for judging their comparative behaviour. A random variable X is said to be smaller than a random variable Y in the

• stochastic order (X ≤st Y) if FX(x) ≥ FY(x) for all x;
• hazard rate order (X ≤hr Y) if hX(x) ≥ hY(x) for all x;
• mean residual life order (X ≤mrl Y) if mX(x) ≤ mY(x) for all x;
• likelihood ratio order (X ≤lr Y) if fX(x)/fY(x) decreases in x.

The following results, due to Shaked and Shanthikumar [9], are well known for establishing stochastic ordering of distributions:

X ≤lr Y ⟹ X ≤hr Y ⟹ X ≤mrl Y, and X ≤hr Y ⟹ X ≤st Y. (10)

The WE is ordered with respect to the strongest, 'likelihood ratio', ordering, as shown in the following theorem:


Theorem 2 Let X ∼ WE(α1, λ1) and Y ∼ WE(α2, λ2). If α1 = α2 and λ1 ≥ λ2 (or if λ1 = λ2 and α1 ≥ α2), then X ≤lr Y and hence X ≤hr Y, X ≤mrl Y and X ≤st Y.

Proof The likelihood ratio is

fX(x)/fY(x) = [(α1 + 1)α2λ1 / (α1(α2 + 1)λ2)] e^{−(λ1−λ2)x} (1 − e^{−α1λ1x})/(1 − e^{−α2λ2x}).

Thus,

(d/dx) log[fX(x)/fY(x)] = −(λ1 − λ2) + α1λ1 e^{−α1λ1x}/(1 − e^{−α1λ1x}) − α2λ2 e^{−α2λ2x}/(1 − e^{−α2λ2x}).

Case (i): if λ1 = λ2 = λ and α1 > α2, then (d/dx) log[fX(x)/fY(x)] < 0, which implies that X ≤lr Y and hence X ≤hr Y, X ≤mrl Y and X ≤st Y.

Case (ii): if α1 = α2 = α and λ1 > λ2, then (d/dx) log[fX(x)/fY(x)] < 0, which implies that X ≤lr Y and hence X ≤hr Y, X ≤mrl Y and X ≤st Y.

Hence, from cases (i) and (ii), X ≤lr Y and X ≤hr Y, X ≤mrl Y and X ≤st Y. □

2.2. Bonferroni and Lorenz curves

The Bonferroni and the Lorenz curves and the Bonferroni and the Gini indices have applicationsnot only in economics to study the income and poverty, but also in other fields like reliability,insurance, medicine and demography. The Bonferroni and the Lorenz curves are defined by

B(p) = (1/(pμ)) ∫₀^q x f(x) dx, (11)

L(p) = (1/μ) ∫₀^q x f(x) dx, (12)

respectively, where μ = E(X) = (α + 2)/(λ(α + 1)) and q = F⁻¹(p). The Bonferroni and the Gini indices are defined by

B = 1 − ∫₀¹ B(p) dp, (13)

G = 1 − 2 ∫₀¹ L(p) dp, (14)

respectively. A comprehensive explanation of these indices for different parametric families can be found in [10]. For the WE distribution, the Bonferroni and the Lorenz curves are given as follows:

B(p) = [α² + 2α − e^{−qλ}{1 − λq + α(1 + λq)(α − 2)} + e^{−q(1+α)λ}{1 + λq(1 + α)}] / [λp(α + 1)(α + 2)], (15)

L(p) = [α² + 2α − e^{−qλ}{1 − λq + α(1 + λq)(α − 2)} + e^{−q(1+α)λ}{1 + λq(1 + α)}] / [λ(α + 1)(α + 2)]. (16)

2.3. Entropies

The concept of entropy is important in different areas such as physics, probability and statis-tics, communication theory and economics. Several measures of entropy have been studied and


compared in the literature. The entropy of a random variable X is a measure of the variation of uncertainty. If X has probability density function f(·), the Rényi entropy is defined by

ϒR(ζ) = (1/(1 − ζ)) log ∫ f^ζ(x) dx, (17)

where ζ > 0 and ζ ≠ 1. If X is distributed as the WE, then one can calculate

ϒR(ζ) = (1/(1 − ζ)) log[((α + 1)/(αλ))^ζ ∫₀^∞ e^{−ζλx}(1 − e^{−αλx})^ζ dx]
      = (1/(1 − ζ)) log[((α + 1)/(αλ))^ζ (1/(αλ)) Γ(ζ/α)Γ(1 + ζ)/Γ(1 + ζ + ζ/α)]. (18)
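The beta-integral step behind (18) is straightforward to check numerically. The sketch below compares the closed form, including the 1/(αλ) factor produced by the substitution u = e^{−αλx} (a factor we supply; the extraction of the printed formula is ambiguous on this point), against Simpson quadrature of (17):

```python
import math

def renyi_closed(zeta, alpha, lam):
    """Closed form (18) for the Renyi entropy of WE(alpha, lam)."""
    c = (alpha + 1) / (alpha * lam)
    # c^zeta * int_0^inf e^{-zeta*lam*x} (1 - e^{-alpha*lam*x})^zeta dx
    integral = (c ** zeta / (alpha * lam)
                * math.gamma(zeta / alpha) * math.gamma(1 + zeta)
                / math.gamma(1 + zeta + zeta / alpha))
    return math.log(integral) / (1 - zeta)

def renyi_quadrature(zeta, alpha, lam, upper=60.0, n=20000):
    """Direct evaluation of (17) by composite Simpson quadrature."""
    def f(x):
        return (alpha + 1) / alpha * lam * math.exp(-lam * x) * (1 - math.exp(-alpha * lam * x))
    h = upper / n
    s = f(0.0) ** zeta + f(upper) ** zeta
    s += 4.0 * sum(f((2 * i - 1) * h) ** zeta for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(2 * i * h) ** zeta for i in range(1, n // 2))
    return math.log(s * h / 3.0) / (1 - zeta)
```

The two evaluations agree to quadrature accuracy for ζ > 1.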

For the WE distribution, the Shannon entropy is

E[− log f(X)] = −((α + 1)λ/α) ∫₀^∞ log[((α + 1)/α) e^{−λx}(1 − e^{−αλx})] e^{−λx}(1 − e^{−αλx}) dx. (19)

Expanding log[((α + 1)/α) e^{−λx}(1 − e^{−αλx})] = log[(α + 1)/α] − λx + log(1 − e^{−αλx}), then multiplying by e^{−λx}(1 − e^{−αλx}) and integrating, we have

E[− log f(X)] = −((α + 2)/α) log((α + 1)/(αλ)) + ((α + 1)/α) H(1/α) + (α³ + 3α² + 2α + 1)/(α²(α + 1)), (20)

α2(α + 1), (20)

where H(·) is a Harmonic Number. Finally, consider the cumulative residual entropy defined by

ϒCR = −∫

S(x) log S(x) dx, (21)

where S(x) = ((α + 1)/α)e−λx − (1/α)e−λ(α+1)x is the survival function of WE distribution.Thus, the cumulative residual entropy is

ϒCR = −∫ ∞

0

[α + 1

αe−λx − 1

αe−λ(α+1)x

]log

(α + 1

αe−λx − 1

αe−λ(α+1)x

)dx (22)

simplifying log(((α + 1)/α)e−λx − (1/α)e−λ(α+1)x) as log(e−λx(1 + α − e−λαx)/α) = −xλ +log(1 + α − e−λαx) − log α and then multiplying with S(x), we have

ϒCR = α4 + 4α3 + 6α2 + 3α + 2

αλ(α + 1)2− (α + 1)2F1(1, 1/α, 1 + 1/α, 1/(α + 1))

λ

+ 2F1(1, 1 + 1/α, 2 + 1/α, 1/(α + 1))

λ(α + 1)2(23)

where pFq(·) is the Hypergeometric function.

2.4. Order statistics

Order statistics appear in many areas of statistical theory and practice. Moments of order statistics play an important role in quality control testing and reliability, where a practitioner needs to predict the failure of future items based on the times of a few early failures; these predictors are often based on moments of order statistics. Suppose that x1, x2, . . . , xn is a


random sample from the WE distribution. Let X1:n < X2:n < · · · < Xn:n denote the corresponding order statistics. The pdf of the kth order statistic, say Y = Xk:n, is given by

fY(y) = [n!/((k − 1)!(n − k)!)] Σ_{m=0}^{n−k} C(n − k, m) (−1)^m F^{k−1+m}(y) f(y). (24)

For the WE, we have the following expressions:

FY(y) = 1 − ((α + 1)/α) e^{−λy} + (1/α) e^{−λ(α+1)y},

F^{k−1+m}(y) = [1 − (((α + 1)/α) e^{−λy} − (1/α) e^{−λ(α+1)y})]^{k−1+m}
            = Σ_{l=0}^{k−1+m} (−1)^l C(k − 1 + m, l) (((α + 1)/α) e^{−λy} − (1/α) e^{−λ(α+1)y})^l
            = Σ_{l=0}^{k−1+m} (−1)^l C(k − 1 + m, l) Σ_{r=0}^{l} (−1)^r C(l, r) [(α + 1)^{l−r}/α^l] e^{−λy(l+rα)}. (25)

Thus, finally we have the expression

fY(y) = [n!/((k − 1)!(n − k)!)] Σ_{m=0}^{n−k} (−1)^m C(n − k, m) Σ_{l=0}^{k−1+m} (−1)^l C(k − 1 + m, l)
        × Σ_{r=0}^{l} (−1)^r C(l, r) [(α + 1)^{l−r+1}/α^{l+1}] λ e^{−λy(l+rα+1)} (1 − e^{−αλy}). (26)

2.5. Reliability characteristics of the WE

In reliability theory, the stress–strength model describes the life of a component which has a random strength X1 and is subjected to a random stress X2. The component fails instantaneously when the stress applied to it surpasses the strength, and the component functions satisfactorily whenever X1 > X2. Thus, R = Pr(X1 > X2) is a measure of component reliability. It finds application in many spheres, especially in engineering: structures, deterioration of rocket motors, fatigue failure of aircraft structures and the ageing of concrete pressure vessels. Extensive work on the estimation of the reliability of stress–strength models has been done for the well-known standard distributions. However, there are still some distributions (including generalizations of the well-known distributions) for which the form of R has not been investigated. Here, we derive the reliability R when X1 and X2 are independent WE random variables with parameters (α1, λ1) and (α2, λ2). Then

R = ∫₀^∞ f1(x) F2(x) dx = ∫₀^∞ ((α1 + 1)λ1/α1) e^{−λ1x}(1 − e^{−α1λ1x}) [1 − ((α2 + 1)/α2) e^{−λ2x} + (1/α2) e^{−λ2(α2+1)x}] dx


= 1 − (α1 + 1)(α2 + 1)λ1/(α1α2(λ1 + λ2)) + (α1 + 1)(α2 + 1)λ1/(α1α2(λ1 + λ2 + α1λ1))
  + (α1 + 1)λ1/(α1α2(λ1 + λ2(α2 + 1))) − (α1 + 1)λ1/(α1α2(λ1 + λ2(α2 + 1) + α1λ1)). (27)

If α1 = α2 = α, then

R = 1 − ((α + 1)λ1²/α) [(α + 1)/((λ1 + λ2)(λ1 + λ2 + αλ1)) − 1/((λ1 + λ2(α + 1))(λ1 + λ2(α + 1) + αλ1))], (28)

and if λ1 = λ2 = λ, then

R = 1 − ((α1 + 1)/α2) [(α2 + 1)/(2(2 + α1)) − 1/((α1 + α2 + 2)(2 + α2))]. (29)
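The algebra in (27)–(29) is error-prone, so a numerical check is useful. The sketch below compares a coding of (27) against direct quadrature of ∫ f1 F2; note that we carry an α1α2 factor in every denominator (obtained by redoing the integral ourselves — the garbled source is ambiguous on the second term), and that identical marginals must give R = 1/2 by symmetry:

```python
import math

def we_pdf(x, a, l):
    return (a + 1) / a * l * math.exp(-l * x) * (1 - math.exp(-a * l * x))

def we_cdf(x, a, l):
    return 1 + math.exp(-l * (a + 1) * x) / a - (a + 1) / a * math.exp(-l * x)

def r_closed(a1, l1, a2, l2):
    """Stress-strength reliability R = Pr(X1 > X2), Equation (27)."""
    c = (a1 + 1) * l1 / (a1 * a2)
    return (1.0
            - c * (a2 + 1) / (l1 + l2)
            + c * (a2 + 1) / (l1 + l2 + a1 * l1)
            + c / (l1 + l2 * (a2 + 1))
            - c / (l1 + l2 * (a2 + 1) + a1 * l1))

def r_quadrature(a1, l1, a2, l2, upper=80.0, n=20000):
    """R = int_0^inf f1(x) F2(x) dx by composite Simpson quadrature."""
    def g(x):
        return we_pdf(x, a1, l1) * we_cdf(x, a2, l2)
    h = upper / n
    s = g(0.0) + g(upper)
    s += 4.0 * sum(g((2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(g(2 * i * h) for i in range(1, n // 2))
    return s * h / 3.0
```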

2.6. Mean residual life function

The mean residual life function at time point t is given by

m(t) = (1/(1 − F(t))) ∫ₜ^∞ (1 − F(x)) dx = [α + 1 − e^{−αλt}/(1 + α)] / [λ(1 + α − e^{−αλt})]. (30)

Putting t = 0, we obtain the mean of the WE distribution, m(0) = (α + 2)/(λ(1 + α)). Since h(0) = 0 and m(0) is finite, h(0)m(0) = 0 < 1 (see Guess et al. [11]). Thus, it is observed that the mean residual life function is bath-tub shaped. The graphical presentation of the mean residual life function is shown in Figure 1.

Figure 1. The mean residual life time at t = 2.
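Equation (30) can be checked against direct quadrature of the defining integral; this sketch also confirms that m(0) equals the mean (helper names are ours):

```python
import math

def we_sf(x, alpha, lam):
    """Survival function 1 - F(x) of WE(alpha, lam)."""
    return (alpha + 1) / alpha * math.exp(-lam * x) - math.exp(-lam * (alpha + 1) * x) / alpha

def mrl_closed(t, alpha, lam):
    """Mean residual life, Equation (30)."""
    e = math.exp(-alpha * lam * t)
    return (alpha + 1 - e / (1 + alpha)) / (lam * (1 + alpha - e))

def mrl_quadrature(t, alpha, lam, upper=100.0, n=20000):
    """m(t) = int_t^inf S(x) dx / S(t), by composite Simpson quadrature."""
    h = (upper - t) / n
    s = we_sf(t, alpha, lam) + we_sf(upper, alpha, lam)
    s += 4.0 * sum(we_sf(t + (2 * i - 1) * h, alpha, lam) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(we_sf(t + 2 * i * h, alpha, lam) for i in range(1, n // 2))
    return (s * h / 3.0) / we_sf(t, alpha, lam)
```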


3. Classical parameter estimation

In this section, we obtain estimators of the WE distribution parameters via different methods of estimation. To this end, consider X1, X2, . . . , Xn as a random sample from Equation (3) with observed values x1, x2, . . . , xn. The methods are: method of moments estimation, maximum likelihood estimation (with expressions for approximate confidence intervals), percentile estimation and weighted least-squares estimation.

3.1. Moment estimators

The MMEs of the two-parameter WE distribution can be obtained by equating the first two theoretical moments of Equation (3) with the sample moments (1/n)Σᵢ₌₁ⁿ xᵢ and (1/n)Σᵢ₌₁ⁿ xᵢ²; we get

(1/n) Σᵢ₌₁ⁿ xᵢ = 1/λ + 1/(β + λ), (31)

(1/n) Σᵢ₌₁ⁿ xᵢ² = [2/(βλ²(β + λ)²)][(β + λ)³ − λ³] = 2[1/λ² + 1/(λ(β + λ)) + 1/(β + λ)²]. (32)

Here, we use two methods to obtain the MMEs.

3.2. Method 1: Using E(X) and E(X²)

It is immediate from Equations (31) and (32) that

X̄² − (1/(2n)) Σᵢ₌₁ⁿ Xᵢ² = (1/λ) · (1/(β + λ)). (33)

For convenience, denote a = X̄ from Equation (31) and b = X̄² − (1/(2n))Σᵢ₌₁ⁿ Xᵢ² from Equation (33), and let p = 1/λ and q = 1/(β + λ). Then we have the following expressions:

a = p + q,
b = p · q.

Since a and b are known and p and q are unknown, we substitute q = a − p into b = p · q, which results in

b = p(a − p).

Solving the above for p, we have

p = (a ± √(a² − 4b))/2.

Thus, we have

λ̂ = 1/p = 2/(a ± √(a² − 4b)). (34)

Next, substituting this λ̂ into q = a − p, we have

β̂ = 1/(a − 1/λ̂) − λ̂. (35)


Substituting Equation (34) into Equation (35), we have

β̂ = 2/(2a − (a ± √(a² − 4b))) − 2/(a ± √(a² − 4b))
   = 2/(a ∓ √(a² − 4b)) − 2/(a ± √(a² − 4b))
   = ±√(a² − 4b)/b.

Since β̂ > 0, we take

λ̂ = 2/(a + √(a² − 4b)) = (a − √(a² − 4b))/(2b) and β̂ = √(a² − 4b)/b.

That is,

λ̂ = [X̄ − √((2/n)ΣXᵢ² − 3X̄²)] / [2X̄² − (1/n)ΣXᵢ²] and β̂ = √((2/n)ΣXᵢ² − 3X̄²) / [X̄² − (1/(2n))ΣXᵢ²]. (36)

It should be noted that the MME exists only when a² − 4b > 0 (i.e. (2/n)ΣXᵢ² − 3X̄² > 0) and b > 0 (i.e. X̄² − (1/(2n))ΣXᵢ² > 0). This is equivalent to the condition that the interval (1.5nX̄²/ΣXᵢ², 2nX̄²/ΣXᵢ²) should include 1, or that b should lie in (0, a²/4).

When a² − 4b ≤ 0, the MME does not exist. In this case, by setting a² − 4b = 0, we obtain λ̂ = 2/X̄ and β̂ = 0; in practice, we can use λ̂ = 2/X̄ and β̂ = min_{i=1,...,n}{Xᵢ}. Also, when b ≤ 0, the MME does not exist. In this case, we let b → 0+, so that λ̂ → 1/X̄ and β̂ → ∞; thus, we suggest λ̂ = 1/X̄ and β̂ = max_{i=1,...,n}{Xᵢ} in practice.
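A direct implementation of Method 1, including the fallback rules just described, might look like this (a sketch; the function name is ours):

```python
def mme_method1(xs):
    """Moment estimators (36) of (lambda, beta), with the fallbacks above."""
    n = len(xs)
    xbar = sum(xs) / n
    m2 = sum(x * x for x in xs) / n
    a = xbar                      # estimate of E(X) = p + q
    b = xbar * xbar - m2 / 2.0    # estimate of p*q, Equation (33)
    if b <= 0:                    # MME does not exist: b -> 0+, beta -> infinity
        return 1.0 / xbar, max(xs)
    disc = a * a - 4.0 * b        # equals (2/n) sum x_i^2 - 3 xbar^2
    if disc <= 0:                 # MME does not exist: set disc = 0
        return 2.0 / xbar, min(xs)
    lam = (a - disc ** 0.5) / (2.0 * b)
    beta = disc ** 0.5 / b
    return lam, beta
```

When the estimators exist, they reproduce the moment equations (31) and (32) exactly.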

3.3. Method 2: Using E(X) and Var(X)

Another way of finding the MME is to use the sample mean and the sample variance. We know that

E(X) = 1/λ + 1/(β + λ),
Var(X) = 1/λ² + 1/(β + λ)².

For convenience, let a be the estimate of E(X) and c the estimate of Var(X). If we use a = X̄ and c = Σ(Xᵢ − X̄)²/n, the solution is exactly the same as Equation (36). If the sample variance S² = Σ(Xᵢ − X̄)²/(n − 1) is used instead of c, the results are slightly different. Here, we obtain the MME based on the sample mean and the sample variance by solving

X̄ = 1/λ + 1/(β + λ),
S² = 1/λ² + 1/(β + λ)².

Then we have

λ̂ = [X̄ − √(2S² − X̄²)] / (X̄² − S²) and β̂ = 2√(2S² − X̄²) / (X̄² − S²). (37)


It is to be noted that this MME exists only when 2S² − X̄² > 0 and X̄² − S² > 0. This is the same as the condition that the interval (0.5X̄²/S², X̄²/S²) should include 1. When either 2S² − X̄² ≤ 0 or X̄² − S² ≤ 0, the MME does not exist. Similarly to the above, when 2S² − X̄² ≤ 0, we propose to use λ̂ = 2/X̄ and β̂ = min_{i=1,...,n}{Xᵢ} in practice; when X̄² − S² ≤ 0, we propose to use λ̂ = 1/X̄ and β̂ = max_{i=1,...,n}{Xᵢ}.
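Method 2 can be coded in the same way (a sketch, with the same style of fallbacks):

```python
def mme_method2(xs):
    """Moment estimators (37) based on the sample mean and variance S^2."""
    n = len(xs)
    xbar = sum(xs) / n
    s2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)
    if 2.0 * s2 - xbar * xbar <= 0:   # MME does not exist
        return 2.0 / xbar, min(xs)
    if xbar * xbar - s2 <= 0:         # MME does not exist
        return 1.0 / xbar, max(xs)
    root = (2.0 * s2 - xbar * xbar) ** 0.5
    lam = (xbar - root) / (xbar * xbar - s2)
    beta = 2.0 * root / (xbar * xbar - s2)
    return lam, beta
```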

3.4. Maximum likelihood estimators

Let x1, . . . , xn be a random sample of size n from Equation (3). The log-likelihood function of the density (3) is given by

log L = l(β, λ) = n ln(β + λ) − n ln β + n ln λ − λ Σᵢ₌₁ⁿ xᵢ + Σᵢ₌₁ⁿ ln(1 − e^{−βxᵢ}). (38)

The score functions associated with the log-likelihood (38) are

∂l/∂β = n/(β + λ) − n/β + Σᵢ₌₁ⁿ xᵢ e^{−βxᵢ}/(1 − e^{−βxᵢ}) and ∂l/∂λ = n/(β + λ) + n/λ − Σᵢ₌₁ⁿ xᵢ.

The MLEs of λ and β are obtained by solving numerically the system of equations ∂l/∂λ = 0 and ∂l/∂β = 0, which can be rearranged as

λ̂MLE = n (Σᵢ₌₁ⁿ xᵢ/(1 − e^{−β̂MLE xᵢ}) − n/β̂MLE)⁻¹,

β̂MLE = n (Σᵢ₌₁ⁿ xᵢ − n/λ̂MLE)⁻¹ − λ̂MLE.
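The log-likelihood (38) and its score can be coded directly; comparing the analytic score against finite differences is a useful guard against sign errors (a sketch; names and the test data are ours):

```python
import math

def loglik(beta, lam, xs):
    """Log-likelihood (38)."""
    n = len(xs)
    return (n * math.log(beta + lam) - n * math.log(beta) + n * math.log(lam)
            - lam * sum(xs)
            + sum(math.log(1.0 - math.exp(-beta * x)) for x in xs))

def score(beta, lam, xs):
    """Analytic partial derivatives of (38) with respect to beta and lambda."""
    n = len(xs)
    dbeta = (n / (beta + lam) - n / beta
             + sum(x * math.exp(-beta * x) / (1.0 - math.exp(-beta * x)) for x in xs))
    dlam = n / (beta + lam) + n / lam - sum(xs)
    return dbeta, dlam
```

The MLEs are then obtained by driving both components to zero, for example by alternating the two fixed-point equations above.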

Since the MLE of the vector of unknown parameters θ = (β, λ) cannot be derived in closed form, it is not easy to derive the exact distributions of the MLEs, and hence we cannot obtain exact bounds for the parameters. The idea is to use the large-sample approximation. It is known that the asymptotic distribution of the MLE θ̂ is

(θ̂ − θ) → N₂(0, I⁻¹(θ))

(see Lawless [12]), where I⁻¹(θ) is the inverse of the observed information matrix of the unknown parameters θ = (β, λ):

I⁻¹(θ) = ( −∂²log L/∂β²     −∂²log L/∂β∂λ )⁻¹
         ( −∂²log L/∂λ∂β    −∂²log L/∂λ²  )  evaluated at (β, λ) = (β̂, λ̂)

       = ( var(β̂MLE)           cov(β̂MLE, λ̂MLE) )   =   ( σββ  σβλ )
         ( cov(λ̂MLE, β̂MLE)    var(λ̂MLE)        )       ( σλβ  σλλ ).


The derivatives in I(θ) are given as follows:

∂²log L/∂β² = −n/(β + λ)² + n/β² − Σᵢ₌₁ⁿ xᵢ² e^{−βxᵢ}/(1 − e^{−βxᵢ})²,

∂²log L/∂λ² = −n/λ² − n/(β + λ)²,

∂²log L/∂β∂λ = −n/(β + λ)²,

all evaluated at (β, λ) = (β̂MLE, λ̂MLE). Therefore, the above approach is used to derive the approximate 100(1 − τ)% confidence intervals of the parameters θ = (β, λ) in the following forms:

β̂MLE ± z_{τ/2} √Var(β̂MLE), λ̂MLE ± z_{τ/2} √Var(λ̂MLE),

where z_{τ/2} is the upper (τ/2)th percentile of the standard normal distribution.
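Putting the second derivatives together gives the observed information, and a 2×2 inverse suffices for the Wald intervals. A sketch (the evaluation point and data are illustrative only, not the paper's):

```python
import math

def loglik(beta, lam, xs):
    """Log-likelihood (38), used below for a finite-difference cross-check."""
    n = len(xs)
    return (n * math.log(beta + lam) - n * math.log(beta) + n * math.log(lam)
            - lam * sum(xs)
            + sum(math.log(1.0 - math.exp(-beta * x)) for x in xs))

def hessian(beta, lam, xs):
    """Second derivatives of the log-likelihood, as given above."""
    n = len(xs)
    dbb = (-n / (beta + lam) ** 2 + n / beta ** 2
           - sum(x * x * math.exp(-beta * x) / (1.0 - math.exp(-beta * x)) ** 2 for x in xs))
    dll = -n / lam ** 2 - n / (beta + lam) ** 2
    dbl = -n / (beta + lam) ** 2
    return dbb, dbl, dll

def wald_ci(beta_hat, lam_hat, xs, z=1.96):
    """Approximate intervals from the inverse observed information I = -H."""
    dbb, dbl, dll = hessian(beta_hat, lam_hat, xs)
    ibb, ibl, ill = -dbb, -dbl, -dll
    det = ibb * ill - ibl * ibl
    var_beta, var_lam = ill / det, ibb / det   # diagonal of I^{-1}
    return ((beta_hat - z * math.sqrt(var_beta), beta_hat + z * math.sqrt(var_beta)),
            (lam_hat - z * math.sqrt(var_lam), lam_hat + z * math.sqrt(var_lam)))
```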

3.5. Percentile estimators

Kao [13,14] proposed an estimator based on percentiles and implemented it for the Weibull distribution. It has since been used quite successfully for other distributions whose distribution functions are in closed form. Gupta and Kundu [15] and Kundu and Raqab [3] used percentile-based estimators and compared them with other estimators for the generalized exponential and generalized Rayleigh distributions, respectively. Dey et al. [5] also used percentile-based estimators for the two-parameter Rayleigh distribution and compared them with other frequentist and Bayes estimators. The main advantage of percentile-based estimators is that in many situations they can be obtained in explicit form. They are obtained by minimizing the Euclidean distance between the sample percentile points and the population percentile points. We now apply this approach to the WE distribution to obtain estimators based on percentiles (PCEs).

Using Equation (4), we get

ln[F(x; β, λ)] = ln[1 − ((β + λ − λ e^{−βx})/β) e^{−λx}]. (39)

Let x(i) denote the ith order statistic, so that x(1) < x(2) < · · · < x(n). If Pᵢ denotes an estimator of F(x(i); β, λ), then the PCEs of β and λ can be obtained by minimizing

Qn(β, λ) = Σᵢ₌₁ⁿ {ln Pᵢ − ln[1 − ((β + λ − λ e^{−βxᵢ})/β) e^{−λxᵢ}]}².

In fact, several estimators of Pᵢ can be used; here we take Pᵢ = i/(n + 1), which is the unbiased estimator of E[F(x(i); β, λ)]. Hence,

∂Qn(β, λ)/∂β = (λ/β) Σᵢ₌₁ⁿ {ln[i/(n + 1)] − ln[1 − ((β + λ − λ e^{−βxᵢ})/β) e^{−λxᵢ}]}
               × (1 − e^{−βxᵢ} − βxᵢ e^{−βxᵢ}) / (β e^{λxᵢ} − (β + λ − λ e^{−βxᵢ})) (40)


and

∂Qn(β, λ)/∂λ = Σᵢ₌₁ⁿ {ln[i/(n + 1)] − ln[1 − ((β + λ − λ e^{−βxᵢ})/β) e^{−λxᵢ}]}
               × ((1 − e^{−βxᵢ})(1 − λxᵢ) − βxᵢ) / (β e^{λxᵢ} − (β + λ − λ e^{−βxᵢ})). (41)

The PCEs of β and λ are obtained by solving the system of equations ∂Qn(β, λ)/∂β = 0 and ∂Qn(β, λ)/∂λ = 0 numerically.
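In code, the objective Qn is simple, and even a crude grid search locates the PCEs; a proper numerical optimizer would refine this (a sketch, function names ours):

```python
import math

def we_cdf(x, beta, lam):
    """cdf (4) in the (beta, lambda) parametrization."""
    return 1.0 - (beta + lam - lam * math.exp(-beta * x)) * math.exp(-lam * x) / beta

def q_n(beta, lam, xs):
    """Percentile objective: sum of squared log-percentile distances."""
    n = len(xs)
    return sum((math.log(i / (n + 1.0)) - math.log(we_cdf(x, beta, lam))) ** 2
               for i, x in enumerate(sorted(xs), start=1))

def pce_grid(xs, betas, lams):
    """Crude grid search over candidate (beta, lambda) pairs."""
    return min(((q_n(b, l, xs), b, l) for b in betas for l in lams))[1:]
```

When the data are the exact model percentiles, Qn vanishes at the true parameters.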

3.6. Weighted least-squares estimators

The LSEs and WLSEs were proposed by Swain et al. [16] to estimate the parameters of a beta distribution. For estimating the parameters of a given distribution, we recall the following well-known results:

E[F(x(i))] = i/(n + 1) and Var[F(x(i))] = i(n + 1 − i)/((n + 2)(n + 1)²),

where F(x(i)) is the cdf evaluated at the ith order statistic x(i). The WLSEs of the WE distribution parameters β and λ are obtained by minimizing the function

Ψn(β, λ) = Σᵢ₌₁ⁿ [(n + 2)(n + 1)²/(i(n + 1 − i))] {(n + 1 − i)/(n + 1) − ((β + λ − λ e^{−βxᵢ})/β) e^{−λxᵢ}}², (42)

and are given by the nonlinear equations

Σᵢ₌₁ⁿ [(n + 2)(n + 1)²/(i(n + 1 − i))] {(n + 1 − i)/(n + 1) − ((β̂ + λ̂ − λ̂ e^{−β̂xᵢ})/β̂) e^{−λ̂xᵢ}} [1 − (1 + β̂xᵢ) e^{−β̂xᵢ}] e^{−λ̂xᵢ} = 0, (43)

Σᵢ₌₁ⁿ [(n + 2)(n + 1)²/(i(n + 1 − i))] {(n + 1 − i)/(n + 1) − ((β̂ + λ̂ − λ̂ e^{−β̂xᵢ})/β̂) e^{−λ̂xᵢ}} [β̂xᵢ − (1 − λ̂xᵢ)(1 − e^{−β̂xᵢ})] e^{−λ̂xᵢ} = 0. (44)
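The WLSE criterion (42) is coded analogously to the percentile objective; applied to the exact model percentiles it vanishes at the true parameters (a sketch, names ours):

```python
import math

def we_sf(x, beta, lam):
    """Survival function 1 - F from (4)."""
    return (beta + lam - lam * math.exp(-beta * x)) * math.exp(-lam * x) / beta

def psi_n(beta, lam, xs):
    """Weighted least-squares criterion (42)."""
    n = len(xs)
    total = 0.0
    for i, x in enumerate(sorted(xs), start=1):
        w = (n + 2.0) * (n + 1.0) ** 2 / (i * (n + 1.0 - i))
        total += w * ((n + 1.0 - i) / (n + 1.0) - we_sf(x, beta, lam)) ** 2
    return total
```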

4. Bayesian analysis

In this section, we consider Bayesian estimation of the unknown parameters α and β. In many practical situations, information about the shape and scale parameters of the sampling distribution is available in an independent manner. If both parameters are unknown, a joint conjugate prior does not exist. It is therefore not unreasonable to assume independent gamma priors on the shape and scale parameters of a two-parameter lifetime distribution, because gamma distributions are very flexible, and the Jeffreys (non-informative) prior, introduced by Jeffreys [17], is a special case. Independent gamma priors have been used in the Bayesian analysis


of the Weibull distribution; see, for example, Kundu [18]. It is assumed that α and β have independent gamma prior distributions with pdfs

g(α) ∝ α^{a−1} e^{−bα}, α > 0, (45)

and

g(β) ∝ β^{c−1} e^{−dβ}, β > 0. (46)

Thus the joint prior for both parameters is given by p(α, β) ∝ α^{a−1} exp(−bα) β^{c−1} exp(−dβ). The hyperparameters a, b, c and d are known and non-negative. For simplicity, we again reparametrize λ = β/α; the pdf (1) then becomes

f(x; α, β) = ((α + 1)/α²) β exp(−(β/α)x)(1 − exp(−βx)).

The corresponding likelihood function is

L(x; α, β) = ((α + 1)ⁿ/α²ⁿ) βⁿ exp(−(β/α) Σᵢ₌₁ⁿ xᵢ) Πᵢ₌₁ⁿ (1 − exp(−βxᵢ)).

Thus, the joint posterior distribution is given by

p(α, β|x) ∝ ((α + 1)ⁿ/α²ⁿ) α^{a−1} β^{n+c−1} exp(−β((1/α) Σᵢ₌₁ⁿ xᵢ + d)) exp(−bα) Πᵢ₌₁ⁿ (1 − exp(−βxᵢ)).

The full conditional distribution of β given α and the data is

p(β|α, x) ∝ β^{n+c−1} exp(−β((1/α) Σᵢ₌₁ⁿ xᵢ + d)) Πᵢ₌₁ⁿ (1 − exp(−βxᵢ)),

and p(β|α, x) is a log-concave density function (for a proof, see Farahani and Khorram [2]). Similarly, the full conditional distribution of α given β and the data is

p(α|β, x) ∝ ((α + 1)ⁿ/α²ⁿ) α^{a−1} exp(−bα) exp(−(β/α) Σᵢ₌₁ⁿ xᵢ),

and p(α|β, x) has a finite maximum point (see Farahani and Khorram [2]).

Since p(β|α, x) is log-concave and p(α|β, x) has a finite maximum point, it is possible, using the adaptive acceptance–rejection sampling idea of Gilks and Wild [19], to generate draws from p(β|α, x) and p(α|β, x), respectively. We now provide an MCMC algorithm to compute the Bayes estimates and their respective posterior risks using the different loss functions given in Table 1 (see Ali [20]). We exclude the squared error and LINEX loss functions because these were used by Farahani and Khorram [2].


Table 1. Bayes estimator and posterior risk under different loss functions.

Loss function                            Bayes estimator (BE)        Posterior risk (PR)
L1 = WSELF = (θ − d)^2/θ                 (E(θ^{−1}|x))^{−1}          E(θ|x) − (E(θ^{−1}|x))^{−1}
L2 = MSELF = (1 − d/θ)^2                 E(θ^{−1}|x)/E(θ^{−2}|x)     1 − E(θ^{−1}|x)^2/E(θ^{−2}|x)
L3 = PLF = (θ − d)^2/d                   √E(θ^2|x)                   2[√E(θ^2|x) − E(θ|x)]
L4 = KLF = (√(d/θ) − √(θ/d))^2           √(E(θ|x)/E(θ^{−1}|x))       2[√(E(θ|x)E(θ^{−1}|x)) − 1]

Notes: WSELF, weighted squared error loss function; MSELF, modified squared error loss function; PLF, precautionary loss function; KLF, K-loss function.
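Given a posterior sample of θ, the moments in Table 1 are replaced by sample averages; a minimal Python sketch (the function name is ours, not from the paper):

```python
import numpy as np

def bayes_estimates(theta):
    """Bayes estimators (BE) and posterior risks (PR) of Table 1,
    approximated from a positive posterior sample of theta."""
    theta = np.asarray(theta, dtype=float)
    m1  = theta.mean()            # E(theta    | x)
    m2  = (theta**2).mean()       # E(theta^2  | x)
    im1 = (1.0/theta).mean()      # E(theta^-1 | x)
    im2 = (1.0/theta**2).mean()   # E(theta^-2 | x)
    return {  # loss: (BE, PR)
        "WSELF": (1.0/im1,          m1 - 1.0/im1),
        "MSELF": (im1/im2,          1.0 - im1**2/im2),
        "PLF":   (np.sqrt(m2),      2.0*(np.sqrt(m2) - m1)),
        "KLF":   (np.sqrt(m1/im1),  2.0*(np.sqrt(m1*im1) - 1.0)),
    }
```

A quick sanity check: for a degenerate posterior sample all four risks vanish and every estimator returns the common value.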

4.1. Algorithm

• Take some initial values of α and β, say α0 and β0.
• Generate αi+1 from p(α|βi, x) and βi+1 from p(β|αi+1, x) using the method of Gilks and Wild.[19]
• Repeat the above step M times to obtain (αi, λi = βi/αi), i = 1, . . . , M.
• An approximate Bayes estimate of a function of θ = (α, β) under the above-defined loss functions can be obtained as

θ̂_B = (1/M) Σ_{i=1}^{M} θi.

• The credible interval can be obtained by sorting the observations of θ.
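The steps above can be sketched in code. As a simple stand-in for the adaptive rejection sampler of Gilks and Wild,[19] we use random-walk Metropolis-within-Gibbs updates on the log scale; the data, step size, and hyperparameter values a = b = c = d = 1 are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(alpha, beta, x, a=1.0, b=1.0, c=1.0, d=1.0):
    """Log joint posterior (up to a constant) with gamma priors (45)-(46)."""
    if alpha <= 0 or beta <= 0:
        return -np.inf
    n = len(x)
    return (n*np.log1p(alpha) - 2*n*np.log(alpha) + (a - 1)*np.log(alpha)
            - b*alpha + (n + c - 1)*np.log(beta)
            - beta*(x.sum()/alpha + d)
            + np.log1p(-np.exp(-beta*x)).sum())

def gibbs(x, M=3000, burn=1000):
    """Metropolis-within-Gibbs sketch; stores (alpha_i, lambda_i = beta_i/alpha_i)."""
    alpha, beta = 1.0, 1.0
    draws = []
    for i in range(M + burn):
        for which in (0, 1):  # update alpha, then beta
            prop = [alpha, beta]
            prop[which] *= np.exp(0.2*rng.standard_normal())
            # log-scale random walk: include the Jacobian prop/current
            log_r = (log_post(prop[0], prop[1], x) - log_post(alpha, beta, x)
                     + np.log(prop[which]) - np.log([alpha, beta][which]))
            if np.log(rng.uniform()) < log_r:
                alpha, beta = prop
        if i >= burn:
            draws.append((alpha, beta/alpha))
    return np.array(draws)

x = rng.exponential(1.0, size=50)   # illustrative data only
draws = gibbs(x)
alpha_hat, lam_hat = draws.mean(axis=0)
```

The retained draws can then be fed to the loss functions of Table 1, and sorting them gives the credible intervals.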

5. Simulation results

In this section, we present some experimental results to compare the performance of the different estimators proposed in the previous sections. We perform extensive Monte Carlo simulations to compare the different estimators, mainly with respect to their biases and mean-squared errors (MSEs), for different sample sizes and different parameter values. The number of replications is 10,000 in all the simulations. We have considered sample sizes n = 10, 20, 50, 100. For the frequentist estimators, we consider λ = 1, since λ is a scale parameter, and β = 1, 0.5, 2. The simulation results are

Table 2. The data were generated with λ = 1 and β = 1.

        MME1            MME2            MLE             PCE              WLSE
n       Bias    MSE     Bias    MSE     Bias    MSE     Bias    MSE      Bias    MSE

λ̂
10      0.238   0.225   0.190   0.209   0.197   0.218    0.030  1.141    0.182   0.194
20      0.175   0.137   0.144   0.129   0.149   0.136   −0.020  0.130    0.161   0.127
50      0.127   0.084   0.110   0.081   0.115   0.084   −0.014  0.110    0.138   0.084
100     0.100   0.061   0.090   0.059   0.094   0.060    0.004  0.092    0.116   0.064

β̂
10      2.337   3.2e+3  31.7    8.4e+7  6.133   1.9e+3  1.5e+7  9.0e+17  7.5e+5  2.5e+15
20      1.799   2.1e+3  2.299   1.9e+3  6.174   5.0e+3  6.9e+4  3.1e+11  1.4e+3  9.8e+8
50      0.710   238.1   0.893   401.9   3.094   6.2e+3  3.5e+4  6.7e+11  0.045   4.417
100     0.131   5.172   0.365   228.5   0.287   380.5   8.3e+3  1.5e+10  −0.084  1.194



Table 3. The data were generated with λ = 1 and β = 0.5.

        MME1            MME2            MLE             PCE              WLSE
n       Bias    MSE     Bias    MSE     Bias    MSE     Bias    MSE      Bias    MSE

λ̂
10      0.128   0.141   0.088   0.137   0.092   0.143   −0.067  1.536    0.070   0.125
20      0.085   0.084   0.059   0.084   0.062   0.088   −0.120  0.114    0.062   0.078
50      0.058   0.052   0.043   0.052   0.047   0.054   −0.110  0.096    0.056   0.050
100     0.048   0.039   0.039   0.039   0.042   0.040   −0.086  0.077    0.051   0.038

β̂
10      1.353   105.5   61.79   3.6e+7  5.281   1.4e+3  9.7e+4  5.6e+11  4.0e+5  1.6e+15
20      1.232   691.9   3.131   2.3e+4  5.255   3.9e+3  6.2e+4  3.5e+11  944.2   5.0e+8
50      0.541   70.89   0.669   65.45   2.418   3.9e+3  2.1e+4  4.2e+10  0.194   1.938
100     0.217   7.882   0.393   211.0   0.141   0.826   9.6e+3  5.8e+10  0.072   0.682

Table 4. The data were generated with λ = 1 and β = 2.

        MME1            MME2            MLE             PCE              WLSE
n       Bias    MSE     Bias    MSE     Bias    MSE     Bias    MSE      Bias    MSE

λ̂
10      0.354   0.370   0.296   0.332   0.304   0.348   0.174   1.347    0.308   0.328
20      0.255   0.219   0.218   0.201   0.221   0.210   0.104   0.192    0.259   0.216
50      0.164   0.119   0.144   0.111   0.147   0.114   0.091   0.158    0.190   0.129
100     0.105   0.070   0.094   0.066   0.095   0.066   0.090   0.131    0.134   0.081

β̂
10      1.916   3.7e+3  3.313   3.1e+3  7.909   3.2e+3  2.0e+7  9.8e+17  6.2e+5  2.4e+15
20      1.352   277.7   2.985   5.7e+3  8.771   7.5e+3  9.6e+4  8.1e+11  3.7e+3  4.2e+9
50      1.878   2.1e+3  1.709   1.0e+3  4.647   1.1e+4  4.7e+4  5.3e+11  0.551   3.0e+3
100     1.017   1.1e+3  2.329   1.2e+4  0.493   430.9   1.9e+4  3.8e+11  −0.153  2.754

Table 5. Bayes estimates of α and λ and their posterior risks under different loss functions.

        WSELF                MSELF                PLF                  KLF
n       Estimate  Risk       Estimate  Risk       Estimate  Risk       Estimate  Risk

α̂
10      1.282302  0.000355   1.281920  0.000298   1.291959  0.018602   1.462998  0.603375
20      1.208738  0.000209   1.208518  0.000182   1.215210  0.012525   1.336034  0.443432
50      1.196898  0.000161   1.196724  0.000145   1.200116  0.006114   1.312961  0.4066862
100     1.056134  0.000047   1.056086  0.000045   1.057569  0.002775   1.086846  0.1180107

λ̂
10      1.218986  0.000689   1.218287  0.000573   1.220017  0.000682   1.346992  0.442096
20      1.186965  0.000377   1.186582  0.000323   1.187528  0.000371   1.293788  0.376182
50      1.138355  0.000216   1.138137  0.000192   1.138678  0.000214   1.214898  0.2780002
100     1.038662  0.000108   1.038555  0.000104   1.038824  0.000108   1.058715  0.077970

provided in Tables 2–4. MME1 denotes the MME obtained using the first method and MME2 denotes the MME obtained using the second method described in this article.

For the Bayesian analysis, we use the above-mentioned Gibbs sampling algorithm. We present the averages of the Bayes estimates of α and λ and their respective posterior risks in Table 5, while the credible intervals under different loss functions are reported in Table 6. The hyperparameters are selected in such a way that the mean of the prior distribution is approximately equal to one while the variance equals 0.001.
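The simulated samples can be drawn by simple rejection: the WE pdf f(x; α, λ) = ((α + 1)/α) λ e^{−λx}(1 − e^{−αλx}) is dominated by (α + 1)/α times the Exp(λ) density, so an Exp(λ) proposal accepted with probability 1 − e^{−αλx} yields an exact WE(α, λ) draw. A sketch (the function name is ours; the paper does not state its generator):

```python
import numpy as np

def rweighted_exp(n, alpha, lam, rng=None):
    """Draw n variates from WE(alpha, lam) by rejection from an Exp(lam)
    proposal; the acceptance probability is f/(M g) = 1 - exp(-alpha*lam*x)."""
    rng = rng or np.random.default_rng()
    out = np.empty(n)
    filled = 0
    while filled < n:
        y = rng.exponential(1.0/lam, size=n - filled)
        # -expm1(-z) computes 1 - exp(-z) accurately for small z
        keep = y[rng.uniform(size=y.size) < -np.expm1(-alpha*lam*y)]
        out[filled:filled + keep.size] = keep
        filled += keep.size
    return out

# e.g. the Table 2 setting: lambda = 1, with alpha = beta/lambda = 1
sample = rweighted_exp(100_000, alpha=1.0, lam=1.0, rng=np.random.default_rng(0))
# the WE mean is (alpha + 2) / (lam*(alpha + 1)), i.e. 1.5 in this setting
```

The overall acceptance rate is α/(α + 1), so the loop terminates quickly for moderate α.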



Table 6. 95% credible intervals of α and λ under different loss functions.

n       WSELF                   MSELF                   PLF                     KLF

α̂
10      [1.248800, 1.318318]    [1.248800, 1.318318]    [1.257264, 1.328056]    [1.404724, 1.524854]
20      [1.183942, 1.235295]    [1.183942, 1.235295]    [1.189943, 1.241879]    [1.294478, 1.380216]
50      [1.175855, 1.217495]    [1.175855, 1.217495]    [1.178602, 1.220371]    [1.277917, 1.346789]
100     [1.043234, 1.069478]    [1.043234, 1.069478]    [1.044649, 1.070592]    [1.067020, 1.107012]

λ̂
10      [1.166506, 1.274228]    [1.166506, 1.274228]    [1.166506, 1.274228]    [1.259883, 1.438370]
20      [1.147705, 1.225636]    [1.147705, 1.225636]    [1.147705, 1.225636]    [1.229548, 1.356882]
50      [1.108373, 1.168931]    [1.108373, 1.168931]    [1.108373, 1.168931]    [1.166888, 1.263814]
100     [1.018732, 1.060129]    [1.018732, 1.060129]    [1.018720, 1.060129]    [1.028229, 1.091536]

Some points are quite clear from Tables 2–4: as the sample size increases, the average biases and the MSEs decrease, which verifies the consistency properties of all the estimators.

Now comparing the performances of the different frequentist estimators, it is observed that among all the estimators presented here, the MME2 estimators have the smallest biases and

Figure 2. Comparison of the density function for Lindley approximation (LA) with other loss functions.

Figure 3. Comparison of the density function with other loss functions.



MSEs for both λ and β. The performances of the MLEs are also quite satisfactory. It is evident from Table 5 that MSELF has smaller posterior risk compared to the other loss functions, while PLF is the second choice. As the sample size increases, the posterior risk of all Bayes estimates decreases, which verifies the consistency properties of all the estimators. The posterior risk is reported for the different loss functions; it differs from the mean square error because posterior risk is a more comprehensive measure for comparing different loss functions in a Bayesian set-up. We also evaluated Bayes estimators using β = 0.5, 1, 2 and λ = 1; however, for the sake of brevity, we provide results only for (λ, β) = (1, 1). It is noticed that the credible intervals using WSELF and MSELF have shorter lengths compared to PLF and KLF. Also, the bias of the Bayes estimates in the case of α is smaller than that of all the classical methods (results are not reported in the text), while the reverse is true in the case of λ. Comparing all these, we propose to use the MLEs or the Bayes estimators for all practical purposes in estimating the parameters of the WE distribution.

The graphical presentations of the density function under different loss functions and the Lindley approximation (LA) are shown in Figures 2 and 3.

6. Real data analysis

In this section, we analyse, for illustrative purposes, two different data sets provided by Gupta and Kundu.[1] The first data set concerns survival times of guinea pigs injected with different amounts of tubercle bacilli, while the second data set deals with the marks of the slow-pace students in Mathematics in the final examination at the Indian Institute of Technology, Kanpur, India.

The results of the parameter estimates and the associated plots are in Tables 7–10 and Figures 4–7.

Table 7. The parameter estimates under consideration (Data Set 1).

MME1 MME2 MLE PCE WLS

λ̂      0.01292698    0.01279137    0.01383586    0.01876583    0.02125019
β̂      0.03159294    0.03341571    0.02247038    3.030896E−7   1.001832E−7

Table 8. The parameter estimates under consideration (Data Set 2).

MME1 MME2 MLE PCE WLS

λ̂      0.06995427    0.06546179    0.06879202    0.07175803    0.07774621
β̂      0.01624684    0.02870243    0.01924185    1.042404E−6   2.691187E−7

Table 9. The parameter estimates and the simple bootstrap confidence intervals under consideration (Data Set 1).

λ β

Method Estimate Confidence interval Estimate Confidence interval

MME1    0.01292698    [0.00325864, 0.01558949]    0.03159294    [−14.93681, 0.0551208]
MME2    0.01279137    [0.00338449, 0.01540359]    0.03341571    [−14.93317, 0.0585504]
MLE     0.01383586    [0.00507647, 0.01687257]    0.02247038    [0.00469283, 0.0449407]
PCE     0.01876583    [0.01538176, 0.02166524]    3.03090E−7    [−4.2336E−7, 5.9068E−7]
WLS     0.02125019    [0.01753118, 0.03067336]    1.00183E−7    [−0.028299, 1.9034E−7]



Table 10. The parameter estimates and the simple bootstrap confidence intervals under consideration (Data Set 2).

λ β

Method Estimate Confidence Interval Estimate Confidence Interval

MME1    0.06995427    [0.0446258, 0.09283675]     0.01624684    [−5.967506, 0.0163892]
MME2    0.06546179    [0.0362490, 0.08456817]     0.02870243    [−5.942595, 0.0413712]
MLE     0.06879202    [0.0422516, 0.09269258]     0.01924185    [−0.122452, 0.0384837]
PCE     0.07175803    [0.0540586, 0.08988366]     1.04241E−6    [−0.054970, 2.0397E−6]
WLS     0.07774621    [0.05987982, 0.1096388]     2.69119E−7    [−0.125780, 5.0892E−7]

Figure 4. The empirical cdf with the fitted cdf (Data Set 1).

Figure 5. The histogram with the fitted pdf (Data Set 1).

For the first data set, the results of the parameter estimates using the methods provided earlier are in Table 7. The empirical cdf with the fitted cdf is plotted in Figure 4 and the histogram with the fitted pdf in Figure 5.



Figure 6. The empirical cdf with the fitted cdf (Data Set 2).

Figure 7. The histogram with the fitted pdf (Data Set 2).

For the second data set, the results of the parameter estimates are in Table 8. The empirical cdf with the fitted cdf is plotted in Figure 6 and the histogram with the fitted pdf in Figure 7.

The Bayes estimates of the parameters, the posterior risks, and the respective credible intervals under different loss functions for both data sets are presented in Table 11.

In Tables 9–10, we also report bootstrap confidence intervals obtained using the basic bootstrap technique (Davison and Hinkley [21]). Let T be a statistic that estimates the parameter θ under consideration and let t be the realization of T. Let t*_b, b = 1, 2, . . . , B, be the bootstrap-simulated copies of t, with order statistics t*_(1) ≤ · · · ≤ t*_(B). We generated bootstrap samples of size B = 999 to find the 100(1 − α)% confidence intervals with α = 0.05. Then the basic bootstrap confidence interval is given by

[2t − t*_((B+1)(1−α/2)), 2t − t*_((B+1)(α/2))] = [2t − t*_(975), 2t − t*_(25)].

For more details, see Section 2.4 of Davison and Hinkley.[21]
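The basic (reflected-percentile) interval above is straightforward to code; a minimal sketch for the mean of a sample, with B = 999 as in the text (the function name and the demo data are ours):

```python
import numpy as np

def basic_bootstrap_ci(x, stat=np.mean, B=999, level=0.95, rng=None):
    """Basic bootstrap CI: [2t - t*_((B+1)(1-a/2)), 2t - t*_((B+1)(a/2))]."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x)
    t = stat(x)
    # B resampled copies of the statistic, sorted to get order statistics
    tstar = np.sort([stat(rng.choice(x, size=x.size, replace=True))
                     for _ in range(B)])
    a = 1.0 - level
    lo_idx = int((B + 1)*a/2) - 1        # order statistic t*_(25) for B = 999
    hi_idx = int((B + 1)*(1 - a/2)) - 1  # order statistic t*_(975)
    return 2*t - tstar[hi_idx], 2*t - tstar[lo_idx]

# hypothetical usage on simulated data
data = np.random.default_rng(42).normal(10.0, 1.0, size=200)
lo, hi = basic_bootstrap_ci(data, rng=np.random.default_rng(1))
```

Note the reflection 2t − t* around the point estimate, which distinguishes the basic method from the plain percentile interval.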



Table 11. Bayes estimates of α and λ and their posterior risks with respective credible intervals under different loss functions.

        WSELF               MSELF               PLF                 KLF
        Estimate  Risk      Estimate  Risk      Estimate  Risk      Estimate  Risk

EX1
α       1.17600   0.01298   1.17009   0.00502   1.20534   0.03272   1.30711   0.470844
λ       0.02093   0.01008   0.01987   0.05060   0.10226   0.14248   0.01479   0.100101

EX2
α       1.47937   0.01183   1.47055   0.00595   1.50356   0.02470   1.82877   1.056294
λ       0.09186   0.01023   0.08755   0.04684   0.13690   0.06961   0.04149   0.05919

Credible interval

        WSELF                  MSELF                  PLF                    KLF

CI1
α       [1.14317, 1.26144]     [1.14317, 1.26144]     [1.14470, 1.26278]     [1.22391, 1.41930]
λ       [0.01918, 0.02637]     [0.01918, 0.02637]     [0.01918, 0.02637]     [0.00265, 0.00429]

CI2
α       [1.41687, 1.60182]     [1.41687, 1.60182]     [1.41982, 1.60505]     [1.69027, 2.03258]
λ       [0.08196, 0.11617]     [0.08196, 0.11617]     [0.08196, 0.11617]     [0.02349, 0.03975]

In Table 11, EX1 and CI1 stand for the first data set, while EX2 and CI2 stand for the second data set. Here, we considered a uniform prior because we do not have any prior information about these data sets.

7. Conclusion

In this paper, we have considered statistical properties and several estimation techniques for estimating the unknown parameters of the WE distribution. In particular, we have considered the MLEs, the MMEs, the weighted least-squares estimators, and the Bayes estimators. As it is not feasible to compare these methods theoretically, we have performed an extensive simulation study to compare them. We have compared the different frequentist estimators mainly with respect to biases and MSEs, and the Bayes estimators with respect to posterior risks. It is observed that the performances of the MLEs are quite satisfactory. We recommend the use of the MLEs or the Bayes estimators for all practical purposes.

Acknowledgments

The authors would like to express their sincere thanks to the Editor, Associate Editor and the referee(s) for their helpful comments and suggestions which led to the improvement of the paper.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

[1] Gupta RD, Kundu D. A new class of weighted exponential distributions. Stat J Theor Appl Statist. 2009;43(6):621–634.

[2] Farahani ZSM, Khorram E. Bayesian statistical inference for weighted exponential distribution. Commun Statist Simul Comput. 2014;43:1362–1384.

[3] Kundu D, Raqab MZ. Generalized Rayleigh distribution: different methods of estimations. Comput Statist Data Anal. 2005;49:187–200.

[4] Alkasasbeh MR, Raqab MZ. Estimation of the generalized logistic distribution parameters: comparative study. Statist Methodol. 2009;6:262–279.

[5] Dey S, Dey T, Kundu D. Two-parameter Rayleigh distribution: different methods of estimation. Amer J Math Manage Sci. 2014;33:55–74.

[6] Teimouri M, Hoseini SM, Nadarajah S. Comparison of estimation methods for the Weibull distribution. Stat J Theor Appl Statist. 2013;47(1):93–109.

[7] Berkson J. Application of the logistic function to bioassay. J Amer Statist Assoc. 1944;39:357–365.

[8] Ojo MO. Some relationships between the generalized logistic and other distributions. Statistica. 1997;LVII:573–579.

[9] Shaked M, Shanthikumar JG. Stochastic orders and their applications. Boston, MA: Academic Press; 1994.

[10] Giorgi GM, Nadrajah S. Bonferroni and Gini indices for various parametric families of distributions. METRON – Int J Statist. 2010;LXVIII(1):23–46.

[11] Guess F, Nam KH, Park DH. Failure rate and mean residual life with trend changes. Asia Pacific J Oper Res. 1998;15:239–244.

[12] Lawless JF. Statistical models and methods for lifetime data. 2nd ed. New York: Wiley; 1982.

[13] Kao JHK. Computer methods for estimating Weibull parameters in reliability studies. Trans IRE Relia Qual Cont. 1958;13:15–22.

[14] Kao JHK. A graphical estimation of mixed Weibull parameters in life-testing of electron tubes. Technometrics. 1959;1:389–407.

[15] Gupta RD, Kundu D. Generalized exponential distribution: different method of estimations. J Statist Comput Simul. 2001;69:315–337.

[16] Swain JJ, Venkataraman S, Wilson JR. Least-squares estimation of distribution functions in Johnson's translation system. J Statist Comput Simul. 1988;29:271–297.

[17] Jeffreys H. An invariant form for the prior probability in estimation problems. Proc R Soc London Ser A Math Phys Sci. 1946;186(1007):435–461.

[18] Kundu D. Bayesian inference and life testing plan for the Weibull distribution in presence of progressive censoring. Technometrics. 2008;50:144–154.

[19] Gilks WR, Wild P. Adaptive rejection sampling for Gibbs sampling. Appl Statist. 1992;41:337–348.

[20] Ali S. On the Bayesian estimation of the weighted Lindley distribution. J Statist Comput Simul. 2013;10:262–279. doi:10.1080/00949655.2013.847442.

[21] Davison AC, Hinkley DV. Bootstrap methods and their application. New York: Cambridge University Press; 1997.
