
Communications in Statistics—Theory and Methods, 41: 1–15, 2012
Copyright © Taylor & Francis Group, LLC
ISSN: 0361-0926 print / 1532-415X online
DOI: 10.1080/03610920903537301

Pitman Closeness Comparison of Best Linear Unbiased and Invariant Predictors for Exponential Distribution in One- and Two-Sample Situations

N. BALAKRISHNAN1, KATHERINE F. DAVIES2, JEROME P. KEATING3, AND ROBERT L. MASON4

1Department of Mathematics and Statistics, McMaster University, Hamilton, Ontario, Canada
2Department of Statistics, University of Manitoba, Winnipeg, Manitoba, Canada
3Department of Demography, University of Texas at San Antonio, San Antonio, Texas
4Southwest Research Institute, San Antonio, Texas

Best linear unbiased, best linear invariant, and maximum likelihood predictors are commonly used in reliability studies for predicting either censored failure times or lifetimes from a future life-test. In this article, by assuming a Type-II right-censored sample from an exponential distribution, we compare best linear unbiased (BLUP) and best linear invariant (BLIP) predictors of the censored order statistics in the one-sample case and of order statistics from a future sample in the two-sample case, in terms of the Pitman closeness criterion. Some specific conclusions are drawn and supporting numerical results are presented.

Keywords Best linear invariant estimator; Best linear invariant predictor; Best linear unbiased estimator; Best linear unbiased predictor; Order statistics; Pitman closeness; Probabilities of closeness.

Mathematics Subject Classification 62N03; 62N05.

Received August 17, 2009; Accepted December 7, 2010.
Address correspondence to Katherine Davies, Department of Statistics, University of Manitoba, Winnipeg, Manitoba R3T 2N2, Canada; E-mail: [email protected]

1. Introduction

Much interest centers on comparisons of optimal estimators based on different criteria. For example, one may want to compare an unbiased estimator to a minimum mean squared error estimator. If one uses unbiasedness or mean squared error as the criterion, the outcome is obvious. In this regard, Rao (1981) recommended comparisons using an alternative criterion, that of Pitman's measure of closeness. The Pitman closeness of $T_1$ relative to $T_2$ is the probability that the estimator $T_1$ produces an estimate which is closer to a real-valued parameter $\theta$ than the one produced by a competitor $T_2$, both estimators being based on a sample of size $n$. This paired measure of comparison is formally given (see Pitman, 1937) by
\[
\pi(T_1, T_2 \mid \theta, n) = \Pr\big(|T_1 - \theta| < |T_2 - \theta|\big). \qquad (1)
\]
Whenever both estimators are functions of a common statistic (as often occurs in the presence of a sufficient statistic), we will denote $\pi(T_1, T_2 \mid \theta, n)$ by $\pi_n$. An estimator $T_1$ is said to be Pitman-closer than $T_2$ whenever $\pi(T_1, T_2 \mid \theta, n) \ge \pi(T_2, T_1 \mid \theta, n)$ for all values of $\theta$ in the parameter space $\Theta$, with strict inequality holding for at least one $\theta \in \Theta$. This pairwise comparison has been extended by Pitman to include all estimators in a class $\mathcal{C}$. An estimator is then said to be Pitman-closest if it is Pitman-closer than every other estimator $T_j$ in $\mathcal{C}$. An estimator $T_j$ is Pitman-closeness inadmissible within a class $\mathcal{C}$ if there is an estimator $T_i$ in $\mathcal{C}$ which is Pitman-closer than $T_j$. For a comprehensive review of developments concerning Pitman closeness as a criterion, one may refer to Keating et al. (1993).
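As an aside not in the original article, the probability in (1) is easy to approximate by simulation. The minimal Monte Carlo sketch below is ours and uses two illustrative estimators of an exponential mean (the sample mean, which is unbiased, and the minimum-MSE rescaling of the sum), mirroring the BLUE-versus-BLIE structure studied later; it is a sketch under these assumptions, not a method from the article.

```python
import numpy as np

def pitman_closeness(n=10, theta=1.0, reps=100_000, seed=1):
    """Monte Carlo estimate of Pr(|T1 - theta| < |T2 - theta|) as in (1).

    Illustrative (hypothetical) choices: T1 = sample mean (unbiased),
    T2 = sum / (n + 1), the minimum-MSE scalar multiple for Exp(theta).
    """
    rng = np.random.default_rng(seed)
    x = rng.exponential(theta, size=(reps, n))
    t1 = x.mean(axis=1)            # unbiased estimator of theta
    t2 = x.sum(axis=1) / (n + 1)   # minimum-MSE scalar multiple
    return np.mean(np.abs(t1 - theta) < np.abs(t2 - theta))

print(pitman_closeness())
```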

By considering a two-parameter exponential distribution, Nagaraja (1986) compared the best linear unbiased (BLUE) and best linear invariant (BLIE) estimators of the location and scale parameters based on Type-II censored samples. Recently, Balakrishnan et al. (2009) discussed the Pitman closeness of the sample median to the population median in the class of order statistics, which has been generalized subsequently by Balakrishnan et al. (2009) to address the Pitman closeness of order statistics to population quantiles. In this article, continuing along the lines of Nagaraja (1986), by assuming a Type-II right-censored sample from a scaled exponential distribution, we compare the BLUP and the BLIP as predictors of censored order statistics. Based on a Type-II censored sample $X_{1:n} < \cdots < X_{r:n}$, we discuss the prediction of $X_{s:n}$ (for $r < s \le n$) in the one-sample case and of $Y_{s:m}$ (for $1 \le s \le m$) in the two-sample case. It is shown that the BLUP is always Pitman-closer when $r = 1$, and the BLIP is always Pitman-closer when $s = r + 1$ except in the case when $r = 1$. When $r$ is small, the BLUP is Pitman-closer, but as $r$ gets larger, the BLIP becomes Pitman-closer. In the two-sample case, while dealing with the prediction of order statistics from a future sample, we observe that the BLUP is Pitman-closer than the BLIP in general. In fact, for small $r$, the BLUP is always Pitman-closer. As $r$ gets larger, for small $s$ the BLIP is Pitman-closer in general and, in fact, is uniformly Pitman-closer in the special case when $s = 1$. As $s$ gets larger for fixed $r$, the BLUP is Pitman-closer in general. We present some supportive numerical results for both these situations.

2. BLUP vs. BLIP: One-Sample Case

In reliability testing, it is often of great interest to predict censored failure times based on observed censored data. For example, having observed the first $r$ order statistics, the prediction of the largest order statistic would give an idea as to how long the life-test experiment would have lasted had it not been terminated at the $r$th failure. These predictions could naturally be used in projecting or forecasting costs of continued testing. In this situation, there is a need to modify the definition of closeness in that the target is no longer a parameter $\theta$, but rather a random quantity, $X_{s:n}$, where $s > r$, the censored order statistic that we wish to predict. So, in this context, we drop the terminology of estimators in favor of the more proper terminology of predictors. We consider the BLUP and the BLIP of all censored order statistics. It is known that the BLUP and BLIP of $X_{s:n}$, $s > r$, based on a Type-II right-censored sample $X_{1:n} \le \cdots \le X_{r:n}$ from the $\mathrm{Exp}(\theta)$ distribution, are given by
\[
X^{*}_{s:n} = X_{r:n} + T_1(\alpha_{s:n} - \alpha_{r:n})
\quad\text{and}\quad
X^{**}_{s:n} = X_{r:n} + T_2(\alpha_{s:n} - \alpha_{r:n}),
\]
where $T_1$ and $T_2$ are the BLUE and BLIE of $\theta$ given by
\[
T_1 = \frac{1}{r}\,T = \frac{1}{r}\left[\sum_{i=1}^{r} X_{i:n} + (n-r)X_{r:n}\right]
\quad\text{and}\quad
T_2 = \frac{1}{r+1}\,T = \frac{1}{r+1}\left[\sum_{i=1}^{r} X_{i:n} + (n-r)X_{r:n}\right],
\]
respectively, and
\[
\alpha_{i:n} = \frac{1}{n} + \frac{1}{n-1} + \cdots + \frac{1}{n-i+1}
\qquad\text{for } i = 1, 2, \ldots, n.
\]
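For concreteness, the following small sketch (ours, not the authors') computes $T_1$, $T_2$ and the resulting BLUP and BLIP from the formulas above; the censored sample values used in the example are hypothetical.

```python
import numpy as np

def alpha(i, n):
    """alpha_{i:n} = 1/n + 1/(n-1) + ... + 1/(n-i+1)."""
    return sum(1.0 / (n - j) for j in range(i))

def blup_blip(x_cens, n, s):
    """BLUP and BLIP of X_{s:n} from the first r observed order statistics."""
    x = np.sort(np.asarray(x_cens, dtype=float))
    r = len(x)
    T = x.sum() + (n - r) * x[-1]      # T = sum_{i<=r} X_{i:n} + (n - r) X_{r:n}
    t1, t2 = T / r, T / (r + 1)        # BLUE and BLIE of theta
    gap = alpha(s, n) - alpha(r, n)
    return x[-1] + t1 * gap, x[-1] + t2 * gap   # (BLUP, BLIP)

# hypothetical censored sample: first r = 4 failures out of n = 10
print(blup_blip([0.12, 0.35, 0.48, 0.90], n=10, s=7))
```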

Then, the desired Pitman closeness probability becomes, in this case,
\[
\pi_{r,s:n} = \Pr\big(|X_{s:n} - X^{*}_{s:n}| < |X_{s:n} - X^{**}_{s:n}|\big).
\]
Upon using the expressions of the BLUE and BLIE given above, we find
\[
\begin{aligned}
\pi_{r,s:n}
&= \Pr\left(\left|X_{s:n} - X_{r:n} - (\alpha_{s:n} - \alpha_{r:n})\frac{T}{r}\right|
   < \left|X_{s:n} - X_{r:n} - (\alpha_{s:n} - \alpha_{r:n})\frac{T}{r+1}\right|\right) \\
&= \Pr\left(2(X_{s:n} - X_{r:n})\,T\left(\frac{1}{r+1} - \frac{1}{r}\right)
   - (\alpha_{s:n} - \alpha_{r:n})\,T^{2}\left(\frac{1}{(r+1)^{2}} - \frac{1}{r^{2}}\right) < 0\right) \\
&= \Pr\left(T < \frac{2(X_{s:n} - X_{r:n})}{(\alpha_{s:n} - \alpha_{r:n})\left(\frac{1}{r} + \frac{1}{r+1}\right)}\right) \\
&= \Pr\left(G_{r} < \frac{2(Z_{s:n} - Z_{r:n})}{(\alpha_{s:n} - \alpha_{r:n})\left(\frac{1}{r} + \frac{1}{r+1}\right)}\right), \qquad (2)
\end{aligned}
\]
where $Z_{i:n}$ denotes the $i$th order statistic from a sample of size $n$ from the standard exponential distribution, and $Z_{s:n} - Z_{r:n}$ is independent of $G_{r} \equiv T/\theta = \sum_{i=1}^{r} Z_{i:n} + (n-r)Z_{r:n}$. If we now let $S_{1} = nZ_{1:n}$, $S_{2} = (n-1)(Z_{2:n} - Z_{1:n})$, $\ldots$, $S_{r} = (n-r+1)(Z_{r:n} - Z_{r-1:n})$, we then have
\[
G_{r} = \sum_{i=1}^{r} S_{i}, \qquad
Z_{s:n} = \frac{S_{1}}{n} + \frac{S_{2}}{n-1} + \cdots + \frac{S_{s}}{n-s+1}, \qquad
Z_{r:n} = \frac{S_{1}}{n} + \frac{S_{2}}{n-1} + \cdots + \frac{S_{r}}{n-r+1},
\]


and consequently, $Z_{s:n} - Z_{r:n} = \frac{S_{r+1}}{n-r} + \cdots + \frac{S_{s}}{n-s+1}$. Since the spacings $S_{1}, \ldots, S_{r}$ are known to be independent standard exponential random variables, $G_{r}$ is distributed as standard gamma with shape parameter $r$ (see Balakrishnan and Cohen, 1991), and so we can express the Pitman closeness probability $\pi_{r,s:n}$ in (2) as
\[
\pi_{r,s:n} = \int_{0}^{\infty} \Pr\!\left((\alpha_{s:n} - \alpha_{r:n})\,\frac{2r+1}{2r(r+1)}\,x < Z_{s:n} - Z_{r:n}\right)\frac{1}{\Gamma(r)}\,e^{-x}x^{r-1}\,dx. \qquad (3)
\]
For the evaluation of this integral, we need the joint density of $Z_{r:n}$ and $Z_{s:n}$, which is given by
\[
f_{Z_{r:n},Z_{s:n}}(u,v) = a_{r,s:n}\,[F(u)]^{r-1}\,[F(v) - F(u)]^{s-r-1}\,[1 - F(v)]^{n-s}\,f(u)f(v),
\qquad 0 < u < v < \infty, \qquad (4)
\]
where $a_{r,s:n} = \frac{n!}{(r-1)!\,(s-r-1)!\,(n-s)!}$, $F(u) = 1 - e^{-u}$ $(u > 0)$, and $f(u) = e^{-u}$ $(u > 0)$; see Arnold et al. (2008). Upon making the substitution $K = Z_{r:n}$ and $L = Z_{s:n} - Z_{r:n}$, we obtain the joint density of $K$ and $L$, from (4), as
\[
\begin{aligned}
f_{K,L}(k,l) &= a_{r,s:n}\,[F(k)]^{r-1}\,[F(k+l) - F(k)]^{s-r-1}\,[1 - F(k+l)]^{n-s}\,f(k)f(k+l) \\
&= a_{r,s:n}\,(1 - e^{-k})^{r-1}(e^{-k})^{n-r+1}(1 - e^{-l})^{s-r-1}(e^{-l})^{n-s+1},
\qquad k > 0,\ l > 0. \qquad (5)
\end{aligned}
\]
From (5), we readily get the marginal density of $L$ as
\[
\begin{aligned}
f_{L}(l) &= a_{r,s:n}\,(1 - e^{-l})^{s-r-1}(e^{-l})^{n-s+1}\int_{0}^{\infty}(1 - e^{-k})^{r-1}(e^{-k})^{n-r+1}\,dk \\
&= a_{r,s:n}\,(1 - e^{-l})^{s-r-1}(e^{-l})^{n-s+1}\,B(r,\,n-r+1) \\
&= \frac{(n-r)!}{(s-r-1)!\,(n-s)!}\,(1 - e^{-l})^{s-r-1}(e^{-l})^{n-s+1}, \qquad l > 0. \qquad (6)
\end{aligned}
\]
Letting $b_{r,s:n} = \frac{(n-r)!}{(s-r-1)!\,(n-s)!}$, we find from (6) that
\[
\begin{aligned}
\Pr(L \ge c) &= b_{r,s:n}\int_{c}^{\infty}(1 - e^{-l})^{s-r-1}(e^{-l})^{n-s+1}\,dl \\
&= b_{r,s:n}\sum_{i=0}^{s-r-1}(-1)^{i}\binom{s-r-1}{i}\int_{c}^{\infty}(e^{-l})^{n-s+i+1}\,dl \\
&= b_{r,s:n}\sum_{i=0}^{s-r-1}(-1)^{i}\binom{s-r-1}{i}\frac{e^{-c(n-s+i+1)}}{n-s+i+1}. \qquad (7)
\end{aligned}
\]
Therefore, we have
\[
\Pr(Z_{s:n} - Z_{r:n} > Dx) = \Pr(L \ge Dx)
= b_{r,s:n}\sum_{i=0}^{s-r-1}(-1)^{i}\binom{s-r-1}{i}\frac{e^{-Dx(n-s+i+1)}}{n-s+i+1}, \qquad (8)
\]


where $D = (\alpha_{s:n} - \alpha_{r:n})\,\frac{\frac{1}{r} + \frac{1}{r+1}}{2} = (\alpha_{s:n} - \alpha_{r:n})\,\frac{2r+1}{2r(r+1)}$. Upon substituting the expression in (8) into (3), we find
\[
\begin{aligned}
\pi_{r,s:n} &= \Pr\big(|X_{s:n} - X^{*}_{s:n}| < |X_{s:n} - X^{**}_{s:n}|\big) \\
&= \frac{b_{r,s:n}}{\Gamma(r)}\sum_{i=0}^{s-r-1}(-1)^{i}\binom{s-r-1}{i}\left(\frac{1}{n-s+i+1}\right)\int_{0}^{\infty}e^{-x\{1+(n-s+i+1)D\}}x^{r-1}\,dx \\
&= b_{r,s:n}\sum_{i=0}^{s-r-1}(-1)^{i}\binom{s-r-1}{i}\left(\frac{1}{n-s+i+1}\right)\left(\frac{1}{\{1+(n-s+i+1)D\}^{r}}\right). \qquad (9)
\end{aligned}
\]

We computed the values of the Pitman closeness probabilities $\pi_{r,s:n}$ from (9) when $n = 10$ and $15$ for all choices of $r = 1(1)(n-1)$ and $s = (r+1)(1)n$. These values are presented in Tables 1 and 2, respectively, and are also displayed graphically in Figs. 1 and 2. These results reveal that the BLUP is Pitman-closer than the BLIP when $r = 1$, while the BLIP is always Pitman-closer in the case when $s = r + 1$ except when $r = 1$. Also, for small $r$, the BLUP is Pitman-closer in general, while for larger $r$, the BLIP is Pitman-closer.
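The closed form (9) is straightforward to evaluate numerically. The short sketch below is ours (not from the article); for $(r, s, n) = (1, 2, 10)$ it returns approximately 0.5714, the first entry of Table 1.

```python
from math import comb, factorial

def alpha(i, n):
    # alpha_{i:n} = 1/n + 1/(n-1) + ... + 1/(n-i+1)
    return sum(1.0 / (n - j) for j in range(i))

def pi_one_sample(r, s, n):
    """Pitman closeness probability pi_{r,s:n} of BLUP vs. BLIP, eq. (9)."""
    D = (alpha(s, n) - alpha(r, n)) * (2 * r + 1) / (2 * r * (r + 1))
    b = factorial(n - r) // (factorial(s - r - 1) * factorial(n - s))
    return b * sum((-1) ** i * comb(s - r - 1, i)
                   / ((n - s + i + 1) * (1 + (n - s + i + 1) * D) ** r)
                   for i in range(s - r))

print(round(pi_one_sample(1, 2, 10), 4))   # Table 1 reports 0.5714
```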

2.1. Special Case When r = 1

In the special case when $r = 1$, the expression in (9) simplifies to
\[
\begin{aligned}
\pi_{1,s:n} &= \Pr\big(|X_{s:n} - X^{*}_{s:n}| < |X_{s:n} - X^{**}_{s:n}|\big) \\
&= \frac{(n-1)!}{(s-2)!\,(n-s)!}\sum_{i=0}^{s-2}(-1)^{i}\binom{s-2}{i}\left(\frac{1}{n-s+i+1}\right)\left(\frac{1}{1+(n-s+i+1)D}\right) \\
&= \frac{(n-1)!}{(s-2)!\,(n-s)!}\sum_{i=0}^{s-2}(-1)^{i}\binom{s-2}{i}\left(\frac{1}{n-s+i+1}\right)
 - \frac{(n-1)!}{(s-2)!\,(n-s)!}\sum_{i=0}^{s-2}(-1)^{i}\binom{s-2}{i}\left(\frac{1}{1/D+n-s+i+1}\right) \\
&= \frac{(n-1)!}{(s-2)!\,(n-s)!}\,B(n-s+1,\,s-1) - \frac{(n-1)!}{(s-2)!\,(n-s)!}\,B(1/D+n-s+1,\,s-1) \\
&= 1 - \frac{(n-1)(n-2)\cdots(n-s+1)}{(n-1+1/D)(n-2+1/D)\cdots(n-s+1+1/D)}, \qquad (10)
\end{aligned}
\]
where $D = \frac{3}{4}\left[\frac{1}{n-1}+\frac{1}{n-2}+\cdots+\frac{1}{n-s+1}\right]$ in this specific case. We observe from the corresponding results in Tables 1 and 2 that the BLUP is always Pitman-closer than the BLIP in this special case, regardless of what $s$ and $n$ are.

Table 1
Pitman closeness probabilities $\pi_{r,s:n}$ in one-sample case, when $n = 10$

r\s      2       3       4       5       6       7       8       9      10
1   0.5714  0.6398  0.6677  0.6827  0.6918  0.6976  0.7011  0.7020  0.6971
2       –   0.4983  0.5677  0.5990  0.6169  0.6279  0.6345  0.6370  0.6307
3       –       –   0.4640  0.5299  0.5610  0.5790  0.5897  0.5945  0.5885
4       –       –       –   0.4441  0.5064  0.5361  0.5529  0.5612  0.5569
5       –       –       –       –   0.4310  0.4901  0.5179  0.5318  0.5307
6       –       –       –       –       –   0.4217  0.4778  0.5023  0.5070
7       –       –       –       –       –       –   0.4149  0.4673  0.4830
8       –       –       –       –       –       –       –   0.4095  0.4543
9       –       –       –       –       –       –       –       –   0.4053

Table 2
Pitman closeness probabilities $\pi_{r,s:n}$ in one-sample case, when $n = 15$

r\s      2       3       4       5       6       7       8       9      10      11      12      13      14      15
1   0.5714  0.6399  0.6680  0.6833  0.6928  0.6993  0.7040  0.7074  0.7099  0.7117  0.7128  0.7131  0.7118  0.7055
2       –   0.4983  0.5679  0.5996  0.6180  0.6300  0.6383  0.6444  0.6488  0.6520  0.6540  0.6547  0.6531  0.6441
3       –       –   0.4640  0.5302  0.5619  0.5810  0.5936  0.6026  0.6091  0.6137  0.6168  0.6181  0.6165  0.6059
4       –       –       –   0.4441  0.5069  0.5376  0.5565  0.5692  0.5782  0.5846  0.5889  0.5910  0.5897  0.5784
5       –       –       –       –   0.4310  0.4909  0.5206  0.5389  0.5513  0.5600  0.5659  0.5690  0.5683  0.5569
6       –       –       –       –       –   0.4217  0.4792  0.5078  0.5255  0.5374  0.5453  0.5500  0.5502  0.5391
7       –       –       –       –       –       –   0.4149  0.4703  0.4978  0.5148  0.5258  0.5324  0.5340  0.5237
8       –       –       –       –       –       –       –   0.4095  0.4633  0.4897  0.5056  0.5152  0.5188  0.5099
9       –       –       –       –       –       –       –       –   0.4053  0.4575  0.4827  0.4971  0.5037  0.4969
10      –       –       –       –       –       –       –       –       –   0.4018  0.4525  0.4762  0.4876  0.4842
11      –       –       –       –       –       –       –       –       –       –   0.3990  0.4480  0.4688  0.4708
12      –       –       –       –       –       –       –       –       –       –       –   0.3965  0.4430  0.4553
13      –       –       –       –       –       –       –       –       –       –       –       –   0.3945  0.4341
14      –       –       –       –       –       –       –       –       –       –       –       –       –   0.3927

Figure 1. Pitman closeness probabilities $\pi_{r,s:n}$ in one-sample case, when $n = 10$.

Figure 2. Pitman closeness probabilities $\pi_{r,s:n}$ in one-sample case, when $n = 15$.


Table 3
Pitman closeness probabilities $\pi_{r,r+1:n}$ in one-sample case

 r   $\pi_{r,r+1:n}$     r   $\pi_{r,r+1:n}$     r   $\pi_{r,r+1:n}$
 1   0.5714             11   0.3990             21   0.3847
 2   0.4983             12   0.3965             22   0.3840
 3   0.4640             13   0.3945             23   0.3833
 4   0.4441             14   0.3927             24   0.3827
 5   0.4310             15   0.3911             25   0.3821
 6   0.4217             16   0.3897             26   0.3816
 7   0.4149             17   0.3885             27   0.3811
 8   0.4095             18   0.3874             28   0.3806
 9   0.4053             19   0.3864             29   0.3802
10   0.4018             20   0.3855             30   0.3798

2.2. Special Case When s = r + 1

In the case when $s = r + 1$, i.e., when we are interested in predicting the time of the next failure, $X_{r+1:n}$, having observed the Type-II censored sample $X_{1:n}, \ldots, X_{r:n}$, we readily have $D = \frac{2r+1}{2r(n-r)(r+1)}$, and we therefore find from (9) that
\[
\pi_{r,r+1:n} = \Pr\big(\big|X_{r+1:n} - X^{*}_{r+1:n}\big| < \big|X_{r+1:n} - X^{**}_{r+1:n}\big|\big)
= \left\{1 + \frac{2r+1}{r(2r+2)}\right\}^{-r}. \qquad (11)
\]

It is of interest to note that the probability $\pi_{r,r+1:n}$ is free of $n$. This probability is presented in Table 3 for $r = 1(1)30$, from which it is also evident that $\pi_{r,r+1:n}$ is monotone decreasing in $r$ and tends to $e^{-1}$ as $r \to \infty$. This reveals that the BLIP is Pitman-closer than the BLUP as $r$ gets larger. In fact, we find from Table 3 that when $s = r + 1$, the BLUP is Pitman-closer only when $r = 1$, and the BLIP is Pitman-closer for all $r > 1$.
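Since (11) depends only on $r$, it is easy to tabulate. The sketch below (ours) reproduces a few entries of Table 3 and the limiting value $e^{-1}$.

```python
from math import exp

def pi_next_failure(r):
    """Eq. (11): Pitman closeness probability for predicting X_{r+1:n}; free of n."""
    return (1 + (2 * r + 1) / (r * (2 * r + 2))) ** (-r)

for r in (1, 2, 10, 30):
    print(r, round(pi_next_failure(r), 4))   # 0.5714, 0.4983, 0.4018, 0.3798 per Table 3
print(round(exp(-1), 4))                     # limiting value, about 0.3679
```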

Upon examining the values in Tables 1 and 2 closely, we observe the following with regard to the prediction of $X_{s:n}$, based on the Type-II censored sample $X_{1:n} < \cdots < X_{r:n}$: (1) the BLUP is Pitman-closer when $r = 1$ for all choices of $s$ and $n$; (2) the BLIP is Pitman-closer when $s = r + 1$ for all choices of $r$ and $n$ except when $r = 1$; and (3) the BLUP is Pitman-closer in general for small values of $r$, and the BLIP is Pitman-closer in general for large values of $r$.

3. BLUP vs. BLIP: Two-Sample Case

Suppose we have a Type-II right-censored sample $X_{1:n}, \ldots, X_{r:n}$ from the $\mathrm{Exp}(\theta)$ distribution and we are interested in predicting the future sample lifetimes $Y_{1:m}, \ldots, Y_{m:m}$, also from $\mathrm{Exp}(\theta)$. For predicting $Y_{s:m}$, it can be shown in this case that the BLUP and BLIP of $Y_{s:m}$ are $Y^{*}_{s:m} = \alpha_{s:m}T_1$ and $Y^{**}_{s:m} = \alpha_{s:m}T_2$, respectively, where $T_1$ and $T_2$ are the BLUE and BLIE of $\theta$ based on the censored sample $X_{1:n}, \ldots, X_{r:n}$, and $\alpha_{s:m} = \frac{1}{m} + \frac{1}{m-1} + \cdots + \frac{1}{m-s+1}$.


Our aim is to compare $Y^{*}_{s:m}$ and $Y^{**}_{s:m}$ as predictors of $Y_{s:m}$ using the Pitman closeness criterion. In this case, the Pitman closeness probability becomes
\[
\pi_{s:m}(r,n) = \Pr\big(|Y^{*}_{s:m} - Y_{s:m}| < |Y^{**}_{s:m} - Y_{s:m}|\big).
\]
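Before turning to the closed-form derivation, the probability just defined can also be approximated by direct simulation. The Monte Carlo sketch below is ours, offered for illustration only under the definitions above; the configuration used in the call is an arbitrary hypothetical choice.

```python
import numpy as np

def alpha(i, n):
    # alpha_{i:n} = 1/n + 1/(n-1) + ... + 1/(n-i+1)
    return sum(1.0 / (n - j) for j in range(i))

def pi_two_sample_mc(r, n, s, m, theta=1.0, reps=200_000, seed=2):
    """Monte Carlo estimate of Pr(|Y*_{s:m} - Y_{s:m}| < |Y**_{s:m} - Y_{s:m}|)."""
    rng = np.random.default_rng(seed)
    x = np.sort(rng.exponential(theta, size=(reps, n)), axis=1)[:, :r]  # censored sample
    y = np.sort(rng.exponential(theta, size=(reps, m)), axis=1)[:, s - 1]  # Y_{s:m}
    T = x.sum(axis=1) + (n - r) * x[:, -1]
    a = alpha(s, m)
    blup, blip = a * T / r, a * T / (r + 1)
    return np.mean(np.abs(blup - y) < np.abs(blip - y))

# hypothetical configuration
print(round(pi_two_sample_mc(r=5, n=10, s=3, m=8), 3))
```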

Now,
\[
\begin{aligned}
\pi_{s:m}(r,n)
&= \Pr\left(\left|\alpha_{s:m}\frac{T}{r} - Y_{s:m}\right| < \left|\alpha_{s:m}\frac{T}{r+1} - Y_{s:m}\right|\right) \\
&= \Pr\left[\alpha^{2}_{s:m}\left(\frac{1}{r^{2}} - \frac{1}{(r+1)^{2}}\right)T^{2}
   < 2\alpha_{s:m}\left(\frac{1}{r} - \frac{1}{r+1}\right)Y_{s:m}\,T\right] \\
&= \Pr\left[\alpha_{s:m}\left(\frac{1}{r} + \frac{1}{r+1}\right)T < 2Y_{s:m}\right] \\
&= \Pr\left[T < \frac{2r(r+1)}{\alpha_{s:m}(2r+1)}\,Y_{s:m}\right] \\
&= \int_{y=0}^{\infty}\Pr\left[G_{r} < \frac{2r(r+1)}{\alpha_{s:m}(2r+1)}\,y\right]
   \frac{m!\,(1-e^{-y})^{s-1}}{(s-1)!\,(m-s)!}\,(e^{-y})^{m-s}e^{-y}\,dy \\
&= \int_{y=0}^{\infty}\left\{1 - \sum_{i=0}^{r-1}e^{-\frac{2r(r+1)}{\alpha_{s:m}(2r+1)}y}
   \frac{\left(\frac{2r(r+1)}{\alpha_{s:m}(2r+1)}y\right)^{r-1-i}}{(r-1-i)!}\right\}
   \frac{m!\,(1-e^{-y})^{s-1}}{(s-1)!\,(m-s)!}\,(e^{-y})^{m-s}e^{-y}\,dy \\
&= 1 - \sum_{i=0}^{r-1}\frac{m!}{(s-1)!\,(m-s)!\,(r-1-i)!}
   \left(\frac{2r(r+1)}{\alpha_{s:m}(2r+1)}\right)^{r-1-i}
   \int_{y=0}^{\infty}(1-e^{-y})^{s-1}
   e^{-y\left\{\frac{2r(r+1)}{\alpha_{s:m}(2r+1)}+m-s+1\right\}}y^{r-1-i}\,dy \\
&= 1 - \sum_{i=0}^{r-1}\frac{m!}{(s-1)!\,(m-s)!}
   \left(\frac{2r(r+1)}{\alpha_{s:m}(2r+1)}\right)^{r-1-i}
   \sum_{j=0}^{s-1}(-1)^{s-1-j}\binom{s-1}{j}
   \frac{1}{\left[\frac{2r(r+1)}{\alpha_{s:m}(2r+1)}+m-j\right]^{r-i}} \\
&= 1 - \sum_{j=0}^{s-1}(-1)^{s-1-j}\binom{s-1}{j}
   \frac{1}{\left[\frac{2r(r+1)}{\alpha_{s:m}(2r+1)}+m-j\right]}
   \times\frac{m!}{(s-1)!\,(m-s)!}
   \sum_{i=0}^{r-1}\left[\frac{\frac{2r(r+1)}{\alpha_{s:m}(2r+1)}}{\frac{2r(r+1)}{\alpha_{s:m}(2r+1)}+m-j}\right]^{r-i-1} \\
&= 1 - \sum_{j=0}^{s-1}(-1)^{s-1-j}\binom{s-1}{j}
   \frac{1}{\left[\frac{2r(r+1)}{\alpha_{s:m}(2r+1)}+m-j\right]}
   \times\frac{m!}{(s-1)!\,(m-s)!}
   \times\frac{1-\left[\frac{2r(r+1)}{2r(r+1)+(m-j)\alpha_{s:m}(2r+1)}\right]^{r}}
              {1-\left[\frac{2r(r+1)}{2r(r+1)+(m-j)\alpha_{s:m}(2r+1)}\right]} \\
&= 1 - \sum_{j=0}^{s-1}(-1)^{s-1-j}\binom{s-1}{j}
   \frac{\alpha_{s:m}(2r+1)}{2r(r+1)+(m-j)\alpha_{s:m}(2r+1)}
   \times\frac{m!}{(s-1)!\,(m-s)!}
   \times\frac{2r(r+1)+(m-j)\alpha_{s:m}(2r+1)}{(m-j)\alpha_{s:m}(2r+1)}
   \times\left\{1-\left[\frac{2r(r+1)}{2r(r+1)+(m-j)\alpha_{s:m}(2r+1)}\right]^{r}\right\} \\
&= 1 - \frac{m!}{(s-1)!\,(m-s)!}\sum_{j=0}^{s-1}(-1)^{s-1-j}\binom{s-1}{j}\frac{1}{m-j}
   + \frac{m!}{(s-1)!\,(m-s)!}\sum_{j=0}^{s-1}(-1)^{s-1-j}\binom{s-1}{j}\frac{1}{m-j}
   \left[\frac{2r(r+1)}{2r(r+1)+(m-j)\alpha_{s:m}(2r+1)}\right]^{r} \\
&= 1 - C + D, \text{ say}. \qquad (12)
\end{aligned}
\]

Now, consider
\[
\begin{aligned}
C &= \frac{m!}{(s-1)!\,(m-s)!}\sum_{j=0}^{s-1}(-1)^{s-1-j}\binom{s-1}{j}\frac{1}{m-j} \\
&= \frac{m!}{(s-1)!\,(m-s)!}\int_{t=0}^{1}t^{m-s}(1-t)^{s-1}\,dt \\
&= \frac{m!}{(s-1)!\,(m-s)!}\,B(m-s+1,\,s) \\
&= \frac{m!}{(s-1)!\,(m-s)!}\cdot\frac{(m-s)!\,(s-1)!}{m!} = 1.
\end{aligned}
\]

So, we obtain
\[
\begin{aligned}
\pi_{s:m}(r,n)
&= 1 - 1 + \frac{m!}{(s-1)!\,(m-s)!}\sum_{j=0}^{s-1}(-1)^{s-1-j}\binom{s-1}{j}\frac{1}{m-j}
   \left[\frac{2r(r+1)}{2r(r+1)+\alpha_{s:m}(2r+1)(m-j)}\right]^{r} \\
&= \frac{m!}{(s-1)!\,(m-s)!}\sum_{j=0}^{s-1}(-1)^{s-1-j}\binom{s-1}{j}\frac{1}{m-j}
   \left[\frac{2r(r+1)}{2r(r+1)+\alpha_{s:m}(2r+1)(m-j)}\right]^{r}. \qquad (13)
\end{aligned}
\]

3.1. Special Case When r = 1

By letting $r = 1$ in (13), we obtain
\[
\begin{aligned}
\pi_{s:m}(1,n)
&= \frac{4\,m!}{(s-1)!\,(m-s)!}\sum_{j=0}^{s-1}(-1)^{s-1-j}\binom{s-1}{j}\frac{1}{m-j}
   \times\frac{1}{4+3\alpha_{s:m}(m-j)} \\
&= \frac{4\,m!}{(s-1)!\,(m-s)!\,3\alpha_{s:m}}\sum_{j=0}^{s-1}(-1)^{s-1-j}\binom{s-1}{j}\frac{1}{m-j}
   \times\frac{1}{m-j+\frac{4}{3\alpha_{s:m}}}.
\end{aligned}
\]
Using partial fractions and setting $l = m + \frac{4}{3\alpha_{s:m}}$, we get
\[
\begin{aligned}
\pi_{s:m}(1,n)
&= \frac{4\,m!}{(s-1)!\,(m-s)!\,3\alpha_{s:m}}\sum_{j=0}^{s-1}(-1)^{s-1-j}\binom{s-1}{j}
   \left[\frac{1}{m-j}-\frac{1}{l-j}\right]\times\frac{1}{\frac{4}{3\alpha_{s:m}}} \\
&= \frac{m!}{(s-1)!\,(m-s)!}\sum_{j=0}^{s-1}(-1)^{s-1-j}\binom{s-1}{j}\frac{1}{m-j}
   - \frac{m!}{(s-1)!\,(m-s)!}\sum_{j=0}^{s-1}(-1)^{s-1-j}\binom{s-1}{j}\frac{1}{l-j} \\
&= \frac{B(s,\,m-s+1) - B(s,\,l-s+1)}{B(s,\,m-s+1)} \\
&= 1 - \frac{B(s,\,l-s+1)}{B(s,\,m-s+1)}. \qquad (14)
\end{aligned}
\]

Similarly, we can derive expressions for general $r$ in terms of digamma, trigamma, and polygamma functions. We have presented the values of $\pi_{s:m}(r,n)$, computed from (13), when $s \ge 2$, for $r = 1$, $5$, and $10$ with $m = 5$ to $15$ in Tables 4–6. These results readily reveal that the BLUP is always Pitman-closer when $r = 1$ or $5$. In the case of $r = 10$, the outcome depends on $m$ and $s$. It is observed that the BLIP is Pitman-closer than the BLUP for small values of $s$, while it is vice versa for large values of $s$.

3.2. Special Case When s = 1

When $s = 1$, we have
\[
\begin{aligned}
\pi_{1:m}(r,n)
&= 1 - \sum_{i=0}^{r-1}\frac{m}{(r-1-i)!}\left(\frac{2r(r+1)m}{2r+1}\right)^{r-1-i}
   \int_{y=0}^{\infty}e^{-y\left\{\frac{2r(r+1)m}{2r+1}+m\right\}}y^{r-1-i}\,dy \\
&= 1 - \sum_{i=0}^{r-1}\left[\frac{\frac{2r(r+1)m}{2r+1}}{\frac{2r(r+1)m}{2r+1}+m}\right]^{r-1-i}
   \times\frac{m}{\frac{2r(r+1)m}{2r+1}+m} \\
&= 1 - \frac{m}{\frac{2r(r+1)m}{2r+1}+m}
   \times\frac{1-\left(\frac{\frac{2r(r+1)m}{2r+1}}{\frac{2r(r+1)m}{2r+1}+m}\right)^{r}}
              {1-\left(\frac{\frac{2r(r+1)m}{2r+1}}{\frac{2r(r+1)m}{2r+1}+m}\right)} \\
&= \left[\frac{2r(r+1)m}{2r+1}\times\frac{2r+1}{2r(r+1)m+m(2r+1)}\right]^{r} \\
&= \left[\frac{2r(r+1)}{2r(r+1)+2r+1}\right]^{r}
 = \left[\frac{2r(r+1)}{(2r+1)^{2}}\right]^{r}. \qquad (15)
\end{aligned}
\]


Table 4
Pitman closeness probabilities $\pi_{s:m}(r,n)$ in two-sample case, when $r = 1$

m\s      2       3       4       5       6       7       8       9      10      11      12      13      14      15
5   0.6612  0.6793  0.6870  0.6850      –       –       –       –       –       –       –       –       –       –
6   0.6587  0.6789  0.6888  0.6930  0.6895      –       –       –       –       –       –       –       –       –
7   0.6567  0.6782  0.6893  0.6954  0.6976  0.6931      –       –       –       –       –       –       –       –
8   0.6550  0.6774  0.6893  0.6963  0.7002  0.7012  0.6960      –       –       –       –       –       –       –
9   0.6536  0.6768  0.6892  0.6966  0.7013  0.7038  0.7040  0.6983      –       –       –       –       –       –
10  0.6525  0.6762  0.6889  0.6968  0.7018  0.7050  0.7067  0.7063  0.7004      –       –       –       –       –
11  0.6515  0.6756  0.6887  0.6967  0.7021  0.7057  0.7080  0.7090  0.7083  0.7021      –       –       –       –
12  0.6506  0.6751  0.6884  0.6967  0.7022  0.7060  0.7087  0.7103  0.7110  0.7099  0.7036      –       –       –
13  0.6499  0.6747  0.6882  0.6966  0.7023  0.7062  0.7091  0.7111  0.7123  0.7126  0.7113  0.7049      –       –
14  0.6493  0.6743  0.6880  0.6965  0.7023  0.7064  0.7094  0.7115  0.7131  0.7139  0.7140  0.7125  0.7060      –
15  0.6487  0.6740  0.6878  0.6964  0.7022  0.7064  0.7095  0.7118  0.7135  0.7147  0.7153  0.7152  0.7136  0.7070


Table 5
Pitman closeness probabilities $\pi_{s:m}(r,n)$ in two-sample case, when $r = 5$

m\s      2       3       4       5       6       7       8       9      10      11      12      13      14      15
5   0.5246  0.5404  0.5465  0.5391      –       –       –       –       –       –       –       –       –       –
6   0.5206  0.5396  0.5498  0.5532  0.5442      –       –       –       –       –       –       –       –       –
7   0.5174  0.5383  0.5506  0.5574  0.5589  0.5486      –       –       –       –       –       –       –       –
8   0.5147  0.5371  0.5506  0.5591  0.5637  0.5638  0.5525      –       –       –       –       –       –       –
9   0.5126  0.5359  0.5503  0.5597  0.5658  0.5689  0.5679  0.5559      –       –       –       –       –       –
10  0.5107  0.5349  0.5498  0.5599  0.5668  0.5713  0.5732  0.5715  0.5589      –       –       –       –       –
11  0.5092  0.5339  0.5494  0.5599  0.5673  0.5725  0.5758  0.5769  0.5746  0.5615      –       –       –       –
12  0.5079  0.5331  0.5489  0.5597  0.5675  0.5732  0.5772  0.5796  0.5801  0.5773  0.5639      –       –       –
13  0.5067  0.5324  0.5485  0.5596  0.5676  0.5736  0.5780  0.5811  0.5829  0.5829  0.5797  0.5660      –       –
14  0.5057  0.5317  0.5481  0.5594  0.5676  0.5738  0.5786  0.5821  0.5845  0.5857  0.5853  0.5818  0.5679      –
15  0.5048  0.5312  0.5477  0.5592  0.5676  0.5740  0.5789  0.5827  0.5855  0.5874  0.5882  0.5875  0.5837  0.5696


Table 6
Pitman closeness probabilities $\pi_{s:m}(r,n)$ in two-sample case, when $r = 10$

m\s      2       3       4       5       6       7       8       9      10      11      12      13      14      15
5   0.4902  0.5014  0.5043  0.4937      –       –       –       –       –       –       –       –       –       –
6   0.4858  0.5004  0.5079  0.5091  0.4973      –       –       –       –       –       –       –       –       –
7   0.4822  0.4990  0.5087  0.5138  0.5135  0.5008      –       –       –       –       –       –       –       –
8   0.4794  0.4976  0.5087  0.5156  0.5189  0.5176  0.5039      –       –       –       –       –       –       –
9   0.4770  0.4963  0.5083  0.5162  0.5213  0.5234  0.5212  0.5068      –       –       –       –       –       –
10  0.4750  0.4951  0.5078  0.5164  0.5224  0.5261  0.5273  0.5244  0.5094      –       –       –       –       –
11  0.4733  0.4941  0.5072  0.5164  0.5229  0.5275  0.5302  0.5307  0.5272  0.5117      –       –       –       –
12  0.4719  0.4931  0.5067  0.5162  0.5232  0.5283  0.5318  0.5338  0.5337  0.5298  0.5138      –       –       –
13  0.4706  0.4923  0.5062  0.5160  0.5233  0.5287  0.5328  0.5356  0.5369  0.5364  0.5321  0.5158      –       –
14  0.4695  0.4916  0.5057  0.5157  0.5232  0.5290  0.5334  0.5367  0.5388  0.5397  0.5388  0.5341  0.5175      –
15  0.4686  0.4909  0.5053  0.5155  0.5232  0.5291  0.5338  0.5374  0.5400  0.5417  0.5422  0.5409  0.5360  0.5192


Table 7
Pitman closeness probabilities $\pi_{1:m}(r,n)$ in two-sample case

 r   $\pi_{1:m}(r,n)$
 1   0.4444
 2   0.2304
 3   0.1175
 4   0.0595
 5   0.0300
 6   0.0151
 7   0.0076
 8   0.0038
 9   0.0010
10   0.0010

It is clear that the expression in (15) is free of $m$ and $n$. In Table 7, we have presented the values of $\pi_{1:m}(r,n)$ computed from (15) for $r = 1$ to $10$. It is clear that these values are monotonically decreasing in $r$ and that all of them are less than 0.5, which reveals that, in this special case, the BLIP is uniformly better than the BLUP in the sense of Pitman closeness.
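For reference, the displayed closed form (15) is trivial to evaluate; the short sketch below (ours, not the authors') gives $4/9 \approx 0.4444$ at $r = 1$, the first entry of Table 7.

```python
def pi_s1(r):
    """Eq. (15): Pitman closeness probability for predicting Y_{1:m}; free of m and n."""
    return (2 * r * (r + 1) / (2 * r + 1) ** 2) ** r

print([round(pi_s1(r), 4) for r in range(1, 11)])  # first entry 0.4444, as in Table 7
```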

Acknowledgments

We express our sincere thanks to a referee for making some suggestions which led to an improvement on an earlier version of this manuscript.

References

Arnold, B. C., Balakrishnan, N., Nagaraja, H. N. (2008). A First Course in Order Statistics. Classic ed. Philadelphia: SIAM.

Balakrishnan, N., Cohen, A. C. (1991). Order Statistics and Inference: Estimation Methods. Boston: Academic Press.

Balakrishnan, N., Davies, K., Keating, J. P. (2009). Pitman closeness of order statistics to population quantiles. Commun. Statist. Simul. Computat. 38:802–820.

Balakrishnan, N., Iliopoulos, G., Keating, J. P., Mason, R. L. (2009). Pitman closeness of sample median to population median. Statist. Probab. Lett. 79:1759–1766.

Keating, J. P., Mason, R. L., Sen, P. K. (1993). Pitman's Measure of Closeness. Philadelphia: SIAM.

Nagaraja, H. N. (1986). Comparison of estimators and predictors from two-parameter exponential distribution. Sankhyā, Series B 48:10–18.

Pitman, E. J. G. (1937). The closest estimates of statistical parameters. Proc. Cambridge Philos. Soc. 33:212–222.

Rao, C. R. (1981). Some comments on the minimum mean square error as a criterion in estimation. In: Csörgő, M., Dawson, D. A., Rao, J. N. K., Saleh, A. K. Md. E., eds. Statistics and Related Topics. Amsterdam: North-Holland.
