Conditional Studentized Survival Tests for Randomly Censored Models



Running headline: Studentized survival tests

Arnold Janssen and Claus-Dieter Mayer
Heinrich-Heine-Universität Düsseldorf

Abstract

It is shown that in the case of heterogeneous censoring distributions studentized survival tests can be carried out as conditional permutation tests given the order statistics and their censoring status. The result is based on a conditional central limit theorem for permutation statistics. It holds for linear test statistics as well as for sup-statistics. The procedure works under one of the following general circumstances for the two-sample problem: the unbalanced sample size case, highly censored data, certain non-convergent weight functions, or under alternatives. For instance, the two-sample log-rank test can be asymptotically carried out as a conditional test if the relative amount of uncensored observations vanishes asymptotically, as long as the number of uncensored observations becomes infinite. Similar results hold whenever the sample sizes $n_1$ and $n_2$ are unbalanced in the sense that $n_1/n_2 \to 0$ and $n_1 \to \infty$ hold.

Keywords: log-rank test, survival tests, two-sample tests, randomly censored data, conditional tests, permutation tests, central limit theorem for permutation statistics

AMS classification: 62G10

1 Introduction

Throughout we are concerned with a common two-sample problem for randomly right-censored independent survival times $T_1,\dots,T_n$ on $\mathbb{R}^+$, which are censored by censoring random variables $C_1,\dots,C_n$, again mutually independent and also independent of the $T$'s. The random variables are defined on some probability space $(\Omega,\mathcal{A},P)$. Assume that we have an i.i.d. structure within the two groups
$$\mathcal{L}(T_1)=\mathcal{L}(T_i) \ \text{ and } \ \mathcal{L}(C_1)=\mathcal{L}(C_i), \quad 1\le i\le n_1,$$
and
$$\mathcal{L}(T_n)=\mathcal{L}(T_i) \ \text{ and } \ \mathcal{L}(C_n)=\mathcal{L}(C_i), \quad n_1<i\le n,$$
where $n_2=n-n_1$ denotes the sample size of the second group. Within the censoring model only
$$X_i=\min(T_i,C_i) \ \text{ and } \ \Delta_i=1_{\{T_i\le C_i\}}, \quad 1\le i\le n, \tag{1.1}$$
can be observed; see Andersen et al. (1993) and Neuhaus (1993) in connection with two-sample tests. For the rest of this paper we assume that the $T$'s and $C$'s have continuous distribution functions. Although we will sometimes work with a triangular array, we will write $(X_i,\Delta_i)$ and suppress the row index $n$. In connection with unequal censoring distributions $\mathcal{L}(C_1)$ and $\mathcal{L}(C_n)$, for the null hypothesis
$$H_0=\{\mathcal{L}(T_1)=\mathcal{L}(T_n)\} \tag{1.2}$$
the data given by (1.1) lose the i.i.d. structure of the $T$'s even under $H_0$. This fact may cause problems in connection with the choice of critical values, which will be attacked in the present paper. We have i.i.d. random variables for the restricted null hypothesis
$$\tilde H_0=\{\mathcal{L}(T_1)=\mathcal{L}(T_n),\ \mathcal{L}(C_1)=\mathcal{L}(C_n)\}, \tag{1.3}$$
which may be realistic for restricted models, for instance for competing risks. Let
$$N_1(t)=\sum_{i=1}^{n_1}\Delta_i\,1_{[0,t]}(X_i), \qquad N_2(t)=\sum_{i=n_1+1}^{n}\Delta_i\,1_{[0,t]}(X_i) \tag{1.4}$$
denote the counting processes of uncensored events of groups 1 and 2, and set $N(t)=N_1(t)+N_2(t)$. The number of individuals at risk at time $t$ is given by
$$Y_1(t)=\sum_{i=1}^{n_1}1_{[t,\infty)}(X_i), \qquad Y_2(t)=\sum_{i=n_1+1}^{n}1_{[t,\infty)}(X_i) \tag{1.5}$$
and $Y(t)=Y_1(t)+Y_2(t)$. Define the log-rank process as
$$L_n(x)=\Big(\frac{n}{n_1 n_2}\Big)^{1/2}\Big(N_1(x)-\int_0^x \frac{Y_1}{Y}\,dN\Big), \qquad x\ge 0. \tag{1.6}$$
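To fix ideas, the quantities (1.4)-(1.6) can be evaluated directly from the observed pairs $(X_i,\Delta_i)$. The following sketch is our own illustration (not code from the paper); the function name and the simple $O(n^2)$ evaluation over event times are our choices, and ties are assumed absent, as in the continuity assumptions above.

```python
import numpy as np

def logrank_process(x, delta, group, eval_times):
    """Log-rank process L_n(x) of (1.6), built from the counting
    processes (1.4) and the at-risk processes (1.5).
    x     : observed times X_i = min(T_i, C_i)
    delta : censoring status Delta_i (1 = uncensored)
    group : sample membership, 1 or 2
    """
    x = np.asarray(x, float)
    delta = np.asarray(delta, int)
    group = np.asarray(group, int)
    n1, n2 = np.sum(group == 1), np.sum(group == 2)
    n = n1 + n2
    out = []
    for t in eval_times:
        # N_1(t): uncensored events of group 1 up to time t
        N1 = np.sum((group == 1) & (delta == 1) & (x <= t))
        # compensator term: integral of (Y_1/Y) dN over event times up to t
        events = np.sort(x[(delta == 1) & (x <= t)])
        comp = 0.0
        for s in events:
            Y1 = np.sum((group == 1) & (x >= s))  # at risk in group 1
            Y = np.sum(x >= s)                    # at risk overall
            comp += Y1 / Y
        out.append(np.sqrt(n / (n1 * n2)) * (N1 - comp))
    return np.array(out)
```

For four uncensored observations split evenly between the groups, the process at the last event time collects the increments $1-\tfrac{Y_1}{Y}$ at group-1 events and $-\tfrac{Y_1}{Y}$ at group-2 events.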

Then well-motivated survival tests are based on survival statistics $\overline W_n$, where
$$\overline W_n(x)=\int_0^x \bar w_n\,dL_n, \qquad \overline W_n:=\overline W_n(\infty), \tag{1.7}$$
relies on a predictable (w.r.t. the natural filtration) weight function $t\mapsto\bar w_n(t)$, together with the estimated variances
$$V_n(x)=\frac{n}{n_1n_2}\int_0^x \bar w_n^2\,\frac{Y_1Y_2}{Y^2}\,dN, \qquad V_n:=V_n(\infty). \tag{1.8}$$
Under weak regularity assumptions Gill (1980), sect. 4.3, proved that
$$\frac{\overline W_n}{V_n^{1/2}} \tag{1.9}$$
is asymptotically standard normal under $H_0$ (1.2). The weight function $\bar w_n$ can be adjusted to make the two-sample tests given by (1.9) asymptotically efficient for certain semiparametric hazard rate models. We refer also to Fleming and Harrington (1991), sect. 7.4, and Andersen et al. (1993), sect. VIII.2.3 and VIII.4.2. It is known that under the restricted null hypothesis $\tilde H_0$ (1.3) the test given by the numerator $\overline W_n$ can be carried out as a permutation test, which is of exact level $\alpha$ for each finite sample size. Among other results it is shown in Janssen (1991) that the $\overline W_n$-permutation test has asymptotic level $\alpha$ for different but contiguous sequences of censoring distributions. By a completeness argument Moser (1994) proved that every nonparametric exact level $\alpha$ survival test for $\tilde H_0$ is a permutation test, which reveals the importance of this class of tests. Let now $X_{1:n}\le X_{2:n}\le\dots\le X_{n:n}$ be the order statistics of the $X$'s, let $\Delta_{i:n}$ denote the censoring status of $X_{i:n}$, and set $\Delta_n=(\Delta_{1:n},\dots,\Delta_{n:n})$. Then Neuhaus proposed to carry out a studentized conditional permutation test given the observations of $\Delta_n$. Precisely, he proposed to calculate conditional critical values $c_n=c_n((\Delta_{i:n})_{1\le i\le n})$ for the permutation distribution of the studentized statistic $\overline W_n/V_n^{1/2}$ given $\Delta_n=(\Delta_{i:n})_{1\le i\le n}$, where the denominator is taken into account.
The one-sided permutation test is obtained by
$$\tilde\varphi_n=\begin{cases}1\\ \tilde\gamma_n\\ 0\end{cases}\quad\text{according as}\quad \frac{\overline W_n}{V_n^{1/2}}\ \begin{matrix}>\\=\\<\end{matrix}\ c_n((\Delta_{i:n})_{1\le i\le n}). \tag{1.10}$$
Under regularity conditions these tests are asymptotically equivalent under $H_0$ to Gill's tests given by (1.9) and their asymptotic standard normal quantiles, and they have the same optimality properties. Finite sample Monte Carlo simulations confirm the good performance of the permutation versions; see Neuhaus (1993) and Heller and Venkatraman (1996). Conditional two-sample survival tests for tied data were considered in Neuhaus (1994) and Janssen and Neuhaus (1997). Studentized permutation tests for score statistics of Behrens-Fisher type were studied in Janssen (1997) for an extended non-i.i.d. null hypothesis.

As a conclusion, the meaning of the new permutation tests can be summarized as follows, which is helpful for understanding Neuhaus's test (1.10). For a non-i.i.d. null hypothesis the numerator has the wrong (asymptotic) permutation variance. The permutation procedure for studentized statistics includes a variance correction such that the new permutation test works (at least asymptotically). On the other hand, it is quite obvious that the new procedures are natural extensions of classical optimal permutation tests obtained by the numerator alone when the deviation from the i.i.d. null hypothesis caused by nuisance parameters is small.

The present paper investigates a general conditional central limit theorem for studentized survival statistics. As far as the two-sample problem is concerned, the conditional tests asymptotically work under certain circumstances in the following situations:

1. The unbalanced two-sample problem, where only $\min(n_1,n_2)\to\infty$ is required in contrast to the condition $n_1/n\to\kappa\in(0,1)$.

2. Highly censored data, where the portion of uncensored data may tend to zero as $n\to\infty$. Real-life data with high censoring rates show up in implantology, when the lifetime is long compared to the period which can be observed.

3. Certain schemes of non-convergent weight functions, where only mild assumptions on the weights around the upper endpoint of the distributions have to be made.

4. The calculation of power functions under alternatives, which can be used to obtain asymptotic power functions for local alternatives and to prove consistency under fixed alternatives.

Central limit theorems for permutation processes of the numerator were published by Einmahl and Mason (1992) and Mason and Newton (1992), which apply to the bootstrap. A functional central limit theorem for ordinary rank statistics was recently proved by Hušková (1997). Præstgaard (1995) handled permutation tests under alternatives by new empirical process methods.

The present paper is organized as follows. Sections 2 and 3 contain the conditional central limit theorems for studentized survival statistics. They are applied to conditional survival tests in section 4. There, unconditional central limit theorems for the underlying test statistics are also required, which go back to Gill (1980). In his spirit we derive an unconditional result for highly censored data in section 5. The proofs are given in section 6.
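As a minimal illustration of the conditional test (1.10) (our own sketch, not code from the paper), the conditional critical value can be approximated by Monte Carlo: the pairs $(X_i,\Delta_i)$ are held fixed while the group labels are randomly permuted, and the studentized log-rank statistic (1.9) with $\bar w_n\equiv 1$ is recomputed each time. The function names, the weight choice, and the seed are assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed for reproducibility

def logrank_stat(x, delta, group):
    """Studentized log-rank statistic W_n / V_n^{1/2} of (1.9)
    (weights w = 1), computed from the sorted data; no ties assumed."""
    order = np.argsort(x)
    d = np.asarray(delta)[order]
    g = np.asarray(group)[order]
    n = len(x)
    n1 = np.sum(np.asarray(group) == 1)
    n2 = n - n1
    W = V = 0.0
    for i in range(n):
        if d[i] == 0:
            continue
        Y = n - i                      # at risk just before the i-th order statistic
        Y1 = np.sum(g[i:] == 1)        # at risk in group 1
        W += (g[i] == 1) - Y1 / Y      # increment of N_1 minus (Y_1/Y) dN
        V += Y1 * (Y - Y1) / Y**2      # increment of the variance estimator (1.8)
    W *= np.sqrt(n / (n1 * n2))
    V *= n / (n1 * n2)
    return W / np.sqrt(V) if V > 0 else 0.0

def conditional_critical_value(x, delta, group, alpha=0.05, B=2000):
    """Monte Carlo approximation of the conditional critical value
    c_n((Delta_{i:n})_i): permute group labels, keep (X_i, Delta_i) fixed."""
    stats = [logrank_stat(x, delta, rng.permutation(group)) for _ in range(B)]
    return np.quantile(stats, 1 - alpha)
```

The test (1.10) then rejects when the observed studentized statistic exceeds the simulated conditional quantile; by construction the censoring pattern $(\Delta_{i:n})_i$ is identical in every resample.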

2 The conditional central limit theorem for survival statistics

Let $(w_n(i))_{1\le i\le n}$, $(c_{ni})_{1\le i\le n}$, and $(\delta_{i:n})_{1\le i\le n}\in\{0,1\}^n$ denote three arrays of fixed real numbers standing for weights, regression coefficients, and fixed observations of the censoring status $\Delta_n$. Typically the $w_n(i)$ depend on the $\delta$'s via the Kaplan-Meier estimator. Let $\pi_n=(\pi_{ni})_{1\le i\le n}$ be a uniformly distributed random permutation of $\{1,\dots,n\}$ on some probability space $(\tilde\Omega,\tilde{\mathcal A},\tilde P)$. Then we will consider survival statistics
$$W_n(t)=W_n(t,\pi_n)=\sum_{i=1}^{[nt]} w_n(i)\,\delta_{i:n}\Big(c_{n\pi_{ni}}-\frac{\sum_{j=i}^n c_{n\pi_{nj}}}{n-i+1}\Big) \tag{2.1}$$
for $0\le t\le 1$, where the condition
$$\sum_{i=1}^n (c_{ni}-\bar c_n)^2=1 \quad\text{with } \bar c_n:=\frac1n\sum_{i=1}^n c_{ni} \tag{2.2}$$
is always assumed. Obviously, we get the numerator of the survival statistic $\overline W_n$ if we choose $w_n(i)=\bar w_n(X_{i:n})$ and the two-sample regression coefficients
$$c_{ni}=\Big(\frac{n}{n_1n_2}\Big)^{1/2} 1_{\{1,\dots,n_1\}}(i). \tag{2.3}$$
If $D_n=(D_{ni})_{1\le i\le n}$ denotes the antiranks of the $X_i$'s (1.1), defined by $X_{i:n}=X_{D_{ni}}$, we just arrive at
$$W_n(1,D_n)=\overline W_n. \tag{2.4}$$
The connection between ordinary rank statistics and their survival form (2.1) was discussed in Janssen (1994). Below we will study $W_n$ in order to apply the results to the conditional treatment of $\overline W_n$ given fixed values of $w_n(i)$ and $\delta_{i:n}$.

The following considerations derive two central limit theorems. First, stronger conditions concerning the regression coefficients are required. In the case of the two-sample coefficients (2.3) the condition (2.9) below holds iff
$$0<\liminf_{n\to\infty}\frac{n_1}{n}\le\limsup_{n\to\infty}\frac{n_1}{n}<1. \tag{2.5}$$
Since $t\mapsto W_n(t)$ is a martingale with respect to the filtration $\mathcal F_t=\sigma(\pi_{ni}: i\le[nt])$, it is easy to see that its variance is equal to
$$\sigma_n^2(t):=\operatorname{Var}(W_n(t))=\frac{1}{n-1}\sum_{i=1}^{[nt]} w_n(i)^2\,\delta_{i:n}\Big(1-\frac{1}{n-i+1}\Big). \tag{2.6}$$
The predictable quadratic variation of the martingale is given by
$$\hat\sigma_n^2(t)=\hat\sigma_n^2(t,\pi_n)=\sum_{i=1}^{[nt]} w_n(i)^2\,\delta_{i:n}\Bigg\{\frac{\sum_{j=i}^n c_{n\pi_{nj}}^2}{n-i+1}-\Bigg(\frac{\sum_{j=i}^n c_{n\pi_{nj}}}{n-i+1}\Bigg)^2\Bigg\} \tag{2.7}$$

since
$$\operatorname{Var}\Bigg(c_{n\pi_{ni}}-\frac{\sum_{j=i}^n c_{n\pi_{nj}}}{n-i+1}\;\Bigg|\;\pi_{n1},\dots,\pi_{n,i-1}\Bigg)=\frac{\sum_{j=i}^n c_{n\pi_{nj}}^2}{n-i+1}-\Bigg(\frac{\sum_{j=i}^n c_{n\pi_{nj}}}{n-i+1}\Bigg)^2.$$
Notice that $E(\hat\sigma_n^2(t))=\sigma_n^2(t)$, and check that under the conditions of (2.4) we have
$$\hat\sigma_n^2(t,D_n)=V_n(X_{[nt]:n}). \tag{2.8}$$
Below $\hat\sigma_n$ serves as the denominator of our test statistic. In view of the extended null hypothesis this choice is more flexible than the permutation variance $\sigma_n^2(t)$. Now we obtain a central limit theorem for studentized permutation statistics.

Theorem 2.1 In addition to (2.2) assume that the following conditions (2.9)-(2.10) hold: For each $\varepsilon>0$ there exists a constant $d>0$ such that
$$\sum_{i=1}^n c_{ni}^2\,1_{[d,\infty)}(|n^{1/2}c_{ni}|)\le\varepsilon \quad\text{for all } n. \tag{2.9}$$
Assume also that
$$\max_{1\le i\le n-1}\Big|\frac{w_n(i)\,\delta_{i:n}}{n^{1/2}\sigma_n(1)}\Big|\to 0 \quad\text{as } n\to\infty. \tag{2.10}$$
Then $W_n(1)/\hat\sigma_n(1)$ converges in distribution to a standard normal random variable as $n\to\infty$.

Remark 2.1 Condition (2.9) is just the uniform integrability of the sequence of functions $u\mapsto n\,c_{n(1+[nu])}^2$, $u\in(0,1)$, in $L_1(0,1)$ (given by the integer part function $[\,\cdot\,]$).

The present result does not cover the two-sample case with $n_1/n\to 0$ and $n_1\to\infty$. Under stronger regularity conditions concerning the weights we again have asymptotic normality also in this case.

Theorem 2.2 In addition to (2.2) assume that the following conditions (2.11)-(2.12) hold:
$$\max_{1\le i\le n}|c_{ni}-\bar c_n|\to 0 \quad\text{as } n\to\infty. \tag{2.11}$$
For each $\varepsilon>0$ there exists a constant $d>0$ such that
$$\frac{1}{n\,\sigma_n^2(1)}\sum_{i=1}^n w_n(i)^2\,\delta_{i:n}\Big(1-\frac{1}{n-i+1}\Big)1_{[d\sigma_n(1),\infty)}(|w_n(i)\,\delta_{i:n}|)\le\varepsilon \quad\text{for all } n. \tag{2.12}$$
Then $W_n(1)/\hat\sigma_n(1)$ converges in distribution to a standard normal random variable as $n\to\infty$.
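The identities around (2.1), (2.6), and (2.7) can be checked numerically. The sketch below is our own illustration (0-indexed, so that the tail over $j\ge i$ has $n-i+1$ elements); averaging over all permutations of a small example reproduces $E(\hat\sigma_n^2(t))=\sigma_n^2(t)$ and $\operatorname{Var}(W_n(t))=\sigma_n^2(t)$, provided the normalization (2.2) holds.

```python
import numpy as np

def W_and_qvar(w, delta, c, perm, t=1.0):
    """Survival statistic W_n(t, pi) of (2.1) and its predictable
    quadratic variation sigma_hat_n^2(t, pi) of (2.7).
    w, delta : weights w_n(i) and censoring statuses delta_{i:n}
    c        : regression coefficients c_{ni}, normalized as in (2.2)
    perm     : a permutation of range(n)
    """
    n = len(c)
    m = int(np.floor(n * t))
    cp = np.asarray(c, float)[list(perm)]     # c_{n pi_{ni}}
    W = s2 = 0.0
    for i in range(m):
        tail = cp[i:]                         # c_{n pi_{nj}}, j >= i (n-i+1 terms)
        mean_tail = tail.mean()
        W += w[i] * delta[i] * (cp[i] - mean_tail)
        s2 += w[i]**2 * delta[i] * ((tail**2).mean() - mean_tail**2)
    return W, s2
```

With $n=4$, two-sample coefficients (2.3) for $n_1=n_2=2$, and one censored position, formula (2.6) gives $\sigma_n^2(1)=\tfrac13(\tfrac34+\tfrac12)=\tfrac{1.25}{3}$, matched exactly by the enumeration over all $4!$ permutations.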

Example 2.1 In the case of the two-sample problem the central limit theorem holds in the following situations. Suppose that the weights are always positive, $w_n(i)\ge\varepsilon>0$ for $1\le i\le n-1$, and let $r_n=\sum_{i=1}^n\delta_{i:n}$ denote the number of uncensored observations.

1. (Highly censored data) Assume that there exist constants $0\le\beta<1/2$ and $K>0$ with
$$\max_{1\le i\le n-1}\Big|\frac{w_n(i)\,\delta_{i:n}}{n^{\beta}}\Big|\le K \quad\text{for all } n. \tag{2.13}$$
If (2.5) and the condition
$$\frac{r_n}{n^{2\beta}}\to\infty \tag{2.14}$$
hold, then the assumptions of Theorem 2.1 are fulfilled. Observe that $n\sigma_n^2(1)\ge\varepsilon^2(r_n-1)/2$, which yields (2.10). In particular, the central limit theorem holds for the log-rank statistic with $w_n(i)\equiv 1$ (and $\beta=0$) if $r_n\to\infty$.

2. (Unbalanced sample sizes) Assume $\min(n_1,n_2)\to\infty$ and consider bounded weights, i.e. (2.13) for $\beta=0$. Then the assumptions of Theorem 2.2 hold if
$$\liminf_{n\to\infty}\frac{r_n}{n}>0. \tag{2.15}$$
If we have a positive portion of uncensored data (2.15), we again obtain the central limit theorem for the log-rank statistic.

3 A conditional limit theorem for sup-statistics

In this section some results for supremum statistics are proved which apply to conditional goodness-of-fit tests of Rényi type; see Gill (1980) and Janssen and Milbrodt (1993). Earlier results for similar conditional sup-tests can be found in Neuhaus (1993). Keeping the notation of (2.1) and (2.7), we are concerned with
$$S_n^1:=\sup_{0\le t\le 1}\frac{W_n(t)}{\hat\sigma_n(1)} \quad\text{and}\quad S_n^2:=\sup_{0\le t\le 1}\Big|\frac{W_n(t)}{\hat\sigma_n(1)}\Big|, \tag{3.1}$$
again under uniformly distributed permutations $\pi_n$ for given schemes $(c_{ni})_{i\le n}$, $(w_n(i))_{i\le n}$, and $(\delta_{i:n})_{i\le n}$. The treatment of the sup-statistics is delicate since convergence of the numerator $t\mapsto W_n(t)$ cannot be expected. Observe that, for instance, assumption (2.10) does not imply the convergence of the marginal distributions of $W_n(t)$ for $0<t<1$ in general.

Theorem 3.1 Let $B(t)$ denote a standard Brownian motion. Under the conditions of Theorem 2.1 or Theorem 2.2 we have
$$S_n^1\to\sup_{0\le t\le 1}B(t) \quad\text{and}\quad S_n^2\to\sup_{0\le t\le 1}|B(t)| \tag{3.2}$$
in distribution as $n\to\infty$.
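The sup-statistics (3.1) only require the whole path $t\mapsto W_n(t,\pi_n)$ rather than its endpoint. A minimal sketch of ours (same 0-indexed conventions as before, not code from the paper) records the running path and studentizes by $\hat\sigma_n(1)$:

```python
import numpy as np

def sup_statistics(w, delta, c, perm):
    """Sup-statistics S_n^1 and S_n^2 of (3.1): running extrema of the
    path t -> W_n(t, pi), studentized by sigma_hat_n(1) of (2.7)."""
    n = len(c)
    cp = np.asarray(c, float)[np.asarray(perm)]
    W = s2 = 0.0
    path = [0.0]                         # W_n(0) = 0
    for i in range(n):
        tail = cp[i:]                    # remaining coefficients, j >= i
        mt = tail.mean()
        W += w[i] * delta[i] * (cp[i] - mt)
        s2 += w[i]**2 * delta[i] * ((tail**2).mean() - mt**2)
        path.append(W)
    path = np.array(path)
    sd = np.sqrt(s2)                     # sigma_hat_n(1)
    return path.max() / sd, np.abs(path).max() / sd
```

Theorem 3.1 then says that, over random permutations, these quantities approach the suprema of a Brownian motion and of its absolute value on $[0,1]$.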

Remark 3.1 The present proof uses the central limit theorem of section 2. The treatment of the denominator considered there is now combined with a martingale central limit theorem. On the other hand, martingale methods can also be used to prove convergence of the normalized numerator in section 2.

4 Applications to survival tests

In this section we would like to compute the asymptotic power functions of the permutation tests (1.10), and we want to compare $\tilde\varphi_n$ with the unconditional asymptotic test
$$\varphi_n=1_{(u_{1-\alpha},\infty)}\big(\overline W_n/V_n^{1/2}\big) \tag{4.1}$$
given by the standard normal quantile $u_{1-\alpha}$ of order $1-\alpha\in(0,1)$. We will restrict ourselves to the two-sample problem (2.3). Under mild regularity conditions concerning the weight functions (4.6), the asymptotic tests $\varphi_n$ (4.1) have asymptotic level $\alpha$ also under the non-i.i.d. hypothesis (1.2). We refer to Gill (1980), Andersen et al. (1982), and to section 5, where an unconditional central limit theorem for highly censored data is established. Throughout let
$$w_{ni}=w_n(i,\Delta_n,(X_{j:n})_{1\le j\le n}) \tag{4.2}$$
be random weights which may depend on the censoring status $\Delta_n$ and the order statistics. The crucial regularity conditions (2.10) or (2.12) now become convergence conditions for the random weights (4.2). By $\Phi$ we denote the standard normal distribution function.

Lemma 4.1 Let $F_{1,n}$ and $F_{2,n}$, respectively, denote the distribution functions of the $T$'s (1.1) for the first and second group, with corresponding censoring distribution functions $G_{1,n}$ and $G_{2,n}$ of the $C$'s. We have convergence in probability to zero
$$\sup_{t\in\mathbb R}\big|\tilde P(W_n(1)/\hat\sigma_n(1)\le t)-\Phi(t)\big|\to 0 \tag{4.3}$$
of the conditional permutation distribution functions whenever one of the conditions (a) or (b) holds.

(a) In the balanced sample size case (2.5), assume that (2.10) holds in probability.

(b) In the unbalanced sample size case $\min(n_1,n_2)\to\infty$, assume
$$\lim_{d\uparrow\infty}\limsup_{n\to\infty}\frac{1}{n\,\sigma_n^2(1)}\sum_{i=1}^n w_n^2(i)\,\delta_{i:n}\Big(1-\frac{1}{n-i+1}\Big)1_{[d\sigma_n(1),\infty)}(|w_n(i)\,\delta_{i:n}|)=0 \tag{4.4}$$
almost surely.

Let now $\Lambda_{jn}=-\log(1-F_{jn})$ be the cumulative hazard functions of the $T$'s for $j=1,2$. Then
$$M_j(t)=N_j(t)-\int_0^t Y_j\,d\Lambda_{jn} \tag{4.5}$$
are martingales. For given weights $w_n(i)$ of (4.2) let $t\mapsto\bar w_n(t)$ always denote weight functions with
$$\bar w_n(X_{i:n})=w_n(i). \tag{4.6}$$
Taking (2.4) into account we will consider
$$\overline W_n(t)=\int_0^t \bar w_n\,\frac{Y_1Y_2}{Y}\Big(\frac{dN_1}{Y_1}-\frac{dN_2}{Y_2}-d(\Lambda_{1n}-\Lambda_{2n})\Big)+\int_0^t \bar w_n\,\frac{Y_1Y_2}{Y}\,d(\Lambda_{1n}-\Lambda_{2n})=:\overline M_n(t)+R_n(t). \tag{4.7}$$
Recall that $(\overline M_n(t))_{t\ge0}$ is a martingale if the weight function $\bar w_n$ is predictable. Among other results we get a central limit theorem for non-predictable weights under $\tilde H_0$.

Theorem 4.1 Suppose that the conditional central limit theorem (4.3) holds for $F_{j,n},G_{j,n}$, $j=1,2$.

(a) (The restricted null hypothesis $\tilde H_0$) Under the restricted null hypothesis $\tilde H_0$ with $F_{1n}=F_{2n}$ and $G_{1n}=G_{2n}$ we have unconditional convergence in distribution
$$\overline W_n(1)/V_n(1)^{1/2}\to Z \tag{4.8}$$
to a standard normal random variable $Z$.

(b) (The unrestricted null hypothesis $H_0$) Suppose that in addition the unconditional central limit theorem (4.8) holds under $H_0$ with $F_{1n}=F_{2n}$ and possibly different censoring distributions. Then we have
$$\varphi_n-\tilde\varphi_n\to 0 \quad\text{in probability.} \tag{4.9}$$

(c) (Alternatives) Assume that $\overline M_n(1)/V_n(1)^{1/2}$ is asymptotically standard normal and that $R_n(1)/V_n(1)^{1/2}\to K$ converges in probability to some constant $K\in\mathbb R\cup\{\infty\}$. Then we have
$$\lim_{n\to\infty}E(\varphi_n)=\lim_{n\to\infty}E(\tilde\varphi_n)=\Phi(-u_{1-\alpha}+K) \tag{4.10}$$
and consistency of both tests for $K=\infty$.

Notice that similar results hold for omnibus tests of Rényi type which are based on $S_n^1$ or $S_n^2$; see (3.1). Test statistics with non-predictable weights were considered by Janssen and Neuhaus (1997). Beyond the restricted null hypothesis, the unconditional convergence (4.8) is typically established by a martingale central limit theorem; see section 5.

5 An unconditional limit theorem for highly censored data

In this section we will prove an extension of Gill's unconditional central limit theorem for two-sample statistics when the portion of uncensored data vanishes asymptotically. Among other assumptions we only require that the number of uncensored data tends to infinity. Notice that other cases, for instance the unbalanced sample size case, are covered by Gill's work. Let $F_{i,n}$ and $G_{i,n}$ denote the distribution functions of the survival times and censoring times in sample $i=1,2$. We always assume that
$$F_{i,n}\to F_i,\qquad G_{i,n}\to G_i \qquad\text{and}\qquad \frac{n_1}{n}\to\kappa$$
for $n\to\infty$, where $F_i,G_i$ are distribution functions, $\kappa\in(0,1)$, and convergence holds at all continuity points of the limit function. It is allowed that $F_i$ may have mass at infinity. Let $(b_n)_{n\in\mathbb N}$ be a sequence of positive real numbers satisfying the pointwise convergence conditions
$$b_n\to\infty,\qquad b_n\,\frac{n}{n_1n_2}\to 0,\qquad b_n\int_0^{t}(1-G_{i,n})\,dF_{i,n}\to H_i(t) \tag{5.1}$$
for a continuous sub-distribution function $H_i$. Note that $\int_0^t(1-G_{1,n})\,dF_{1,n}=E(\Delta_1 1_{\{X_1\le t\}})$ is the expected value of the portion of uncensored data before time $t$ in sample 1. The law of large numbers thus yields (cf. Lemma 6.4)
$$b_n\,\frac{1}{n_1}\sum_{i=1}^{n_1}\Delta_i 1_{\{X_i\le t\}}\to H_1(t) \quad\text{in probability,} \tag{5.2}$$
and $n_1/b_n$ is thus the rate of expected uncensored data. (Analogous results hold for the second sample and the pooled sample.) There are basically two possibilities under which (5.1) may hold. If the limiting survival distributions are continuous, the limiting censoring distributions may degenerate (i.e. they may be Dirac measures at 0), or vice versa. For the sake of brevity we restrict ourselves to the second case. That means we assume that $G_1$ and $G_2$ are continuous, which implies that $G_{1,n}$ and $G_{2,n}$ converge uniformly. In this case a direct consequence of (5.1) is that
$$b_n F_{i,n}(x)\to\int_0^x \frac{1}{1-G_i}\,dH_i \quad\text{for all } x<G_i^{-1}(1) \tag{5.3}$$
holds as $n\to\infty$.
We consider survival statistics as introduced in (1.7), where the weight function
$$\bar w_n=w\circ\hat F_n(\cdot-) \tag{5.4}$$
depends on the left-continuous version of the Kaplan-Meier estimator $\hat F_n$ via a deterministic function $w$ on $[0,1]$, which is supposed to be continuous in a neighborhood of 0. Weight functions (5.4) are very popular in survival analysis; see Fleming and Harrington (1991), p. 257/258.

We will study the asymptotics of our statistic (1.9) under local alternatives. For this purpose let us assume that the distribution functions $F_{i,n}$ are absolutely continuous with corresponding hazard rates $\lambda_{i,n}$. Then local alternatives are given by
$$\lambda_{1,n}-\lambda_{2,n}=\vartheta\sqrt{b_n\,\frac{n}{n_1n_2}}\,(\psi\circ F_{1,n})\,\lambda_{1,n}+\rho_n.$$
Here $\vartheta\in\mathbb R$ is a local parameter and $\psi:[0,1]\to\mathbb R$ is a deterministic function which describes the direction of the alternatives; see also the discussion of local survival models given in Janssen (1994). Assume that the remainder term $\rho_n$ satisfies
$$\sup_{t\ge 0}\Bigg|\sqrt{\frac{b_n\,n_1n_2}{n}}\,\rho_n(t)\Bigg|\to 0. \tag{5.5}$$
Under these assumptions we get the following asymptotic result:

Theorem 5.1 Let $t>0$ be a real number with $H_i(t)>0$ and $G_i(t)<1$ for $i=1,2$, and suppose that the direction $\psi$ is continuous in a neighborhood of 0. Then we have
$$\mathcal L\Big(\sqrt{b_n}\,\big(\overline W_n(s)\big)_{0\le s\le t}\Big)\to\mathcal L\Big(\big(B\circ\sigma^2(s)+\vartheta\mu(s)\big)_{0\le s\le t}\Big) \quad\text{weakly in } D([0,t]) \tag{5.6}$$
and under the null hypothesis $\{\vartheta=0\}$
$$b_n V_n(t)\to\sigma^2(t) \quad\text{in probability,} \tag{5.7}$$
where the asymptotic mean and variance are given by
$$\mu(t)=w(0)\,\psi(0)\int_0^t \frac{1-G_2}{1-\bar G}\,dH_1 \tag{5.8}$$
and
$$\sigma^2(t)=w^2(0)\Bigg\{(1-\kappa)\int_0^t\frac{(1-G_2)^2}{(1-\bar G)^2}\,dH_1+\kappa\int_0^t\frac{(1-G_1)^2}{(1-\bar G)^2}\,dH_2\Bigg\}, \tag{5.9}$$
and $\bar G=\kappa G_1+(1-\kappa)G_2$ denotes the mixture of the asymptotic censoring distributions.

Corollary 5.1 Under the null hypothesis $\{\vartheta=0\}$ and the assumptions of Theorem 5.1 we have
$$\frac{\overline W_n(t)}{\sqrt{V_n(t)}}\to N(0,1) \quad\text{and}\quad \sup_{0\le s\le t}\frac{\overline W_n(s)}{\sqrt{V_n(t)}}\to\sup_{0\le u\le 1}B(u) \tag{5.10}$$
in distribution as $n\to\infty$.

Remark 5.1 Our results confirm two very intuitive facts. On the one hand they show that under extreme censoring only differences in the left tails of the survival distributions can be detected, which are measured by $\psi(0)$. On the other hand the rate of local alternatives is now $\sqrt{n/b_n}$ instead of $\sqrt n$. Roughly speaking this means that the rate of local alternatives only depends on the expected number of uncensored data.
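As a concrete instance of the weight functions (5.4) (our own illustrative choice, not one prescribed by the paper), take $w(u)=(1-u)^{\rho}$ evaluated at the left-continuous Kaplan-Meier estimator of the pooled sample; for $\rho=0$ this reduces to the log-rank weights, and this family is the $G^{\rho}$ class of Harrington and Fleming. The helper names below are ours, and ties are assumed absent.

```python
import numpy as np

def km_left(x, delta):
    """Left-continuous Kaplan-Meier estimate F_hat(X_{i:n} -) at each
    order statistic of the pooled sample (no ties assumed)."""
    order = np.argsort(x)
    d = np.asarray(delta)[order]
    n = len(x)
    S = 1.0
    F_left = np.empty(n)
    for i in range(n):
        F_left[i] = 1.0 - S              # value just before the i-th order statistic
        if d[i] == 1:
            S *= 1.0 - 1.0 / (n - i)     # Kaplan-Meier step at an uncensored time
    return F_left

def g_rho_weights(x, delta, rho=1.0):
    """Weights w_n(i) = w(F_hat(X_{i:n} -)) of (5.4) with w(u) = (1-u)^rho,
    i.e. the Harrington-Fleming G^rho family (rho = 0: log-rank weights)."""
    return (1.0 - km_left(x, delta)) ** rho
```

These weights emphasize early differences for larger $\rho$, which connects with Remark 5.1: under extreme censoring only the left-tail behavior, measured by $\psi(0)$, remains detectable.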

6 The proofs

Proof of Theorem 2.1. In the first step we write $W_n(1)$ as an ordinary rank statistic. Using summation by parts we obtain
$$W_n(1)=\sum_{i=1}^n c_{n\pi_{ni}}\,a_n(i),\qquad \sum_{i=1}^n a_n(i)=0, \tag{6.1}$$
where the new regression coefficients $a_n(i)$ are given by
$$a_n(i)=w_n(i)\,\delta_{i:n}-\sum_{j=1}^{i}\frac{w_n(j)\,\delta_{j:n}}{n-j+1}; \tag{6.2}$$
see also Janssen (1994). The variance formula of Hájek and Šidák (1967) together with (2.2) and (2.6) yields
$$\sigma_n^2(1)=\frac{1}{n-1}\sum_{i=1}^n a_n(i)^2. \tag{6.3}$$
Without restriction we may assume $\sigma_n(1)=1$, since our assumptions are homogeneous in $\sigma_n(1)$ (otherwise we may take new weights $w_n(i)/\sigma_n(1)$). Then Lemma 6.1 below implies the condition
$$\max_{1\le i\le n}|n^{-1/2}a_n(i)|\to 0. \tag{6.4}$$
In view of (2.9) and (6.4), Hájek's central limit theorem for linear rank statistics now implies the standard asymptotic normality of the numerator $W_n(1)$; see for instance the version given by Lemma 3.4 of Janssen (1997). Now it remains to show
$$\hat\sigma_n^2(1)\to 1 \tag{6.5}$$
in $\tilde P$-probability. Obviously it suffices to prove (6.5) along a suitable subsequence of every given subsequence. We may assume $c_{n1}\le c_{n2}\le\dots\le c_{nn}$. According to Janssen (1997), proof of Lemma 3.4, there exists a subsequence $\{m\}\subset\mathbb N$ such that the step functions
$$u\mapsto\varphi_m(u):=m^{1/2}\big(c_{m(1+[mu])}-\bar c_m\big),\quad 0<u<1, \tag{6.6}$$
converge to some function $\varphi$ in $L_2(0,1)$. Observe that $\varphi$ is monotone and, due to (2.2), $\int\varphi^2\,d\lambda|_{(0,1)}=1$ and $\int\varphi\,d\lambda|_{(0,1)}=0$. Below we identify $m$ with $n$ and may assume (6.6) for $\varphi_n$. Throughout we keep the assumption $\sigma_n(1)=1$ for all $n$. Let now $U_1,U_2,\dots$ be a sequence of i.i.d. uniformly distributed random variables on $(0,1)$ and let $\tau_n=(\tau_{n1},\dots,\tau_{nn})$ be the rank vector (in reverse order) of $(U_1,\dots,U_n)$, with rank $\tau_{ni}$ of $U_{n+1-i}$ such that $U_{n+1-i}=U_{\tau_{ni}:n}$ holds. Again summation by parts yields
$$W_n(1)-n^{-1/2}\sum_{i=1}^n w_n(i)\,\delta_{i:n}\Bigg\{\varphi(U_{n+1-i})-\frac{1}{n-i+1}\sum_{j=i}^n\varphi(U_{n+1-j})\Bigg\}$$
$$=n^{-1/2}\sum_{i=1}^n w_n(i)\,\delta_{i:n}\Bigg\{n^{1/2}c_{n\tau_{ni}}-\varphi(U_{n+1-i})-\frac{1}{n-i+1}\sum_{j=i}^n\big(n^{1/2}c_{n\tau_{nj}}-\varphi(U_{n+1-j})\big)\Bigg\}$$
$$=\sum_{i=1}^n n^{-1/2}a_n(i)\big(n^{1/2}c_{n\tau_{ni}}-\varphi(U_{n+1-i})\big)=:Y_n. \tag{6.7}$$

According to Hájek and Šidák (1967), V.1.5, Theorem (a), one has by (6.6)
$$\operatorname{Var}(Y_n)\to 0 \quad\text{as } n\to\infty \tag{6.8}$$
for the unconditional variance, and consequently
$$\operatorname{Var}\big(Y_n\mid(U_{k:n})_{1\le k\le n}\big)\to 0 \quad\text{as } n\to\infty \tag{6.9}$$
in $L_1$. Let now $\hat\sigma_n^2(1,\tau_n,(c_{nk})_{1\le k\le n})$ be the predictable quadratic variation (2.7) based on the coefficients $c_{nk}$. For fixed order statistics we will now consider $n^{-1/2}\varphi(U_{k:n})$ instead of the array $c_{nk}$. By the strong law of large numbers together with condition (2.10) we have
$$\hat\sigma_n^2\big(1,\tau_n,(n^{-1/2}\varphi(U_{k:n}))_{1\le k\le n}\big)=\frac1n\sum_{i=1}^n w_n(i)^2\,\delta_{i:n}\Bigg\{\frac{\sum_{j=i}^n\varphi(U_{n+1-j})^2}{n-i+1}-\Bigg(\frac{\sum_{j=i}^n\varphi(U_{n+1-j})}{n-i+1}\Bigg)^2\Bigg\} \tag{6.10}$$
$$=\frac1n\sum_{i=1}^n w_n(i)^2\,\delta_{i:n}\Bigg\{\frac{1}{n-i+1}\sum_{j=i}^n\Bigg(\varphi(U_{n+1-j})-\frac{\sum_{k=i}^n\varphi(U_{n+1-k})}{n-i+1}\Bigg)^2\Bigg\}\to 1 \tag{6.11}$$
almost surely. On the other hand we may take the coefficients $c_{nk}-n^{-1/2}\varphi(U_{k:n})$. It will be shown that their predictable quadratic variation converges to zero:
$$0\le\hat\sigma_n^2\big(1,\tau_n,(c_{nk}-n^{-1/2}\varphi(U_{k:n}))_{1\le k\le n}\big)\to 0 \tag{6.12}$$
in $L_1$. The proof can be given as follows. Since the order statistics and the ranks are independent, we may take the order statistics to be fixed. Since the expectation of $\hat\sigma_n^2$ is $\sigma_n^2$ (see (2.7) ff.) we have, similarly to (6.3), that
$$E\big(\hat\sigma_n^2(1,\tau_n,(c_{nk}-n^{-1/2}\varphi(U_{k:n}))_{1\le k\le n})\mid(U_{k:n})_{1\le k\le n}\big)$$
is equal to (6.9), which implies (6.12). Routine calculations using $a^2-b^2=(a-b)(a+b)$, the representation (6.10), and the Cauchy-Schwarz inequality show that
$$\big|\hat\sigma_n^2(1,\tau_n,(c_{nk})_{1\le k\le n})-\hat\sigma_n^2(1,\tau_n,(n^{-1/2}\varphi(U_{k:n}))_{1\le k\le n})\big|$$
$$\le\hat\sigma_n\big(1,\tau_n,(c_{nk}-n^{-1/2}\varphi(U_{k:n}))_{1\le k\le n}\big)\,\hat\sigma_n\big(1,\tau_n,(c_{nk}+n^{-1/2}\varphi(U_{k:n}))_{1\le k\le n}\big)\to 0 \tag{6.13}$$
in probability. Observe that in addition to (6.12) the second factor is stochastically bounded, since (6.11) and (6.12) can be combined via the triangle inequality. For these reasons (6.13) implies the desired result (6.5). $\Box$

Lemma 6.1 Suppose that $a_n(i)$ is given by (6.2) and let $\sigma_n(1)=1$ for all $n$. Then condition (2.10) implies condition (6.4) for the maximum of the array $n^{-1/2}a_n(i)$.

Proof. Without restriction we may assume that $w_n(i)\ge 0$ holds for each $i$. For convenience we may also consider the restriction
$$\frac1n\sum_{i=1}^{n-1}w_n(i)^2\,\delta_{i:n}=1 \tag{6.14}$$
instead of $\sigma_n(1)=1$, since
$$\frac{1}{2(n-1)}\sum_{i=1}^{n-1}w_n(i)^2\,\delta_{i:n}\;\le\;\sigma_n^2(1)\;\le\;\frac{1}{n-1}\sum_{i=1}^{n-1}w_n(i)^2\,\delta_{i:n}.$$
For finite $n$ and fixed $\varepsilon>0$ our result is related to the optimization problem
$$f(v_n(\cdot)):=n^{-1/2}\sum_{j=1}^{n-1}\frac{v_n(j)}{n-j+1}=\max! \tag{6.15}$$
for non-negative schemes $0\le v_n(i)$, $1\le i\le n-1$, under the restrictions
$$\frac1n\sum_{i=1}^{n}v_n(i)^2=1 \quad\text{and}\quad \max_{1\le i\le n-1}\big(n^{-1/2}v_n(i)\big)\le\varepsilon. \tag{6.16}$$
Let $1\le r\le n-1$ be a fixed integer and consider $\varepsilon=\varepsilon(r)=r^{-1/2}$. Obviously, the solution of (6.15), (6.16) is given by
$$v_n(i;r):=n^{1/2}r^{-1/2}\,1_{[n-r,\,n-1]}(i)$$
with the maximum
$$f(v_n(\cdot;r))=r^{-1/2}\sum_{j=n-r}^{n-1}\frac{1}{n-j+1}=r^{-1/2}\sum_{k=2}^{r+1}\frac1k\le r^{-1/2}\log(r+1). \tag{6.17}$$
The bound $f(v_n(\cdot;r))$ decreases as $r$ increases. By the requirements of (2.10) we may consider $\varepsilon(r)\to 0$ as $n\to\infty$, and the upper bound (6.17) converges to zero. $\Box$

The proof of Theorem 2.2 follows the lines of the proof above. Again we may assume $\sigma_n(1)=1$. Consider the rank statistic form (6.1), (6.2) of $W_n(1)$. We deal first with the numerator.

1. Assume that the weights $|w_n(i)|\le K$ are bounded for all $n$ and $i$. By Lemma 3.4 of Janssen (1997) we have asymptotic standard normality of $W_n(1)$ whenever the scores (6.2) satisfy the following condition: for each $\varepsilon>0$ there exists some $d>0$ with
$$\frac{1}{n-1}\sum_{i=1}^n a_n(i)^2\,1_{[d,\infty)}(|a_n(i)|)\le\varepsilon \quad\text{for all } n. \tag{6.18}$$

Again we may assume $w_n(n)=0$. Notice that
$$\Bigg|\sum_{j=1}^i \frac{w_n(j)\,\delta_{j:n}}{n-j+1}\Bigg|\le K\sum_{j=1}^i\frac{1}{n-j+1}=:b_n(i).$$
Obviously, it remains to show (6.18) for $b_n(i)$. This is a consequence of the convergence of the step functions $u\mapsto b_n(1+[nu])$ in $L_2(0,1)$, and (6.18) is just their uniform integrability.

2. In the general case the weights can be split via (2.12):
$$w_n(i)=w_n^1(i)+w_n^2(i) \quad\text{with}\quad w_n^1(i)=w_n(i)\,1_{[0,d]}(|w_n(i)|). \tag{6.19}$$
The statistic can be written as $W_n(1)=W_n^1(1)+W_n^2(1)$, where $W_n^j$ corresponds to the weights $w_n^j(i)$ for $j=1,2$. By (6.3) and (2.12) we have
$$\operatorname{Var}\big(W_n^2(1)\big)\le\varepsilon \quad\text{for each } n. \tag{6.20}$$
The variance $\sigma_{1n}^2(1)$ of $W_n^1(1)$ is close to 1, $\sigma_{1n}^2(1)\ge 1-\varepsilon$. Since $W_n^1(1)/\sigma_{1n}(1)$ is asymptotically standard normal, routine arguments using (6.20) imply the result for $W_n(1)$.

To prove the convergence of the denominator we need the following lemma:

Lemma 6.2 Consider
$$Y_{i,n}:=\frac{\sum_{j=i}^n\big(\sqrt n\,c_{n\pi_{nj}}\big)^2}{n-i+1}-\Bigg(\frac{\sum_{j=i}^n\sqrt n\,c_{n\pi_{nj}}}{n-i+1}\Bigg)^2. \tag{6.21}$$
Under the assumptions of Theorem 2.2 we have
$$\sup_{0\le s\le t}|Y_{[ns],n}-1|\to 0 \quad\text{in probability} \tag{6.22}$$
as $n\to\infty$, for every $t\in[0,1)$.

Proof. W.l.o.g. we may assume that $\bar c_n=0$. By a functional central limit theorem for exchangeable random variables (cf. Theorem 24.1 of Billingsley (1968)) we have
$$\Bigg(\sum_{j=1}^{[ns]}c_{n\pi_{nj}}\Bigg)_{s\in[0,1]}\xrightarrow{\ \mathcal D\ }B^0$$
in $D[0,1]$, where $B^0$ denotes a Brownian bridge on $[0,1]$. Thus
$$\sup_{0\le s\le t}\Bigg|\frac{\sum_{j=[ns]}^n\sqrt n\,c_{n\pi_{nj}}}{n-[ns]+1}\Bigg|\to 0 \quad\text{in probability} \tag{6.23}$$

immediately follows. It remains to study
$$Z_{i,n}:=\frac{\sum_{j=i}^n\big(\sqrt n\,c_{n\pi_{nj}}\big)^2}{n-i+1}-1=\frac{\sum_{j=i}^n n\big(c_{n\pi_{nj}}^2-1/n\big)}{n-i+1}. \tag{6.24}$$
The well-known variance formula for linear rank statistics yields
$$\operatorname{Var}\big(Z_{[ns],n}\big)=\frac{n^3}{(n-[ns]+1)^2(n-1)}\Bigg(\sum_{j=1}^n\big(c_{nj}^2-1/n\big)^2\Bigg)\Big(\frac{n-[ns]+1}{n}\Big)\Big(\frac{[ns]-1}{n}\Big)$$
$$\le\frac{n}{n-[nt]+1}\,\max_{1\le i\le n}\big|c_{ni}^2-1/n\big|\sum_{j=1}^n\big(c_{nj}^2+1/n\big)\to 0$$
for $0\le s\le t$. For a fixed number of points $s_0=0\le s_1\le\dots\le s_k=t$ we then have
$$P\Big(\max_{1\le i\le k}\big|Z_{[ns_i],n}\big|\ge\varepsilon\Big)\to 0.$$
For $\varepsilon>0$ we now consider such a partition with $\max_{1\le i\le k}|s_i-s_{i-1}|\le\delta$, where $\delta$ is chosen such that
$$\max_{1\le i\le k}\max\Bigg\{\Big|1-\frac{n-[ns_{i+1}]+1}{n-[ns_i]+1}\Big|,\ \Big|1-\frac{n-[ns_i]+1}{n-[ns_{i+1}]+1}\Big|\Bigg\}\le\tilde\varepsilon$$
for a given $1>\tilde\varepsilon>0$ and $n$ large enough. For $s\in[s_i,s_{i+1}]$,
$$Z_{[ns_i],n}+\Big(1-\frac{n-[ns]+1}{n-[ns_i]+1}\Big)\;\ge\;\frac{n-[ns]+1}{n-[ns_i]+1}\,Z_{[ns],n},\qquad \frac{n-[ns]+1}{n-[ns_{i+1}]+1}\,Z_{[ns],n}\;\ge\;Z_{[ns_{i+1}],n}+\Big(1-\frac{n-[ns]+1}{n-[ns_{i+1}]+1}\Big) \tag{6.25}$$
hold, and
$$P\Big(\sup_{0\le s\le t}\big|Z_{[ns],n}\big|\ge\varepsilon\Big)\le\sum_{i=0}^{k-1}\Big[P\big(Z_{[ns_i],n}-\tilde\varepsilon\ge\varepsilon(1-\tilde\varepsilon)\big)+P\big(Z_{[ns_{i+1}],n}+\tilde\varepsilon\le-\varepsilon(1+\tilde\varepsilon)\big)\Big]\to 0 \tag{6.26}$$
follows for an appropriate choice of $\tilde\varepsilon$ ($0<\tilde\varepsilon<\frac{\varepsilon}{1+\varepsilon}$). The lemma follows immediately from (6.23) and (6.26). $\Box$

Corollary 6.1 Under the conditions of Theorem 2.2 we have, for $t_0\in[0,1)$,
$$\sup_{0\le t\le t_0}\big|\hat\sigma_n^2(t)-\sigma_n^2(t)\big|\to 0 \quad\text{in probability} \tag{6.27}$$
as $n\to\infty$.

Proof. For $t_0\in[0,1)$, Lemma 6.2 yields
$$\sup_{0\le s\le t_0}\Big|\hat\sigma_n^2(s)-\frac{n-1}{n}\sigma_n^2(s)\Big|\le\frac1n\sum_{i=1}^{[nt_0]}w_n(i)^2\,\delta_{i:n}\Big|Y_{i,n}-\frac{n-i}{n-i+1}\Big| \tag{6.28}$$
$$\le\sup_{0\le s\le t_0}\Big|Y_{[ns],n}-\frac{n-[ns]}{n-[ns]+1}\Big|\,\frac1n\sum_{i=1}^{[nt_0]}w_n(i)^2\,\delta_{i:n}\to 0 \quad\text{in probability.} \qquad\Box$$
Now we are able to show that $\hat\sigma_n^2(1)$ converges to 1 in probability. Because of (2.12) we are able to choose $d>0$ for given $\varepsilon>0$ such that $\sigma_n^2(1)-\sigma_n^2(t_0)\le\varepsilon+d(1-t_0)$ for all $n$. By Markov's inequality we have
$$\lim_{t_0\to1}\limsup_{n\to\infty}P\big(|\hat\sigma_n^2(1)-\hat\sigma_n^2(t_0)|\ge\varepsilon\big)\le\lim_{t_0\to1}\limsup_{n\to\infty}\frac1\varepsilon\big(\sigma_n^2(1)-\sigma_n^2(t_0)\big)=0. \tag{6.29}$$
Applying Theorem 4.2 of Billingsley (1968) then finishes the proof of Theorem 2.2. $\Box$

The proof of Theorem 3.1. The present proof follows the lines of the proofs of Theorems 2.1 and 2.2, where now a martingale central limit theorem ensures the convergence of the numerator. For this purpose consider the array of $\sigma$-fields $\{\mathcal F_{n,i}: n\ge1, i\ge1\}$ given by
$$\mathcal F_{n,i}=\sigma(\pi_{nj}: 1\le j\le i). \tag{6.30}$$
Because of $E(c_{n\pi_{ni}}\mid\mathcal F_{n,i-1})=\sum_{j=i}^n c_{n\pi_{nj}}/(n-i+1)$ we conclude that $\{\xi_{n,i}\}$ with
$$\xi_{n,i}:=w_n(i)\,\delta_{i:n}\Big(c_{n\pi_{ni}}-\frac{\sum_{j=i}^n c_{n\pi_{nj}}}{n-i+1}\Big) \tag{6.31}$$
is a martingale difference array with respect to $\{\mathcal F_{n,i}\}$. Again we may assume that $\sigma_n^2(1)=1$ holds for all $n\in\mathbb N$. Consider $t\mapsto\sigma_n^2(t)$ defined by (2.6) for $t\in[0,1]$, which is a distribution function with left-continuous inverse $u\mapsto(\sigma_n^2)^{-1}(u)$ for $u\in(0,1)$. The inverse serves as a time transformation within $[0,1]$. For $0<t<1$ and
$$r_n(t):=\big[n\,(\sigma_n^2)^{-1}(t)\big] \qquad \big(r_n(0):=0,\ r_n(t):=n \text{ for } t\ge1\big)$$
now consider the process
$$t\mapsto Z_n(t):=W_n\big((\sigma_n^2)^{-1}(t)\big)=\sum_{i=1}^{r_n(t)}\xi_{n,i}, \tag{6.32}$$
with $Z_n(0)=0$ and $Z_n(1)=W_n(1)$, which is a discrete martingale with respect to $\{\mathcal F_{n,i}\}$. Its variance is equal to
$$\operatorname{Var}(Z_n(t))=\sigma_n^2\big((\sigma_n^2)^{-1}(t)\big). \tag{6.33}$$

We will show that $(Z_n(t))_{0\le t\le1}$ converges to a standard Brownian motion $(B(t))_{0\le t\le1}$ in $D[0,1]$. According to Theorem 3.2(a) of Helland (1983) it suffices to show that
$$\sum_{i=1}^{r_n(t)}E\big(\xi_{n,i}^2\mid\mathcal F_{n,i-1}\big)\to t \quad\text{in } \tilde P\text{-probability} \tag{6.34}$$
and
$$\sum_{i=1}^{n}E\big(\xi_{n,i}^2\,1_{(|\xi_{n,i}|>\varepsilon)}\mid\mathcal F_{n,i-1}\big)\to 0 \quad\text{in } \tilde P\text{-probability for all } \varepsilon>0 \tag{6.35}$$
hold for $n\to\infty$ and $t>0$.

W.l.o.g. we may replace the uniform integrability condition (2.9) (in the case of Theorem 2.1) by the stronger condition
$$\sqrt n\,\max_{1\le i\le n}|c_{ni}-\bar c_n|=O(1). \tag{6.36}$$
For fixed $d>0$ we may otherwise write $W_n=W_{1n}^d+W_{2n}^d$, where $W_{1n}^d$ is based on the new regression coefficients $c_{ni}1_{[0,d)}(n^{1/2}|c_{ni}|)$ and $W_{2n}^d$ on $c_{ni}1_{[d,\infty)}(n^{1/2}|c_{ni}|)$. Let $(\hat\sigma_{1n}^d)^2$ and $(\hat\sigma_{2n}^d)^2$ be the corresponding variance estimators. Using (2.6) we see that
$$E\big((\hat\sigma_{2n}^d)^2\big)=\operatorname{Var}\big(W_{2n}^d(1)\big)\le\sigma_n^2(1)\sum_{i=1}^n c_{ni}^2\,1_{[d,\infty)}(|n^{1/2}c_{ni}|)$$
converges to 0 for $d\to\infty$. Applying the Birnbaum-Marshall inequality to the martingale $W_{2n}^d$ (see Shorack and Wellner (1986), p. 873) then yields
$$P\Big(\sup_{0\le t\le1}|W_{2n}^d(t)|\ge\varepsilon\Big)\le\operatorname{Var}\big(W_{2n}^d(1)\big)/\varepsilon^2$$
for $\varepsilon>0$. Thus $\sup_{0\le t\le1}|W_{2n}^d(t)|$ converges to 0 stochastically as $d\to\infty$, and from the Markov inequality we get the same result for $\hat\sigma_{2n}^d$. As $\hat\sigma_n\to1$ stochastically, $\hat\sigma_n$ and $\hat\sigma_{1n}^d$ are stochastically bounded away from 0, and easy calculations show that
$$\lim_{d\to\infty}\limsup_{n\to\infty}P\Bigg(\Bigg|\sup_{0\le t\le1}\frac{W_n(t)}{\hat\sigma_n(1)}-\sup_{0\le t\le1}\frac{W_{1n}^d(t)}{\hat\sigma_{1n}^d(1)}\Bigg|>\varepsilon\Bigg)=0$$
for $\varepsilon>0$, and so Theorem 4.2 of Billingsley (1968) gives us the desired result. Without restriction we may also replace (2.12) by
$$\max_{1\le i\le n}\Big|\frac{w_n(i)\,\delta_{i:n}}{\sigma_n(1)}\Big|=O(1) \tag{6.37}$$
in the situation of Theorem 2.2 (as before, $\sigma_n^2(1)=1$ can always be assumed). Otherwise we can split the weights as in (6.19) and use the same arguments as before.

1. In a first step we will prove that
$$\lim_{n\to\infty}\operatorname{Var}(Z_n(t))=t \quad\text{for all } t\in[0,1]. \tag{6.38}$$

The proof is based on (2.10) (notice that (2.13) also yields (2.10)), which implies

$$\sup_{1\le i\le n}\Bigl(\sigma_n^2\bigl(\tfrac in\bigr)-\sigma_n^2\bigl(\tfrac{i-1}n\bigr)\Bigr)\le\varepsilon$$

for each $\varepsilon>0$ and sufficiently large $n$. For each $t\in(0,1)$ we may choose $u_n\le t\le v_n$ with $u_n,v_n\in\sigma_n^2([0,1])$ and $v_n-u_n\le2\varepsilon$. Thus

$$u_n=\sigma_n^2\bigl((\sigma_n^2)^{-1}(u_n)\bigr)\le\sigma_n^2\bigl((\sigma_n^2)^{-1}(t)\bigr)\le\sigma_n^2\bigl((\sigma_n^2)^{-1}(v_n)\bigr)=v_n$$

follows and (6.38) is proved.

2. In the second step we will verify the conditional Lindeberg condition (6.35), which holds under (2.10) combined with (6.36) (situation of Theorem 2.1) or under (6.37) combined with (2.11) (situation of Theorem 2.2).

To compute

$$CL_n(\varepsilon,1):=\sum_{i=1}^n E\bigl(\xi_{n,i}^2\,1_{(|\xi_{n,i}|>\varepsilon)}\mid\mathcal F_{n,i-1}\bigr)$$

for fixed $n$ we define the random variables $q_{i,k}:=w_n(i)\Delta_{i:n}\bigl(c_{nk}-\sum_{j=i}^n c_{n\tau_{nj}}/(n-i+1)\bigr)$ $(1\le i,k\le n)$, where $q_{i,k}$ is obviously $\mathcal F_{n,i-1}$-measurable. We can write

$$\xi_{n,i}=\sum_{k=1}^n q_{i,k}\,1_{\{k\}}(\tau_{ni})$$

and thus we obtain for each $\varepsilon>0$

$$E\bigl(\xi_{n,i}^2\,1_{(|\xi_{n,i}|>\varepsilon)}\mid\mathcal F_{n,i-1}\bigr)=\sum_{k=1}^n E\bigl(\xi_{n,i}^2\,1_{(|\xi_{n,i}|>\varepsilon)}1_{\{k\}}(\tau_{ni})\mid\mathcal F_{n,i-1}\bigr)$$

$$=\sum_{k=1}^n q_{i,k}^2\,1_{(|q_{i,k}|>\varepsilon)}\,E\bigl(1_{\{k\}}(\tau_{ni})\mid\mathcal F_{n,i-1}\bigr)=\sum_{k=1}^n q_{i,k}^2\,1_{(|q_{i,k}|>\varepsilon)}\frac{\sum_{j=i}^n 1_{\{k\}}(\tau_{nj})}{n-i+1}$$

$$=\frac{w_n^2(i)\Delta_{i:n}}{n-i+1}\sum_{j=i}^n\Bigl(c_{n\tau_{nj}}-\sum_{k=i}^n\frac{c_{n\tau_{nk}}}{n-i+1}\Bigr)^2\,1_{\bigl\{\bigl|w_n(i)\Delta_{i:n}\bigl(c_{n\tau_{nj}}-\sum_{k=i}^n\frac{c_{n\tau_{nk}}}{n-i+1}\bigr)\bigr|\ge\varepsilon\bigr\}}.$$

Thus

$$CL_n(\varepsilon,1)\le2n\max_{1\le i\le n}|c_{ni}-\bar c_n|^2\,1_{\{2\max_{1\le i\le n}|w_n(i)\Delta_{i:n}(c_{ni}-\bar c_n)|\ge\varepsilon\}}\Bigl(\frac1n\sum_{i=1}^n w_n(i)^2\Delta_{i:n}\Bigr)\to0 \tag{6.39}$$

follows under our conditions in both cases.

3. We claim that the predictable quadratic variation converges to the identity, i.e.

$$\hat\sigma_n^2\bigl((\sigma_n^2)^{-1}(t)\bigr)\to t \tag{6.40}$$
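The key identity of step 2, $E(1_{\{k\}}(\tau_{ni})\mid\mathcal F_{n,i-1})=\sum_{j=i}^n 1_{\{k\}}(\tau_{nj})/(n-i+1)$, says that given the past, the next antirank is uniform on the remaining values. A toy enumeration check (the choices of $n$, $i$, $k$ and the observed prefix are arbitrary):

```python
from itertools import permutations

# Toy check (n = 4): condition on tau_{n1} = 1 and look at tau_{n2}.
# Given the first i-1 antiranks, tau_{ni} is uniform on the remaining values,
# so the conditional probability of {tau_{ni} = k} is
# (number of remaining j >= i with tau_{nj} = k) / (n - i + 1).
n, i, k = 4, 2, 3           # 1-based i
prefix = (1,)               # observed first antirank
perms = [p for p in permutations(range(1, n + 1)) if p[:1] == prefix]
cond_mean = sum(1 for p in perms if p[i - 1] == k) / len(perms)
# remaining values are {2, 3, 4}; exactly one equals k = 3, and n - i + 1 = 3
assert abs(cond_mean - 1 / 3) < 1e-12
```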

in probability for each $t\in[0,1]$. For its proof let us first show that

$$\sup_{0\le t\le1}|\hat\sigma_n^2(t)-\sigma_n^2(t)|\to0 \tag{6.41}$$

in probability. The proof of (6.41) is done separately for our two cases.

(a) Suppose that the conditions of Theorem 2.1 hold and let $(t_n)$ be any sequence in $[0,1]$. Then we may choose new weights $\tilde w_n(i)$ with $\tilde w_n(i)=w_n(i)$ whenever $1\le i\le[nt_n]$ and $\tilde w_n(i)=0$ otherwise. From (6.13) we obtain

$$\Biggl|\hat\sigma_n^2(t_n)-\frac1n\sum_{i=1}^{[nt_n]}w_n(i)^2\Delta_{i:n}\Biggl\{\frac{\sum_{j=i}^n\varphi(U_{n+1-j})^2}{n-i+1}-\Bigl(\frac{\sum_{j=i}^n\varphi(U_{n+1-j})}{n-i+1}\Bigr)^2\Biggr\}\Biggr|\to0 \tag{6.42}$$

in probability. On the other hand it is easy to see that the strong law of large numbers together with (2.10) implies

$$\Biggl|\sigma_n^2(t_n)-\frac1n\sum_{i=1}^{[nt_n]}w_n(i)^2\Delta_{i:n}\Biggl\{\frac{\sum_{j=i}^n\varphi(U_{n+1-j})^2}{n-i+1}-\Bigl(\frac{\sum_{j=i}^n\varphi(U_{n+1-j})}{n-i+1}\Bigr)^2\Biggr\}\Biggr|\to0 \tag{6.43}$$

in probability, and together with (6.42) this implies (6.41).

(b) In the situation of Theorem 2.2 we have for an arbitrary $t_0\in[0,1)$

$$\sup_{0\le t\le1}|\hat\sigma_n^2(t)-\sigma_n^2(t)|\le \tag{6.44}$$

$$\sup_{0\le t\le t_0}|\hat\sigma_n^2(t)-\sigma_n^2(t)|+|\hat\sigma_n^2(1)-\hat\sigma_n^2(t_0)|+|\sigma_n^2(1)-\sigma_n^2(t_0)|. \tag{6.45}$$

The first term converges to zero in probability by Corollary 6.1. Applying (6.29) to the other two terms and using Theorem 4.2 of Billingsley again then yields (6.41).

In both cases (a) and (b), (6.41) combined with (6.33) and (6.38) implies the desired statement (6.40). $\Box$

Proof of Lemma 4.1 and Theorem 4.1. Lemma 4.1 follows immediately from Theorems 2.1 and 2.2. (4.8)(a) follows from the independence of the vector $(\Delta_n,(X_{i:n})_{i\le n})$ from the antiranks under the restricted null hypothesis. The conditional central limit theorem now implies the unconditional one. Because of (4.3) the critical value of the studentized statistic converges to $u_{1-\alpha}$, so that (4.9) follows. As this convergence of the critical values also holds under alternatives, we also get the same asymptotic power functions as stated in (4.10). $\Box$
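The conditional test of Theorem 4.1 can be sketched operationally: the ordered pairs $(X_{i:n},\Delta_{i:n})$ are held fixed and only the group labels are permuted, so the critical value is a quantile of a finite permutation distribution. The censoring pattern and the unstandardized log-rank-type statistic below are purely illustrative:

```python
from itertools import combinations

# Hedged sketch of the conditional permutation test: the order statistics and
# their censoring statuses are fixed; only the group labels vary.
delta = [1, 1, 0, 1, 1, 0, 1, 1]        # censoring statuses in the order of X
n, n1 = len(delta), 4

def logrank(group1):
    """Unstandardized log-rank numerator for the label set group1 (0-based)."""
    s = 0.0
    for i in range(n):
        at_risk1 = sum(1 for j in group1 if j >= i)   # Y_1 at the i-th order stat
        s += delta[i] * ((1 if i in group1 else 0) - at_risk1 / (n - i))
    return s

obs = logrank(set(range(n1)))            # labels of the observed first sample
dist = sorted(logrank(set(g)) for g in combinations(range(n), n1))
crit = dist[int(0.95 * len(dist))]       # conditional 95% critical value
p_value = sum(1 for v in dist if v >= obs) / len(dist)
assert 0.0 <= p_value <= 1.0
```

In practice the statistic would be studentized as in the paper; the enumeration over `combinations` is the exact analogue of the conditional permutation distribution given the order statistics and their censoring statuses.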

For the proof of Theorem 5.1 we need the following three lemmas.

Lemma 6.3. Under the assumptions of Theorem 5.1,

$$\sup_{s\in[0,t]}\Bigl|\frac{n}{n_1n_2}\frac{Y_1Y_2}{Y}(s)-\frac{(1-G_{1,n})(1-G_{2,n})}{1-\bar G_n}(s)\Bigr|\to0\quad P\text{-a.e.} \tag{6.46}$$

holds for $n\to\infty$, where $\bar G_n$ is defined by $\bar G_n=\kappa G_{1,n}+(1-\kappa)G_{2,n}$.

Proof. As $1-Y_i/n_i$ is the left-continuous version of the empirical distribution function of sample $i$, the extended Glivenko-Cantelli theorem (cf. Shorack and Wellner (1986), p. 106) yields

$$\sup_{s\in[0,t]}\Bigl|\frac{Y_i(s)}{n_i}-(1-G_{i,n})(1-F_{i,n})(s)\Bigr|\to0\quad P\text{-a.e.}$$

for $i=1,2$. Because of (5.1) we have $\int(1-G_i)\,dF_i=0$ and thus $(1-F_i)(1-G_i)=1-G_i$ follows. This means that the Glivenko-Cantelli theorem also holds if we replace $1-F_{i,n}$ by $1$. Similarly

$$\sup_{s\in[0,t]}\Bigl|\frac{Y(s)}{n}-\bigl(1-\bar G_n(s)\bigr)\Bigr|\to0\quad P\text{-a.e.}$$

holds for $n\to\infty$. These results imply (6.46). $\Box$

Lemma 6.4. Under the assumptions of Theorem 5.1 we have

$$\sup_{0\le s\le t}\Bigl|\frac{b_nN_i(s)}{n_i}-H_i(s)\Bigr|\to0\quad\text{in probability} \tag{6.47}$$

as $n\to\infty$.

Proof. For $i=1$ (w.l.o.g.) and fixed $s\in(0,t]$ the variable $N_1(s)$ is $B(n_1,p_n(s))$-binomially distributed with $p_n(s)=\int_0^s(1-G_{1,n})\,dF_{1,n}$, so that

$$E\Bigl(\frac{b_nN_1(s)}{n_1}\Bigr)=b_np_n(s)\to H_1(s) \tag{6.48}$$

and

$$\operatorname{Var}\Bigl(\frac{b_nN_1(s)}{n_1}\Bigr)=\frac{b_n^2}{n_1}\,p_n(s)\bigl(1-p_n(s)\bigr)\to0 \tag{6.49}$$

since $b_n/n_1\to0$. Thus we have

$$\frac{b_nN_1(s)}{n_1}\to H_1(s)\quad\text{in probability.} \tag{6.50}$$

As we can get this result for all $s\in\mathbb Q\cap(0,t]$ simultaneously along almost surely convergent subsequences, the continuity of $H_1$ implies the result. $\Box$
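The moment computations (6.48) and (6.49) are elementary binomial facts. The sketch below evaluates them for the invented parametric choice $p_n(s)=H_1(s)/b_n$ with $b_n=\sqrt{n_1}$, under which the variance bound $b_nH_1/n_1$ is visible directly:

```python
import math

# Numerical sketch of (6.48)-(6.49): N_1(s) ~ B(n_1, p_n(s)), so b_n N_1(s)/n_1
# has mean b_n p_n(s) and variance (b_n^2/n_1) p_n(s)(1 - p_n(s)), which
# vanishes when b_n/n_1 -> 0. H and the choices of b_n, p_n are made up.
H = 0.4                                      # target value H_1(s)
for n1 in (100, 10_000, 1_000_000):
    bn = math.sqrt(n1)
    pn = H / bn                              # so that bn * pn = H exactly
    mean = bn * pn
    var = bn ** 2 / n1 * pn * (1 - pn)
    assert abs(mean - H) < 1e-12             # (6.48): mean stays at H
    assert var < H / math.sqrt(n1) + 1e-12   # (6.49): var = (bn/n1) H (1-pn) -> 0
```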

Lemma 6.5. Under the assumptions of Theorem 5.1 we have

$$\sup_{0\le s\le t}|\hat F_n(X_{[ns]:n})|\to0\quad P\text{-a.e.} \tag{6.51}$$

for all $t\in[0,1)$.

Proof. By the same calculations as in Neuhaus (1993), p. 1773, we get

$$\Bigl|-\log\bigl(1-\hat F_n(X_{[ns]:n})\bigr)-\sum_{j=1}^{[ns]}\frac{\Delta_{j:n}}{n-j+1}\Bigr|\le\frac1{(1-s)n}\to0$$

for all $s\in[0,t]$, $t<1$. On the other hand

$$\sup_{0\le s\le t}\Bigl|\sum_{j=1}^{[ns]}\frac{\Delta_{j:n}}{n-j+1}\Bigr|\le\frac1{n-[nt]+1}\sum_{j=1}^n\Delta_j\to0\quad P\text{-a.e.}$$

holds by (5.2). Thus for almost every $\omega$

$$1-\hat F_n\bigl(X_{[ns]:n}(\omega)\bigr)\to1$$

holds for all $s\in[0,t]$. Because $s\mapsto\hat F_n(X_{[ns]:n})$ is a nondecreasing function, the result follows. $\Box$

Proof of Theorem 5.1. The proof basically follows the lines of Gill (1980) (see also Andersen et al. (1993)) by using Rebolledo's central limit theorem for martingales (cf. Andersen et al. (1993), p. 83). First note that, similarly to (4.7), we can write our test statistic as

$$\sqrt{b_n}\,W_n(t)=M_n(t)+R_n(t) \tag{6.52}$$

where

$$M_n(t)=\sqrt{b_n\frac{n}{n_1n_2}}\int_0^t\bar w_n\frac{Y_1Y_2}{Y}\Bigl(\Bigl(\frac{dN_1}{Y_1}-d\Lambda_{1,n}\Bigr)-\Bigl(\frac{dN_2}{Y_2}-d\Lambda_{2,n}\Bigr)\Bigr) \tag{6.53}$$

is a martingale with respect to the natural filtration, and by definition we have $\Lambda_{i,n}=\int_0^\cdot\lambda_{i,n}(s)\,ds$. The remainder term is given by

$$R_n(t)=\sqrt{b_n\frac{n}{n_1n_2}}\int_0^t\bar w_n\frac{Y_1Y_2}{Y}\,(d\Lambda_{1,n}-d\Lambda_{2,n})$$

$$=\vartheta\sqrt{b_n\frac{n}{n_1n_2}}\int_0^t\bar w_n\frac{Y_1Y_2}{Y}\,F_{1,n}\,d\Lambda_{1,n}+\sqrt{b_n\frac{n}{n_1n_2}}\int_0^t\bar w_n(s)\frac{Y_1Y_2}{Y}(s)\,\gamma_n(s)\,ds.$$

By Lemma 6.3 and Lemma 6.5 we have

$$\sup_{0\le s\le t}\Bigl|\bar w_n(s)\frac{n}{n_1n_2}\frac{Y_1Y_2}{Y}(s)-w(0)\frac{(1-G_{1,n})(1-G_{2,n})}{1-\bar G_n}(s)\Bigr|\to0\quad P\text{-a.e.}$$
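The approximation at the start of the proof of Lemma 6.5 compares the Kaplan-Meier quantity $-\log(1-\hat F_n)$ with the Nelson-Aalen-type sum $\sum_j\Delta_{j:n}/(n-j+1)$. A numerical sketch with a made-up censoring pattern; the per-step bound $1/(n-j-1)$ asserted below is a crude consequence of $-\log(1-a)-a\le a^2$ for $a\le1/2$ and is not the sharper bound of the proof:

```python
import math

# 1 - F_hat(X_{i:n}) = prod_{j <= i} (1 - delta_{j:n}/(n-j+1)) (Kaplan-Meier),
# and -log of this product is close to sum_j delta_{j:n}/(n-j+1) (Nelson-Aalen).
# The censoring statuses below are illustrative only.
delta = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
n = len(delta)

km_surv, na_sum = 1.0, 0.0
for j in range(n - 1):                      # stop before the last order statistic
    km_surv *= 1 - delta[j] / (n - j)       # 0-based: factor 1 - delta/(n-j+1)
    na_sum += delta[j] / (n - j)
    # the gap stays small while many observations remain at risk
    assert abs(-math.log(km_surv) - na_sum) < 1.0 / (n - j - 1)
```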

This fact combined with (5.5) yields that the last term of $R_n$ converges to zero in probability. Because of $G_1(t)<1$ it follows that $F_1(t)=0$, and so $F_{1,n}(t)\to0$ and $\sup_{s\le t}F_{1,n}(s)\to0$ hold. Now it is easy to see that

$$R_n(t)=\frac{n}{n_1n_2}\int_0^t\bar w_n\frac{Y_1Y_2}{Y}\,\frac{F_{1,n}}{1-F_{1,n}}\,\vartheta\,b_n\,dF_{1,n}+o_P(1) \tag{6.54}$$

converges in probability to $\vartheta\mu(t)$ as $n\to\infty$, since the integrand is uniformly convergent and (5.3) holds.

In the next step (5.7) will be proved. Applying Lemma 6.4 we also get in a very similar way that

$$b_nV_n(t)=\frac{n}{n_2}\int_0^t\bar w_n^2\frac{Y_1Y_2}{Y^2}\,\frac{b_n\,dN_1}{n_1}+\frac{n}{n_1}\int_0^t\bar w_n^2\frac{Y_1Y_2}{Y^2}\,\frac{b_n\,dN_2}{n_2} \tag{6.55}$$

$$\to\ w^2(0)\Bigl(\kappa\int_0^t\frac{(1-G_1)(1-G_2)}{(1-\bar G)^2}\,dH_1+(1-\kappa)\int_0^t\frac{(1-G_1)(1-G_2)}{(1-\bar G)^2}\,dH_2\Bigr) \tag{6.56}$$

in probability. Since $\int_0^s(1-G_1)\,dH_2=\int_0^s(1-G_2)\,dH_1$ holds for all $s\in[0,t]$ and $\vartheta=0$, the last term is equal to $\sigma^2(t)$, and thus (5.7) is proved (notice that both sides occur as limits of $b_n\int_0^s(1-G_{1,n})(1-G_{2,n})\,dF_{1,n}$ since $F_{1,n}=F_{2,n}$).

It remains to show that $M_n$ converges to $B\circ\sigma^2(\cdot)$ in $D[0,t]$. The corresponding predictable variation process is given by (cf. Andersen et al. (1993), p. 84)

$$\langle M_n,M_n\rangle(t)=b_n\frac{n}{n_1n_2}\Bigl(\int_0^t\bar w_n^2\frac{Y_1Y_2^2}{Y^2}\,d\Lambda_{1,n}+\int_0^t\bar w_n^2\frac{Y_1^2Y_2}{Y^2}\,d\Lambda_{2,n}\Bigr). \tag{6.57}$$

Again our Lemmas 6.3 and 6.5 imply as above that

$$\bigl|\langle M_n,M_n\rangle(t)-\sigma^2(t)\bigr|\to0 \tag{6.58}$$

holds in probability for $n\to\infty$. For $\varepsilon>0$ let us now consider the process $M_n^\varepsilon$, which is based on all the jumps of $M_n$ bigger than $\varepsilon$. Since $Y_i^2/Y^2\le1$ holds, our assumption (5.1) and Lemma 6.5 imply

$$\sup_{0\le s\le t}\Bigl|b_n\frac{n}{n_1n_2}\bar w_n^2(s)\frac{Y_i^2}{Y^2}(s)\Bigr|\to0$$

for $i=1,2$ in probability. Thus

$$\langle M_n^\varepsilon,M_n^\varepsilon\rangle(t)=b_n\frac{n}{n_1n_2}\int_0^t\bar w_n^2\frac{Y_1Y_2^2}{Y^2}\,1_{\{b_n\frac{n}{n_1n_2}\bar w_n^2\frac{Y_2^2}{Y^2}>\varepsilon\}}\,d\Lambda_{1,n}$$

$$+\ b_n\frac{n}{n_1n_2}\int_0^t\bar w_n^2\frac{Y_1^2Y_2}{Y^2}\,1_{\{b_n\frac{n}{n_1n_2}\bar w_n^2\frac{Y_1^2}{Y^2}>\varepsilon\}}\,d\Lambda_{2,n}\to0\quad\text{in probability} \tag{6.59}$$

follows. Now Rebolledo's theorem gives the desired result. $\Box$
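The variance estimator $b_nV_n$ in (6.55) accumulates $\bar w_n^2\,Y_1Y_2/Y^2$ over the uncensored jumps; stripped of the $b_n$ and $n/n_i$ normalizations, this is the familiar log-rank variance. A self-contained sketch on invented data (group memberships, censoring statuses, and weights are all made up; `group[i]` gives the sample of the $i$-th order statistic):

```python
# Hedged sketch of the numerator and variance accumulation behind (6.55)/(6.57),
# without the b_n and n/n_i scaling of the paper.
group = [1, 2, 1, 1, 2, 2, 1, 2]
delta = [1, 1, 0, 1, 0, 1, 1, 1]
w     = [1.0] * 8                        # constant weights give the log-rank test
n = len(group)

num, var = 0.0, 0.0
for i in range(n):
    y1 = sum(1 for j in range(i, n) if group[j] == 1)   # Y_1 at the i-th order stat
    y2 = (n - i) - y1                                   # Y_2 = Y - Y_1
    if delta[i] and y1 and y2:
        num += w[i] * ((1 if group[i] == 1 else 0) - y1 / (n - i))
        var += w[i] ** 2 * y1 * y2 / (n - i) ** 2
assert var > 0.0
standardized = num / var ** 0.5          # studentized log-rank-type statistic
```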

References

Andersen, P.K., Borgan, O., Gill, R.D. & Keiding, N. (1982). Linear nonparametric tests for comparison of counting processes, with applications to censored survival data. Int. Stat. Rev. 50, 219-258.
Andersen, P.K., Borgan, O., Gill, R.D. & Keiding, N. (1993). Statistical Models Based on Counting Processes. Springer, New York.
Billingsley, P. (1968). Convergence of Probability Measures. Wiley, New York.
Einmahl, U. & Mason, D.M. (1992). Approximations to permutation and exchangeable processes. J. Theor. Probab. 5, 101-126.
Fleming, T.R. & Harrington, D.P. (1991). Counting Processes and Survival Analysis. Wiley, New York.
Gill, R.D. (1980). Censoring and Stochastic Integrals. Mathematical Centre Tracts 124. Mathematisch Centrum, Amsterdam.
Hájek, J. & Šidák, Z. (1967). Theory of Rank Tests. Academic Press, New York.
Heller, G. & Venkatraman, E.S. (1996). Resampling procedures to compare two survival distributions in the presence of right-censored data. Biometrics 52, 1204-1213.
Hušková, M. (1997). Limit theorems for rank statistics. Stat. Probab. Lett. 32, 45-55.
Janssen, A. (1991). Conditional rank tests for randomly censored data. Ann. Stat. 19, 1434-1456.
Janssen, A. (1994). On local odds and hazard rate models in survival analysis. Stat. Probab. Lett. 20, 355-365.
Janssen, A. (1997). Studentized permutation tests for non-i.i.d. hypotheses and the generalized Behrens-Fisher problem. Stat. Probab. Lett. 36, 9-21.
Janssen, A. & Neuhaus, G. (1997). Two-sample rank tests for censored data with non-predictable weights. J. Stat. Plann. Inference 60, 45-59.
Mason, D.M. & Newton, M.A. (1992). A rank statistics approach to the consistency of a general bootstrap. Ann. Stat. 20, 1611-1624.
Moser, M. (1994). Completeness of time-ordered indicators in censored data models. Stat. Probab. Lett. 21, 163-166.
Neuhaus, G. (1993). Conditional rank tests for the two-sample problem under random censorship. Ann. Stat. 21, 1760-1779.
Neuhaus, G. (1994). Conditional rank tests for the two-sample problem under random censorship: treatment of ties. In: Vilaplana, J.P. (ed.) et al., Recent Advances in Statistics and Probability. Proceedings of the 4th International Meeting of Statistics in the Basque Country, San Sebastian, Spain, 4-7 August. VSP, Utrecht, 127-138.
Præstgaard, J.T. (1995). Permutation and bootstrap Kolmogorov-Smirnov tests for the equality of two distributions. Scand. J. Statist. 22, 305-322.
Shorack, G.R. & Wellner, J.A. (1986). Empirical Processes with Applications to Statistics. Wiley, New York.

A. Janssen, Mathematical Institute, University of Düsseldorf, Universitätsstr. 1, D-40225 Düsseldorf, Germany. E-mail: [email protected] (Corresponding Author)
C.-D. Mayer, Mathematical Institute, University of Düsseldorf, Universitätsstr. 1, D-40225 Düsseldorf, Germany. E-mail: [email protected]