
Genet. Sel. Evol. 32 (2000) 129–141 © INRA, EDP Sciences

Original article

EM-REML estimation of covariance parameters in Gaussian mixed models for longitudinal data analysis

Jean-Louis FOULLEY a,*, Florence JAFFREZIC b, Christèle ROBERT-GRANIÉ a

a Station de génétique quantitative et appliquée, Institut national de la recherche agronomique, 78352 Jouy-en-Josas Cedex, France

b Institute of Cell, Animal and Population Biology, The University of Edinburgh, Edinburgh EH9 3JT, UK

(Received 24 September 1999; accepted 30 November 1999)

Abstract – This paper presents procedures for implementing the EM algorithm to compute REML estimates of variance-covariance components in Gaussian mixed models for longitudinal data analysis. The class of models considered includes random coefficient factors, stationary time processes and measurement errors. The EM algorithm allows separation of the computations pertaining to parameters involved in the random coefficient factors from those pertaining to the time processes and errors. The procedures are illustrated with Pothoff and Roy's data example on growth measurements taken on 11 girls and 16 boys at four ages. Several variants and extensions are discussed.

EM algorithm / REML / mixed models / random regression / longitudinal data

Résumé – EM-REML estimation of covariance parameters in Gaussian mixed models for the analysis of longitudinal data. This paper presents procedures for implementing the EM algorithm to compute REML estimates of the variance-covariance components in Gaussian mixed models for the analysis of longitudinal data. The class of models considered covers random coefficients, stationary time processes and measurement errors. The EM algorithm makes it possible to formally separate the computations relating to the parameters of the random coefficients from those involved in the time processes and in the residual. These methods are illustrated with an example from Pothoff and Roy on growth measurements taken on 11 girls and 16 boys at four different ages. Several variants and extensions of this method are finally discussed.

∗ Correspondence and reprints. E-mail: [email protected]



EM algorithm / REML / mixed models / random regression / longitudinal data

1. INTRODUCTION

There has been a great deal of interest in longitudinal data analysis among biometricians over the last decade: see e.g., the comprehensive synthesis of both theoretical and applied aspects given in the textbook by Diggle et al. [4]. Since the pioneering work of Laird and Ware [13] and of Diggle [3], random effects models [17] have been the cornerstone of the statistical analysis used in biometry for this kind of data. In fact, as well illustrated in the quantitative genetics and animal breeding areas, practitioners have for a long time restricted their attention to the most extreme versions of such models, viz. the so-called intercept or repeatability model with a constant intra-class correlation, and the multiple trait approach involving an unspecified variance-covariance structure.

Harville [9] first advocated the use of autoregressive random effects to the animal breeding community for analysing lactation records from different parities. These ideas were later used by Wade and Quaas [33] and Wade et al. [34] to estimate correlations among lactation yields produced over different time periods within herds, and by Schaeffer and Dekkers [28] to analyse daily milk records.

As well explained by Diggle [3], potentially interesting models must include three sources of variation: (i) between subjects, (ii) between times within a subject, and (iii) measurement errors. Covariance parameters of such models are usually estimated by maximum likelihood procedures based on second order algorithms. The objective of this study is to propose EM-REML procedures [1, 21] for estimating these parameters, especially those involved in the serial correlation structure (ii).

The paper is organized as follows. Section 2 describes the model structure and Section 3 the EM implementation. A numerical example based on growth measurements illustrates these procedures in Section 4, and some elements of discussion and conclusion are given in Section 5.

2. MODEL STRUCTURE

Let $y_{ij}$ be the $j$th measurement ($j = 1, 2, \ldots, n_i$) recorded on the $i$th individual ($i = 1, 2, \ldots, I$) at time $t_{ij}$. The class of models considered here can be written as follows:

$$y_{ij} = x'_{ij}\beta + \varepsilon_{ij}, \qquad (1)$$

where $x'_{ij}\beta$ represents the systematic component expressed as a linear combination of $p$ explanatory variables (row vector $x'_{ij}$) with unknown linear coefficients (vector $\beta$), and $\varepsilon_{ij}$ is the random component.


As in [3], $\varepsilon_{ij}$ is decomposed as the sum of three elements:

$$\varepsilon_{ij} = \sum_{k=1}^{K} z_{ijk}\, u_{ik} + w_i(t_{ij}) + e_{ij}. \qquad (2)$$

The first term represents the additive effect of $K$ random regression factors $u_{ik}$ on covariable information $z_{ijk}$ (usually a $(k-1)$th power of time), which are specific to each $i$th individual. The second term $w_i(t_{ij})$ corresponds to the contribution of a stationary Gaussian time process, and the third term $e_{ij}$ is the so-called measurement error.

By gathering the $n_i$ measurements made on the $i$th individual, such that $y_i = \{y_{ij}\}$, $\varepsilon_i = \{\varepsilon_{ij}\}$ and $X_{i\,(n_i \times p)} = (x_{i1}, x_{i2}, \ldots, x_{in_i})'$, (1) and (2) can be expressed in matrix notation as

$$y_i = X_i\beta + \varepsilon_i, \qquad (3)$$

and

$$\varepsilon_i = Z_i u_i + W_i + e_i, \qquad (4)$$

where $Z_{i\,(n_i \times K)} = (z_{i1}, z_{i2}, \ldots, z_{ij}, \ldots, z_{in_i})'$, $z_{ij\,(K \times 1)} = \{z_{ijk}\}$, $u_{i\,(K \times 1)} = \{u_{ik}\}$ for $k = 1, 2, \ldots, K$, $W_i = \{w_i(t_{ij})\}$ and $e_i = \{e_{ij}\}$ for $j = 1, 2, \ldots, n_i$. We will assume that $\varepsilon_i \sim N(0, V_i)$ with

$$V_i = Z_i G Z'_i + R_i, \qquad (5)$$

where $G_{(K \times K)}$ is a symmetric positive definite matrix, which may alternatively be represented under its vector form $g = \mathrm{vech}\,G$. For instance, for a linear regression, $g = (g_{00}, g_{01}, g_{11})'$, where $g_{00}$ refers to the variance of the intercept, $g_{11}$ to the variance of the linear regression coefficient and $g_{01}$ to their covariance.

$R_i$ in (5) has the following structure in the general case:

$$R_i = \sigma^2 H_i + \sigma^2_e I_{n_i}, \qquad (6)$$

where $\sigma^2_e I_{n_i} = \mathrm{var}(e_i)$ and, for stationary Gaussian simple processes, $\sigma^2$ is the variance of each $w_i(t_{ij})$ and $H_i = \{h_{ij,ij'}\}$ is the $(n_i \times n_i)$ correlation matrix among them, such that $h_{ij,ij'} = f(\rho, d_{ij,ij'})$ can be written as a function $f$ of a real positive number $\rho$ and of the absolute time separation $d_{ij,ij'} = |t_{ij} - t_{ij'}|$ between measurements $j$ and $j'$ made on individual $i$.

Classical examples of such functions are the power function $f(\rho, d) = \rho^d$, the exponential $\exp(-d/\rho)$ and the Gaussian $\exp(-d^2/\rho^2)$. Notice that for equidistant intervals these functions are equivalent and reduce to a first order autoregressive process (AR(1)).

$R_i$ in (6) can alternatively be expressed in terms of $\rho$, $\sigma^2$ and of the ratio $\lambda = \sigma^2_e/\sigma^2$:

$$R_i = \sigma^2 (H_i + \lambda I_{n_i}) = \sigma^2 \tilde{H}_i, \qquad (7)$$

where $\tilde{H}_i$ depends on both $\rho$ and $\lambda$. This parameterisation via $r = (\sigma^2, \rho, \lambda)'$ allows models to be addressed both with and without measurement error variance (the "nugget" in geostatistics).
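To make the structure (5)-(7) concrete, here is a minimal numerical sketch (our own illustration, not part of the original paper; function and variable names are hypothetical) that builds $V_i$ for one individual under a linear random regression ($K = 2$), a power-function time process and a measurement-error nugget.

```python
import numpy as np

def power_corr(times, rho):
    """Correlation matrix H_i for the power function f(rho, d) = rho**d,
    d being the absolute time separation between two measurements."""
    d = np.abs(np.subtract.outer(times, times))
    return rho ** d

def individual_covariance(times, G, sigma2, rho, lam):
    """V_i = Z_i G Z_i' + sigma2 * (H_i + lam * I), cf. equations (5)-(7);
    Z_i contains powers of time (here intercept and slope, i.e. K = 2)."""
    times = np.asarray(times, dtype=float)
    Zi = np.vstack([np.ones_like(times), times]).T          # n_i x K
    Hi_tilde = power_corr(times, rho) + lam * np.eye(len(times))
    return Zi @ G @ Zi.T + sigma2 * Hi_tilde

# Hypothetical parameter values, with the four ages of the growth example
G = np.array([[800.0, -40.0],
              [-40.0,   4.0]])                              # g00, g01, g11
Vi = individual_covariance([8.0, 10.0, 12.0, 14.0], G,
                           sigma2=200.0, rho=0.8, lam=0.1)
print(np.round(Vi, 1))
```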


3. EM IMPLEMENTATION

Let $\gamma = (g', r')'$ be the $3 + K(K+1)/2$ parameter vector and $x = (y', \beta', u')'$ the complete data vector, where $y = (y'_1, y'_2, \ldots, y'_i, \ldots, y'_I)'$ and $u = (u'_1, u'_2, \ldots, u'_i, \ldots, u'_I)'$. Following Dempster et al. [1], the EM algorithm proceeds from the log-likelihood $L(\gamma; x) = \ln p(x \mid \gamma)$ of $x$ as a function of $\gamma$. Here $L(\gamma; x)$ can be decomposed as the sum of the log-likelihood of $u$ as a function of $g$ and of the log-likelihood of $\varepsilon^* = y - X\beta - Zu$ as a function of $r$,

$$L(\gamma; x) = L(r; \varepsilon^*) + L(g; u) + \mathrm{const.}, \qquad (8)$$

where $X_{(N \times p)} = (X'_1, X'_2, \ldots, X'_i, \ldots, X'_I)'$ and $Z_{(N \times KI)} = \bigoplus_{i=1}^{I} Z_i$ is the corresponding block-diagonal incidence matrix for $u$.

Under normality assumptions, the two log-likelihoods in (8) can be expressed as:

$$L(g; u) = -\frac{1}{2}\left[KI \ln 2\pi + I \ln|G| + \sum_{i=1}^{I} u'_i G^{-1} u_i\right], \qquad (9)$$

$$L(r; \varepsilon^*) = -\frac{1}{2}\left[N \ln 2\pi + \sum_{i=1}^{I} \ln|R_i| + \sum_{i=1}^{I} \varepsilon^{*\prime}_i R_i^{-1} \varepsilon^*_i\right]. \qquad (10)$$
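For checking purposes, the two complete-data log-likelihoods (9) and (10) are easy to evaluate directly; the sketch below (our own illustration, with per-individual inputs assumed to be given as numpy arrays) does exactly that.

```python
import numpy as np

def loglik_u(G, u_list):
    """L(g; u) of equation (9); u_list holds the K x 1 vectors u_i."""
    K, I = G.shape[0], len(u_list)
    quad = sum(u @ np.linalg.solve(G, u) for u in u_list)
    return -0.5 * (K * I * np.log(2 * np.pi)
                   + I * np.linalg.slogdet(G)[1] + quad)

def loglik_eps(R_list, eps_list):
    """L(r; eps*) of equation (10); one R_i and one eps*_i per individual."""
    N = sum(len(e) for e in eps_list)
    logdet = sum(np.linalg.slogdet(R)[1] for R in R_list)
    quad = sum(e @ np.linalg.solve(R, e) for R, e in zip(R_list, eps_list))
    return -0.5 * (N * np.log(2 * np.pi) + logdet + quad)
```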

The E-step consists of evaluating the conditional expectation of the complete data log-likelihood $L(\gamma; x) = \ln p(x \mid \gamma)$ given the observed data $y$, with $\gamma$ set at its current value $\gamma^{[t]}$, i.e., evaluating the function

$$Q(\gamma \mid \gamma^{[t]}) = E\big[L(\gamma; x) \mid y, \gamma = \gamma^{[t]}\big], \qquad (11)$$

while the M-step updates $\gamma$ by maximizing (11) with respect to $\gamma$, i.e.,

$$\gamma^{[t+1]} = \arg\max_{\gamma} Q(\gamma \mid \gamma^{[t]}). \qquad (12)$$

The formula in (8) allows the separation of $Q(\gamma \mid \gamma^{[t]})$ into two components, the first $Q_u(g \mid \gamma^{[t]})$ corresponding to $g$ and the second $Q_\varepsilon(r \mid \gamma^{[t]})$ corresponding to $r$, i.e.,

$$Q(\gamma \mid \gamma^{[t]}) = Q_u(g \mid \gamma^{[t]}) + Q_\varepsilon(r \mid \gamma^{[t]}). \qquad (13)$$

We will not consider the maximization of $Q_u(g \mid \gamma^{[t]})$ with respect to $g$ in detail; this is a classical result: see e.g., Henderson [11], Foulley et al. [6] and Quaas [23]. The $(k, l)$ element of $G$ is updated as

$$\big(G^{[t+1]}\big)_{kl} = \frac{1}{I}\, E\left(\sum_{i=1}^{I} u_{ik} u_{il} \,\Big|\, y, \gamma^{[t]}\right). \qquad (14)$$

If individuals are not independent (as happens in genetic studies), one has to replace $\sum_{i=1}^{I} u_{ik} u_{il}$ by $u'_k A^{-1} u_l$, where $u_k = \{u_{ik}\}$ for $i = 1, 2, \ldots, I$ and $A$ is an $(I \times I)$ symmetric, positive definite matrix of known coefficients.

Regarding $r$, $Q_\varepsilon(r \mid \gamma^{[t]})$ can be made explicit from (10) as

$$Q_\varepsilon(r \mid \gamma^{[t]}) = -\frac{1}{2}\left[\sum_{i=1}^{I} \ln|R_i| + \sum_{i=1}^{I} \mathrm{tr}\big(R_i^{-1}\Omega_i\big)\right] + \mathrm{const.}, \qquad (15)$$


where $\Omega_{i\,(n_i \times n_i)} = E\big(\varepsilon^*_i \varepsilon^{*\prime}_i \mid y, \gamma^{[t]}\big)$, which can be computed from the elements of Henderson's mixed model equations [10, 11].

Using the decomposition of $R_i$ in (7), this expression reduces to

$$Q_\varepsilon(r \mid \gamma^{[t]}) = -\frac{1}{2}\left[N \ln \sigma^2 + \sum_{i=1}^{I} \ln\big|\tilde{H}_i\big| + \sigma^{-2} \sum_{i=1}^{I} \mathrm{tr}\big(\tilde{H}_i^{-1}\Omega_i\big)\right] + \mathrm{const.} \qquad (16)$$

In order to maximize $Q_\varepsilon(r \mid \gamma^{[t]})$ in (16) with respect to $r$, we suggest using the gradient-EM technique [12], i.e., solving the M-step by one iteration of a second order algorithm. Since here $E(\Omega_i) = \sigma^2 \tilde{H}_i$, calculations can be made easier by using the Fisher information matrix as in [31]. Letting $\dot{Q} = \partial Q/\partial r$ and $\ddot{Q} = E(\partial^2 Q/\partial r\,\partial r')$, the system to solve can be written

$$-\ddot{Q}\,\Delta r = \dot{Q}, \qquad (17)$$

where $\Delta r$ is the increment in $r$ from one iteration to the next.

Here, the elements of $\dot{Q}$ and $\ddot{Q}$ can be expressed as:

$$q_1 = N\sigma^{-2} - \sigma^{-4}\sum_{i=1}^{I} \mathrm{tr}\big(\tilde{H}_i^{-1}\Omega_i\big),$$

$$q_2 = \sum_{i=1}^{I} \mathrm{tr}\left[\frac{\partial H_i}{\partial\rho}\big(\tilde{H}_i^{-1} - \sigma^{-2}\tilde{H}_i^{-1}\Omega_i\tilde{H}_i^{-1}\big)\right],$$

$$q_3 = \sum_{i=1}^{I} \mathrm{tr}\big(\tilde{H}_i^{-1} - \sigma^{-2}\tilde{H}_i^{-1}\Omega_i\tilde{H}_i^{-1}\big),$$

and

$$q_{11} = N\sigma^{-4}; \qquad q_{12} = \sigma^{-2}\sum_{i=1}^{I} \mathrm{tr}\left(\frac{\partial H_i}{\partial\rho}\tilde{H}_i^{-1}\right); \qquad q_{13} = \sigma^{-2}\sum_{i=1}^{I} \mathrm{tr}\big(\tilde{H}_i^{-1}\big);$$

$$q_{22} = \sum_{i=1}^{I} \mathrm{tr}\left(\frac{\partial H_i}{\partial\rho}\tilde{H}_i^{-1}\frac{\partial H_i}{\partial\rho}\tilde{H}_i^{-1}\right); \qquad q_{23} = \sum_{i=1}^{I} \mathrm{tr}\left(\tilde{H}_i^{-1}\frac{\partial H_i}{\partial\rho}\tilde{H}_i^{-1}\right); \qquad q_{33} = \sum_{i=1}^{I} \mathrm{tr}\big(\tilde{H}_i^{-1}\tilde{H}_i^{-1}\big),$$

where subscripts 1, 2 and 3 refer to $\sigma^2$, $\rho$ and $\lambda$ respectively.


The expressions for $\dot{Q}$ and $\ddot{Q}$ are unchanged for models without measurement error; one just has to reduce the dimension by one and use $H_i$ in place of $\tilde{H}_i$.
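Putting the above elements together, the following sketch (our own illustration; the conditional expectations $\Omega_i$ are assumed to have been obtained from the mixed model equations, and the power correlation function is assumed for $H_i$) performs one gradient-EM update of $r = (\sigma^2, \rho, \lambda)'$ by solving system (17) with the $q$ elements listed above.

```python
import numpy as np

def m_step_r(times_list, Omega_list, sigma2, rho, lam):
    """One Fisher-scoring update of r = (sigma2, rho, lambda), cf. (17):
    build the q_a and q_ab listed above, then solve q_ab * delta = -q_a."""
    qdot = np.zeros(3)            # q1, q2, q3
    qddot = np.zeros((3, 3))      # q11 ... q33 (upper triangle)
    N = sum(len(t) for t in times_list)
    qdot[0] = N / sigma2
    qddot[0, 0] = N / sigma2**2
    for times, Om in zip(times_list, Omega_list):
        t = np.asarray(times, dtype=float)
        d = np.abs(np.subtract.outer(t, t))
        H = rho ** d                                 # power correlation
        dH = d * rho ** (d - 1.0)                    # elementwise dH/drho
        np.fill_diagonal(dH, 0.0)                    # guard the d = 0 diagonal
        Ht = H + lam * np.eye(len(t))                # H~_i = H_i + lam * I
        Hinv = np.linalg.inv(Ht)
        B = Hinv - Hinv @ Om @ Hinv / sigma2
        qdot[0] -= np.trace(Hinv @ Om) / sigma2**2
        qdot[1] += np.trace(dH @ B)
        qdot[2] += np.trace(B)
        qddot[0, 1] += np.trace(dH @ Hinv) / sigma2
        qddot[0, 2] += np.trace(Hinv) / sigma2
        qddot[1, 1] += np.trace(dH @ Hinv @ dH @ Hinv)
        qddot[1, 2] += np.trace(Hinv @ dH @ Hinv)
        qddot[2, 2] += np.trace(Hinv @ Hinv)
    qddot = np.triu(qddot) + np.triu(qddot, 1).T     # symmetrize
    delta = np.linalg.solve(qddot, -qdot)            # increment Delta r
    return np.array([sigma2, rho, lam]) + delta
```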

The minimum of $-2L$ can easily be computed from the general formula given by Meyer [20] and Quaas [23]:

$$-2L_m = [N - r(X)]\ln 2\pi + \ln|G^{\#}| + \ln|R^{\#}| + \ln|M^{\#}| + y'R^{\#-1}y - \hat{\theta}'T'R^{\#-1}y,$$

where $G^{\#} = A \otimes G$ ($A$ is usually the identity matrix), $R^{\#} = \oplus_{i=1}^{I} R_i$ ($\otimes$ and $\oplus$ standing for the direct product and direct sum respectively), $T = (T'_1, T'_2, \ldots, T'_I)'$ is the overall incidence matrix built from the $T_i$ defined below, and $M^{\#} = M/\sigma^2$ with $M$ the coefficient matrix of Henderson's mixed model equations in $\theta = (\beta', u')'$, i.e., for $T_i = (X_i, 0, 0, \ldots, Z_i, \ldots, 0)$ and $\Gamma^{-} = \begin{pmatrix} 0 & 0 \\ 0 & G^{\#-1}\end{pmatrix}$,

$$M = \sum_{i=1}^{I} T'_i \tilde{H}_i^{-1} T_i + \sigma^2\Gamma^{-}.$$

Here $y'R^{\#-1}y - \hat{\theta}'T'R^{\#-1}y = [N - r(X)]\hat{\sigma}^2/\sigma^2$, which equals $N - r(X)$ for $\sigma^2$ evaluated at its REML estimate, so that eventually

$$-2L_m = [N - r(X)](1 + \ln 2\pi) + K\ln|A| + I\ln|G| + \ln|M| + [N - \dim(M)]\ln\sigma^2 + \sum_{i=1}^{I} \ln\big|\tilde{H}_i\big|. \qquad (18)$$

This formula is useful for computing likelihood ratio test statistics for comparing models, as advocated by Foulley and Quaas [5] and Foulley et al. [7, 8].
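For checking likelihood ratio statistics in practice, $-2L_m$ can also be evaluated from the marginal distributions $y_i \sim N(X_i\beta, V_i)$; the sketch below (our own illustration) uses the standard marginal REML expression, which is algebraically equivalent to (18), rather than the mixed-model-equation form.

```python
import numpy as np

def minus2_reml(X_list, y_list, V_list):
    """-2 x residual log-likelihood in marginal form:
    [N - r(X)] ln 2pi + sum_i ln|V_i| + ln|sum_i X_i'V_i^-1 X_i| + y'Py."""
    p = X_list[0].shape[1]
    N = sum(len(y) for y in y_list)
    XtVX = np.zeros((p, p))
    XtVy = np.zeros(p)
    logdet = 0.0
    for X, y, V in zip(X_list, y_list, V_list):
        VinvX = np.linalg.solve(V, X)
        XtVX += X.T @ VinvX
        XtVy += VinvX.T @ y
        logdet += np.linalg.slogdet(V)[1]
    beta = np.linalg.solve(XtVX, XtVy)               # GLS estimate of beta
    quad = sum((y - X @ beta) @ np.linalg.solve(V, y - X @ beta)
               for X, y, V in zip(X_list, y_list, V_list))
    rX = np.linalg.matrix_rank(np.vstack(X_list))
    return ((N - rX) * np.log(2 * np.pi) + logdet
            + np.linalg.slogdet(XtVX)[1] + quad)
```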

4. NUMERICAL APPLICATION

The procedures presented here are illustrated with a small data set due to Pothoff and Roy [22]. These data, shown in Table I, contain facial growth measurements made on 11 girls and 16 boys at four ages (8, 10, 12 and 14 years), with the nine deleted values at age 10 defined in Little and Rubin [14].

The mean structure considered is the one selected by Verbeke and Molenberghs [32] in their detailed analysis of this example; it involves an intercept and a linear trend within each sex such that

$$E(y_{ijk}) = \mu + \alpha_i + \beta_i t_j, \qquad (19)$$

where $\mu$ is a general mean, $\alpha_i$ is the effect of sex ($i = 1, 2$ for female and male children respectively), and $\beta_i$ is the slope within sex $i$ of the linear increase with time $t$ measured at age $j$ ($t_j$ = 8, 10, 12 and 14 years).

The model was applied using a full rank parameterisation of the fixed effects defined as $\beta' = (\mu + \alpha_1, \alpha_2 - \alpha_1, \beta_1, \beta_2 - \beta_1)$. Given this mean structure, six models were fitted with different covariance structures. These models are symbolized as follows, with their number of parameters indicated within brackets:

1. intercept + error (2)
2. POW (2)
3. POW + measurement error (3)
4. intercept + POW (3)
5. intercept + linear trend + error (4)
6. unspecified (10)


Table I. Growth measurements in 11 girls and 16 boys (from Pothoff and Roy [22] and Little and Rubin [14]).

           Age (years)                          Age (years)
Girl     8     10    12    14       Boy      8     10    12    14
 1      210   200   215   230        1      260   250   290   310
 2      210   215   240   255        2      215    –    230   265
 3      205    –    245   260        3      230   225   240   275
 4      235   245   250   265        4      255   275   265   270
 5      215   230   225   235        5      200    –    225   260
 6      200    –    210   225        6      245   255   270   285
 7      215   225   230   250        7      220   220   245   265
 8      230   230   235   240        8      240   215   245   255
 9      200    –    220   215        9      230   205   310   260
10      165    –    190   195       10      275   280   310   315
11      245   250   280   280       11      230   230   235   250
                                    12      215    –    240   280
                                    13      170    –    260   295
                                    14      225   255   255   260
                                    15      230   245   260   300
                                    16      220    –    235   250

Distance from the centre of the pituitary to the pterygomaxillary fissure (unit: 10^-4 m); a dash indicates one of the nine values deleted at age 10.

Table II. Covariance structures associated with the models considered.

Model (a)   Z_i               G                            R_i
1           1_{n_i}           g_00                         σ²_e I_{n_i}
2           0_{n_i}           –                            σ² H_i
3           0_{n_i}           –                            σ² H_i + σ²_e I_{n_i}
4           1_{n_i}           g_00                         σ² H_i
5           (1_{n_i}, t_i)    ( g_00  g_01 ; g_01  g_11 )  σ²_e I_{n_i}
6           0_{n_i}           –                            {σ_{t_i t'_i}}

(a) 1 = intercept + error; 2 = POW; 3 = POW + measurement error; 4 = intercept + POW; 5 = intercept + linear trend + error; 6 = unspecified; POW is defined as σ² H_i with H_i = {h_{i,tt'}} = {ρ^{|t_i − t'_i|}}; t_i is the n_i × 1 vector of ages at which measurements are made on individual i.

Variance-covariance structures associated with each of these six models are shown in Table II. Due to the data structure, the power function $f(\rho, d) = \rho^d$ (in short POW) reduces here to a first order autoregressive process (AR(1)) having $\rho^2$ as its correlation parameter.


EM-REML estimates of the parameters of these models were computed via the techniques presented previously. Iterations were stopped when the norm

$$\sqrt{\Big(\sum_i \Delta\gamma_i^2\Big)\Big/\Big(\sum_i \gamma_i^2\Big)}$$

of both $g$ and $r$ was smaller than $10^{-6}$. Estimates of $g$ and $r$, $-2L$ values and the corresponding elements of the covariance structure for each model are shown in Tables III and IV.

Random coefficient models such as model 5 are especially demanding in terms of computing effort. Models involving time processes and measurement errors require a backtracking procedure [2] at the beginning of the iterative process, i.e., one has to compute $r^{[k+1]}$ as the previous value $r^{[k]}$ plus a fraction $\omega^{[k+1]}$ of the Fisher scoring increment $\Delta r^{[k+1]}$, where $r^{[k]}$ is the parameter vector defined as previously at iteration $k$. For instance, we used $\omega = 0.10$ up to $k = 3$ in the case of model 3; a sketch of such a damped update is given below.
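A trivial sketch of this damped step (our own illustration; `r` and `delta_r` stand for the current parameter vector and the Fisher scoring increment):

```python
def damped_update(r, delta_r, k, omega=0.10, n_damped=3):
    """Apply only a fraction omega of the scoring increment during the first
    n_damped iterations, then switch to full steps (simple fixed-fraction
    damping in the spirit of the backtracking described above)."""
    step = omega if k <= n_damped else 1.0
    return [ri + step * di for ri, di in zip(r, delta_r)]
```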

Model comparisons are worthwhile at this stage to discriminate among all the possibilities offered. However, within the likelihood framework, one has first to check whether models conform to nested hypotheses for the likelihood ratio test procedure to be valid.

E.g., model 3 (POW + m-error) can be compared to model 2 (POW), as model 2 is a special case of model 3 for $\sigma^2_e = 0$, and also to model 1 (intercept), which corresponds to $\rho = 1$. The same reasoning applies to the three-parameter model 4 (intercept + POW), which can be contrasted with model 1 (equivalent to model 4 for $\rho = 0$) and also with model 2 (equivalent to model 4 for $g_{00} = 0$).

In these two examples, the null hypothesis (H0) can be described as a point hypothesis with parameter values on the boundary of the parameter space, which implies some change in the asymptotic distribution of the likelihood ratio statistic under H0 [29, 30]. Actually, in these two cases, the asymptotic null distribution is a mixture $\frac{1}{2}\chi^2_0 + \frac{1}{2}\chi^2_1$ of the usual chi-square with one degree of freedom ($\chi^2_1$) and of a Dirac distribution (probability mass of one) at zero (usually noted $\chi^2_0$), with equal weights. This results in a P-value which is half the standard one, i.e., P-value $= \frac{1}{2}\Pr\big[\chi^2_1 > \Delta(-2L)_{\mathrm{obs}}\big]$; see also Robert-Granié et al. [26], page 556, for a similar application.

In all comparisons, model 2 (POW) is rejected while model 1 (intercept) is accepted. This is not surprising, as model 2 emphasizes the effect of time separation on the correlation structure too much compared with the values observed under the unspecified structure (Tab. IV). Although not significantly different from model 1, models 3 (POW + measurement error) and 5 (intercept + linear trend) might also be good choices, with a preference for the former due to its lower number of parameters.
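The boundary-adjusted P-values just described are straightforward to obtain; a minimal sketch (our own illustration, using scipy) applying the half-$\chi^2_1$ rule:

```python
from scipy.stats import chi2

def boundary_p_value(delta_minus2L):
    """P-value under the 1/2*chi2_0 + 1/2*chi2_1 mixture null distribution,
    i.e. half the usual one-degree-of-freedom chi-square tail probability."""
    return 0.5 * chi2.sf(delta_minus2L, df=1)

# Example: model 3 versus model 2 under REML, Delta(-2L) = 7.9153
print(round(boundary_p_value(7.9153), 4))    # about 0.0024, as in Table III
```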

As a matter of fact, as shown in Table III, one can construct several models with the same number of parameters which cannot be compared. There are two models with two parameters (models 1 and 2) and also two with three parameters (models 3 and 4). The same occurs with four parameters, although only the random coefficient model was displayed, because fitting the alternative model (intercept + POW + measurement error) reduces here to fitting the sub-model 3 (POW + measurement error) due to $g_{00}$ becoming very small. Incidentally, running SAS Proc MIXED on this alternative model leads to $g_{00} = 331.4071$, $\rho = 0.2395$ and $\sigma^2_e = 1.0268$, i.e. to fitting model 4 (intercept + POW). However, since the value of $-2L_m$ for model 4 is slightly higher than that for model 3, it is the EM procedure which gives the right answer.


Table III. Likelihood statistics (ML, REML, −2L) of several models for the analysis of facial growth in 11 girls and 16 boys (Pothoff and Roy [22]; Little and Rubin [14]).

REML
Model   g_00(a)  g_01(a)  g_11(a)  σ²(b)    ρ(b)    σ²_e(c)  #iter(d)  #par   −2L        Comparison   Δ(−2L)   Distr.(e)  P-value
1       337.27   –        –        –        –       207.48   16        2      843.6408   –            –        –          –
2       –        –        –        545.40   0.802   –        9         2      850.7416   –            –        –          –
3       –        –        –        380.96   0.966   164.99   13        3      842.8263   3 vs. 2      7.9153   0:1        0.0024
                                                                                          3 vs. 1      0.8145   0:1        0.1833
4       331.42   –        –        213.60   0.239   –        43        3      843.5586   4 vs. 2      7.1830   0:1        0.0037
                                                                                          4 vs. 1      0.0822   0:1        0.3872
5       835.50   −46.53   4.42     –        –       176.66   163       4      842.3559   5 vs. 1      1.2849   1:2        0.3914
6 (f)   –        –        –        –        –       –        –         10     835.3176   6 vs. 1      8.3222   8          0.4025

ML
Model   g_00(a)  g_01(a)  g_11(a)  σ²(b)    ρ(b)    σ²_e(c)  #iter(d)  #par   −2L        Comparison   Δ(−2L)   Distr.(e)  P-value
1       309.53   –        –        –        –       201.74   15        2      857.2247   –            –        –          –
2       –        –        –        510.95   0.792   –        7         2      865.4353   –            –        –          –
3       –        –        –        342.73   0.971   168.69   13        3      856.7004   3 vs. 2      8.7349   0:1        0.0016
                                                                                          3 vs. 1      0.5243   0:1        0.2345
4       307.36   –        –        203.92   0.151   –        39        3      857.2106   4 vs. 2      8.2247   0:1        0.0021
                                                                                          4 vs. 1      0.0141   0:1        0.4527
5       678.63   −34.99   3.37     –        –       177.00   209       4      856.3640   5 vs. 1      0.8607   1:2        0.5019
6 (f)   –        –        –        –        –       –        –         10     849.1997   6 vs. 1      8.0250   8          0.4310

(a) Random effects model: intercept (0) and/or slope (1); fixed part with sex and a linear regression on age varying according to sex.
(b) POW process: cov_jk = σ² ρ^(d_jk).
(c) Residual in random regression models, and measurement error σ²_e ("nugget") for models involving R_i = σ²H_i + σ²_e I_{n_i}.
(d) Stopping rule: norm set to 10^-6.
(e) Asymptotic distribution of the likelihood ratio under the null hypothesis: chi-square or mixture of chi-squares (0:1 = 1/2 χ²_0 + 1/2 χ²_1; 1:2 = 1/2 χ²_1 + 1/2 χ²_2; 8 = χ²_8).
(f) Model with an unspecified covariance structure (10 parameters).


Table IV. Variance and correlation matrix among measures within individuals generated by several models for the analysis of facial growth in 11 girls and 16 boys.

a) REML
Model               Variances                           Correlations
                    σ_11    σ_22    σ_33    σ_44        r_12    r_23    r_34    r_13    r_24    r_14
1  inter            544.75  544.75  544.75  544.75      0.6191  0.6191  0.6191  0.6191  0.6191  0.6191
2  POW              545.40  545.40  545.40  545.40      0.6426  0.6426  0.6426  0.4129  0.4129  0.2654
3  POW + m-error    545.95  545.95  545.95  545.95      0.6511  0.6511  0.6511  0.6075  0.6075  0.5669
4  inter + POW      545.02  545.02  545.02  545.02      0.6304  0.6304  0.6304  0.6094  0.6094  0.6082
5  inter + slope    550.31  523.14  531.30  574.77      0.6546  0.6482  0.6651  0.6081  0.6145  0.5448
6  unspecified      542.29  486.59  626.82  498.94      0.6334  0.4963  0.7381  0.6626  0.6164  0.5225

b) ML
Model               Variances                           Correlations
                    σ_11    σ_22    σ_33    σ_44        r_12    r_23    r_34    r_13    r_24    r_14
1  inter            511.26  511.26  511.26  511.26      0.6054  0.6054  0.6054  0.6054  0.6054  0.6054
2  POW              510.95  510.95  510.95  510.95      0.6265  0.6265  0.6265  0.3925  0.3925  0.2459
3  POW + m-error    511.42  511.42  511.42  511.42      0.6323  0.6323  0.6323  0.5967  0.5967  0.5630
4  inter + POW      511.28  511.28  511.28  511.28      0.6103  0.6103  0.6103  0.6014  0.6014  0.6012
5  inter + slope    511.41  492.74  501.02  536.25      0.6341  0.6302  0.6461  0.5971  0.6041  0.5465
6  unspecified      505.12  455.79  598.03  462.32      0.6054  0.4732  0.7266  0.6570  0.6108  0.5226



5. DISCUSSION-CONCLUSION

This study clearly shows that the EM algorithm is a powerful tool for calculating maximum likelihood estimates of dispersion parameters, even when the covariance matrix $V$ is not linear in the parameters as postulated in linear mixed models.

The EM algorithm allows separation of the calculations involved in the $R$ matrix parameters (time processes + errors) from those arising in the $G$ matrix parameters (random coefficients), thus making its application to a large class of models very flexible and attractive.

The procedure can also easily be adapted to obtain ML rather than REML estimates of the parameters with very little change in the implementation, involving only an appropriate evaluation of the conditional expectations of $u_{ik}u_{il}$ and of $\varepsilon^*_i\varepsilon^{*\prime}_i$, along the same lines as given by Foulley et al. [8]. Corresponding results for the numerical example are shown in Tables III and IV, suggesting, as expected, some downward bias in the variances of the random coefficient models and of the time processes.

Several variants of the standard EM procedure are possible, such as those based e.g., on conditional maximization [15, 18, 19] or parameter expansion [16]. In the case of models without "measurement errors", an especially simple ECME procedure consists of calculating $\rho^{[t+1]}$ for $\sigma^2$ fixed at $\sigma^{2[t]}$, with $\sigma^{2[t+1]}$ then being updated by direct maximization of the residual likelihood (without recourse to missing data), i.e.,

$$\rho^{[t+1]} = \rho^{[t]} - \frac{\displaystyle\sum_{i=1}^{I} \mathrm{tr}\left[\frac{\partial H_i}{\partial\rho}\big(H_i^{-1} - \sigma^{-2}H_i^{-1}\Omega_i H_i^{-1}\big)\right]}{\displaystyle\sum_{i=1}^{I} \mathrm{tr}\left(\frac{\partial H_i}{\partial\rho}H_i^{-1}\frac{\partial H_i}{\partial\rho}H_i^{-1}\right)}, \qquad (20)$$

$$\sigma^{2[t+1]} = \frac{\displaystyle\sum_{i=1}^{I} \big[y_i - X_i\hat{\beta}(\rho^{[t]}, D^{[t]})\big]'\big[W_i(\rho^{[t]}, D^{[t]})\big]^{-1}\big[y_i - X_i\hat{\beta}(\rho^{[t]}, D^{[t]})\big]}{N - r(X)},$$

where $W_i = Z_i D Z'_i + H_i$ with $D = G/\sigma^2$; this can be evaluated using Henderson's mixed model equations as

$$\sigma^{2[t+1]} = \frac{\displaystyle\sum_{i=1}^{I} y'_i H_i^{-1}(\rho^{[t]}) y_i - \hat{\theta}'\sum_{i=1}^{I} T'_i H_i^{-1}(\rho^{[t]}) y_i}{N - r(X)}. \qquad (21)$$
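A compact sketch of one such ECME cycle for a model without measurement error is given below (our own illustration; the $\Omega_i$ are assumed to be available from the mixed model equations, a linear random regression is assumed for $Z_i$, and the $\sigma^2$ update uses the GLS form of the second formula rather than the mixed-model-equation form (21)).

```python
import numpy as np

def ecme_cycle(times_list, X_list, y_list, Omega_list, G, sigma2, rho):
    """One ECME cycle: update rho by (20) with sigma2 fixed, then update
    sigma2 by direct maximization of the residual likelihood."""
    num = den = 0.0
    for times, Om in zip(times_list, Omega_list):
        t = np.asarray(times, dtype=float)
        d = np.abs(np.subtract.outer(t, t))
        H = rho ** d                                   # power correlation
        dH = d * rho ** (d - 1.0)
        np.fill_diagonal(dH, 0.0)
        Hinv = np.linalg.inv(H)
        num += np.trace(dH @ (Hinv - Hinv @ Om @ Hinv / sigma2))
        den += np.trace(dH @ Hinv @ dH @ Hinv)
    rho_new = rho - num / den                          # equation (20)

    # sigma2 update at the current rho and D = G / sigma2:
    # GLS residual quadratic form with W_i = Z_i D Z_i' + H_i.
    D = G / sigma2
    p = X_list[0].shape[1]
    XtWX, XtWy, W_list = np.zeros((p, p)), np.zeros(p), []
    for times, X, y in zip(times_list, X_list, y_list):
        t = np.asarray(times, dtype=float)
        Zi = np.vstack([np.ones_like(t), t]).T
        Wi = Zi @ D @ Zi.T + rho ** np.abs(np.subtract.outer(t, t))
        W_list.append(Wi)
        XtWX += X.T @ np.linalg.solve(Wi, X)
        XtWy += X.T @ np.linalg.solve(Wi, y)
    beta = np.linalg.solve(XtWX, XtWy)
    quad = sum((y - X @ beta) @ np.linalg.solve(Wi, y - X @ beta)
               for X, y, Wi in zip(X_list, y_list, W_list))
    N = sum(len(y) for y in y_list)
    rX = np.linalg.matrix_rank(np.vstack(X_list))
    return rho_new, quad / (N - rX)
```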

Finally, random coefficient models can also be accommodated to include heterogeneity of variances both at the temporal and environmental levels [8, 24, 25, 27], which enlarges the range of potentially useful models for longitudinal data analysis.


ACKNOWLEDGEMENTS

Thanks are expressed to Barbara Heude for her help in the numerical validation of the growth example using the SAS® proc mixed procedure, and to Dr I.M.S. White for his critical reading of the manuscript.

REFERENCES

[1] Dempster A.P., Laird N.M., Rubin D.B., Maximum likelihood from incomplete data via the EM algorithm, J. R. Statist. Soc. B 39 (1977) 1–38.

[2] Dennis J.E., Schnabel R.B., Numerical methods for unconstrained optimization and nonlinear equations, Prentice-Hall Inc., Englewood Cliffs, New Jersey, 1983.

[3] Diggle P.J., An approach to the analysis of repeated measurements, Biometrics 44 (1988) 959–971.

[4] Diggle P.J., Liang K.Y., Zeger S.L., Analysis of longitudinal data, Oxford Science Publications, Clarendon Press, Oxford, 1994.

[5] Foulley J.L., Quaas R.L., Heterogeneous variances in Gaussian linear mixed models, Genet. Sel. Evol. 27 (1995) 211–228.

[6] Foulley J.L., Im S., Gianola D., Hoeschele I., Empirical Bayes estimation of parameters for n polygenic binary traits, Genet. Sel. Evol. 19 (1987) 197–224.

[7] Foulley J.L., Gianola D., San Cristobal M., Im S., A method for assessing extent and sources of heterogeneity of residual variances in mixed linear models, J. Dairy Sci. 73 (1990) 1612–1624.

[8] Foulley J.L., Quaas R.L., Thaon d'Arnoldi C., A link function approach to heterogeneous variance components, Genet. Sel. Evol. 30 (1998) 27–43.

[9] Harville D.A., Recursive estimation using mixed linear models with autoregressive random effects, in: Variance Components and Animal Breeding, Proceedings of a Conference in Honor of C.R. Henderson, Cornell University, Ithaca, NY, 1979.

[10] Henderson C.R., Sire evaluation and genetic trends, in: Proceedings of the Animal Breeding and Genetics Symposium in Honor of Dr. J. Lush, American Society of Animal Science - American Dairy Science Association, Champaign, 1973, pp. 10–41.

[11] Henderson C.R., Applications of linear models in animal breeding, University of Guelph, Guelph, 1984.

[12] Lange K., A gradient algorithm locally equivalent to the EM algorithm, J. R. Stat. Soc. B 57 (1995) 425–437.

[13] Laird N.M., Ware J.H., Random effects models for longitudinal data, Biometrics 38 (1982) 963–974.

[14] Little R.J.A., Rubin D.B., Statistical analysis with missing data, J. Wiley and Sons, New York, 1987.

[15] Liu C., Rubin D.B., The ECME algorithm: a simple extension of EM and ECM with fast monotone convergence, Biometrika 81 (1994) 633–648.

[16] Liu C., Rubin D.B., Wu Y.N., Parameter expansion to accelerate EM: The PX-EM algorithm, Biometrika 85 (1998) 765–770.

[17] Longford N.T., Random coefficient models, Clarendon Press, Oxford, 1993.

[18] Meng X.L., Rubin D.B., Maximum likelihood estimation via the ECM algorithm: A general framework, Biometrika 80 (1993) 267–278.

[19] Meng X.L., van Dyk D.A., Fast EM-type implementations for mixed effects models, J. R. Stat. Soc. B 60 (1998) 559–578.

[20] Meyer K., Restricted maximum likelihood to estimate variance components for multivariate animal models with several random effects using a derivative-free algorithm, Genet. Sel. Evol. 21 (1989) 317–340.


[21] Patterson H.D., Thompson R., Recovery of inter-block information when block sizes are unequal, Biometrika 58 (1971) 545–554.

[22] Pothoff R.F., Roy S.N., A generalized multivariate analysis of variance model useful especially for growth curve problems, Biometrika 51 (1964) 313–326.

[23] Quaas R.L., REML notebook, Mimeo, Cornell University, Ithaca, New York, 1992.

[24] Robert C., Foulley J.L., Ducrocq V., Inference on homogeneity of intra-class correlations among environments using heteroskedastic models, Genet. Sel. Evol. 27 (1995) 51–65.

[25] Robert C., Foulley J.L., Ducrocq V., Estimation and testing of constant genetic and intra-class correlation coefficients among environments, Genet. Sel. Evol. 27 (1995) 125–134.

[26] Robert-Granié C., Ducrocq V., Foulley J.L., Heterogeneity of variance for type traits in the Montbéliarde cattle breed, Genet. Sel. Evol. 29 (1997) 545–570.

[27] San Cristobal M., Foulley J.L., Manfredi E., Inference about multiplicative heteroskedastic components of variance in a mixed linear Gaussian model with an application to beef cattle breeding, Genet. Sel. Evol. 25 (1993) 3–30.

[28] Schaeffer L.R., Dekkers J.C.M., Random regressions in animal models for test-day production in dairy cattle, in: Proc. 5th World Congr. Genet. Appl. Livest. Prod. 18 (1994) 443–446.

[29] Self S.G., Liang K.Y., Asymptotic properties of maximum likelihood estimators and likelihood ratio tests under nonstandard conditions, J. Am. Stat. Assoc. 82 (1987) 605–610.

[30] Stram D.O., Lee J.W., Variance components testing in the longitudinal mixed effects model, Biometrics 50 (1994) 1171–1177.

[31] Titterington D.M., Recursive parameter estimation using incomplete data, J. R. Stat. Soc. B 46 (1984) 257–267.

[32] Verbeke G., Molenberghs G., Linear mixed models in practice, Springer Verlag, New York, 1997.

[33] Wade K.M., Quaas R.L., Solutions to a system of equations involving a first-order autoregression process, J. Dairy Sci. 76 (1993) 3026–3032.

[34] Wade K.M., Quaas R.L., van Vleck L.D., Estimation of the parameters involved in a first-order autoregressive process for contemporary groups, J. Dairy Sci. 76 (1993) 3033–3040.