
LIDAUER ET AL. 2790

Journal of Dairy Science Vol. 82, No. 12, 1999

where y_ijklmnopq is the test-day milk yield; herd_i is the herd effect; ym_j is the test-year × test-month effect; s_kr are five regression coefficients of test-day milk yield on DIM, which describe the shape of the lactation curves within calving season class k; v = [1 c c² d d²]′, where c and c² are the linear and quadratic Legendre polynomials (9) for DIM, and d = ln(DIM); age_l is the calving age effect; dcc_m is the days carried calf effect; htm_n is the random effect of test-month within herd; a_p is a vector of three random regression coefficients describing the breeding value of animal p; φ_o(p) is a vector of the first three Legendre polynomials (9) for the DIM of observation o of animal p; p_p is a vector of the first three random regression coefficients for nonhereditary animal effects, describing the environmental covariances among measurements along the lactation of animal p; and e_ijklmnopq is the random residual. There were 24,321 herds; 106 test-year × test-month classes; 8 calving age classes; 5 days carried calf classes; and 1,933,641 test-month within herd levels. The fixed regression coefficients were estimated within three calving season classes (October to February, March to June, and July to September). Similarly to STM, RRM can be written in matrix notation as

y = Hh + Xf + Tc + Za + Wp + e

where h contains the herd effect; f includes all other fixed effects; c comprises the random test-month within herd effect; a = [a′1, . . . , a′n]′ and p = [p′1, . . . , p′m]′, where n is the number of animals and m is the number of cows with records; and e contains the random residuals. H, X, T, Z, and W are the incidence and covariate matrices. For each animal with observations, Z and W contain the appropriate Φ; for animal i with n observations, Φ_i = [φ_i1, . . . , φ_in]′. Note that H, X, T, and Z, as well as the corresponding vectors h, f, c, and a, have different meanings in RRM than in STM. It was assumed that

$$\operatorname{var}\begin{bmatrix} c \\ a \\ p \\ e \end{bmatrix} = \begin{bmatrix} I\sigma_c^2 & 0 & 0 & 0 \\ 0 & A \otimes K_a & 0 & 0 \\ 0 & 0 & I \otimes K_p & 0 \\ 0 & 0 & 0 & R \end{bmatrix}$$

where A is the numerator relationship matrix, K_a and K_p are the variance-covariance matrices of the additive genetic and nonhereditary animal effects, and R = Iσ_e². Then, the MME becomes

$$\begin{bmatrix}
H'R^{-1}H & H'R^{-1}X & H'R^{-1}T & H'R^{-1}Z & H'R^{-1}W \\
X'R^{-1}H & X'R^{-1}X & X'R^{-1}T & X'R^{-1}Z & X'R^{-1}W \\
T'R^{-1}H & T'R^{-1}X & T'R^{-1}T + I\sigma_c^{-2} & T'R^{-1}Z & T'R^{-1}W \\
Z'R^{-1}H & Z'R^{-1}X & Z'R^{-1}T & Z'R^{-1}Z + A^{-1}\otimes K_a^{-1} & Z'R^{-1}W \\
W'R^{-1}H & W'R^{-1}X & W'R^{-1}T & W'R^{-1}Z & W'R^{-1}W + I\otimes K_p^{-1}
\end{bmatrix}
\begin{bmatrix} h \\ f \\ c \\ a \\ p \end{bmatrix}
=
\begin{bmatrix} H'R^{-1}y \\ X'R^{-1}y \\ T'R^{-1}y \\ Z'R^{-1}y \\ W'R^{-1}y \end{bmatrix}.$$

The variance-covariance components (Table 1) for RRM were derived from multiple-trait REML variance components using the continuous covariance function approach described by Kirkpatrick et al. (9). Note that the additive genetic variance-covariance matrix for the first 305 test days can be obtained by multiplication: G_{305×305} ≅ Φ K_a Φ′, where Φ_{305×3} = [φ_1, . . . , φ_305]′. The heritability for a particular test day j is

$$h_j^2 = \frac{\phi_j' K_a \phi_j}{\phi_j' K_a \phi_j + \phi_j' K_p \phi_j + \sigma_e^2}$$

(Table 2). For all analyses, variance-covariance components and observations were scaled to units of residual standard deviation.

Algorithms

The MME for STM and RRM contained 1,294,694 and 7,280,477 equations, respectively. Because of the size of RRM (Table 3), the iteration on data technique (11,

16, 18) was employed in the algorithm when solving for the unknowns. The iteration on data technique avoids forming the MME: it allows the MME to be solved even though they cannot be stored in memory, but the cost is that of reading the data at each round of iteration.

Let C be the coefficient matrix of the MME, x the vector of unknowns, and b the right-hand side (i.e., Cx = b). Following Ducrocq (3), we rewrite the equation as

$$[M_0 + (C - M_0)]\,x = b \qquad [1]$$

then the functional iterative procedure for several iterative algorithms can be outlined as

$$x^{(k+1)} = M_0^{-1}\left(b - Cx^{(k)}\right) + x^{(k)}. \qquad [2]$$

SOLVING LARGE TEST-DAY MODELS 2791

Let L be the strictly lower triangular part of C, and D the diagonal of C. Then, if M0 = D, Equation [2] defines the Jacobi iteration; if M0 = L + D, Equation [2] gives the Gauss-Seidel iteration. Extending Jacobi to the second-order Jacobi method increases the rate of convergence (11). Following the notation of [2], second-order Jacobi can be written as

$$x^{(k+1)} = M_0^{-1}\left(b - Cx^{(k)}\right) + x^{(k)} + \gamma\left(x^{(k)} - x^{(k-1)}\right) \qquad [3]$$

where M0⁻¹ is D⁻¹, and γ is the relaxation factor.
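The updates in [2] and [3] are easy to prototype. The sketch below is an illustration only, not the authors' implementation, and runs on a small made-up diagonally dominant system rather than the MME; it applies second-order Jacobi with M0 = D, and setting γ = 0 recovers plain Jacobi.

```python
import numpy as np

def second_order_jacobi(C, b, gamma, iters=200):
    """Functional iteration [3] with M0 = D, the diagonal of C.
    gamma = 0 reduces the update to plain Jacobi, Equation [2]."""
    M0_inv = 1.0 / np.diag(C)                      # D^-1 kept as a vector
    x_prev = np.zeros_like(b)
    x = np.zeros_like(b)
    for _ in range(iters):
        x_new = M0_inv * (b - C @ x) + x + gamma * (x - x_prev)
        x_prev, x = x, x_new
    return x

# Small made-up diagonally dominant system (stands in for the MME).
n = 5
C = 0.3 * np.ones((n, n)) + 9.7 * np.eye(n)
b = np.arange(1.0, n + 1.0)
x = second_order_jacobi(C, b, gamma=0.5)
```

For this toy system the iteration converges to machine precision; for the MME, convergence and the best choice of γ depend on the spectrum of D⁻¹C, which is why two relaxation factors are compared later in the text.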

Gauss-Seidel second-order Jacobi algorithm. The Gauss-Seidel second-order Jacobi algorithm (GSSJ) was used as a reference in this study. The algorithm is a hybrid of the iterative methods given above: it solves the fixed herd effect (h) by Gauss-Seidel and all other effects by second-order Jacobi (8, 10, 11). The GSSJ algorithm was implemented to utilize the block structure in the MME. The diagonal block for the equations pertaining to f was treated as a single block. For STM, the matrix M0 in Equation [3] was

$$M_0 = \begin{bmatrix}
H'R^{-1}H & 0 & 0 & 0 \\
X'R^{-1}H & X'R^{-1}X & 0 & 0 \\
T'R^{-1}H & 0 & \operatorname{diag}_{s\times s}\{T'R^{-1}T + I\sigma_c^{-2}\} & 0 \\
Z'R^{-1}H & 0 & 0 & \operatorname{diag}_{t\times t}\{Z'R^{-1}Z + A^{-1}\sigma_a^{-2}\}
\end{bmatrix}$$

where s = t = 1; correspondingly, for RRM, M0 was

$$M_0 = \begin{bmatrix}
H'R^{-1}H & 0 & 0 & 0 & 0 \\
X'R^{-1}H & X'R^{-1}X & 0 & 0 & 0 \\
T'R^{-1}H & 0 & \operatorname{diag}_{s\times s}\{T'R^{-1}T + I\sigma_c^{-2}\} & 0 & 0 \\
Z'R^{-1}H & 0 & 0 & \operatorname{diag}_{t\times t}\{Z'R^{-1}Z + A^{-1}\otimes K_a^{-1}\} & 0 \\
W'R^{-1}H & 0 & 0 & 0 & \operatorname{diag}_{t\times t}\{W'R^{-1}W + I\otimes K_p^{-1}\}
\end{bmatrix}$$

where s = 1 and t = 3. For a particular animal i with observations, diag_{t×t}{Z′R⁻¹Z + A⁻¹ ⊗ K_a⁻¹}_i is the diagonal block Φ′_iR⁻¹Φ_i + a^{ii}K_a⁻¹, where a^{ii} is diagonal element i of A⁻¹, and diag_{t×t}{W′R⁻¹W + I ⊗ K_p⁻¹}_i is the diagonal block Φ′_iR⁻¹Φ_i + K_p⁻¹. For effects solved by second-order Jacobi, the corresponding diagonal blocks in M0 were inverted and stored on disk. The relaxation factor γ for STM was 0.9, as suggested by Stranden and Mantysaari (19). For RRM, two relaxation factors, γ = 0.8 and γ = 0.9, were investigated. For the herd solutions (h), the relaxation factor in [3] was zero, leading to Gauss-Seidel for this effect. The equations for the first level of the calving age × days open effect in STM, and for the first levels of the test-year × test-month, calving age, and days carried calf effects in RRM, were removed to ensure that X′R⁻¹X was of full rank.

Preconditioned conjugate gradient algorithm. Implementation of the PCG iterative method required storing four vectors (each with size equal to the number of unknowns in the MME) in random access memory: a vector of residuals (r), a search-direction vector (d), the solution vector (x), and a work vector (v). Each round of iteration required one pass through the data to calculate the product Cd. The preconditioner matrices M were block diagonal matrices formed from the M0 matrices in GSSJ but without the off-diagonal blocks (X′R⁻¹H, T′R⁻¹H, Z′R⁻¹H, and W′R⁻¹H). The inverse of the preconditioner matrix (M⁻¹) was stored on disk and read at each round of iteration. The starting values were x⁽⁰⁾ = 0, r⁽⁰⁾ = b − Cx⁽⁰⁾ = b, and d⁽⁰⁾ = M⁻¹r⁽⁰⁾ = M⁻¹b. At every iteration step (k + 1), the following calculations were performed:

$$\begin{aligned}
v &= Cd^{(k)},\\
\alpha &= \frac{r'^{(k)}M^{-1}r^{(k)}}{d'^{(k)}v},\\
x^{(k+1)} &= x^{(k)} + \alpha d^{(k)},\\
r^{(k+1)} &= r^{(k)} - \alpha v,\\
v &= M^{-1}r^{(k+1)},\\
\beta &= \frac{r'^{(k+1)}v}{r'^{(k)}M^{-1}r^{(k)}},\ \text{and}\\
d^{(k+1)} &= v + \beta d^{(k)} \qquad [4]
\end{aligned}$$

where α and β are step sizes in the PCG method. Restrictions were imposed on the same equations as for GSSJ: either on both C and M, or on M only.
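The recursions in [4] can be sketched directly. This is an illustration only: a small made-up symmetric positive definite system with a simple diagonal (Jacobi) preconditioner standing in for the block-diagonal M described above, not the iteration-on-data implementation used in the study.

```python
import numpy as np

def pcg(C, b, M_inv, iters=100, tol=1e-24):
    """PCG following the recursions in [4]; alpha and beta are the step sizes."""
    x = np.zeros_like(b)             # x(0) = 0
    r = b - C @ x                    # r(0) = b
    v = M_inv @ r
    d = v.copy()                     # d(0) = M^-1 b
    rMr = r @ v                      # r' M^-1 r
    for _ in range(iters):
        v = C @ d                    # the one "pass through the data"
        alpha = rMr / (d @ v)
        x = x + alpha * d
        r = r - alpha * v
        v = M_inv @ r                # reuse the work vector for M^-1 r
        rMr_new = r @ v
        if rMr_new < tol:            # the ingredients of c_r come for free
            break
        beta = rMr_new / rMr
        d = v + beta * d
        rMr = rMr_new
    return x

# Small made-up SPD system; a diagonal (Jacobi) preconditioner stands in
# for the block-diagonal M described in the text.
n = 6
C = 0.5 * np.ones((n, n)) + 6.0 * np.eye(n)
b = np.arange(1.0, n + 1.0)
M_inv = np.diag(1.0 / np.diag(C))
x = pcg(C, b, M_inv)
```

Note that only four working vectors (x, r, d, v) are carried between iterations, matching the memory layout described above.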


TABLE 1. Variance-covariance components for the test month within herd effect (σ_c²), the additive genetic (K_a) and nonhereditary animal (K_p) effects, each with three regression coefficients, and the residual effect (σ_e²), when estimating breeding values for milk yield with the random regression test-day model.¹

σ_c² = 0.451        σ_e² = 1.000

K_a:
             Linear    Quadratic   Cubic
  Linear     0.974     −0.014      −0.158
                       (−0.04)     (−0.67)
  Quadratic            0.121       0.024
                                   (0.28)
  Cubic                            0.058

K_p:
             Linear    Quadratic   Cubic
  Linear     1.808     0.036       −0.102
                       (0.04)      (−0.19)
  Quadratic            0.443       0.065
                                   (0.25)
  Cubic                            0.153

¹Variance-covariance components are scaled by the residual standard deviation. The correlations between regression coefficients are in parentheses.

Investigation of Convergence

For both algorithms, the stage of convergence was monitored after each round of iteration. Two convergence indicators were used: the relative difference between consecutive solutions,

$$c_d^{(n)} = \frac{\|x^{(n+1)} - x^{(n)}\|}{\|x^{(n+1)}\|},$$

and the relative average difference between the right-hand and left-hand sides (12),

$$c_r^{(n)} = \frac{\|b - Cx^{(n+1)}\|}{\|b\|},$$

where $\|y\| = \sum_i y_i^2$.
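As a sketch (illustration only, on a tiny made-up system), the two indicators can be computed as follows, with ‖y‖ taken to be the sum of squared elements as defined above:

```python
import numpy as np

def sq_norm(y):
    """The norm used above: the sum of squared elements."""
    return float(np.sum(y * y))

def c_d(x_new, x_old):
    """Relative difference between consecutive solutions."""
    return sq_norm(x_new - x_old) / sq_norm(x_new)

def c_r(C, x_new, b):
    """Relative average difference between right- and left-hand sides."""
    return sq_norm(b - C @ x_new) / sq_norm(b)

# Tiny made-up example: at the exact solution, c_r vanishes.
C = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.solve(C, b)
```

A practical consequence, discussed later, is that c_d compares only successive iterates and can flatter a slowly converging method, whereas c_r measures the actual residual of the system.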

To allow comparisons between the methods, we first investigated how small the values of cr and cd needed to be for the accuracy of the solutions to be sufficient for practical breeding work. Therefore, quasi-true EBV were obtained by performing PCG iterations until cr became smaller than 10⁻²⁶, which corresponded to a standard deviation of the values in r more than 10⁷ times smaller than the residual standard deviation. This required 301 and 681 rounds of iteration for STM and RRM, respectively. In the case of RRM, the breeding values for the 305-d lactation were calculated from the animal EBV coefficients a_i: EBV_i = Σ(Φ_{305×3}a_i). Intermediate EBV for various cr values were obtained from the corresponding solutions of the MME. The EBV were standardized before comparing them. In Finland, the published indices are formed by dividing the EBV by 1/10 of the standard deviation of active sires' EBV and rounding to the nearest full integer. Thus, a difference of one index point in the published index was equal to 43.3 kg of milk in EBV.

TABLE 2. Heritability (diagonal) and genetic correlations for daily milk yield at different DIM for the random regression test-day model.

                                DIM
DIM      5      55     105    155    205    255    305
5        0.14   0.92   0.81   0.72   0.65   0.57   0.43
55              0.19   0.97   0.93   0.88   0.79   0.61
105                    0.23   0.99   0.96   0.88   0.69
155                           0.25   0.99   0.94   0.77
205                                  0.24   0.98   0.84
255                                         0.22   0.94
305                                                0.17
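The daily heritabilities in Table 2 follow from the Table 1 components through the heritability formula given earlier. The sketch below is an illustration only: it assumes normalized Legendre polynomials with DIM standardized from [5, 305] onto [−1, 1] (a plausible reading of reference (9), not a restatement of it); under that assumption it reproduces the tabled diagonal to within about 0.01 for most DIM, with looser agreement at DIM 305.

```python
import numpy as np

# Ka, Kp, and sigma2_e from Table 1 (upper triangles mirrored to full
# symmetric matrices); everything is in units of residual variance.
Ka = np.array([[ 0.974, -0.014, -0.158],
               [-0.014,  0.121,  0.024],
               [-0.158,  0.024,  0.058]])
Kp = np.array([[ 1.808,  0.036, -0.102],
               [ 0.036,  0.443,  0.065],
               [-0.102,  0.065,  0.153]])
sigma2_e = 1.0

def phi(dim, dim_min=5.0, dim_max=305.0):
    """First three normalized Legendre polynomials at DIM standardized to
    [-1, 1]. The standardization range and the normalization are
    assumptions of this sketch, not a restatement of reference (9)."""
    t = 2.0 * (dim - dim_min) / (dim_max - dim_min) - 1.0
    return np.array([np.sqrt(0.5),
                     np.sqrt(1.5) * t,
                     np.sqrt(2.5) * 0.5 * (3.0 * t * t - 1.0)])

def h2(dim):
    """Daily heritability: phi'Ka phi / (phi'Ka phi + phi'Kp phi + sigma2_e)."""
    p = phi(dim)
    va = p @ Ka @ p          # additive genetic variance on this day
    vp = p @ Kp @ p          # nonhereditary animal variance
    return va / (va + vp + sigma2_e)

h2_curve = [h2(d) for d in (5, 55, 105, 155, 205, 255, 305)]
```

Under these assumptions, h2(5) ≈ 0.14 and h2(155) ≈ 0.25, matching the Table 2 diagonal.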

For each investigated cr value, the correlation between the intermediate and the quasi-true indices was calculated. Furthermore, the percentage of indices that differed from the quasi-true indices by one or more index points was recorded. Solutions were considered converged if less than 1% of the indices deviated, by at most one index point, from the quasi-true indices. This least significant change in the indices (LSC) was used as the convergence criterion. To avoid a reduction in selection intensity caused by inaccurate solutions of the MME, LSC was a minimum requirement. The convergence of the indices was analyzed in three animal groups: young cows, evaluated sires, and young sires. The group of young cows included all cows having their first lactation in 1995; evaluated sires consisted of bulls born in 1984 and 1985; and young sires comprised progeny-tested bulls born in 1991 and 1992. There were 82,109; 651; and 318 animals in the young cow, evaluated sire, and young sire groups, respectively.

TABLE 3. Number of equations, number of nonzero elements in the corresponding mixed model equations, and memory requirements for the preconditioned conjugate gradient (PCG) and Gauss-Seidel second-order Jacobi (GSSJ) methods when solving a single-trait animal model (STM) and a random regression test-day model (RRM). Memory requirements are given in megabytes.

                                           Iteration on data
                     Number of             Size of iteration    Random access
         Number of   nonzero               data files           memory
Model    equations   elements      C¹      PCG²     GSSJ        PCG     GSSJ
STM      1,294,697   17,271,472    111     47       41          50      35
RRM      7,280,477   403,117,019   2,376   392      582         237     130

¹Memory requirement for storing the nonzero elements of the lower triangle and diagonal of the coefficient matrix (C) of the mixed model equations as a linked list.
²Covariables, to account for the shape of the lactation curve, were stored in a table rather than read from the iteration files.

RESULTS AND DISCUSSION

For STM, PCG required 88 rounds of iteration to meet the convergence criterion LSC, whereas GSSJ needed 122 rounds (Table 4). This result was in agreement with the findings of Berger et al. (1), who reported 83 rounds of iteration with PCG versus 169 rounds for successive overrelaxation when solving a reduced animal model. For RRM, the difference between the methods was even more apparent. Convergence was reached after 149 rounds of iteration with PCG but not before 305 rounds with GSSJ (Table 5). For GSSJ, the rate of convergence decreased considerably at the later stages of iteration, whereas for PCG it remained almost unchanged (Figure 1). This finding reflected the weakness of Gauss-Seidel and second-order Jacobi related methods, which require many iterations to gain an additional increase in accuracy toward the end of the iteration process. If the relaxation factor is not optimal, this problem can be even more severe. For instance, satisfying the convergence criterion LSC required over 600 rounds of iteration when the relaxation factor for GSSJ was 0.8 (Table 5).

TABLE 4. Different convergence indicators for the preconditioned conjugate gradient (PCG) and Gauss-Seidel second-order Jacobi (GSSJ) methods when solving a single-trait animal model with 1,294,694 unknowns in the mixed model equations.

                                           Cows                    Evaluated sires         Young sires
Method    cr¹      cd²         Rounds      1 pt³  ≥2 pt⁴  rI,It⁵   1 pt   ≥2 pt  rI,It     1 pt   ≥2 pt  rI,It
PCG       10⁻⁷     6.5 × 10⁻⁴  12          26.7   62.6    0.9887   38.9   39.1   0.9761    15.4   75.5   0.9543
          10⁻⁸     9.4 × 10⁻⁵  20          54.5   2.7     0.9975   48.5   14.9   0.9930    52.2   24.6   0.9972
          10⁻⁹     4.0 × 10⁻⁵  39          9.8    ...     0.9994   19.4   0.1    0.9989    10.1   0.6    0.9994
          10⁻¹⁰    3.6 × 10⁻⁶  58          3.4    ...     0.9998   5.7    ...    0.9997    4.4    ...    0.9999
          10⁻¹¹    3.8 × 10⁻⁷  74          2.1    ...     0.9999   1.5    ...    0.9999    1.6    ...    0.9999
          10⁻¹²    5.9 × 10⁻⁸  88          0.3    ...     1.0      0.3    ...    1.0       0.9    ...    1.0
          10⁻¹³    2.4 × 10⁻⁹  105         <0.1   ...     1.0      ...    ...    1.0       0.3    ...    1.0
          10⁻¹⁴    4.8 × 10⁻¹⁰ 117         <0.1   ...     1.0      ...    ...    1.0       ...    ...    1.0
GSSJ      10⁻⁷     5.6 × 10⁻⁵  57          19.0   0.5     0.9988   24.7   0.8    0.9985    33.0   1.8    0.9983
γ⁶ = 0.9  10⁻⁸     4.6 × 10⁻⁶  81          6.3    ...     0.9996   6.5    ...    0.9997    8.2    ...    0.9997
          10⁻⁹     6.0 × 10⁻⁷  100         2.0    ...     0.9999   2.2    ...    0.9999    1.9    ...    0.9999
          10⁻¹⁰    5.3 × 10⁻⁸  122         0.5    ...     1.0      0.8    ...    1.0       0.6    ...    1.0
          10⁻¹¹    6.1 × 10⁻⁹  143         0.2    ...     1.0      0.3    ...    1.0       0.3    ...    1.0
          10⁻¹²    8.0 × 10⁻¹⁰ 162         <0.1   ...     1.0      0.2    ...    1.0       ...    ...    1.0
          10⁻¹³    6.1 × 10⁻¹¹ 186         <0.1   ...     1.0      ...    ...    1.0       ...    ...    1.0

¹Relative difference between right-hand and left-hand sides.
²Relative difference between consecutive solutions.
³Percentage of indices that deviate one index point from their quasi-true indices.
⁴Percentage of indices that deviate two or more index points from their quasi-true indices.
⁵Correlation between intermediate indices obtained by PCG or GSSJ and quasi-true indices obtained after the PCG iteration process reached a cr value below 10⁻²⁶.
⁶Relaxation factor for second-order Jacobi in GSSJ.

Carabano et al. (2) observed in all their analyses two distinct iteration phases for PCG: an unstable starting phase, in which solutions converged and diverged alternately, was followed by a phase with a very high rate of convergence. We observed the same behavior in PCG whenever we imposed restrictions on the fixed effect equations in both the coefficient matrix and the preconditioner matrix. Note that our implementation required restrictions in the X′R⁻¹X block of the preconditioner to enable matrix inversion. When constraints were applied only to the preconditioner matrix, a high rate of convergence was realized during the entire iteration process (Figure 1, Tables 4 and 5). With constraints in both matrices, 32 and 219 additional rounds of iteration were required to reach convergence for STM and RRM, respectively. This result was converse to the findings of Berger et al. (1), who reported a 50% reduction in the number of iteration rounds when restrictions were imposed on the fixed effect equations. Their result was based on a sire model in which the herd-year-season effect was absorbed, and the remaining 890 equations consisted of five fixed birth year groups and 885 sires. The restriction was performed by deleting the first birth year group. According to the theory demonstrated in the literature (4, 17), the PCG method guarantees convergence to the true solutions for symmetric, positive definite coefficient matrices. Without restrictions, the coefficient matrix was not of full rank and, hence, was only positive semidefinite. Because the rate of convergence clearly improved without restrictions, and because the numerical values of all estimable functions do not change (1), it seems beneficial to leave the coefficient matrix unrestricted when the PCG method is used.

TABLE 5. Different convergence indicators for the preconditioned conjugate gradient (PCG) and Gauss-Seidel second-order Jacobi (GSSJ) methods when solving a random regression test-day model with 7,280,477 unknowns in the mixed model equations.

                                                 Cows                    Evaluated sires         Young sires
Method    cr¹          cd²          Rounds       1 pt³  ≥2 pt⁴  rI,It⁵   1 pt   ≥2 pt  rI,It     1 pt   ≥2 pt  rI,It
PCG       10⁻⁹         2.2 × 10⁻⁵   38           56.0   8.3     0.9960   48.2   11.1   0.9942    45.3   28.3   0.9973
          10⁻¹⁰        9.5 × 10⁻⁶   49           56.2   1.8     0.9979   40.7   4.3    0.9971    59.7   12.2   0.9987
          10⁻¹¹        1.0 × 10⁻⁶   79           13.0   ...     0.9992   8.9    ...    0.9996    23.9   ...    0.9994
          10⁻¹²        7.4 × 10⁻⁷   83           9.9    ...     0.9994   7.1    ...    0.9996    18.9   ...    0.9995
          10⁻¹³        6.6 × 10⁻⁸   116          3.9    ...     0.9998   2.3    ...    0.9999    3.5    ...    0.9999
          10⁻¹⁴        7.5 × 10⁻⁹   149          0.9    ...     0.9999   0.3    ...    1.0       0.6    ...    1.0
          10⁻¹⁵        3.1 × 10⁻¹⁰  219          0.1    ...     1.0      0.2    ...    1.0       ...    ...    1.0
          10⁻¹⁶        2.4 × 10⁻¹⁰  225          0.1    ...     1.0      0.2    ...    1.0       ...    ...    1.0
          10⁻¹⁷        1.9 × 10⁻¹¹  259          <0.1   ...     1.0      ...    ...    1.0       ...    ...    1.0
GSSJ      7.0 × 10⁻¹²  6.6 × 10⁻⁹   300          4.1    ...     0.9997   3.7    ...    0.9998    6.0    ...    0.9998
γ⁶ = 0.8  4.5 × 10⁻¹⁴  6.3 × 10⁻¹¹  600          0.5    ...     1.0      0.3    ...    1.0       1.3    ...    1.0
          2.2 × 10⁻¹⁴  2.3 × 10⁻¹¹  700          0.3    ...     1.0      0.2    ...    1.0       0.9    ...    1.0
GSSJ      10⁻⁹         1.2 × 10⁻⁵   117          8.7    ...     0.9995   4.5    ...    0.9998    4.7    ...    0.9998
γ = 0.9   10⁻¹⁰        4.7 × 10⁻⁶   127          5.3    ...     0.9997   2.9    ...    0.9999    4.4    ...    0.9999
          10⁻¹¹        3.2 × 10⁻⁸   174          1.7    ...     0.9999   2.3    ...    0.9999    2.2    ...    0.9999
          10⁻¹²        1.7 × 10⁻⁹   213          1.0    ...     0.9999   1.1    ...    0.9999    1.9    ...    0.9999
          10⁻¹³        1.7 × 10⁻¹⁰  305          0.3    ...     1.0      0.3    ...    1.0       0.9    ...    1.0
          10⁻¹⁴        2.1 × 10⁻¹¹  421          0.1    ...     1.0      0.2    ...    1.0       0.3    ...    1.0
          10⁻¹⁵        2.3 × 10⁻¹²  552          <0.1   ...     1.0      ...    ...    1.0       ...    ...    1.0

¹Relative difference between right-hand and left-hand sides.
²Relative difference between consecutive solutions.
³Percentage of indices that deviate one index point from their quasi-true indices.
⁴Percentage of indices that deviate two or more index points from their quasi-true indices.
⁵Correlation between intermediate indices obtained by PCG or GSSJ and quasi-true indices obtained after the PCG iteration process reached a cr value below 10⁻²⁶.
⁶Relaxation factor for second-order Jacobi in GSSJ.

From a practical point of view, comparison of the algorithms with respect to execution time is more useful. For RRM, the PCG method required 59 CPU seconds per round of iteration, and convergence was reached after 2.5 CPU hours of computation. In contrast, the GSSJ algorithm needed 203 CPU seconds per round (without calculation of cr), and convergence was reached after 17.2 CPU hours. Both analyses were performed on a Cycle SPARCengine Ultra AXmp (300 MHz) workstation of the Finnish Agricultural Data Processing Centre. All data files were kept in random access memory during the iteration process to keep CPU time unaffected by input/output operations.

Figure 1. Relative average difference between left-hand and right-hand sides (cr) for the Gauss-Seidel second-order Jacobi method with two different relaxation factors, (a) γ = 0.8 and (b) γ = 0.9, and for (c) the preconditioned conjugate gradient method, when solving a random regression test-day model with 7,280,477 unknowns in the mixed model equations.

There were two reasons for the large difference in execution time between the algorithms. First, the implementation of PCG enabled more efficient program code than an algorithm employing Gauss-Seidel. Both algorithms required reading the data at each round of iteration, but additional computing time was required by GSSJ to store the contributions to the MME of each herd and to reread them to adjust the right-hand sides with the new Gauss-Seidel solutions for the herd effect. For the same reason, GSSJ does not allow the method of residual updating (18), but PCG does. Second, Stranden and Lidauer (18) introduced a new technique for iteration on data. Iteration on data requires a fixed number of calculations, say p (multiplications and additions), for each record to compute the record's contribution to the matrix product Cx in [3] or Cd in [4]. With the standard iteration on data technique, p follows a quadratic function of the number of effects in the statistical model. The PCG method allows a reordering of the multiplications such that p is a linear function of the number of effects in the statistical model (18). Consequently, for RRM, p was 573 for GSSJ but 66 for PCG. This reduction explained most of the difference in computing time per round of iteration. In fact, computation of the product Cd in [4] with the new iteration on data technique required fewer multiplications and additions than if the sparse matrix of coefficients (403,117,019 nonzero elements) had been used. A disadvantage of PCG, in comparison with the GSSJ method, was a greater demand for random access memory, which may limit its use in large applications. One way to circumvent this problem is to store the solution vector on disk and to make the work vector unnecessary by reading the data twice at each round of iteration; the cost of these modifications is increased computing time.
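The idea of iteration on data, accumulating the product Cx record by record so that C is never formed, can be sketched on a toy two-effect model. This is a made-up design for illustration; the records and covariables of the actual RRM are far more elaborate.

```python
import numpy as np

# Toy records: (herd index, animal index). Each record contributes a row
# of the design matrix Z with a 1 in its herd column and animal column.
records = [(0, 0), (0, 1), (1, 1), (1, 2), (1, 0)]
n_herds, n_animals = 2, 3
n_eq = n_herds + n_animals

def C_times_x(x):
    """Accumulate v = (Z'Z) x one record at a time; the coefficient
    matrix Z'Z is never stored."""
    v = np.zeros(n_eq)
    for herd, animal in records:
        idx = (herd, n_herds + animal)   # nonzero positions of this record's row
        zx = x[idx[0]] + x[idx[1]]       # z'x for this record (coefficients are 1)
        for i in idx:
            v[i] += zx                   # add z * (z'x) into the running product
    return v

# Check against the explicitly formed design matrix.
Z = np.zeros((len(records), n_eq))
for rec, (herd, animal) in enumerate(records):
    Z[rec, herd] = 1.0
    Z[rec, n_herds + animal] = 1.0
x = np.arange(1.0, n_eq + 1.0)
v = C_times_x(x)
```

Here the per-record cost p is fixed (a handful of additions), which is the property that makes one pass through the data per iteration round affordable.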

The most common convergence indicator in animal breeding applications is cd because it is easy to obtain. However, it has been demonstrated (12) that evaluating convergence from cd alone may be inappropriate, because the real accuracy of the solutions can be much lower than cd indicates. Our results supported this conclusion. When cd was applied to solutions obtained by GSSJ, the indicator suggested that the accuracy of the solutions from round 300, with γ = 0.8, was higher than that from round 174, with γ = 0.9 (Table 5). However, the percentage of indices with one point deviation from the quasi-true indices and the correlation between intermediate and quasi-true indices showed the opposite (Table 5). This finding was also supported by the approximated accuracy of the solutions derived as in Misztal et al. (12), which was 0.0075 for solutions from round 300 with γ = 0.8 versus 0.0017 for solutions from round 174 with γ = 0.9. The convergence indicator cr was regarded as more reliable (20), but for the second-order Jacobi and Gauss-Seidel methods, calculation of cr is expensive. In PCG, all components of cr are readily available.

Estimation of breeding values with RRM required greater accuracy in the solutions of the MME than with STM. This was because a breeding value of an animal in RRM was a function of breeding value coefficients (a_p) rather than a single solution from the MME. For both models, a high correlation of at least 0.9999 was observed between the quasi-true indices and the indices that fulfilled the convergence criterion LSC. The correlations between the converged indices from STM and RRM were 0.967, 0.990, and 0.988 for young cows, evaluated sires, and young sires, respectively. However, the percentages of indices differing by two or more index points between the two models were 49.5, 28.4, and 44.6 for young cows, evaluated sires, and young sires, respectively. This finding indicated a significant change in the ranking of the animals when estimating EBV with STM versus RRM.


CONCLUSIONS

The PCG method seemed to be an attractive alternative for solving large MME. Solving the MME of RRM was accomplished in only 14% of the computation time needed for GSSJ. The implementation of PCG was straightforward and required no tuning parameters (e.g., relaxation factors), which gave it another advantage over second-order Jacobi related methods. We observed that PCG performed better when no restrictions were imposed on the coefficient matrix; thus, convergence was not impaired by the coefficient matrix being positive semidefinite.

The estimation of breeding values with RRM required greater accuracy in the solutions of the MME than with STM. This finding favored PCG in particular, for which an additional increase in the accuracy of the solutions was computationally less costly than for GSSJ because of the high rate of convergence during the later stages of iteration.

REFERENCES

1 Berger, P. J., G. R. Luecke, and A. Hoekstra. 1989. Iterative algorithms for solving mixed model equations. J. Dairy Sci. 72:514–522.

2 Carabano, M. J., S. Najari, and J. J. Jurado. 1992. Solving iteratively the M.M.E. Genetic and numerical criteria. Pages 258–259 in Book Abstr. 43rd Annu. Mtg. Eur. Assoc. Anim. Prod., Madrid, Spain. Wageningen Pers, Wageningen, The Netherlands.

3 Ducrocq, V. 1992. Solving animal model equations through an approximate incomplete Cholesky decomposition. Genet. Sel. Evol. 24:193–209.

4 Hageman, L. A., and D. M. Young. 1992. Applied Iterative Methods. Acad. Press, Inc., San Diego, CA.

5 Hestenes, M. R., and E. L. Stiefel. 1952. Methods of conjugate gradients for solving linear systems. Natl. Bur. Std. J. Res. 49:409–439.

6 Jamrozik, J., L. R. Schaeffer, and J.C.M. Dekkers. 1997. Genetic evaluation of dairy cattle using test day yields and random regression model. J. Dairy Sci. 80:1217–1226.

7 Jamrozik, J., L. R. Schaeffer, Z. Liu, and G. Jansen. 1997. Multiple trait random regression test day model for production traits. Pages 43–47 in Bull. no. 16. INTERBULL Annu. Mtg., Vienna, Austria. Int. Bull Eval. Serv., Uppsala, Sweden.

8 Jensen, J., and P. Madsen. 1994. DMU: A package for the analysis of multivariate mixed models. Proc. 5th World Congr. Genet. Appl. Livest. Prod., Guelph, ON, Canada XXII:45–46.

9 Kirkpatrick, M., W. G. Hill, and R. Thompson. 1994. Estimating the covariance structure of traits during growth and aging, illustrated with lactation in dairy cattle. Genet. Res. Camb. 64:57–69.

10 Lidauer, M., E. A. Mantysaari, I. Stranden, A. Kettunen, and J. Poso. 1998. DMUIOD: A multitrait BLUP program suitable for random regression testday models. Proc. 6th World Congr. Genet. Appl. Livest. Prod., Armidale, NSW, Australia XXVII:463–464.

11 Misztal, I., and D. Gianola. 1987. Indirect solution of mixed model equations. J. Dairy Sci. 70:716–723.

12 Misztal, I., D. Gianola, and L. R. Schaeffer. 1987. Extrapolation and convergence criteria with Jacobi and Gauss-Seidel iteration in animal models. J. Dairy Sci. 70:2577–2584.

13 Misztal, I., L. Varona, M. Culbertson, N. Gengler, J. K. Bertrand, J. Mabry, T. J. Lawlor, and C. P. Van Tassell. 1998. Studies of the values of incorporating effect of dominance in genetic evaluations of dairy cattle, beef cattle, and swine. Proc. 6th World Congr. Genet. Appl. Livest. Prod., Armidale, NSW, Australia XXV:513–516.

14 Poso, J., E. A. Mantysaari, M. Lidauer, I. Stranden, and A. Kettunen. 1998. Empirical bias in the pedigree indices of heifers evaluated using test day models. Proc. 6th World Congr. Genet. Appl. Livest. Prod., Armidale, NSW, Australia XXIII:339–342.

15 Reents, R., J.C.M. Dekkers, and L. R. Schaeffer. 1995. Genetic evaluation for somatic cell score with a test day model for multiple lactations. J. Dairy Sci. 78:2847–2870.

16 Schaeffer, L. R., and B. W. Kennedy. 1986. Computing strategies for solving mixed model equations. J. Dairy Sci. 69:575–579.

17 Shewchuk, J. R. 1994. An introduction to the conjugate gradient method without the agonizing pain. School of Computer Sci., Carnegie Mellon Univ., Pittsburgh, PA.

18 Stranden, I., and M. Lidauer. 1999. Solving large mixed linear models using preconditioned conjugate gradient iteration. J. Dairy Sci. 82:2779–2787.

19 Stranden, I., and E. A. Mantysaari. 1992. Animal model evaluation in Finland: experience with two algorithms. J. Dairy Sci. 75:2017–2022.

20 Van Vleck, L. D., and D. J. Dwyer. 1985. Successive overrelaxation, block iteration, and method of conjugate gradients for solving equations for multiple trait evaluation of sires. J. Dairy Sci. 68:760–767.

21 Wiggans, G. R., and M. E. Goddard. 1997. A computationally feasible test day model for genetic evaluation of yield traits in the United States. J. Dairy Sci. 80:1795–1800.

22 Wiggans, G. R., I. Misztal, and L. D. Van Vleck. 1988. Animal model evaluation of Ayrshire milk yield with all lactations, herd-sire interaction, and groups based on unknown parents. J. Dairy Sci. 71:1319–1329.