
EVALUATING ALTERNATIVE LINEAR PROGRAMMING MODELS TO SOLVE THE TWO-GROUP DISCRIMINANT PROBLEM

Ned Freed School of Business, University of Portland, Portland, OR 97225

Fred Glover Graduate School of Business, University of Colorado, Boulder, CO 80303

ABSTRACT

The two-group discriminant problem has applications in many areas, for example, differentiating between good credit risks and poor ones, between promising new firms and those likely to fail, or between patients with strong prospects for recovery and those highly at risk. To expand our tools for dealing with such problems, we propose a class of nonparametric discriminant procedures based on linear programming (LP). Although these procedures have attracted considerable attention recently, only a limited number of computational studies have examined the relative merits of alternative formulations. In this paper we provide a detailed study of three contrasting formulations for the two-group problem. The experimental design provides a variety of test conditions involving both normal and nonnormal populations. Our results establish the LP model which seeks to minimize the sum of deviations beyond the two-group boundary as a promising alternative to more conventional linear discriminant techniques.

Subject Areas: Goal Programming, Linear Programming, Linear Statistical Models, and Statistical Techniques.

INTRODUCTION

Over the years, a number of statistical procedures have evolved with the capacity to assign individuals to groups systematically (see, e.g., [2] [8] [9]). The two-group discriminant problem, in which an observation (or set of observations) must be placed appropriately in one of two candidate groups, has received particular attention. The applications for an effective two-group discriminant procedure are virtually limitless. Differentiating good credit risks from poor ones, promising new firms from those likely to fail, or job applicants with good chances for success from those with less potential illustrate but a few of the widespread and varied possibilities. Moreover, discriminant analysis has a natural (but often overlooked) application to pattern recognition in artificial intelligence, an area whose links to the decision sciences are becoming increasingly apparent.

Discussions of linear programming-based alternatives to standard discriminant methods have appeared in a number of publications [1] [3] [4] [5] [7] in the past few years. Since we first presented the basic model form [3], the relative merits of competing procedures have been argued vigorously [5] [7]. Bajgier and Hill [1] reported significant experimental results confirming the potential of the linear programming (LP) approach for cases in which the assumption of population normality is upheld. The goal of our paper is to augment and extend previous findings through additional experimentation under controlled conditions involving both normal and nonnormal populations.



We have undertaken to determine the capacity of the LP procedure to identify an effective linear discriminator under a wide variety of testing conditions. Although the basic experiment is by no means an exhaustive test of the procedure's potential, we set for ourselves the following goals:

1. to establish whether the general LP approach is a viable procedure capable of producing acceptable linear discriminators under a broad range of conditions,

2. to provide a context for comparing the performance of simple variants of the LP formulation,

3. to isolate those formulation variants that fail to produce adequate discriminant capacity and the situations in which this occurs,

4. to suggest special-case modifications which may correct apparent deficiencies,

5. to promote a reasonably comprehensive comparison of the LP approach with standard discriminant procedures (the Fisher model), and

6. to identify which more general type(s) of LP formulation(s) holds the greatest promise for future elaboration.


EXPERIMENTAL DESIGN

To insure that a broad range of cases was considered in our experimental design, we generated 48 test samples of size 100 from 24 distinct bivariate normal population pairs and an additional 12 samples of size 100 from a set of decidedly skewed populations. Cases were limited to bivariate populations to facilitate the interpretation of experimental results through graphical analysis. Such a restriction also provided a substantial degree of control in generating populations with specific types of distinctive characteristics. It should be noted, however, that our ability to generalize results unconditionally to the n-dimensional case may have been correspondingly limited.

The immediate task in each case was to produce a boundary that would effectively segregate members of each of the two representative sample groups. The boundary then was tested for its ability to discriminate members of the respective parent populations (each of size 1,000). Results produced by conventional discriminant procedures (i.e., Fisher's linear discriminant function) were compared to the performance of the LP formulation.

We tested three variants of the general LP formulation. The first and simplest, hereafter referred to as the MMD (minimize maximum deviation) model, can be summarized as follows:

Minimize α

subject to: Ai x ≤ b + α   for all Ai in group 1

            Ai x ≥ b − α   for all Ai in group 2

where x = (x1, x2), Ai = (ai1, ai2), and the variables satisfy:

x1, x2, and b unrestricted in sign; α ≥ 0.
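As an illustrative sketch (not part of the original study), the MMD model can be set up with scipy.optimize.linprog as follows; the function name solve_mmd and the variable ordering are arbitrary choices. The normalization b + x1 + x2 = N with N = 10, which the paper applies to all three formulations (described below), is included here because without some normalization the LP admits the trivial all-zero solution.

```python
import numpy as np
from scipy.optimize import linprog

def solve_mmd(group1, group2, N=10.0):
    """MMD sketch: minimize alpha subject to Ai.x <= b + alpha (group 1)
    and Ai.x >= b - alpha (group 2).  Variable order: [x1, x2, b, alpha]."""
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    c = np.array([0.0, 0.0, 0.0, 1.0])               # minimize alpha
    # group 1 rows:  a1*x1 + a2*x2 - b - alpha <= 0
    rows1 = np.hstack([g1, -np.ones((len(g1), 2))])
    # group 2 rows: -a1*x1 - a2*x2 + b - alpha <= 0
    rows2 = np.hstack([-g2, np.ones((len(g2), 1)), -np.ones((len(g2), 1))])
    A_ub = np.vstack([rows1, rows2])
    b_ub = np.zeros(len(g1) + len(g2))
    A_eq = np.array([[1.0, 1.0, 1.0, 0.0]])           # x1 + x2 + b = N
    b_eq = np.array([N])
    bounds = [(None, None)] * 3 + [(0, None)]         # x, b free; alpha >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    x1, x2, b, alpha = res.x
    return np.array([x1, x2]), b, alpha
```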

Although the simpler form was not expected to produce solutions consistently competitive with those generated by the other forms, we wanted to determine whether this formulation would provide generally satisfactory discriminators. Previous experiments [1] suggested that the simple form works well in certain situations. Since the number of variables required is relatively small, the success of such a formulation might qualify it as a useful alternative in the solution of very large problems.

The second LP formulation, hereafter referred to as the MSID (minimize the sum of interior distances) model, seeks to:

Minimize Hα − Σ di

subject to: Ai x + di ≤ b + α   for all Ai in group 1

            Ai x − di ≥ b − α   for all Ai in group 2

where the variables satisfy:

x and b unrestricted in sign; α ≥ 0 and di ≥ 0 for all i.

The objective is essentially twofold: find the discriminant function (x) and the boundary (b) that will minimize group overlap (α) and maximize the total (interior) distance (di) of group members from the designated boundary hyperplane (Ax = b).

A parametric procedure was used to produce the set of ranges on the values of H which would generate the full family of distinct MSID solutions. Thus, a given two-group problem might produce 5 or 6 solutions (i.e., discriminant functions, x), each corresponding to a specific range of values assigned to H.
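For concreteness, a corresponding sketch of the MSID model (again illustrative, not the study's code) follows, with variables ordered [x1, x2, b, alpha, d1, ..., dn] and the same b + x1 + x2 = N normalization. The default weight H used here is an assumption on our part; as noted above, the authors vary H parametrically, and a weight that is too small can leave the LP unbounded.

```python
import numpy as np
from scipy.optimize import linprog

def solve_msid(group1, group2, H=None, N=10.0):
    """MSID sketch: minimize H*alpha - sum(d_i) subject to
    Ai.x + d_i <= b + alpha (group 1) and Ai.x - d_i >= b - alpha (group 2)."""
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    n1, n2 = len(g1), len(g2)
    n = n1 + n2
    if H is None:
        H = 10.0 * n                         # illustrative default; authors sweep H
    c = np.concatenate([[0.0, 0.0, 0.0, H], -np.ones(n)])
    A_ub = np.zeros((n, 4 + n))
    A_ub[:n1, 0:2], A_ub[:n1, 2] = g1, -1.0  #  a.x - b - alpha + d_i <= 0
    A_ub[n1:, 0:2], A_ub[n1:, 2] = -g2, 1.0  # -a.x + b - alpha + d_i <= 0
    A_ub[:, 3] = -1.0
    A_ub[np.arange(n), 4 + np.arange(n)] = 1.0
    A_eq = np.zeros((1, 4 + n))
    A_eq[0, :3] = 1.0                        # x1 + x2 + b = N
    bounds = [(None, None)] * 3 + [(0, None)] * (1 + n)
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[N],
                  bounds=bounds, method="highs")
    if res.status != 0:                      # unbounded or infeasible
        raise ValueError("LP did not solve; try a larger weight H")
    return res.x[:2], res.x[2], res.x[3], res.x[4:]   # x, b, alpha, d
```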

The final LP variant tested, hereafter referred to as the MSD (minimize sum of deviations) model, has the form:

Minimize Σ αi

subject to: Ai x ≤ b + αi   for all Ai in group 1

            Ai x ≥ b − αi   for all Ai in group 2

where the variables satisfy:

x and b unrestricted in sign; αi ≥ 0 for all i.


The objective here focuses on the minimization of total (vs. maximal) group overlap. The objective has value zero when the paired groups can be separated by a hyperplane. This formulation has the advantage (over the preceding formulation) of not requiring parametric adjustment.

In all three formulations we used the normalization b + Σ xj = N and chose N = 10. N = 1 or 100 would work as well, since N serves only to scale the solution.
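A sketch of the MSD model in the same style follows (illustrative code, not the authors'); it includes the b + x1 + x2 = N normalization just described, with N = 10 by default. Because the objective simply sums the nonnegative deviations, no parametric weight is needed.

```python
import numpy as np
from scipy.optimize import linprog

def solve_msd(group1, group2, N=10.0):
    """MSD sketch: minimize sum(a_i) subject to Ai.x <= b + a_i (group 1)
    and Ai.x >= b - a_i (group 2).  Variable order: [x1, x2, b, a_1..a_n]."""
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    n1, n2 = len(g1), len(g2)
    n = n1 + n2
    c = np.concatenate([[0.0, 0.0, 0.0], np.ones(n)])   # minimize sum of a_i
    A_ub = np.zeros((n, 3 + n))
    A_ub[:n1, 0:2], A_ub[:n1, 2] = g1, -1.0      #  a.x - b - a_i <= 0
    A_ub[n1:, 0:2], A_ub[n1:, 2] = -g2, 1.0      # -a.x + b - a_i <= 0
    A_ub[np.arange(n), 3 + np.arange(n)] = -1.0
    A_eq = np.array([[1.0, 1.0, 1.0] + [0.0] * n])      # x1 + x2 + b = N
    bounds = [(None, None)] * 3 + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[N],
                  bounds=bounds, method="highs")
    return res.x[:2], res.x[2], res.x[3:]        # x, b, deviations a_i
```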

It should be stressed that these three formulations are the most rudimentary forms of the general formulation categories indicated in [4]. We restricted attention to these forms in order to establish a clear and uncomplicated basis for comparison.

A principal objective of this study was to compare the performance of our LP models with that of the more conventional statistical procedure embodied in Fisher's linear discriminant approach. Consequently, for each sample-group pair in the experiment, we ran the SPSS discriminant program to produce the Fisher-based solution to the two-group classification problem. We used a fairly simple form of the Fisher discriminant procedure in which prior probabilities of group membership were set equal, and the relative costs of misclassification were unspecified (and so assumed equal).
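For readers without access to SPSS, the following sketch reproduces the textbook two-group Fisher rule under the stated assumptions (equal priors, equal misclassification costs, pooled covariance); it is a reconstruction of the comparison classifier, not the SPSS routine actually run in the study.

```python
import numpy as np

def fisher_rule(group1, group2):
    """Return Fisher discriminant weights w and a cutoff such that a point z
    is assigned to group 1 when w @ z >= cutoff (equal priors and costs)."""
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    m1, m2 = g1.mean(axis=0), g2.mean(axis=0)
    # pooled within-group covariance matrix
    S1, S2 = np.cov(g1, rowvar=False), np.cov(g2, rowvar=False)
    Sp = ((len(g1) - 1) * S1 + (len(g2) - 1) * S2) / (len(g1) + len(g2) - 2)
    w = np.linalg.solve(Sp, m1 - m2)        # discriminant weights
    cutoff = w @ (m1 + m2) / 2.0            # midpoint of the projected means
    return w, cutoff
```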

Both the LP and the SPSS (Fisher) approaches can be modified to reflect a user's judgment concerning particular, possibly unique, problem characteristics (e.g., to assign relatively high costs to misclassifying group 1 points as group 2 members). However, attempting to introduce the full range of fine-tuning possibilities for each procedure and for every sample case was judged excessively cumbersome in this case and potentially confounding to any interpretation of test results. Accordingly, each of the approaches was implemented in the most straightforward manner, free of additional intervention.

GENERATING EXPERIMENTAL GROUPS (BIVARIATE NORMAL)

Three basic population characteristics were used to differentiate test cases: (1) the relative orientation of the paired normal populations, (2) the degree of separation between the paired populations, and (3) the similarity of respective variance-covariance matrices. The three orientations of Figure 1 were selected to provide distinctly contrasting possibilities. To insure the possibility of significant group segregation by a linear discriminator, the two degrees of separation described in Figure 2 were established. (We might have used Hotelling's T² to establish the degree of group separation but chose to use the more graphic explanation of Figure 2.) Finally, to test the sensitivity of procedures to variance-covariance differences, we examined two sets of paired normal distributions: (1) those with equal variance-covariance matrices, corresponding to the standard assumption of classical techniques; and (2) those with distinctly unequal variance-covariance matrices (here, making the variance-covariance of one population 9 times that of the other).

FIGURE 1 Orientation Schemes for the Normal-Groups Experiment

To carry out these tests, we developed a program for randomly generating multivariate normal distributions with any given centroid (μ) and variance-covariance matrix (Σ). The routine was used to produce sets of 50 sample points from each of the paired bivariate populations. A sample size of 100 (50 plus 50) was selected as sufficiently large to represent a practically acceptable discriminant problem while at the same time providing a manageable data set that could be analyzed readily. A significantly larger sample size would complicate the task of dissection and evaluation.
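The generation step can be sketched as follows with NumPy; the centroids and covariance matrices shown are purely illustrative stand-ins (within-group correlation near .5, one variance-covariance matrix 9 times the other), not the parameter values used in the study.

```python
import numpy as np

rng = np.random.default_rng(17)   # arbitrary seed for reproducibility

def draw_group(centroid, cov, size=50):
    """Draw one bivariate normal sample with the given centroid and
    variance-covariance matrix."""
    return rng.multivariate_normal(mean=centroid, cov=cov, size=size)

# illustrative design cell: correlation about .5, with group 1's
# variance-covariance matrix 9 times that of group 2
cov2 = np.array([[1.0, 0.5],
                 [0.5, 1.0]])
group1 = draw_group(centroid=[0.0, 0.0], cov=9 * cov2)
group2 = draw_group(centroid=[2.0, 2.0], cov=cov2)
```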

We constructed the paired variance-covariance matrices Σ1 and Σ2 to satisfy the various test conditions previously described, using the experimental design detailed in Table 1.

Once the full set of paired samples had been generated, each pair in turn was used as input for the three solution procedures (the SPSS discriminant analysis and the full and simple models of the LP formulations).



FIGURE 2 Degrees of Group Separation for the Normal-Groups Experiment

TABLE 1 Experimental Design (Normal Populations)

                               Σ1 = Σ2                     Σ1 = 9Σ2
                          Orientation                 Orientation
                          1       2       3           1       2       3
1st degree separation     A1      A3      A5          C1      C3      C5
2nd degree separation     A2      A4      A6          C2      C4      C6
1st degree separation     B1      B3      B5          D1      D3      D5
2nd degree separation     B2      B4      B6          D2      D4      D6

Note: Two samples (a, b) of size 50 from each of two groups were selected for each cell. For cases designated A and C, within-group correlation between the two variables was fixed at approximately .5; for cases B and D, correlation was set at .7.

The capacity of each discriminant solution to assign sample group members properly was identified by a simple count of misclassified sample points. Since the discriminant problem has been cast here as a sampling exercise, any test of performance ultimately must relate to the capacity of a candidate discriminant function to establish proper group membership for all members of those populations from which the representative sample-group pairs were selected. Consequently, 1,000-member multivariate normal populations possessing the requisite set of general characteristics were generated, each serving as the parent population for a corresponding sample group. The power of a discriminator to differentiate populations subsequently was measured with a simple count of misclassified population points.
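The evaluation step amounts to a simple count, sketched below under the convention used in the LP constraints (group 1 lies on the Ai x ≤ b side of the boundary); the function name and array shapes are our own.

```python
import numpy as np

def misclassification_rate(x, b, pop1, pop2):
    """Proportion of the two 1,000-member parent populations misclassified
    by the linear rule: assign to group 1 when A @ x <= b, else group 2."""
    s1 = np.asarray(pop1, float) @ x     # group 1 points should score <= b
    s2 = np.asarray(pop2, float) @ x     # group 2 points should score >= b
    wrong = np.sum(s1 > b) + np.sum(s2 < b)
    return wrong / (len(pop1) + len(pop2))
```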

It should be noted that in examining the MSID LP approach, each member of the family of solutions produced by the procedure was evaluated for its ability to differentiate sample groups, but only the solution which performed most effectively at this first level of the test is reported. Further, only this "best" vector was evaluated at the second level; that is, only this vector was evaluated for its ability to discriminate members of the paired populations.

EXPERIMENTAL RESULTS

Summary results are presented in Tables 2 and 3. Analysis of these results shows that the MMD model typically performed much worse than the classical (Fisher) discriminant procedure embodied in SPSS, while the MSID model proved more competitive.

TABLE 2 Misclassification Summary (Normal Populations) (Equal Var/Cov)

Proportion of Total Population Values Misclassified

Case    SPSS    MMD LP Model    MSID LP Model    MSD LP Model
A 1a    .13     .14             .14              .12
  1b    .12     .19             .13              .12
  2a    .06     .17             .13              .12
  2b    .05     .05             .05              .05
  3a    .12     .29             .13              .12
  3b    .12     .23             .12              .16
  4a    .06     .11             .08              .06
  4b    .07     .07             .06              .06
  5a    .21     .50             .22              .27
  5b    .22     .44             .22              .26
  6a    .13     .21             .16              .09
  6b    .14     .46             .17              .11
B 1a    .11     .12             .13              .11
  1b    .10     .11             .10              .11
  2a    .05     .05             .05              .05
  2b    .04     .05             .05              .05
  3a    .12     .30             .16              .14
  3b    .13     .13             .13              .15
  4a    .05     .05             .05              .07
  4b    .05     .08             .05              .08
  5a    .18     .21             .21              .16
  5b    .16     .46             .20              .16
  6a    .07     .16             .09              .07
  6b    .06     .12             .06              .10


TABLE 3 Misclassification Summary (Normal Populations) (Unequal Var/Cov)

Proportion of Total Population Values Misclassified

Case    SPSS    MMD LP Model    MSID LP Model    MSD LP Model
C 1a    .12     .35             .36              .18
  1b    .12     .17             .15              .12
  2a    .07     .04             .05              .05
  2b    .06     .28             .04              .04
  3a    .15     .22             .17              .12
  3b    .12     .18             .14              .12
  4a    .07     .05             .05              .07
  4b    .07     .11             .11              .06
  5a    .16     .16             .16              .16
  5b    .15     .53             .16              .16
  6a    .08     .09             .09              .08
  6b    .06     .41             .07              .11
D 1a    .12     .17             .16              .09
  1b    .10     .20             .20              .13
  2a    .05     .08             .07              .08
  2b    .06     .10             .08              .06
  3a    .12     .18             .14              .11
  3b    .12     .15             .25              .11
  4a    .08     .22             .08              .05
  4b    .08     .21             .06              .07
  5a    .12     .26             .12              .12
  5b    .12     .51             .14              .13
  6a    .06     .12             .05              .05
  6b    .08     .05             .05              .05

The MSD model, however, emerges as the strongest alternative to the classical approach. Let us discuss these conclusions in greater detail.

1. MMD model solutions, as expected, proved only marginally successful in predicting group membership. In only 19 of the 48 test cases did the simple formulation provide a solution vector that was somewhat competitive, in terms of its ability to classify population group members, with the SPSS discriminator. Our results suggest that the hope of using this model because of its simplicity is misplaced. In 20 of the cases examined, the simple model produced a solution which was drastically inferior to those produced by the other approaches. Overall, the simple minimization of maximum group overlap (the primary objective of the MMD model formulation) appeared erratic at best in its ability to produce an acceptable discriminant solution.

2. The MSID formulation appeared more promising. In 20 of 48 cases examined, the classification ability of this LP discriminator equaled or exceeded that of the SPSS-generated function, showing a clear superiority in 7 of those 20 cases. In 16 additional cases, misclassifications produced by the LP solution were within 2 percentage points of the SPSS result. In only 3 cases did the LP formulation produce a solution clearly inferior to that of the conventional discriminant technique. (That is, only in cases C1a, D1b, and D3b did LP misclassifications of population members exceed corresponding SPSS-associated misclassifications by more than 5 percentage points.)

3. The MSD model proved most effective in correctly classifying population group members. In 31 of the 48 cases examined, this LP form produced a discriminant solution which matched or exceeded the classification ability of the corresponding SPSS solution, showing clear superiority in 16 of these cases. In only three cases (A2a, A5a, and C1a) was the MSD discriminator decidedly inferior to the SPSS solution. Further, in comparison to the MSID model, the MSD formulation clearly was superior in 25 of the 48 cases. In only 12 cases did the MSID form prove to be more effective than MSD in classifying population points.

We may add to these conclusions the following observations: First, when the task of overlap minimization is allowed to dominate the objective function completely, as in the MMD formulation, the solution procedure is influenced by relatively few "extreme" points. If these points generally are atypical of the groups they should represent, a wholly ineffectual discriminant solution may be selected. In contrast, the MSID and MSD formulations allow overlap minimization to play a key role but incorporate additional factors that exert a balancing influence.

In half of the cases studied, the MMD model solution was identical (except for a scaling factor) to the solution produced by the MSID formulation in which maximum weight (H) is given to the task of minimizing α. For those cases in which these solutions differed, the more flexible MSID model nearly always proved superior.

It is noteworthy that outliers appeared to be an important factor for the cases (C1a, D1b, and D3b) in which the MSID solution clearly was inferior to the SPSS discriminator. That is, the MSID formulation appears to be adversely affected by extreme points in a manner similar to that of the MMD procedure; it is "misled" by a few points from each group which reach most deeply into the territory of the other. Removing those extreme points suspected to be unrepresentative of the parent group should significantly improve the resulting solution.

Case D3b serves to illustrate the point. An examination of the discriminant scores assigned to group members by the original MSID solution vector revealed seven "border" points. By removing these points from the data set and reoptimizing, it was possible to produce a new solution which reduced the total number of population misclassifications to 13 percent (down from 25 percent for the original MSID solution). Comparable results were produced with this procedure for cases C1a and D1b.

The solutions also suggest that the parametric adjustments involved in the MSID procedure could be eliminated. In nearly two-thirds of the cases examined, the MSID solution which proved to be the best discriminator among the family of candidate MSID vectors produced by the various weighting strategies was one in which the "internal deviation" goal involving the di variables was dominated by the goal of minimizing α (so that the former served a tie-breaking function). This was a result of the large weight given to H in the solution.

It should be noted that the MSD model, while not so strongly influenced by extreme outliers, nevertheless may produce a more effective discriminant capacity by diminishing the role of "unrepresentative" border points. This may be accomplished by reducing those weights associated with "suspect" points using a more general form of the MSD formulation (one in which the αi variables may be given different weights in the objective function [4]). We currently are developing a version of the MSD approach that does this adaptively, using LP postoptimization, without the need for human intervention.
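A sketch of the weighted variant just described follows; it is an illustration of the idea (down-weighting suspect border points in the objective), not the adaptive postoptimization procedure under development, and it reuses the layout of the solve_msd sketch given earlier.

```python
import numpy as np
from scipy.optimize import linprog

def solve_weighted_msd(group1, group2, w1, w2, N=10.0):
    """Minimize sum(w_i * a_i); small weights w_i let 'suspect' border
    points violate the boundary cheaply, reducing their pull on the
    hyperplane.  Variable order: [x1, x2, b, a_1..a_n]."""
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    n1, n2 = len(g1), len(g2)
    n = n1 + n2
    c = np.concatenate([[0.0, 0.0, 0.0], np.asarray(w1, float),
                        np.asarray(w2, float)])
    A_ub = np.zeros((n, 3 + n))
    A_ub[:n1, 0:2], A_ub[:n1, 2] = g1, -1.0      #  a.x - b - a_i <= 0
    A_ub[n1:, 0:2], A_ub[n1:, 2] = -g2, 1.0      # -a.x + b - a_i <= 0
    A_ub[np.arange(n), 3 + np.arange(n)] = -1.0
    A_eq = np.array([[1.0, 1.0, 1.0] + [0.0] * n])   # x1 + x2 + b = N
    bounds = [(None, None)] * 3 + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[N],
                  bounds=bounds, method="highs")
    return res.x[:2], res.x[2], res.x[3:]
```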

As a general conclusion, the LP procedures produced more balanced discriminant solutions than Fisher's classical approach (embodied in SPSS) in cases for which paired-group variance-covariance matrices differed substantially. From an examination of case C and case D results, it is apparent that SPSS provided discriminators which, although effective classifiers in an overall sense, followed a disturbing (though not surprising) pattern: an overwhelming proportion of the total points misclassified came from the group with the larger variance-covariance matrix. In fact, in 15 of the 24 C and D cases, all of the misclassified points were members of the larger variance-covariance group (group 1). Corresponding LP solutions, on the other hand, did not display this consistent bias.

Our findings concerning the performance of alternative formulations for problems whose groups are drawn from normal populations may be compared briefly to those of Bajgier and Hill [1], who also conducted tests for problems that satisfied the standard normality assumption. Although the design of our study differed somewhat from theirs, our results strongly support their conclusions concerning the viability of an LP alternative to the Fisher discriminant procedure and the identity of the MSD model as the most competitive LP variant. Our results for the MMD model suggest, however, that not only is it a less attractive approach than the MSD approach, but that in general it fails to produce a consistently effective discriminant function. From our more extensive experimentation with the MSID model, we similarly conclude that it also is a less attractive approach. On the other hand, the manner in which the MSID approach fails as a primary discriminator leads us to believe that it may serve as a useful secondary discriminator, increasing the power of the MSD model to differentiate between sample populations either by a combined formulation such as that proposed in [4] or by a two-stage approach that carries out a subordinate postoptimization using the MSID criterion.

ADDITIONAL EXPERIMENTATION USING SKEWED GROUPS

To provide a test of the LP formulations under a broader range of conditions, a set of nonnormal populations was generated. Twelve (skewed) sample-group pairs, each of size 100, were examined to determine whether the results obtained for normally distributed populations would be significantly altered when the classical assumption of normality was abandoned.

A careful examination of these results reveals that the relative attractiveness of the three LP formulations tested is remarkably similar to that for normal populations. Somewhat surprising, however, is the fact that nonnormality appears to have had a more adverse effect on the two less-effective variants of the LP formulation than on the SPSS (Fisher) procedure. Here both the MMD and the MSID models showed themselves substantially less reliable predictors of population-group membership than the Fisher model. (In the normal case, MSID was quite competitive.) The MSD formulation once again was the best of the LP approaches. However, in contrast to the findings for normal populations, the MSD and Fisher models proved nearly indistinguishable from the standpoint of overall effectiveness, with perhaps a slight advantage going to the Fisher model. It appeared that those LP forms dominated by the goal of minimizing maximum overlap require additional modifications to deal adequately with the problem of differentiating skewed populations. On the other hand, as in the normal case, the LP forms tended to misclassify points more impartially (without biasing the misclassifications of one group over the other) when the variance-covariance matrices of the groups differed greatly.

We believe these outcomes provide a clear incentive to pursue extended forms of the MSD approach [4]. For example, in both the normal and nonnormal cases, a weighted version that incorporates a subordinate internal deviation goal (such as that involving the di variables of the MSID approach) would seem promising. Further experimentation certainly is in order. Testing the stability of discriminant weights over a wide range of sample sizes and population shapes represents a possible next step. Concern for the potentially confounding effects of data-point outliers will require a more systematic assessment of the likely impact and effective remedies for each of the discriminant models. Also of interest would be a fuller evaluation of the LP method's capacity for extension beyond the simple bivariate two-group case. One approach that might merit investigation, for example, would be the generation of a hyperplane separating group 1 from all others, then a second hyperplane separating group 2 from all remaining groups (except group 1), and so forth. Such possibilities invite further research in the multigroup case to identify ways in which groups may be ordered more effectively.
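As a rough illustration of the sequential idea just described (separate group 1 from all others, then group 2 from the remainder, and so on), the following sketch chains the two-group MSD solver from the earlier sketch; it is a speculative outline, not a tested multigroup procedure.

```python
import numpy as np
# assumes solve_msd() from the earlier MSD sketch is in scope

def sequential_hyperplanes(groups):
    """groups: list of (n_k, 2) arrays.  Returns one (x, b) separating
    hyperplane per peeled-off group, in order."""
    remaining = [np.asarray(g, float) for g in groups]
    separators = []
    while len(remaining) > 1:
        first, rest = remaining[0], np.vstack(remaining[1:])
        x, b, _ = solve_msd(first, rest)   # group k vs. all remaining groups
        separators.append((x, b))
        remaining = remaining[1:]
    return separators
```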

SUMMARY

The LP formulation has shown itself to be a competitive alternative to the classical discriminant technique. While the simplest of the LP variants proved less satisfactory, the more flexible forms compared favorably with the conventional Fisher procedure. Test results showed that among the LP variants tested, the MSD formulation generally is the most reliable predictor of group membership.

Violation of the standard classical assumption of equal variance-covariance matrices had slight effect on the relative performances of the discriminant procedures vis-à-vis the Fisher model. Violation of the normality assumption left the standings of the LP procedures the same relative to each other but noticeably decreased the effectiveness of the MMD and MSID approaches. Nonnormality had only a slightly negative effect on the superior MSD approach. However, in both normal and nonnormal cases, the LP forms displayed a more balanced pattern of misclassifications; they tended to misclassify an equal number of points from each of the subject groups when variance-covariance matrices differed radically.

It would appear that outliers pose a greater problem for the two simpler LP forms than for Fisher’s model. Modifications in the basic procedure, including the elimination or differential weighting of those points which most seriously violate group boundaries, can reduce this difficulty. The effective MSD approach is well suited for conversion to an adaptive method that we anticipate will yield still better results both in the presence and absence of such outliers.

In all, the assumption-free LP procedure offers a simple and direct approach to the classification problem. Evaluation of more general LP formulations that extend the basic MSD model [6] clearly is in order. [Received: June 13, 1984. Accepted: June 13, 1985.]

REFERENCES

[1] Bajgier, S. M., & Hill, A. V. An experimental comparison of statistical and linear programming approaches to the discriminant problem. Decision Sciences, 1982, 13, 604-618.

[2] Charnes, A., & Cooper, W. W. Management models and industrial applications of linear programming (Vol. 1). New York: Wiley, 1961.

[3] Freed, N., & Glover, F. A linear programming approach to the discriminant problem. Decision Sciences, 1981, 12, 68-74.

[4] Freed, N., & Glover, F. Simple but powerful goal programming formulations for the discriminant problem. European Journal of Operational Research, 1981, 7, 44-60.

[5] Freed, N., & Glover, F. Linear programming and statistical discrimination: The LP side. Decision Sciences, 1982, 13, 172-175.

[6] Freed, N., & Glover, F. An adaptive linear programming approach to the discrimination problem (in preparation).

[7] Glorfeld, L. W., & Gaither, N. On using linear programming in discriminant problems. Decision Sciences, 1982, 13, 167-171.

[8] Kendall, M. G. Discrimination and classification. In P. R. Krishnaiah (Ed.), Multivariate analysis. New York: Academic Press, 1966.

[9] Marks, S., & Dunn, O. J. Discriminant functions when covariance matrices are unequal. Journal of the American Statistical Association, 1973, 68, 399-404.

Ned Freed is Associate Professor of Management at the University of Portland. He received his doctorate in management science from the University of Colorado and holds an MBA from the Wharton School. Dr. Freed has published a number of articles on the application of mathematical programming techniques to statistical problems in such journals as Decision Sciences, International Journal of Operations Research, and Communications in Statistics.

Fred Glover is Professor of Management Science at the University of Colorado. He has been sponsored as a National Visiting Lecturer by TIMS and ORSA and has served in the U.S. National Academy of Sciences Program of Scientific Exchange. In addition to holding editorial and publication committee posts for Management Science and Operations Research, Dr. Glover has published more than 150 articles in the field of mathematical and computer optimization, particularly in the areas of integer programming, networks, and large-scale systems. Dr. Glover has received awards and honorary fellowships from the American Association for the Advancement of Science, NATO, the Decision Sciences Institute, the Colorado Energy Research Institute, the Ford Foundation, the American Association of Collegiate Schools of Business, the Miller Research Institute, and the TIMS College of Practice. He also consults widely for industry and government.