
An Analysis of Software Correctness Prediction Methods

P. Kokol, V. Podgorelec, M. Zorman, M. Sprogar Laboratory for System Design

FERI, University of Maribor, Slovenia {kokol, vili.podgorelec}@uni-mb.si

M. Pighin Department of Mathematics and Computer Science

University of Udine, Italy

Abstract

Reliability is one of the most important aspects of software systems of any kind. Software development is a complex and complicated process in which software faults are inserted into the code by mistakes during development or maintenance. It has been shown that the pattern of the fault insertion phenomenon is related to measurable attributes of the software. In this paper we introduce some methods for reliability prediction based on software metrics, present the results of using these methods in a particular industrial software environment for which we have a large database of modules in the C language, and finally compare the results and methods and give some directions and ideas for future work.

1 Introduction

Reliability is one of the most important aspects of software systems of any kind. The size and complexity of software have grown dramatically during the last decades, and especially during the last few years. Systems consisting of hundreds of millions of lines of code are not a rarity any more, and the demand for highly complex hardware/software systems is increasing more rapidly than the ability to design, implement and maintain them. As the requirements for and dependencies on computers increase, the possibility of crises caused by failures also increases. The impact of these failures ranges from inconvenience to economic damage to loss of lives - therefore it is clear that software reliability is becoming a major concern not only for software engineers and computer scientists, but also for society as a whole [4, 17, 1].

This project was partially supported by the grants: Ministry of Education, Science and Sport of Slovenia - Project INCOMPETENT J2-0514-0796-98; Italian-Slovenian bilateral project "New Software Metrics for Real World Applications"; USA-Slovenian bilateral project "Physical Based Software Complexity Metrics"; Italian Ministry of University and Scientific Research project "SALADIN - Software Architecture and Languages to Coordinate Distributed Mobile Components".

Software development is a complex and complicated process in which software faults are inserted into the code by mistakes during development or maintenance. It has been shown that the pattern of the fault insertion phenomenon is related to measurable attributes of the software objects, especially to software metrics [10, 3].

During the last twenty years hundreds of metrics have been proposed for software assessment. But measurement carried out during the initial phases of a development cycle can also be used for purposes other than those specifically related to assessment. These forms of measurement are known as predictive metrics, a term which underlines their forecasting element; they can be used, for example, to predict the "dangerous" modules or the number of faults remaining in software [5].

Due to the large number of software metrics and the various attributes produced by them, a practising software engineer required to assess or predict the reliability of a software system or module first faces the difficult task of answering some of the following questions:

- Which metrics/attributes will accurately predict the reliability?

- What will be the maximal accuracy of the selected metrics?

- Will the integration of various metrics/attributes improve the accuracy of prediction?

- Which metrics/attributes should be integrated?

- How should various metrics/attributes be integrated?

In this paper we will first introduce some available methods which can help to find answers to the above questions, present the results of using these methods in a particular industrial software environment for which we have a large database of modules in the C language (about 900 modules, more than 300,000 LOC, 168 attributes, signalled faults). Finally we will compare the results and methods and give some directions and ideas for future work.

2 General Methodology

2.1 Experimental Settings

We performed our experiment on a hospital information system, released in 1996 and regularly maintained and updated since then. At the moment the system consists of 911 modules, ranging from 23 to 614 LOC (altogether more than 300,000 LOC). For each module we collected the fault (software defect [2]) history (date of detection, description). Additionally we built a software metrics tool, which for each module calculated the 168 different metrics described below.

2.2 Definition of a set of attributes

The starting point for our analysis was the definition and measurement of a set of experimental attributes connected to the structure of software products after the code phase. Such parameters may, for example, be the total number of lines of code and lines of comment, the occurrence of various types of instruction, the operators and the types of data used. For each program, moreover, we considered the fault signals up to the moment when measurement started. By faults we mean all the malfunctions encountered during the internal test phase and after the release of the software. In our experimental environment the code phase includes a preliminary test of the module. After this phase the modules go on to the real test session, in which faults are signalled and measurement starts.

The first problem to be dealt with was whether the chosen set of parameters would be sufficiently large to identify the structure of a program. We accordingly started with a very large set of parameters. These parameters were affected by the persistence of multicollinearity, so it was necessary to reduce the total number of parameters to a smaller set of independent parameters. This was achieved by statistical procedures eliminating those which were heavily dependent on other parameters in explaining the presence of faults in a code, or which were completely irrelevant. We defined a subset of structural parameters which the factorial analysis identified as being reasonably free from multicollinearity, plus the dependent variable, the number of faults.
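As an illustration, here is a minimal sketch of one way such a reduction could be carried out, assuming the per-module metrics sit in a pandas DataFrame; a greedy pairwise-correlation filter stands in for the factorial analysis actually used, and the function name and threshold are ours:

```python
import pandas as pd

def reduce_attributes(df: pd.DataFrame, threshold: float = 0.9) -> list:
    """Greedily keep an attribute only if it is not strongly correlated
    with any attribute already kept (a crude stand-in for the factorial
    analysis used in the paper)."""
    corr = df.corr().abs()
    kept = []
    for col in df.columns:
        if all(corr.loc[col, k] < threshold for k in kept):
            kept.append(col)
    return kept

# Hypothetical usage: one row per module, one column per structural metric.
# independent = reduce_attributes(metrics_df)
```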

2.3 Risk Evaluation

We decided to compare the number of faults detected in a code to the attributes found in the same code. In this way we aimed to link the degree of reliability of a program to particular structural features. In general, the aim of the method is to identify those parameters which, in the light of the values measured for them, explain the presence of faults in many programs and which can for that reason be defined as risk parameters.

In every experiment we divided the total set into two sets: the first one (the training set) is used to build the model, while the second one (the testing set) is used to assess the validity of the classification model.

The classification procedure works on the testing set starting from the built model. It has to be able to identify those files whose characteristics suggest a high degree of risk (usually the discriminant level was five faults). In this regard, the predictive decision-making procedure can make mistakes in two distinct ways: by making a so-called Type I error, whereby a program is presumed to contain a large number of faults when it is, in fact, virtually correct; or by making a Type II error, in which a program is believed to be free of faults when the opposite is in fact true. Since we want to be able to use the results in order to fine-tune testing procedures on programs, Type I errors are clearly preferable since, at most, one further session of more detailed testing or a better restructuring will be required (and may indeed be superfluous) on a defect-free program. A Type II error means that dangerous programs will not be signalled and so no particular action will be taken on them. This implies that an early identification or correction of potential defects will be more difficult. We evaluate the accuracy of the prediction as the total sum of errors on the test set.
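This evaluation scheme can be made concrete with a small sketch; the five-fault discriminant level comes from the text, while the function itself is our illustration:

```python
def error_rates(faults, predicted_risky, discriminant=5):
    """Type I: flagged as risky but virtually correct (< discriminant faults).
    Type II: flagged as safe but actually risky. All rates are percentages
    over the classified part of the test set."""
    n = len(faults)
    type1 = sum(1 for f, r in zip(faults, predicted_risky)
                if r and f < discriminant)
    type2 = sum(1 for f, r in zip(faults, predicted_risky)
                if not r and f >= discriminant)
    return 100.0 * type1 / n, 100.0 * type2 / n, 100.0 * (type1 + type2) / n
```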

2.4 General outline of methods

In our experiment we used different methods:

- Statistical methods. We used discriminant analysis in pure form and with factorisation.

- Threshold methods. We defined a threshold level to discriminate the two groups in the testing set. The threshold is identified using particular metrics.

- Linear programming methods. With this method we used classical algorithms of linear programming to correctly divide the testing set into two groups.

- Decision trees methods. This method is based on inductive inference, which is the process of moving from concrete examples to general models, where the goal is to learn how to classify objects by analyzing a set of instances (already solved cases) whose classes are known.

Some of the methods, especially statistical ones, enable the reduction of the data set. That means that some objects which cannot be classified due to noise are left out of the classification. The column "% classified" in the tables which follow represents the percentage of the remaining objects in the reduced data set.

3 Methods

3.1 Statistical methods - Discriminant analysis

The hypothesis which is fundamental to the techniques of discriminant analysis (details of this method can be found in [13]) is that for a given set of multiple and diverse observations there exists an a priori classification into two or more mutually exclusive groups. In our case the observations correspond, more specifically, to programs; it was decided to combine these in only two groups (or populations), the first containing files with a fairly small number of faults, the second with a fairly high number of faults. It was thus to be expected that the characteristics of the programs would be as similar as possible within a group and, at the same time, markedly different between programs belonging to different groups.

At this point there were two alternatives to construct the model:


1. discriminant analysis of the data, as they stand, without any factorisation;

2. preliminary factorisation of the files, by using the main components technique; in this way we would be able to significantly reduce the number of distinct discriminant variables, and above all with a correlation equal to zero (perpendicular). The aim was to group the variables on the basis of correlation values so as to explain the variability of the observed data by means of a more concise description of the structure of dependence.

Other authors have approached this methodology with similar results [6, 9]. The results can improve significantly only with a drastic reduction of the percentage of classified files.
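In modern terms the two alternatives could be sketched as follows, with scikit-learn's LinearDiscriminantAnalysis and PCA standing in for the statistical tools actually used; the pipeline and the number of components are our assumptions, not the authors' setup:

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

def discriminant_model(factorise=False, n_components=10):
    """Alternative 1: plain discriminant analysis on the raw attributes.
    Alternative 2: preliminary factorisation (principal components),
    yielding a reduced set of uncorrelated variables, then discriminant
    analysis on top of them."""
    if factorise:
        return make_pipeline(PCA(n_components=n_components),
                             LinearDiscriminantAnalysis())
    return LinearDiscriminantAnalysis()

# model = discriminant_model(factorise=True).fit(X_train, y_train)
# y: 0 = few faults, 1 = many faults
```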

Type of exp.   % Classified   % Err. Type I   % Err. Type II   % Err. Tot.
DIS-1          4.7            7.2             21.7             10.6
DIS-1          91.3           7.8             26.1             12.4
DIS-2          72.3           7.5             18.6             9.7
DIS-2          84.0           7.7             25.0             11.5
DIS-3          81.6           6.3             14.3             8.2
DIS-3          87.6           7.0             16.1             9.1

Table 1. Discriminant analysis.

3.2 McCabe Metric - Threshold level

The idea is to use the McCabe complexity metric as the risk measurement (details of this method can be found in [13]). The thresholds of classification were based upon the calculation of the mean value of the McCabe cyclomatic number in each group of the learning set. The reference threshold value was then calculated as the mean value of these results.

Type of exp.   % Classified   % Err. Type I   % Err. Type II   % Err. Tot.
McCabe         100.0          19.7            34.3             24.2
McCabe         78.8           17.1            29.2             21.1

Table 2. McCabe metric.
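The threshold rule described above amounts to a few lines; a sketch under the stated five-fault discriminant level, with names of our choosing:

```python
from statistics import mean

def mccabe_threshold(cyclomatic, faults, discriminant=5):
    """Mean cyclomatic number of each learning-set group, then the
    mean of the two group means as the reference threshold."""
    low  = [c for c, f in zip(cyclomatic, faults) if f < discriminant]
    high = [c for c, f in zip(cyclomatic, faults) if f >= discriminant]
    return (mean(low) + mean(high)) / 2

# A module is then predicted risky when its cyclomatic
# number exceeds the threshold.
```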

3.3 Alpha metric - Threshold level

In this experiment we used the alpha metric as the risk measurement. The alpha metric calculates the information density of a software module by transforming the module into a Brownian random walk. The walk is then further analysed using the average displacement function and regression techniques. Further details can be found in [7, 8].

Type of exp.   % Classified   % Err. Type I   % Err. Type II   % Err. Tot.
Alpha          100.0          18.1            35.8             30.9
Alpha          81.3           16.5            30.8             24.7

Table 3. Alpha metric.
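Purely as a schematic illustration of the idea (the exact mapping to a walk and the regression in [7, 8] differ in detail; the byte-parity mapping and window sizes below are our assumptions):

```python
import numpy as np

def alpha_sketch(source: str, windows=(4, 8, 16, 32, 64)) -> float:
    """Map the module's bytes to +/-1 steps, accumulate them into a
    Brownian-like walk, measure the average displacement F(w) for each
    window size w, and regress log F(w) on log w; the slope is the
    scaling exponent underlying the alpha metric."""
    bytes_ = np.frombuffer(source.encode(), dtype=np.uint8)
    walk = np.cumsum(np.where(bytes_ % 2 == 1, 1, -1))
    disp = []
    for w in windows:
        chunks = walk[:len(walk) // w * w].reshape(-1, w)
        disp.append(np.mean(np.abs(chunks[:, -1] - chunks[:, 0])) + 1e-12)
    slope, _ = np.polyfit(np.log(windows), np.log(disp), 1)
    return slope
```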

3.4 RPSM metric - Threshold level

The analytical steps of the metric definition are the following (details of this method can be found in [11] and [12]):

1. For each parameter $i$ we calculate its frequency $F_{ij}$ in each file $j$ in the database of programs ($1 \le i \le p$, $1 \le j \le n$, with $n$ the number of files).

2. For each fixed parameter $k$ we considered the minimum and maximum values of $F_{kj}$ ($1 \le j \le n$), $FMAX_k$ and $FMIN_k$.


3. We calculated the value $D_k = FMAX_k - FMIN_k$.

4. We divided $D_k$ into $c$ subsets $S_{kh} = \{F : FMIN_k + (h-1) \cdot D_k/c \le F < FMIN_k + h \cdot D_k/c\}$, $1 \le h \le c$ (we included $FMAX_k$ in the last subset).

5. Then we found the class $SM_k$ which contains the highest number of occurrences of $F_{kj}$.

6. For each parameter $k$ and each file $j$ we evaluated the value $DIST_{kj}$ as the distance of the class $S_{kj}$ from $SM_k$.

7. We weighted the values $DIST_{kj}$ with the number of faults which have been signalled on file $j$, and then we normalized the results; we obtained the risk definition $RISK_{kj}$ for parameter $k$ and file $j$.

8. Then we computed the total risk of parameter $k$, $RISKP_k$, as the normalized sum of $RISK_{kj}$ over all files $j$ of the set.

9. At last we computed the total risk of file $j$, $RISKF_j$, as the normalized sum of $RISK_{kj}$ over all parameters $k$ contained in the file.

10. We define $RISKP_k$ as the parameter risk level ($PRL$) of the $k$-th parameter, which is the risk of using a particular syntactic structure in building software, and $RISKF_j$ as the file risk level ($FRL$) of the $j$-th file, which is the risk level of the whole file.

With the defined fault value we divided the learning set into two classes and computed the mean $FRL$ for each class. The fixed $FRL$ used as the threshold was then the mean of these two values.
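The steps above can be condensed into a short sketch; the frequency-matrix layout, the number of classes c and the normalisation details are our assumptions, see [11, 12] for the exact definitions:

```python
import numpy as np

def rpsm(F, faults, c=10):
    """F[j, k] = frequency of parameter k in file j (steps 1-2);
    faults[j] = number of faults signalled on file j.
    Returns the parameter risk levels RISKP and file risk levels RISKF."""
    F, faults = np.asarray(F, float), np.asarray(faults, float)
    fmin, fmax = F.min(axis=0), F.max(axis=0)                   # step 2
    width = np.where(fmax > fmin, (fmax - fmin) / c, 1.0)       # steps 3-4
    cls = np.minimum(((F - fmin) / width).astype(int), c - 1)   # class S_kj
    modal = np.array([np.bincount(cls[:, k], minlength=c).argmax()
                      for k in range(F.shape[1])])              # step 5
    dist = np.abs(cls - modal)                                  # step 6
    risk = dist * faults[:, None]                               # step 7
    risk = risk / max(risk.sum(), 1.0)                          # normalise
    return risk.sum(axis=0), risk.sum(axis=1)                   # steps 8-9

def frl_threshold(riskf, faults, discriminant=5):
    """Final paragraph: threshold = mean of the two class means of FRL."""
    riskf, faults = np.asarray(riskf), np.asarray(faults)
    return (riskf[faults < discriminant].mean() +
            riskf[faults >= discriminant].mean()) / 2
```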

Type of exp.   % Classified   % Err. Type I   % Err. Type II   % Err. Tot.
RPSM-Th        100.0          31.5            14.9             18.6
RPSM-Th        90.0           27.6            11.7             14.9
RPSM-Th        80.5           24.5            8.4              11.7

Table 4. RPSM metric - Threshold method.

3.5 RPSM metric - Linear Programming Techniques

The proposed model represents the software modules as points in an n-dimensional space (every dimension represents one of the structural attributes of each module). Starting from this model, the problem of finding the more dangerous modules is solved with classical linear programming algorithms that divide the set into two groups.

Type of exp.   % Classified   % Err. Type I   % Err. Type II   % Err. Tot.
RPSM-LP        100.0          32.6            8.4              16.0
RPSM-LP        90.2           28.6            6.4              13.1
RPSM-LP        80.2           25.3            4.9              10.9

Table 5. RPSM metric - Linear programming method.

3.6 Classical decision trees

Inductive inference is the process of moving from concrete examples to general models, where the goal is to learn how to classify objects by analysing a set of instances (already solved cases) whose classes are known. Instances are typically represented as attribute-value vectors. Learning input consists of a set of such vectors, each belonging to a known class, and the output consists of a mapping from attribute values to classes. This mapping should accurately classify both the given instances and other unseen instances. A decision tree [16] is a formalism for expressing such mappings and consists of tests or attribute nodes linked to two or more sub-trees, and leaves or decision nodes labelled with a class which represents the decision. A test node computes some outcome based on the attribute values of an instance, where each possible outcome is associated with one of the sub-trees. An instance is classified by starting at the root node of the tree. If this node is a test, the outcome for the instance is determined and the process continues using the appropriate sub-tree. When a leaf is eventually encountered, its label gives the predicted class of the instance. Decision trees methods do not allow reduction of the data set, so all the experiments are performed on 100% of the cases.

In the induction of a classical decision tree, we start with an empty tree and the entire training set. At every step an 'unused' attribute is chosen with the help of a heuristic evaluation function, and the training set is divided according to the values of the chosen attribute. This procedure is repeated over and over, until we end up with an empty training set and a completed decision tree.

Candidate attributes that may possibly describe the concept are selected using a heuristic evaluation function that is normally based on the information gain of a single attribute.
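The heuristic step can be illustrated with a minimal information-gain computation for numeric attributes; this is a sketch in the spirit of C4.5/See5 [16], not the tool's actual implementation:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_split(rows, labels):
    """One greedy induction step: the attribute/threshold pair with the
    highest information gain; the full algorithm recurses on both sides."""
    base, best = entropy(labels), None
    for a in range(len(rows[0])):
        for t in sorted({r[a] for r in rows}):
            left  = [l for r, l in zip(rows, labels) if r[a] <= t]
            right = [l for r, l in zip(rows, labels) if r[a] > t]
            if left and right:
                gain = base - (len(left) * entropy(left)
                               + len(right) * entropy(right)) / len(labels)
                if best is None or gain > best[0]:
                    best = (gain, a, t)
    return best  # (information gain, attribute index, threshold)
```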


Type of exp.   % Classified   % Err. Type I   % Err. Type II   % Err. Tot.
See5 - 5%      100.0          19.2            51.9             28.0
See5 - 10%     100.0          19.2            55.6             29.0
See5 - boost   100.0          13.7            55.6             25.0

Table 6. Classical decision trees.

3.7 Evolutionary decision trees

Evolutionary algorithms are adaptive heuristic search methods which may be used to solve all kinds of complex search and optimization problems. They are based on the evolutionary ideas of natural selection and the genetic processes of biological organisms. As natural populations evolve according to the principles of natural selection and "survival of the fittest", first laid down by Charles Darwin, so by simulating this process evolutionary algorithms are able to evolve solutions to real-world problems, if they have been suitably encoded. They are often capable of finding optimal solutions even in the most complex of search spaces, or at least they offer significant benefits over other search and optimization techniques.

As the traditional decision tree induction methods have several disadvantages, we decided to use the power of evolutionary algorithms to induce the decision trees. In this manner we developed an evolutionary decision support model that evolves decision trees in a multi-population genetic algorithm called genTrees [14, 15], and a tool for the construction of evolutionary vector decision trees called DecRain [18]. In the evolutionary approach, instead of using a heuristic function, genetic operators are used (selection, crossover and mutation), together with a fitness function which determines the quality of a single evolved decision tree as the process of evolution progresses.

The basic steps of the evolutionary induction of decision trees are:

1. Initialization of the starting population, which is done by randomly constructing some number of trees.

2. All the trees in a population are evaluated with the fitness function regarding the accuracy, size, and Type I and Type II errors.

3. Using the genetic operators a new population is created and the process is repeated until an appropriate solution is obtained.

Type of exp.   % Classified   % Err. Type I   % Err. Type II   % Err. Tot.
DecRain        100.0          16.7            28.6             20.0
genTrees       100.0          15.1            22.2             17.0

Table 7. Evolutionary decision trees.
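Step 2 above is the heart of the approach; a sketch of such a fitness function is given below. The tree interface and the weights are hypothetical; the actual genTrees fitness [14, 15] is defined differently:

```python
def fitness(tree, X, y, w_size=0.01, w_t1=0.5, w_t2=1.0):
    """Score a candidate tree on accuracy, size, and Type I/II errors;
    Type II errors (missed risky modules) are penalised hardest here.
    `tree.classify` and `tree.size` are a hypothetical interface."""
    pred = [tree.classify(x) for x in X]                  # True = risky
    n = len(y)
    acc = sum(p == t for p, t in zip(pred, y)) / n
    t1  = sum(p and not t for p, t in zip(pred, y)) / n   # false alarms
    t2  = sum(t and not p for p, t in zip(pred, y)) / n   # missed risky
    return acc - w_size * tree.size() - w_t1 * t1 - w_t2 * t2
```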

3.8 Neural network based decision trees


Building neuro generated decision trees can be divided into the following steps [20, 19]:

1. Build a classical decision tree, using a training set (source decision tree).


2. Convert the source decision tree to a neural network.

3. Train the network using the training set and the backpropagation algorithm. The mean square error of such a network converges toward 0 much faster than it would in the case of randomly set weights in the network.

4. Examine the trained neural network in order to determine the most important attributes that influence the outcomes of the neural network.

5. Use a list containing the most important attributes, instead of the classical heuristic function, together with the training set to build the final decision tree.

The neural network that serves as a middle stage in the conversion has by default two hidden layers.

Type of exp.   % Classified   % Err. Type I   % Err. Type II   % Err. Tot.
NNDT full      100.0          2.7             74.1             22.0
NNDT short     100.0          30.1            3.7              23.0
NNDT step      100.0          28.8            14.8             25.0

Table 8. Neural network based decision trees.
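A rough scikit-learn sketch of the five steps follows. Note the simplification: the original converts the source tree itself into the network and seeds its weights [19, 20], whereas this stand-in trains a fresh two-hidden-layer MLP and reads attribute importance from its first-layer weights:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

def nndt_sketch(X, y, top_k=10):
    """X: numpy array of module metrics, y: risky / not-risky labels."""
    source_tree = DecisionTreeClassifier().fit(X, y)          # step 1
    net = MLPClassifier(hidden_layer_sizes=(16, 8),           # two hidden layers
                        max_iter=2000).fit(X, y)              # steps 2-3
    importance = np.abs(net.coefs_[0]).sum(axis=1)            # step 4
    best = np.argsort(importance)[::-1][:top_k]
    final_tree = DecisionTreeClassifier().fit(X[:, best], y)  # step 5
    return best, final_tree
```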

4 Assessment of results

From tables 1 to 8 we can extract the following conclusions:

1. In general, statistical and mathematical methods work better regarding the total accuracy.

2. The operative constraint on accuracy seems to be near 80-85% if 100% of the cases have been classified, or 85-90% if only about 80% of the modules have been classified. These constraints are probably due to noise present in any population of software modules. In our case the noise can be the consequence of the fact that not all the faults had been detected at the time of the analysis of the modules, of different styles of programming and testing, of the different "age" of the modules, etc. Probably this noise cannot be eliminated using current techniques.

3. The remaining methods are close to this accuracy, especially considering the case when 100% of the modules have been classified.


4. Some methods can embed a reduction of the data set, eliminating the uncertain classifications. Such a reduction, however, cannot reasonably exceed 20-25%. In this case the total error can drop to 8-10%.

5. Some methods work as black boxes (especially the statistical methods with factorisation), others as white boxes, specifying the risk parameters, threshold levels and the hierarchy of attributes. The latter is very important for supporting the test or maintenance engineer.

In tables 9 and 10 we summarise the most important characteristics of the methods presented.

Method               Allows reduction   Risk parameters   Threshold level   Hierarchy of attributes
Statistical          yes                -                 -                 -
Threshold            yes                yes               yes               -
Linear programming   yes                yes               yes               -
Decision trees       -                  yes               -                 yes

Table 9. Summary of methods by operative features.

Type of exp.   % Classified   % Err. Type I   % Err. Type II   % Err. Tot.
See5 - boost   100.0          13.7            55.6             25.0
Alpha          81.3           16.5            30.8             24.7
NNDT           100.0          2.7             74.1             22.0
McCabe         78.8           17.1            29.2             21.1
DecRain        100.0          16.7            28.6             20.0
genTrees       100.0          15.1            22.2             17.0
RPSM-Th        80.5           24.5            8.4              11.7
RPSM-LP        80.2           25.3            4.9              10.9
DIS-3          81.6           6.3             14.3             8.2

Table 10. Summary of methods ranked by total error.

5 Conclusions

In this paper we presented different reliability prediction methods and their use on a real-world application. The results show that there is an upper limit to the accuracy that can be reached, due to the noise inherent in the database. Our analyses show that statistical and mathematical methods predict the reliability of software modules more accurately, but they are black box methods which cannot explain the reasoning behind the prediction. On the contrary, decision tree methods are not as accurate, but they are white box methods revealing all the important decisions behind the reasoning, such as risk parameters, threshold levels and the hierarchy of prediction attributes.

We showed that the use of the presented methods can help a software engineer to answer the questions stated in the introduction. According to these questions and the characteristics of the methods, software engineers can choose one method over the others.

In our future work we plan some extensions to better un- derstand the link between inner structure of software mod- ules and reliability. In that manner we will:

1. use unsupervised machine learning methods like Kohonen neural networks,

2. use other metrics like alpha to assess the style of programming to reduce the noise in the database,

3. extend the analysis to other databases,

4. embed the module reduction possibility into the decision tree methods, reducing the uncertainty.

References

[1] M. Bush. The changing perception of software metrics. Software Quality Management, 1:417-429, 1994.

[2] K. Y. Cai. Software Defect and Operational Profile Modeling. Kluwer Academic Publishers, 1998.

[3] D. Coleman. Using metrics to evaluate software system maintainability. IEEE Computer, 27(8):44-49, 1994.

[4] N. E. Fenton. Software Metrics - A Rigorous Approach. Chapman & Hall, 2nd edition, 1997.

[5] R. B. Grady. Practical results from measuring software quality. Communications of the ACM, 36(11):62-68, 1993.

[6] T. M. Khoshgoftaar and P. Oman. Software metrics: Charting the course. IEEE Computer, 27(9):13-15, 1994.

[7] P. Kokol, V. Podgorelec, M. Zorman, T. Kokol, and T. Njivar. Computer and natural language texts - a comparison based on long-range correlations. Journal of the American Society for Information Science, 50(14):1295-1301, 1999.

[8] P. Kokol, V. Podgorelec, M. Zorman, and M. Pighin. Alpha - a generic software complexity metric. In Combined 10th European Software Control and Metrics conference and the 2nd SCOPE conference on Software Product Evaluation ESCOM - SCOPE 99, pages 397-405, 1999.

[9] J. C. Munson and T. M. Khoshgoftaar. The detection of fault-prone programs. IEEE Transactions on Software Engineering, 18(5):423-433, 1992.

[10] D. J. Paulish and A. D. Carleton. Case studies of software process improvement measurement. IEEE Computer, 27(9):50-57, 1994.

[11] M. Pighin and P. Kokol. RPSM: a risk predictive structural experimental metric. Proceedings of the 2nd European Software Measurement Conference, FESMA'99, Technological Institute Publications, Antwerp, pages 459-464, 1999.

[12] M. Pighin and S. Rizzato. Using metrics to enhance pre-release testing. Proceedings of International Conference on Software Testing, ICST-2000, pages M/17-23, 2000.

[13] M. Pighin and R. Zamolo. A predictive metric based on statistical analysis. Proceedings of International Conference on Software Engineering, ICSE'97, ACM Press, pages 262-270, 1997.

[14] V. Podgorelec and P. Kokol. Self-adapting evolutionary decision support model. Proceedings of the 1999 IEEE International Symposium on Industrial Electronics ISIE'99, pages 1484-1489, 1999.

[15] V. Podgorelec and P. Kokol. Evaluating medical software using software metrics and evolutionary decision trees. Proceedings of the International Conference on Artificial Intelligence in Science and Technology AISAT'2000, pages 317-322, 2000.

[16] J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.

[17] J. M. Roche. Software metrics and measurement principles. ACM SIGSOFT Software Engineering Notes, 19(1):2-18, 1994.

[18] M. Sprogar, P. Kokol, S. H. Babič, V. Podgorelec, and M. Zorman. Vector decision trees. Intelligent Data Analysis, IOS Press, 4(3-4):305-321, 2000.

[19] M. Zorman and P. Kokol. Neuro generated decision trees. Proceedings of the ICSC Symposium on Neural Computation (NC'2000), 2000.

[20] M. Zorman, P. Kokol, and V. Podgorelec. Medical decision making supported by hybrid decision trees. Proceedings of the ICSC Symposium on Intelligent Systems & Applications (ISA'2000), 2000.