An image enhancement technique and its evaluation through bimodality analysis


Transcript of An image enhancement technique and its evaluation through bimodality analysis

CVGIP: GRAPHICAL MODELS AND IMAGE PROCESSING

Vol. 54, No. 1, January, pp. 13-22, 1992

An Image Enhancement Technique and Its Evaluation through Bimodality Analysis

ALAIN LE NÉGRATE

Laboratoire d'Éthologie Expérimentale, Université Paris Nord, Avenue Jean-Baptiste Clément, 93430 Villetaneuse, France

AZEDDINE BEGHDADI

Laboratoire des Propriétés Mécaniques et Thermodynamiques des Matériaux, Université Paris Nord, Avenue Jean-Baptiste Clément, 93430 Villetaneuse, France

AND

HENRI DUPOISOT

Groupe d'Analyse d'Images Biomédicales, Université René Descartes and C.N.A.M., 3, Boulevard Pasteur, 75015 Paris, France

Received May 3, 1989; accepted June 4, 1991

It is well known that the most effective method for comparing images at present is subjective human evaluation. One factor complicating the design and evaluation of a given image treatment is the lack of an objective measure of picture quality. In the present paper a contrast enhancement and noise filtering technique is developed and evaluated through bimodality measure analysis. The proposed algorithm makes the different classes of an image statistically separable without sensibly modifying the contours. This method can be used as an aid in gray-level thresholding. © 1992 Academic Press, Inc.

1. INTRODUCTION

The automatic classification of objects within a given scene has long been an area of active computer vision research. The key role of the pattern detection algorithm is to point out the different classes in order to improve the visual quality and thus to facilitate the image segmentation for automated description and pattern recognition [1, 2].

It is often necessary to use an image posttreatment to make structural classification in the image easier. Contrast enhancement may be used to allow the separation of the different gray-level classes. This may be followed by a thresholding, a multithresholding, or a labeling process that gives a compressed description of the scene. Most of these posttreatments are based on the gray-level histogram. However, it is well known that in many cases the tone distribution is not sufficient for automatic discrimination of the different classes.

In very-low-contrast images, the gray-level distribution does not allow any distinction between the different components, although these components are revealed by a simple visual inspection of the image. In order to improve the perceptibility of details, many algorithms based on the transformation of the gray-level histogram have been considered in various papers [3-9]. Most of them improve the visual quality of the image according to a predefined gray-level distribution transformation.

The main idea of this new algorithm is to enhance the contrast selectively according to a physical measure, the edge gray level. It is well known that, in a given image composed of a background class and an object class, low edge values correspond to the inner regions of the classes whereas the highest edge values correspond to the frontiers [7, 11, 12]. But, at the same time, high edge values may correspond to noise. Thus, enhancing the contrast of noisy images without amplifying the visibility of isolated points appears to be a somewhat difficult task.

The main drawback of most contrast enhancement techniques is the difficulty in discriminating between the contour gray levels on the one hand and the isolated points, hence the noise, on the other. Indeed, high edge values may be high-contrast noise instead of real edges. The classical way to reduce the noise effect is to replace the current pixel gray level by an averaged gray level computed within a large window centered on this pixel. However, this method, while blurring the edges, reduces both the noise effect and the contour detection quality.

In the present communication we present a method that enhances the contrast and filters the noise at the same time. Furthermore, it permits the modes in the gray-level histogram corresponding to the different classes in the image to be pointed out.


We deal more specifically with bimodality as defined by Phillips et al. [13], and we make reference to our previous contrast enhancement technique [10] and to the gray-level thresholding method of Weszka and Rosenfeld [14].

The comparison of the contrast enhancement ability of the proposed approach with that of existing methods is not considered in this paper, since it was already given in our previous work, on which this new method is based. The comparative study, in Section 7.2, is devoted to the filtering part of the method.

2. NOTATION

The notation used in this paper is as follows:

$X_{kl}$: the gray level of the pixel at coordinates $(k, l)$ of the original image;
$X'_{kl}$: the gray level of the pixel at $(k, l)$ of the treated image;
$W_{kl}$: the window of odd size centered on $(k, l)$;
$\delta_{ij}$: the edge value associated with the pixel $(i, j)$;
$\bar{E}_{kl}$: the mean edge gray level associated with the window $W_{kl}$;
$W^1_{kl}$: the set of pixels $(i, j)$ belonging to $W_{kl}$ such that $X_{ij} \le \bar{E}_{kl}$;
$W^2_{kl}$: the complementary set of $W^1_{kl}$ in $W_{kl}$;
$C_{kl}$: the local contrast associated with the $(k, l)$ pixel in the original image;
$C'_{kl}$: the local contrast associated with $(k, l)$ in the treated image;
$\Phi_{ij}$: the weighting function associated with the $(i, j)$ pixel;
$\bar{R}^i_{kl}$: the inner representative gray level of the $i$th class defined in $W_{kl}$ ($i = 1$ or $2$).

3. METHOD

Let us first recall our previous contrast enhancement algorithm, which is based on local contour detection operators. The contrast associated with a pixel $(k, l)$ of gray level $X_{kl}$ and center of a given window $W_{kl}$ is defined by

$$C_{kl} = \frac{|X_{kl} - \bar{E}_{kl}|}{X_{kl} + \bar{E}_{kl}}, \qquad (1)$$

where $\bar{E}_{kl}$ is the mean edge gray level associated with the window $W_{kl}$. It corresponds to the mean gray level of the object frontiers within $W_{kl}$. It can be obtained by computing the edge value $\delta_{ij}$ of each pixel $(i, j)$ within $W_{kl}$ and by averaging their gray levels $X_{ij}$ weighted by $\delta_{ij}$. One can use the expression

$$\bar{E}_{kl} = \frac{\sum_{(i,j) \in W_{kl}} \delta_{ij}\, X_{ij}}{\sum_{(i,j) \in W_{kl}} \delta_{ij}}. \qquad (2)$$

The edge value $\delta_{ij}$ is the result of applying a differential operator to the pixel $(i, j)$, such as the Laplacian, Sobel, Prewitt, or Rosenfeld operators [6, 7].
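As an illustration, here is a minimal Python/NumPy sketch (ours, not the paper's) of the edge values and of Eq. (2), assuming the Sobel magnitude as the edge operator; the function names, the scipy.ndimage calls, and the zero-division guard are our own choices:

```python
import numpy as np
from scipy import ndimage

def edge_values(image):
    """Edge values delta_ij: here the Sobel gradient magnitude, one choice among
    the operators named above (Laplacian, Sobel, Prewitt, Rosenfeld)."""
    img = image.astype(float)
    gx = ndimage.sobel(img, axis=0)
    gy = ndimage.sobel(img, axis=1)
    return np.hypot(gx, gy)

def mean_edge_gray_level(image, delta, k, l, m):
    """Eq. (2): the edge-weighted mean gray level E_kl over the m x m window W_kl."""
    h = m // 2
    win = (slice(max(k - h, 0), k + h + 1), slice(max(l - h, 0), l + h + 1))
    X, d = image[win].astype(float), delta[win]
    return (d * X).sum() / max(d.sum(), 1e-12)  # guard: a flat window has zero edge mass
```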

Once the mean edge gray level $\bar{E}_{kl}$ is computed, the contrast $C_{kl}$ can be obtained from definition (1) and magnified into $C'_{kl} = F(C_{kl})$ by using a contrast enhancement function $F$ satisfying the conditions $C_{kl} \in [0, 1]$, $F(C_{kl}) \ge C_{kl}$, and $F(C_{kl}) \in [0, 1]$. This new contrast value is used to transform the gray level $X_{kl}$ into $X'_{kl}$ according to the sign of the difference $(X_{kl} - \bar{E}_{kl})$. It leads to

$$X'_{kl} = \begin{cases} \bar{E}_{kl}\,\dfrac{1 - C'_{kl}}{1 + C'_{kl}} & \text{if } X_{kl} \le \bar{E}_{kl} \\[2ex] \bar{E}_{kl}\,\dfrac{1 + C'_{kl}}{1 - C'_{kl}} & \text{if } X_{kl} > \bar{E}_{kl}. \end{cases} \qquad (3)$$

In this new method, instead of computing the contrast by using definition (1), we first classify the pixels according to their gray level with respect to $\bar{E}_{kl}$ defined by (2). For a given window $W_{kl}$ centered on the pixel $(k, l)$, two sets are defined: $W^1_{kl}$, the set of pixels $(i, j)$ such that $X_{ij} \le \bar{E}_{kl}$, and $W^2_{kl}$, the complementary set in $W_{kl}$ (i.e., the set of pixels $(i, j)$ such that $X_{ij} > \bar{E}_{kl}$).

In order to enhance the contrast selectively, a representative gray level of each class is computed. It is well known that a good representative of a class must be an inner gray level of this class. Twelve years ago, Weszka and Rosenfeld investigated a method of computing a weighted histogram [14] so that the valley in a nearly unimodal histogram would show up. In their method, points having low edge values were heavily weighted in the gray-level histogram whereas points with high edge values were lightly weighted. We use the same basic idea in the computation of the inner gray level of each class. For example, if $\delta_{ij}$ is the edge value at a given pixel $(i, j)$, one could give to that point the weight $\Phi_{ij} = 1/(1 + \delta_{ij}^2)$ in the computation of the inner gray level. This gives full weight to points having zero edge value (inner points) and light weight to those having high edge values (border points or isolated points). Other, more elaborate weighting functions can be used, as is shown in Section 5.

Now, given a window $W_{kl}$ and the mean edge gray level $\bar{E}_{kl}$, the inner representative gray level of each class can be computed by using the expressions

$$\bar{R}^1_{kl} = \frac{\sum_{(i,j) \in W^1_{kl}} \Phi_{ij}\, X_{ij}}{\sum_{(i,j) \in W^1_{kl}} \Phi_{ij}}, \qquad \bar{R}^2_{kl} = \frac{\sum_{(i,j) \in W^2_{kl}} \Phi_{ij}\, X_{ij}}{\sum_{(i,j) \in W^2_{kl}} \Phi_{ij}}. \qquad (4)$$

$\bar{R}^1_{kl}$ is the inner representative of the class $W^1_{kl}$. Similarly, $\bar{R}^2_{kl}$ is that of $W^2_{kl}$.


The contrast $C_{kl}$ associated with the current pixel $(k, l)$, which is at the center of the window $W_{kl}$, may then be defined as

$$C_{kl} = \frac{|\bar{R}^i_{kl} - \bar{E}_{kl}|}{\bar{R}^i_{kl} + \bar{E}_{kl}} \qquad (5)$$

($i = 1$ if $X_{kl} \le \bar{E}_{kl}$ and $i = 2$ if $X_{kl} > \bar{E}_{kl}$).

Until now, the only differences from our previous approach are the introduction of the weighting function $\Phi_{ij}$, the inner representative gray level $\bar{R}^i_{kl}$, and the new definition of the local contrast. Now $C_{kl}$ becomes related to the representative of the class to which $X_{kl}$ belongs.

The noise effect is reduced by replacing $X_{kl}$ by $\bar{R}^i_{kl}$ in expression (1). Indeed, the inner representative gray level $\bar{R}^i_{kl}$ is an average value computed over all the pixels belonging to the class $W^i_{kl}$. Furthermore, high edge values $\delta_{ij}$, which may correspond to noise, lead to a low weighting function $\Phi_{ij}$ and hence to a reduced contribution of noisy points to the computation of $\bar{R}^i_{kl}$.

It can be noted that a mean filtering process is obtained if one gives full weight to all points in $W_{kl}$, independent of their associated edge values. Indeed, setting the $\Phi_{ij}$ value to 1 corresponds to a zero value for $\delta_{ij}$. The inner representative gray level is then nothing else than the mean gray level.

Once $\bar{R}^i_{kl}$ and $C_{kl}$ are computed in the current window $W_{kl}$, one must amplify $C_{kl}$ into $C'_{kl}$ using an increasing function in the range $[0, 1]$, as suggested in [15, 16]. Then $X_{kl}$ is transformed into $X'_{kl}$ according to the expressions

$$X'_{kl} = \begin{cases} \bar{E}_{kl}\,\dfrac{1 - C'_{kl}}{1 + C'_{kl}} & \text{if } (k, l) \in W^1_{kl} \\[2ex] \bar{E}_{kl}\,\dfrac{1 + C'_{kl}}{1 - C'_{kl}} & \text{if } (k, l) \in W^2_{kl}. \end{cases} \qquad (6)$$

Note that $X_{kl}$ is transformed according to the relative position of its representative $\bar{R}^i_{kl}$ with respect to $\bar{E}_{kl}$, inasmuch as the contrast is related to $\bar{E}_{kl}$ and $\bar{R}^i_{kl}$. The difference $|\bar{R}^i_{kl} - \bar{E}_{kl}|$ is amplified by the process, and the transformed gray level is shifted either to the right or to the left of $\bar{E}_{kl}$. This effect is illustrated and discussed below.

4. ALGORITHM

Step 1. Choose an odd number $m$ for the window size.

Step 2. Compute the mean edge gray level $\bar{E}_{kl}$ in the window $W_{kl}$ centered on the current pixel $(k, l)$ using Eq. (2).

Step 3. Classify each pixel $(i, j)$ of $W_{kl}$ according to the sign of the difference $\Delta_{ij} = X_{ij} - \bar{E}_{kl}$. If $\Delta_{ij} \le 0$ then $(i, j) \in W^1_{kl}$; otherwise $(i, j) \in W^2_{kl}$.

Step 4. Compute in each class the inner representative gray level $\bar{R}^i_{kl}$ using Eq. (4). Now $X_{kl}$ is replaced by $\bar{R}^i_{kl}$.

Step 5. Compute the contrast associated with the current pixel $(k, l)$ using definition (5).

Step 6. Amplify the original contrast $C_{kl}$ into $C'_{kl}$ using a contrast enhancement function that satisfies the conditions mentioned previously.

Step 7. Transform the original gray level $X_{kl}$ into $X'_{kl}$ following expressions (6).

Step 8. Move to the next pixel and repeat the treatment until the last point of the image.

Remarks.

- One can stop the process at Step 4 and then go to Step 8. This can be used for filtering purposes.

- If $X'_{kl}$ is outside the gray-level range, here $[0, 255]$, force it to the maximum gray level, here 255.
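The steps above translate almost directly into code. Below is a minimal, unoptimized Python sketch of Steps 1-8, assuming the example weight $\Phi_{ij} = 1/(1 + \delta_{ij}^2)$ from Section 3 and a square-root amplification (the function used later in Section 7.1); the border handling, the cap on $C'$, and the function names are our assumptions, not the paper's:

```python
import numpy as np
from scipy import ndimage

def enhance(image, m=7, amplify=np.sqrt, stop_at_step_4=False):
    """Sketch of Steps 1-8 for an 8-bit grayscale image; m is the odd window size."""
    X = image.astype(float)
    gx = ndimage.sobel(X, axis=0)
    gy = ndimage.sobel(X, axis=1)
    delta = np.hypot(gx, gy)             # edge values delta_ij (Sobel magnitude)
    phi = 1.0 / (1.0 + delta ** 2)       # example weight Phi_ij from Section 3
    h = m // 2
    H, W = X.shape
    out = np.empty_like(X)
    for k in range(H):
        for l in range(W):
            win = (slice(max(k - h, 0), k + h + 1), slice(max(l - h, 0), l + h + 1))
            Xw, dw, pw = X[win], delta[win], phi[win]
            E = (dw * Xw).sum() / max(dw.sum(), 1e-12)        # Step 2, Eq. (2)
            cls = (Xw <= E) if X[k, l] <= E else (Xw > E)     # Step 3: class of (k, l)
            R = (pw[cls] * Xw[cls]).sum() / max(pw[cls].sum(), 1e-12)  # Step 4, Eq. (4)
            if stop_at_step_4:           # remark above: filtering only
                out[k, l] = R
                continue
            C = abs(R - E) / max(R + E, 1e-12)                # Step 5, Eq. (5)
            Cp = min(float(amplify(C)), 0.999)  # Step 6; cap < 1 keeps Eq. (6) finite
            if X[k, l] <= E:                                  # Step 7, Eq. (6)
                out[k, l] = E * (1 - Cp) / (1 + Cp)
            else:
                out[k, l] = E * (1 + Cp) / (1 - Cp)
    return np.clip(out, 0, 255)          # Section 4 remark: clamp out-of-range values
```

The double loop makes this $O(HWm^2)$; it is written for clarity, not speed.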

5. THE UNDERLYING IMAGE MODEL AND ITS LIMITATIONS

We assume that the given image can be subdivided into windows, or subimages, that contain dark objects on a light background, or vice versa, and can then be segmented into two classes by thresholding.

We further assume that the gray levels of points interior to the objects, or to the background, are highly correlated, while across the edges the adjacent pixels differ significantly in gray level.

The validity of these assumptions is debatable, but so are other image representations encountered in current practice.

If an image satisfies these assumptions, one can then define three main gray levels: the mean edge gray level given by Eq. (2) and the two inner representative gray levels defined by expressions (4).

The algorithm may be effective for one class of images but ineffective for others. Indeed, if a window is corrupted by noise of a gray level comparable to that of the edges, the method fails, as is shown in Section 7.2. Furthermore, if the size of the analysis window is too large, $\bar{R}^i_{kl}$ may be a bad representative of the $(k, l)$ pixel to be replaced.

In a given situation one must adapt the algorithm by choosing the size of the window or by using other weighting functions such as $\Phi_k(\delta) = \exp(-k\delta^2)$, where $k$ is a positive constant and $\delta$ the edge value. One can also use the functions $\Phi_{2n}(\delta) = 1/[1 + (\delta/\delta_0)^{2n}]$, where $n$ is an integer and $\delta_0$ can be considered a cutoff edge value located at points for which the function is down to a certain fraction of its maximum value. For example, one can compute $\delta_0$ such that $\Phi_{2n}$ is down to half of its maximum value.

It can be noted that both the $\Phi_k$ and $\Phi_{2n}$ functions have the same maximum value, i.e., 1, which is obtained for a zero edge value. This corresponds to points interior to the background or to the object. These pixels are then counted with full weight in the computation of $\bar{R}^i_{kl}$.

Furthermore, one can note, by analogy with filtering theory in the frequency domain, that the function $\Phi_{2n}$ is analogous to the Butterworth low-pass filter of order $n$ [23], and $\delta_0$ can then be interpreted as a cutoff in the spatial domain. This means that points with edge values in the range $[-\delta_0, \delta_0]$ are counted with a weighting factor in the range $[0.5, 1]$.

Moreover, if the critical edge value $\delta_0$ can be estimated in a given window, one could use other functions such as

$$\Phi(\delta) = \begin{cases} 1 & \text{if } |\delta| \le \delta_0 \\ 0 & \text{elsewhere} \end{cases}$$

$$\Phi(\delta) = \begin{cases} \operatorname{sinc}(\pi\delta/\delta_0) & \text{if } |\delta| \le \delta_0 \\ 0 & \text{elsewhere.} \end{cases}$$

FIG. 1. An intensity profile at the different treatment steps (window size, 47 × 47). (a) Original intensity profile; (b) edge value; (c) mean edge gray level; (d) treated intensity profile.

One way to choose $\delta_0$ is to replace it by the average edge value or the median edge value computed in the window centered on the pixel to be transformed.
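To make these options concrete, the following sketch (again ours, not the paper's) evaluates the weighting functions of this section on an array of edge values, with $\delta_0$ chosen as the median edge value as just suggested; the stand-in edge data and parameter defaults are assumptions:

```python
import numpy as np

def phi_exp(delta, k=0.01):
    """Phi_k(delta) = exp(-k * delta**2), k a positive constant."""
    return np.exp(-k * delta ** 2)

def phi_butterworth(delta, delta0, n=2):
    """Phi_2n(delta) = 1 / [1 + (delta/delta0)**(2n)]; equals 0.5 at |delta| = delta0."""
    return 1.0 / (1.0 + (delta / delta0) ** (2 * n))

def phi_box(delta, delta0):
    """Phi(delta) = 1 if |delta| <= delta0, 0 elsewhere."""
    return (np.abs(delta) <= delta0).astype(float)

def phi_sinc(delta, delta0):
    """Phi(delta) = sinc(pi * delta / delta0) inside the cutoff, 0 elsewhere."""
    # np.sinc(x) is sin(pi x) / (pi x), so np.sinc(delta/delta0) matches the text.
    return np.where(np.abs(delta) <= delta0, np.sinc(delta / delta0), 0.0)

# delta0 chosen as the median edge value of the window, per the text above.
delta = np.abs(np.random.default_rng(1).normal(0, 50, (15, 15)))  # stand-in edge values
delta0 = np.median(delta)
weights = phi_butterworth(delta, delta0)
```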

All these functions can be interpreted in terms of spatial-domain filtering, similarly to the filtering process in the frequency domain.

6. A NUMERICAL EXAMPLE

To illustrate this algorithm, a numerical example is used. Figure 1a shows the gray-level profile of a line in a synthetic image composed of different spatial frequencies. To show the result of applying the contour detection operator, the edge value corresponding to the same line is displayed in Fig. 1b. As expected, the edge value is maximum on the ramps, as we use a second-order derivative operator. Moreover, a crossover between two consecutive ramps is observed. In fact, it is well known that the Laplacian, for example, has value zero on a linear ramp but high values on the shoulders at the top and bottom of a ramp.

Before the pixel classification on the basis of edge values is performed, the mean edge gray level is computed. Figure 1c displays the mean edge gray level corresponding to the image in Fig. 1a. It can be observed that the mean value of $\bar{E}_{kl}$ is 128. Finally, Fig. 1d shows the results obtained after the pixel classification. Now the interclass transitions are sharper. The peaks are transformed into plateaus.

Indeed, for a given class of pixels we assign one inner representative gray level. Three regions in Fig. 1d are also observed. In the first region we distinguish many plateaus, as there are many oscillations in the original image. In the second region one can observe two transition plateaus. Finally, fewer plateaus are obtained in the third region than in the first region.

This numerical example clearly shows the segmentation effect on the basis of the edge value associated with the pixels. It also shows that the proposed method preserves the contour locations.


7. RESULTS AND DISCUSSION

To test the validity and the limitations of this method, some synthetic and real images are used. For some images only subjective human visual appreciation could be applied for the performance evaluation of the method. For all these images the Sobel operator is used for the computation of the edge value $\delta_{ij}$ because it is less noise sensitive than other operators [6, 7].

7.1. Deblurring-Spatial Contrast Enhancement

Figure 2 shows a synthetic periodic square lattice composed of two classes of gray levels before and after various treatment protocols. To test the robustness of the method in separating the two classes even when the contours are not well defined, a blurring process is applied to the original image displayed in Fig. 2a. The blurred image shown in Fig. 2b is quite different from the original image. However, when the proposed method is applied to the image of Fig. 2b with a 17 × 17 window size, an image (Fig. 2c) similar to that shown in Fig. 2a is obtained, in which the classes are well separated visually and the contours sharper. Moreover, the same result is obtained even for a large window size, here 71 × 71, as shown in Fig. 2d. For these two images the square root function is used for contrast amplification. The results obtained clearly show that the method can be used for deblurring purposes.

Another type of synthetic image is used in Fig. 3, which displays an image composed of different spatial frequencies and contour thicknesses.

FIG. 2. Synthetic square lattice image. (a) Original image; (b) image (a) after blurring; (c, d) image (b) after treatment for window sizes 17 × 17 and 71 × 71, respectively.

FIG. 3. Simulated image of wave propagation. (a) Original image; (b, c, d) image (a) after treatment for window sizes 7 × 7, 15 × 15, and 31 × 31, respectively.

Figures 3a-3d show the original and the treated images for window sizes of 7 × 7, 15 × 15, and 31 × 31, respectively. No contrast enhancement function was applied to these images, and the algorithm was stopped at the fourth step. It can be noted in these results that when a large window is used, the contours are distorted. In fact, when the window size far exceeds the detail to be enhanced, the representative gray level $\bar{R}^i_{kl}$, defined by Eq. (4), can be shifted toward the gray level of another detail contained in the window, and it is then a bad representative of the pixel being considered.

Another interesting result is the window size effect on the spatial frequencies. Indeed, it is observed that when the window size increases, high spatial frequencies are nearly lost whereas low frequencies are preserved. Apart from these limitations, it can be noted that the two classes of gray levels are well separated visually. Given these drawbacks, one must carefully choose the size of the analysis window for this sort of image, in which different spatial frequencies coexist.

7.2. Noise Effect

There exists a class of local image smoothing techniques in which a neighborhood of each pixel is examined and the pixel is replaced by an average of a selected set of its neighbors [24, 25]. Some of these methods are based on local statistics [26] or on local gradient information [27]. However, the well-known median filter [6] has attracted much attention.

To compare our filtering method to the median filter, we chose a digitized image of a coronarography corrupted by an additive, location-independent noise, as shown in Figs. 4a and 4a'.

FIG. 4. Digitized image of a coronarography. (a) Original image; (a') original image plus noise; (b, b') image (a') after median filtering and after the proposed method, respectively, for a window size of 5 × 5; (c, c') the same treatments for a window size of 13 × 13.

The noisy test image in Fig. 4a' contains two main classes: blood vessels on a heart background. Two difficulties arise in this case. The first is due to the different orientations and thicknesses of the blood vessels. The second is the edge values associated with the noise, which are comparable to those corresponding to some small object details.

The proposed algorithm was used up to the fourth step with $\Phi(\delta) = 1/(1 + \delta^2)$ as the weighting function in order to obtain a filtering effect. It is compared to the median filter for different sizes of the analysis window. It is clearly shown in Figs. 4b and 4b' that the median filter is superior to our method for small window sizes.

The inefficiency of our approach in this case is essentially due to the fact that $\bar{R}^i_{kl}$ is a bad representative of the noise pixel to be replaced. In fact, the average values computed from Eqs. (2) and (4) are badly estimated for a small analysis window. Thus, $\bar{R}^i_{kl}$ could be of the same order as the noise level, and the noise pixel would not be removed.

However, for a large window size our technique is superior to the median filter, as shown in Figs. 4c and 4c'. Indeed, in the vicinity of the boundaries, where the image intensity changes abruptly, the median filter tends to smooth out these changes as well; its noise reduction ability is thus significantly reduced there.

In contrast, our method preserves the edges and smooths out the noise significantly. This is because, for a large window size, the median gray level may be a bad representative of the pixel to be replaced. In fact, there is no physical reason for choosing the median value as a representative in a given neighborhood. In our approach, on the other hand, the neighboring pixels are selected with respect to their associated edge values, which are physical measures related to local contrasts. Thus, the main difference from the median filter is that the neighboring pixels are not equally considered in our technique.

To summarize these remarks more formally, let us consider a pixel $(i, j)$ of gray level $X_{ij}$ and its associated edge value $\delta_{ij}$. It should be noted that

$$X_{ij} = \bar{E}_{kl} + \delta_{ij},$$

and from Eq. (4) we can write

$$\bar{R}^i_{kl} = \bar{E}_{kl} + \frac{\sum_{(i,j) \in W^i_{kl}} \Phi_{ij}\, \delta_{ij}}{\sum_{(i,j) \in W^i_{kl}} \Phi_{ij}},$$

which yields

$$\bar{R}^i_{kl} = \bar{E}_{kl} + \bar{\delta}_{kl},$$

where $\bar{\delta}_{kl}$ is an average edge value computed in the current window of center $(k, l)$. It can be noted that for high edge values, which may correspond to edge pixels, the weights $\Phi_{ij}$ are small, so $\bar{\delta}_{kl}$ tends toward zero and $\bar{R}^i_{kl}$ tends toward the mean edge gray level $\bar{E}_{kl}$. Thus, this last expression clearly shows that $\bar{R}^i_{kl}$ is a good representative of the given window for filtering purposes, when the window size is sufficiently large, since the edge gray levels are preserved.

In summary, the proposed method is especially efficient, compared to the median filter, when a large analysis window is used. Indeed, Kittler et al. [28] have derived an expression similar to Eq. (2) in their thresholding method and have clearly shown that for a sufficiently large window the noise effect can be neglected. Furthermore, the introduction of the weighting function and of the representative gray level makes the treatment less sensitive to the noise, since both the mean edge gray level and the inner gray level are obtained by averaging local quantities over the neighboring pixels.
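A comparison along these lines can be scripted as below, reusing the hypothetical `enhance` sketch from Section 4 with `stop_at_step_4=True`; `scipy.ndimage.median_filter` plays the median filter, and the synthetic image and Gaussian noise are stand-ins for the coronarography and for the paper's additive, location-independent noise:

```python
import numpy as np
from scipy import ndimage

# Synthetic two-class stand-in: a light object on a darker background.
clean = np.full((64, 64), 64.0)
clean[20:44, 20:44] = 192.0

rng = np.random.default_rng(0)
noisy = np.clip(clean + rng.normal(0.0, 20.0, clean.shape), 0, 255)

for m in (5, 13):  # small and large analysis windows, as in Fig. 4
    med = ndimage.median_filter(noisy, size=m)
    ours = enhance(noisy, m=m, stop_at_step_4=True)  # step-4 filtering (sketch above)
    print(f"{m} x {m}: median MSE {np.mean((med - clean) ** 2):.1f}, "
          f"proposed MSE {np.mean((ours - clean) ** 2):.1f}")
```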

7.3. Bimodality Analysis

After this qualitative discussion based on subjective human visual evaluation, the proposed method is next evaluated through bimodality measure analysis.

It was shown by Phillips et al. [13] that a good evaluation of the separability of two classes is given by bimodality analysis. These authors define a measure of the bimodality of a population $P$ by partitioning it into two subpopulations $P_1$ and $P_2$ and using the Fisher distance

$$FD(t) = \left[ \frac{\alpha (\mu_1 - \mu_2)^2}{\alpha_1 \sigma_1^2 + \alpha_2 \sigma_2^2} \right]^{1/2}, \qquad (10)$$

where $t$ is the gray-level threshold that may separate the two classes; $\alpha$, $\alpha_1$, $\alpha_2$ are the respective sizes of $P$, $P_1$, $P_2$; and $\mu_1$, $\mu_2$ and $\sigma_1^2$, $\sigma_2^2$ are the means and variances of $P_1$ and $P_2$, respectively.

These parameters depend on the choice of the gray-level threshold $t$. An optimum partitioning of $P$ corresponds to a maximum Fisher distance between $P_1$ and $P_2$, which occurs for an optimum gray-level threshold $t$. Phillips et al. have shown that the Fisher distance is a good measure of bimodality compared with other measures [17, 18].
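Our reading of Eq. (10) as code, scanning every threshold $t$ of an 8-bit image and returning the whole $FD$ curve; the use of the population variance and the handling of one-sided splits are implementation choices of ours:

```python
import numpy as np

def fisher_distance_curve(image):
    """FD(t) of Eq. (10) for every threshold t of an 8-bit image."""
    g = image.astype(float).ravel()
    fd = np.zeros(256)
    for t in range(256):
        p1, p2 = g[g <= t], g[g > t]
        if p1.size == 0 or p2.size == 0:
            continue                      # a one-sided split has no bimodality
        num = g.size * (p1.mean() - p2.mean()) ** 2       # alpha * (mu1 - mu2)^2
        den = p1.size * p1.var() + p2.size * p2.var()     # alpha1*s1^2 + alpha2*s2^2
        if den > 0:
            fd[t] = np.sqrt(num / den)
    return fd

# The optimum threshold is then the argmax, e.g. t_opt = int(np.argmax(fd)).
```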

To analyze the bimodality measure, a real image composed of two classes is used. Figure 5 shows a transmission electron micrograph image of a thin gold film before and after treatment. In Fig. 5a one can observe light gold clusters of different morphologies separated by dark paths. Figures 5b-5d show the results obtained from the proposed method for 3 × 3, 27 × 27, and 71 × 71 window sizes, respectively. For this study the algorithm was stopped at Step 4. The results obtained show that using a large analysis window gives a good visual separability of the two classes. In fact, the gray levels of the two classes, gold clusters and isolating paths, appear more homogeneous than in the original image. Furthermore, the object contours are well sharpened. To point out clearly the separability of the two classes, the gray-level histogram and the bimodality measure of each image are computed and plotted.

The curves shown in Figs. 6a-6d depict the behavior of the Fisher distance versus the gray level (solid line) together with the tone distribution. The gray-level threshold that allows a good partitioning of the population is easily detected when the valley is broad and deep. It can also be noted that the maximum of the Fisher distance increases with the window size. Besides this, it corresponds to the gray level of the points belonging to the valley, which practically implies the choice of the threshold at the bottom of the valley. A plateau can be observed on the bimodality curve when a large window is used. For this plateau, one can choose any point of this class as a threshold.

FIG. 5. Transmission electron micrograph image of a thin gold film. (a) Original image; (b, c, d) image (a) after treatment for window sizes 3 × 3, 27 × 27, and 71 × 71, respectively.

One can observe that the gray-level histogram of the original image (Fig. 6a) is nearly bimodal but the valley is not deep, whereas the two modes are well separated when the window size increases. Once the two modes are separated, one can fit the obtained tone distribution by two Gaussian distributions and then compute the misclassification error to detect the optimum gray-level threshold [19-21].

Furthermore, it can be verified that the maximum of the Fisher distance increases with the valley depth, as expected, since the best gray-level threshold is generally located at the bottom of the valley [22].

To show the window size effect on the bimodality measure, the maximum of the Fisher distance is plotted versus the window size. It can be noted in Fig. 7 that the bimodality increases with the window size. This result is interesting and can be explained. Indeed, if one considers a small window where only one homogeneous zone is observed, only one inner representative gray level is obtained, and no separability between the two classes is possible. This leads to a low bimodality measure as expressed by Eq. (10), where only $\mu_1$ and $\sigma_1$ or $\mu_2$ and $\sigma_2$ are nonzero. In contrast, when the window size is large, the two classes are well observed and the representative gray level of each class is better estimated than in the first case. This leads to a high bimodality measure.

In fact, in the definition of the Fisher distance, $\mu_1$ and $\mu_2$ can be considered as the representative gray levels of the two populations, and then one way to increase the maximum of the Fisher distance is to increase the numerator and decrease the denominator in expression (10).

FIG. 6. Gray-level histogram and the Fisher distance (solid line) for images 5a, 5b, 5c, and 5d, respectively.

This can be achieved by moving the two representative gray levels $\mu_1$ and $\mu_2$ to two opposite sides and by squeezing the two modes of the gray-level distribution.

The results shown in Fig. 6 confirm that our method produces both effects. Indeed, the two modes are squeezed and well separated. These effects lead to low values for the standard deviations $\sigma_1$ and $\sigma_2$ and a high value for the difference $(\mu_1 - \mu_2)^2$, which correspond to a high value for the Fisher distance.

FIG. 7. Maximum of the Fisher distance versus the window size (the first black point corresponds to the original image).

These results show that the human visual perception of bimodality can sometimes be captured by the statistical distribution of some feature; here, the feature is the gray-level histogram.

FIG. 8. A picture of parasitoid hymenoptera on beans. (a) Original image; (b) image (a) after treatment for window size 7 × 7; (c) image (a) after treatment with contrast amplification $C'_{kl} = C_{kl}^{1/2}$.


FIG. 9. Histograms for the images in Figs. 8a-8c.

7.4. Multimodal Case

To test the effectiveness of the method in the case of a picture containing more than two types of regions, we use a real image. Figure 8a shows the original image, where we can observe small dark insects, lighter beans, and a nearly homogeneous background. Figures 8b and 8c are the images processed by the proposed method. For the image in Fig. 8b the algorithm was stopped at the fourth step, whereas the image in Fig. 8c is the result of applying a contrast enhancement function to the original image. Figure 9 shows the corresponding gray-level histograms. It can be noted in this example that, as the number of region types increases, as shown in Fig. 8a, the peaks become more difficult to distinguish, and segmentation by thresholding becomes difficult. If we threshold to separate the peaks of the original histogram (Fig. 9a), we can segment out the insects. It can also be noted that some regions on the beans have gray levels of the same order as those of the background.

If we apply the method to the image in Fig. 8a, the gray-level histogram is improved. Indeed, as we classify the pixels on the basis of a local property, the edge value, a much more refined spatial classification is obtained, as confirmed by Figs. 8b and 9b.

If a contrast function is applied to the original image, one obtains a better spatial classification, as shown in Figs. 8c and 9c. Indeed, the gray-level histogram of the treated image (Fig. 8c) now contains more than two peaks. Segmenting the image using any multithresholding method becomes easier. One can, for instance, use the Isodata clustering algorithm [17].
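For completeness, here is a sketch of the iterative, Isodata-style two-class threshold commonly attributed to this family of methods; it is a standard formulation and not necessarily the exact variant of [17]:

```python
import numpy as np

def isodata_threshold(image, tol=0.5):
    """Iterative two-class threshold: t converges to the midpoint of the class means."""
    g = image.astype(float).ravel()
    t = g.mean()
    while True:
        low, high = g[g <= t], g[g > t]
        if low.size == 0 or high.size == 0:
            return t  # degenerate histogram: keep the current threshold
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
```

For the multimodal histograms of Fig. 9, one would apply such a scheme between successive peaks or cluster the histogram into more than two groups.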

This example shows that the method can be used to reveal more than two peaks in a nearly unimodal histogram corresponding to an image containing more than two region types.

8. CONCLUSION

In this paper, it has been demonstrated that, by using both a special selective contrast enhancement and bimodality as measures of the separability between two classes, it is possible to discriminate, with a simple feature (the gray-level histogram), the different populations that visual perception usually detects.

We have discussed the noise effect problem and have proposed a solution consisting of the use of special weighting functions, which decrease as the edge value increases.

As already shown in an earlier paper, there are many possible methods of contrast enhancement. However, most of these methods are based on histogram modifications rather than on the transformation of the image itself. In contrast, in our method image enhancement is done by spatial averaging, which does not use the gray-level histogram.

We have also shown that it is possible to simultaneously enhance the contrast and filter the noise.

This technique can be used as a posttreatment in a thresholding or multithresholding process as well as in bimodality detection.

ACKNOWLEDGMENT

The authors thank the referees for their helpful comments.

REFERENCES

1. T. Y. Young and K. Fu (Eds.), Handbook of Pattern Recognition and Image Processing, Academic Press, San Diego, CA, 1986.

2. R. Nevatia, Machine Perception, Prentice-Hall, Englewood Cliffs, NJ, 1982.

3. D. J. Ketcham, R. W. Lowe, and J. W. Weber, Real-time image enhancement techniques, in Seminar on Image Processing, pp. 1-6, Hughes Aircraft, Pacific Grove, CA, 1976.

4. R. Hummel, Image enhancement by histogram transformation, Comput. Graphics Image Process. 6, 1977, 184-195.

5. W. Frei, Image enhancement by histogram hyperbolization, Comput. Graphics Image Process. 6, 1977, 286-294.

6. W. Pratt, Digital Image Processing, Wiley Interscience, New York, 1978.

7. A. Rosenfeld and A. Kak, Digital Picture Processing, 2nd ed., Academic Press, New York, 1982.

8. A. J. McCollum, C. C. Bowman, P. A. Daniels, and B. G. Batchelor, A histogram modification unit for real-time image enhancement, Comput. Vision Graphics Image Process. 42, 1988, 387-398.

9. P. A. Chochia, Image enhancement using sliding histograms, Comput. Vision Graphics Image Process. 44, 1988, 211-229.


10. A. Beghdadi and A. Le Negrate, Contrast enhancement technique based on local detection of edges, Comput. Vision Graphics Image Process. 46, 1989, 162-174.

11. J. S. Weszka, R. N. Nagel, and A. Rosenfeld, IEEE Trans. Comput. 23, 1974, 1322-1326.

12. D. P. Panda and A. Rosenfeld, IEEE Trans. Comput. C-27(9), 1978.

13. T. Y. Phillips, A. Rosenfeld, and A. C. Sher, O(log n) Bimodality Analysis, Tech. Rep. 1900, Computer Vision Laboratory, Center for Automation Research, University of Maryland, College Park, August 1987.

14. J. S. Weszka and A. Rosenfeld, IEEE Trans. Syst. Man Cybernet. SMC-9(1), 1979.

15. R. Gordon and R. M. Rangayyan, Appl. Opt. 23(4), 1984.

16. A. P. Dhawan, G. Buelloni, and R. Gordon, IEEE Trans. Med. Imaging MI-5(1), 1986.

17. F. R. D. Velasco, IEEE Trans. Syst. Man Cybernet. 10, 1980, 771-774.

18. S. Dunn, L. Janos, and A. Rosenfeld, Pattern Recognit. Lett. 1, 1983, 169-173.

19. Y. Nakagawa and A. Rosenfeld, Pattern Recognit. 11, 1979, 191-204.

20. S. Dunn, D. Harwood, and L. S. Davis, Local Estimation of the Uniform Error Threshold, Tech. Rep. 1279, Computer Vision Laboratory, Center for Automation Research, University of Maryland, College Park, April 1983.

21. J. Kittler and J. Illingworth, Pattern Recognit. 19(1), 1986, 41-47.

22. A. Rosenfeld, Digital Picture Processing, 1st ed., Academic Press, New York, 1976.

23. R. C. Gonzalez and P. Wintz, Digital Image Processing, 2nd ed., Addison-Wesley, New York, 1987.

24. M. Nagao and T. Matsuyama, Edge preserving smoothing, Comput. Graphics Image Process. 9, 1979, 394-407.

25. D. C. C. Wang, A. H. Vagnucci, and C. C. Li, Digital image enhancement: A survey, Comput. Vision Graphics Image Process. 24, 1983, 363-381.

26. J.-S. Lee, IEEE Trans. Pattern Anal. Mach. Intelligence PAMI-2(2), 1980, 165-168.

27. J.-S. Lee, Refined filtering of image noise using local statistics, Comput. Graphics Image Process. 15, 1981, 380-389.

28. J. Kittler, J. Illingworth, and J. Foglein, Threshold selection based on a simple image statistic, Comput. Vision Graphics Image Process. 30, 1985, 125-147.