
A Modular Approach on Adaptive Thresholding for Extraction of Mammalian Cell Regions from Bioelectric Images in Complex Lighting Environments

Inder K Purohit, Praveen Sankaran, K. Vijayan Asari, and Mohammad A. Karim

Department of Electrical and Computer Engineering Old Dominion University, Norfolk, VA 23529

{ipuro001, psank001, vasari, mkarim}@odu.edu

ABSTRACT

A modular approach to adaptive thresholding for segmentation of cell regions in bioelectric images with complex lighting environments and background conditions is presented in this paper. Preprocessing steps involve low-pass filtering of the image and local contrast enhancement. The enhanced image is then adaptively thresholded, which produces a binary image. The binary image consists of cell regions and the edges of a metal electrode that show up as bright spots. A local, region-based approach is used to distinguish between the cell regions and the metal electrode tip that causes bright spots. Regional properties such as area are used to separate the cell regions from the non-cell regions. Special emphasis is given to the detection of twin and triplet cells with the help of the watershed transformation; such cells might otherwise be lost if form-factor alone were used as the geometrical descriptor to separate the cell and non-cell regions.

Keywords: bioelectric images, image segmentation, adaptive thresholding, cell segmentation, local contrast enhancement, watershed transformation, template matching

1. INTRODUCTION

Image segmentation plays a vital part in the field of bioelectrics. Segmentation algorithms are used to detect cancer cell regions in bioelectric images under poor lighting and varied background conditions. In order to stem the flow of blood from the diseased cells, the cancer cells are struck with electric pulses, which cause their nuclei to shrink [1]. In order to target the harmful cells, they are to be marked with boundaries, separately from the background and the other bright spots that might appear in the image. Manually marking the boundaries can be a very difficult job; for this reason, an automatic cell segmentation algorithm is of importance [2]. A modular approach for carrying out cell segmentation is presented in this paper. The images are first preprocessed for noise removal. The preprocessed image is then adaptively thresholded, and region-based techniques are used to differentiate between cell and non-cell regions. The cell regions are then marked with a single-pixel-width boundary to clearly indicate the presence of the cancer cells in the image and their location. The proposed procedure is implemented on different mammalian cell images with varied lighting conditions and backgrounds, and the results are discussed.

Owing to the need for automated segmentation, there have been many attempts to develop a good automatic segmentation algorithm [3-5]. Some of them have been more useful than others, but good automatic segmentation remains a challenging problem even today. Otsu's method [6] is a popular segmentation technique, and most segmentation algorithms today are based on Otsu's procedure. However, Otsu's method does not give satisfactory results when there is inadequate lighting. This magnifies the importance of a preprocessing step, as the cell images under consideration are poorly lit. The technique discussed in this paper is an effort to develop a good segmentation algorithm with the ability to differentiate between the cell regions and the non-cell regions in the image, while also making sure that twin and triplet cells are detected; these are generally lost if form-factor alone is used as the geometrical descriptor. The paper is organized into the following sections. Section 2 describes the proposed segmentation algorithm. Section 3 discusses the experimental results. Concluding discussion is given in Section 4.


2. ADAPTIVE SEGMENTATION TECHNIQUE

2.1 Preprocessing

The cell images under consideration suffer from poor lighting conditions and a cluttered background. In order to have a good segmentation result, it is necessary that the noise present in the image is removed or reduced considerably. To do so, the images are preprocessed by low-pass filtering and then subjected to local contrast enhancement. The segmentation algorithm begins after the visibility of the image is improved through the preprocessing steps.

2.1.1 Low-pass Filtering

Low-pass filtering helps in getting rid of the noisy speckles that appear as bright spots in the image. Figs. 1(a) and 1(b) show typical bioelectric images, where the bright round objects are the cells and the vertical bright strip on the right side of the image in Fig. 1(b) is the edge of a metal electrode. The metal electrodes are used to apply electric pulses. The image looks very noisy as it is not illuminated properly, owing to the expansion of the light beam and its reflection. Low-pass filtering this image removes the unwanted noise and helps in achieving a better segmentation result. The image is filtered using a disk kernel. The advantages of a disk kernel are that it minimizes the diagonal influence, provides a sharper cut-off frequency, and is less time consuming [7]. The equation defining a disk kernel is given by Eq. (1) and the working of the filter is shown in Eq. (2).


Fig. 1 Examples of mammalian cell images

Let the disk kernel be defined by f(x, y) and the radius of the kernel by r. Then we have,

$f(x, y) = \begin{cases} 0, & (x, y) \in \left\{ (x - x_0)^2 + (y - y_0)^2 > r^2 \right\} \\ \dfrac{1}{\pi r^2}, & (x, y) \in \left\{ (x - x_0)^2 + (y - y_0)^2 \le r^2 \right\} \end{cases}$    (1)

If I(x, y) is the original image of size M × N, then

$I'(x, y) = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} I(x - m,\, y - n) \cdot f(m, n)$    (2)
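As an illustration (not part of the original paper), Eqs. (1)-(2) can be sketched in Python with NumPy and SciPy; the helper names `disk_kernel` and `lowpass`, and the synthetic test image, are placeholders introduced here for clarity.

```python
import numpy as np
from scipy import ndimage

def disk_kernel(r):
    """Disk kernel of Eq. (1): 1/(pi*r^2) inside radius r, 0 outside."""
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    kernel = np.zeros((2 * r + 1, 2 * r + 1))
    kernel[x**2 + y**2 <= r**2] = 1.0 / (np.pi * r**2)
    return kernel

def lowpass(img, r):
    """Convolution of Eq. (2): I'(x, y) = sum_m sum_n I(x-m, y-n) f(m, n)."""
    return ndimage.convolve(img.astype(float), disk_kernel(r), mode='nearest')

# Example usage on a synthetic image (stand-in for a bioelectric frame):
img = np.random.rand(256, 256)
I1 = lowpass(img, r=2)   # low-pass version I'(x, y)
```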

2.1.2 Local Contrast Enhancement

Local contrast enhancement is the next stage of the preprocessing steps performed before the actual segmentation algorithm begins. The local contrast enhancement is used to improve the luminance difference between the brighter regions and the background. On performing the local contrast enhancement, the edges of the cells can be clearly seen, which helps in getting better segmentation results. For performing the local contrast enhancement, a low-pass version I'(x, y) of the input image is obtained as given in Eq. (2). The image I'(x, y) is then further low-pass filtered with a bigger disk kernel to get the image I''(x, y). The images I'(x, y) and I''(x, y) are shown in Fig. 2(a) and 2(b), respectively. The difference between the low-pass image I'(x, y) and its further low-passed version I''(x, y) is given as,

$I_d(x, y) = I'(x, y) - I''(x, y)$    (3)

For the local contrast enhancement, first an image dependent parameter “a” is calculated by the following equation,

$a = e^{-\mathrm{mean}(I_d(x, y))}$    (4)

The local contrast enhancement is then obtained by using the following equation,

$I_{LCE} = I'(x, y) + a \cdot I_d(x, y)$    (5)

The segmentation algorithm is then carried out on the local contrast enhanced image I_LCE, which is shown in Fig. 3.
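Following the kernel radii given in the Fig. 2 caption (2 and 40), the enhancement of Eqs. (3)-(5) might look like the sketch below. It reuses the `lowpass` helper from the previous snippet; the function name is an assumption introduced here, not the authors' implementation.

```python
import numpy as np

def local_contrast_enhance(img, r_small=2, r_big=40):
    """Local contrast enhancement of Eqs. (3)-(5)."""
    I1 = lowpass(img, r_small)          # I'(x, y), Eq. (2)
    I2 = lowpass(I1, r_big)             # I''(x, y), further low-passed version
    Id = I1 - I2                        # difference image, Eq. (3)
    a = np.exp(-Id.mean())              # image-dependent parameter, Eq. (4)
    return I1 + a * Id                  # I_LCE, Eq. (5)
```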

Fig. 2 (a) Low-pass version of the original image with disk kernel radius = 2; (b) Low-pass version of the image in Fig. 2(a) with disk kernel radius = 40

Fig. 3 Local Contrast Enhanced Image

2.2 Adaptive Thresholding

Having a fixed threshold for all the images will not work, as there are bound to be illumination differences between different images. For this reason, we go for an adaptive thresholding method. In this method, based on discriminant analysis, the image is partitioned into two classes C_0 and C_1, which can be classified as objects and background respectively. If the threshold is at a gray level t, then the two classes can be given as C_0 = {0, 1, ..., t} and C_1 = {t+1, t+2, ..., L-1}, where L is the number of gray levels in the image. Let σ_B² and σ_T² be the between-class and total variance, respectively. An optimum threshold t* can be obtained by maximizing the between-class variance,

$t^* = \arg\max_{0 \le i \le L-1} \{\eta(i)\}$    (6)

where,


$\eta = \frac{\sigma_B^2}{\sigma_T^2}$    (7)

$\sigma_B^2 = w_0 w_1 (\mu_1 - \mu_0)^2$    (8)

$\sigma_T^2 = \sum_{i=0}^{L-1} (i - \mu_T)^2 \frac{n_i}{M}$    (9)

w_0 and w_1 are the fractions of pixels present in C_0 and C_1, respectively, and are given by,

$w_0 = \sum_{i=0}^{t} \frac{n_i}{M} \quad \text{and} \quad w_1 = 1 - w_0$    (10)

n_i is the number of pixels at the i-th gray level, and M is the total number of pixels in the image. µ_0 and µ_1 are the class means for C_0 and C_1, respectively, and are given by,

$\mu_0 = \frac{\mu_t}{w_0} \quad \text{and} \quad \mu_1 = \frac{\mu_T - \mu_t}{1 - w_0}$    (11)

where

$\mu_t = \sum_{i=0}^{t} i \cdot \frac{n_i}{M} \quad \text{and} \quad \mu_T = \sum_{i=0}^{L-1} i \cdot \frac{n_i}{M}$    (12)
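As a compact illustration of Eqs. (6)-(12), the optimum threshold can be computed directly from the gray-level histogram. The function below is a sketch added for this write-up (the name `otsu_threshold` and the histogram input convention are assumptions): it takes the counts n_i and returns both t* and the maximized η.

```python
import numpy as np

def otsu_threshold(hist):
    """Return t* maximizing eta = sigma_B^2 / sigma_T^2, per Eqs. (6)-(12)."""
    hist = hist.astype(float)
    M = hist.sum()                            # total number of pixels
    L = hist.size                             # number of gray levels
    i = np.arange(L)
    p = hist / M                              # n_i / M
    mu_T = (i * p).sum()                      # total mean, Eq. (12)
    sigma_T2 = ((i - mu_T) ** 2 * p).sum()    # total variance, Eq. (9)

    w0 = np.cumsum(p)                         # Eq. (10), for every candidate t
    mu_t = np.cumsum(i * p)                   # cumulative mean, Eq. (12)
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)               # guard against empty classes
    mu0 = np.where(valid, mu_t / np.maximum(w0, 1e-12), 0.0)   # Eq. (11)
    mu1 = np.where(valid, (mu_T - mu_t) / np.maximum(w1, 1e-12), 0.0)
    sigma_B2 = w0 * w1 * (mu1 - mu0) ** 2     # between-class variance, Eq. (8)
    eta = np.where(valid, sigma_B2 / sigma_T2, 0.0)             # Eq. (7)
    t_star = int(np.argmax(eta))              # Eq. (6)
    return t_star, eta[t_star]
```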

The gray level t* as found from Eq. (6) is taken as the threshold. Using t*, the image is further divided into two classes and a new image I(1) is formed such that all the pixels in the original image that are higher than t* are excluded from I. Hence, the pixels contained in I will have a range C given by {0, 1, 2, ..., t*}. This Otsu procedure is applied recursively, and a separability factor called the cumulative limiting factor (CLF) is used to find an appropriate threshold after each iteration. The CLF for the ∆-th iteration is defined (similar to η) as,

$CLF(\Delta) = \frac{\sigma_B^2(\Delta)}{\sigma_T^2} \quad \text{for } \Delta \ge 1$    (13)

Here σ_B²(∆) is calculated as in Eq. (8) by taking w_0, w_1, µ_0 and µ_1 from the progressive image I(∆). The appropriate threshold t*(∆) is obtained by maximizing the value of CLF(∆) for the image I(∆) [8]. The iterative procedure stops when CLF(∆) becomes smaller than an empirically determined value, such as

$CLF(\Delta) \le \alpha \frac{\mu_T}{\sigma_T^2}$    (14)

where α is a constant known as the limiting parameter, which is obtained by a number of repeated experiments on the images. The problem here is that Eq. (14) depends on the factor α, which is found by training on the image set. The parameter α has to be different for different groups of images, depending on the camera lighting available and the reflection characteristics of the objects in the image [9]. This is a challenging issue for the images under consideration. The dependence on the factor α can be removed by calculating the limiting factor from the spatial characteristics of the image. The image has a spatial distribution that is random in nature. By measuring this randomness, the spatial data can be analyzed in a number of ways [10-11]. It is assumed in spatial data analysis that the observations follow a Poisson distribution, whose characteristic feature is that its mean is equal to its variance. A natural test for the Poisson distribution is the ratio of the sample variance to the sample mean, called the relative variance [12-13], which can be computed as:

$rV = \frac{\sigma_T^2}{\mu_T}$    (15)

In all the experiments performed, the relative variance was always found to be greater than one, which indicates that the frequency distribution of the test images corresponds to a negative binomial distribution. Hence, the separability factor for obtaining the optimum threshold is related to the relative variance and is defined as the square root of its inverse. That is,

$SF = \sqrt{\frac{\mu_T}{\sigma_T^2}}$    (16)


The cumulative limiting factor is compared with the separability factor, and the process of recursive thresholding is stopped when the condition

$CLF(\Delta) < SF$    (17)

is satisfied. Based on this, the algorithm can be stopped at the point where the pixels are present as dense clusters, which is a characteristic of the negative binomial distribution. This clustered nature of the pixels causes the object region to be prominent with the background completely thresholded. The optimum threshold found by using Otsu's procedure does not give very good results on the contrast enhanced images that are under consideration [7]. In order to get the correct segmentation results, the threshold is modified slightly based on experiments as,

$t_{new}^* = \frac{t^*}{1.8676}$    (18)
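Putting Eqs. (13)-(18) together, the recursive procedure might be sketched as below. This is an illustrative reconstruction, not the authors' MATLAB code: it reuses the `otsu_threshold` sketch given after Eq. (12), quantizes the enhanced image to 256 levels (an assumption), and reads σ_T² in Eq. (13) as the total variance of the full histogram.

```python
import numpy as np

def adaptive_threshold(img, n_levels=256):
    """Recursive Otsu-style thresholding with the relative-variance stop rule."""
    gray = np.asarray(img, dtype=float)
    gray = np.round((gray - gray.min()) / (gray.max() - gray.min() + 1e-12)
                    * (n_levels - 1)).astype(int)
    hist = np.bincount(gray.ravel(), minlength=n_levels).astype(float)

    i = np.arange(n_levels)
    M = hist.sum()
    mu_T = (i * hist / M).sum()
    sigma_T2 = ((i - mu_T) ** 2 * hist / M).sum()
    SF = np.sqrt(mu_T / sigma_T2)                  # separability factor, Eq. (16)

    upper = n_levels                               # progressive image I(Delta): levels {0..upper-1}
    while True:
        sub = hist[:upper]
        t_star, eta = otsu_threshold(sub)          # t*(Delta) and eta(Delta)
        if t_star <= 0 or t_star + 1 >= upper:
            break
        # sigma_B^2(Delta) = eta(Delta) * total variance of the progressive histogram
        i_sub = np.arange(upper)
        p = sub / sub.sum()
        mu_sub = (i_sub * p).sum()
        sigma_T2_sub = ((i_sub - mu_sub) ** 2 * p).sum()
        clf = eta * sigma_T2_sub / sigma_T2        # CLF(Delta), Eq. (13)
        if clf < SF:                               # stop rule, Eq. (17)
            break
        upper = t_star + 1                         # keep only pixels {0, ..., t*}
    t_new = t_star / 1.8676                        # empirical adjustment, Eq. (18)
    return gray > t_new                            # binary image of candidate regions
```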

With the new threshold value t*_new, better segmentation results can be obtained on the contrast enhanced images. The need for the new threshold value arises since Otsu's procedure cannot be used directly, as there has been a change in the image's statistics. The segmentation results with the new threshold value are better than the results with the threshold found earlier. The image output after applying the thresholding algorithm is given in Fig. 4(a), and the plot of the between-class variance (y-axis) against the gray levels (x-axis) is shown in Fig. 4(b). It can be seen from Fig. 4(a) that some tiny speckles are also picked up along with the cells. For some images even the metallic strip can be segmented along with the cells, as seen in Fig. 4(c). A method is therefore needed to get rid of all the unwanted regions of the image that have been segmented along with the cells, while making sure that most of the cells are still detected. It can be seen from the plot in Fig. 4(b) that the between-class variance is maximized for the given image at a gray level of approximately 80.

2.3 Erosion and Dilation

Erosion and dilation are considered to be the two basic operations in mathematical morphology, as all the other morphological operations can be broken down into these two [14]. The erosion and dilation of the image are performed in order to smoothen the boundaries of the cells in the segmented image. The cell boundaries are originally smooth, but the noise in the image and the non-uniform reflection throughout the image reduce the smoothness of the cell boundaries. Erosion is performed on the binary image with a disk kernel, followed by two dilations with a smaller disk kernel than the one used for erosion. Fig. 5(a) shows the binary image after erosion, and Figs. 5(b) and 5(c) show the binary image after the two dilation processes. These steps make sure that the boundaries of the cells are smooth again and the effect of the noise is negated. The small "holes" (missing pixels) that might be present inside the connected regions are then filled in (an illustrative sketch of this step is given after the Fig. 5 caption below).

2.4 Template Matching

Since all the cells in the image are circular in shape, the process of separating the cell regions from the non-cell regions starts with a template matching step. However, it should be taken into consideration that not all cells may be perfectly circular, and that there might be twin and triplet cells, which alter the appearance of the cells. Also, there might be small circular non-cell regions in the image which are to be detected and separated from the cells. First, a circular template is created. The diameter of the circular template depends on the highest number of rows or columns out of all the connected regions in the image. The diameter of the template is thus given as,

$d = \max_i(row_i, col_i)$    (19)

where the subscript i ranges from 1 to the total number of connected regions in the image.

This means that the diameter of the template will be the highest row or column count that can be found in any of the connected regions. In order to incorporate twins and triplets as cell regions, a factor β is used in the template matching step, which allows some imperfect circular shapes (due to two or more cells clustering together) to also be considered as circles. The value of β ranges from 0 to 1; the higher the value, the more imperfect the shapes are allowed to be. The template thus created is run over the binary image and tested for the presence of all the circular regions in the image. Usually there are smaller non-cell circular regions present in an image which get detected along with the cells. In order to remove such regions, area is used as a threshold to filter out the smaller unwanted connected regions. Only the regions with area (in terms of pixels) over a certain threshold are considered as cell regions; if the area of a connected region is found to be below this threshold, then that region is discarded as a non-cell region and is not considered in the final segmentation. An image with the circles detected and with area used as a threshold is shown in Fig. 6. Here it can be seen that the twin and triplet cells have been considered as cells while the smaller non-circular portions are removed.
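The region-level filtering of Sec. 2.4 might be sketched with scikit-image connected-component tools as below. The paper does not spell out the matching score, so the disk-fill ratio, the tolerance β = 0.5, and the minimum area of 200 pixels are illustrative assumptions; Eq. (19)'s template diameter is approximated per region by its bounding-box extent.

```python
import numpy as np
from skimage import measure

def filter_cell_regions(binary, beta=0.5, min_area=200):
    """Keep connected regions that roughly match a circular template (Sec. 2.4)."""
    labels = measure.label(binary)
    keep = np.zeros_like(binary, dtype=bool)
    for r in measure.regionprops(labels):
        if r.area < min_area:                      # area threshold removes tiny speckles
            continue
        # extent of the region (largest row or column span), cf. Eq. (19)
        d = max(r.bbox[2] - r.bbox[0], r.bbox[3] - r.bbox[1])
        disk_area = np.pi * (d / 2.0) ** 2
        # fill ratio against a disk of the same extent: ~1 for a circle,
        # lower for twins/triplets, much lower for thin strips (electrode edge);
        # beta relaxes the test so clustered cells are still accepted
        if r.area / disk_area >= 1.0 - beta:
            keep[labels == r.label] = True
    return keep
```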


Fig. 4 (a) Binary image after thresholding; (b) Plot of between-class variance vs. gray levels; (c) Binary image with the metallic plate segmented as well


Fig. 5 (a) After erosion with a disk kernel; (b) After dilation with a smaller disk kernel; (c) After the second dilation process using the same kernel as used for the first dilation
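For the smoothing step of Sec. 2.3, one possible sketch uses scikit-image morphology; the disk radii (3 for erosion, 2 for the two dilations) are assumptions for illustration, since the paper does not state the kernel sizes.

```python
from scipy import ndimage
from skimage import morphology

def smooth_binary(binary, r_erode=3, r_dilate=2):
    """Erode once, dilate twice with a smaller disk, then fill holes (Sec. 2.3)."""
    out = morphology.binary_erosion(binary, morphology.disk(r_erode))
    out = morphology.binary_dilation(out, morphology.disk(r_dilate))
    out = morphology.binary_dilation(out, morphology.disk(r_dilate))
    return ndimage.binary_fill_holes(out)   # fill small "holes" inside the regions
```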

2.5 Boundary Tracking

In order to detect twins as two separate cells, the watershed transform is used for boundary tracking. The watershed transform is a well-known mathematical morphological process used for image segmentation [15]. It is an efficient and accurate way of partitioning the connected regions of the image, in our case the connected cells. The watershed transformation can be defined as follows: if we consider the image to be a topographical relief, with the height of each point directly proportional to the gray level at that point, and if we consider rain water falling on the terrain, then the watersheds are the lines that separate the catchment basins that are formed [16]. In order to mark out the cell regions separately, a single-pixel-width boundary is drawn around the connected regions. The boundaries marked after the watershed transformation are shown in Fig. 7. It can be seen here that two or more connected cell regions are detected as separate cells. There are a few cases of over-segmentation, but, more importantly, a majority of the cells are picked out and detected for the final segmentation.
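A hedged sketch of this step, using the common distance-transform and marker-based watershed from scipy and scikit-image, is shown below; the 15 × 15 peak-detection footprint is an assumption, and the paper may mark catchment basins differently.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def split_touching_cells(binary):
    """Watershed on the negated distance map so connected twins separate."""
    dist = ndimage.distance_transform_edt(binary)
    # one marker per local maximum of the distance map (ideally one per cell)
    peaks = peak_local_max(dist, footprint=np.ones((15, 15)),
                           labels=ndimage.label(binary)[0])
    markers = np.zeros(binary.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-dist, markers, mask=binary)
    return labels   # label image; boundaries between labels separate touching cells
```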

Fig. 6 Image after detecting circles and using area as threshold

Fig. 7 Boundaries marked with watershed transformation


The final step is to superimpose the image with the boundaries marked on top of the original image, in order to clearly show the cell regions that have been detected in the mammalian cell image. The final segmented image is shown in Fig. 8. The boundaries of the cell regions can be seen clearly, while most of the non-cell regions are not segmented.
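This overlay could be produced along the following lines with scikit-image; `find_boundaries` with mode='inner' yields single-pixel contours of each labelled cell, and drawing them at the maximum intensity is an illustrative choice rather than the paper's exact rendering.

```python
from skimage.segmentation import find_boundaries

def overlay_boundaries(original, labels):
    """Superimpose single-pixel cell boundaries on the original image (Sec. 2.5)."""
    edges = find_boundaries(labels, mode='inner')
    out = original.astype(float).copy()
    out[edges] = out.max()      # draw boundaries at the maximum intensity
    return out
```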

3. EXPERIMENTAL RESULTS

The proposed algorithm has been tested using MATLAB 7.1. The experiments were done on a PC with a Pentium 4 processor (2.4 GHz clock, 1.0 GB RAM) on a Windows XP platform. The mammalian cell images used for testing the algorithm were provided by the Center for Bio-electrics at Old Dominion University. The cell regions are to be marked in order to apply a strong electric pulse, and manually determining the position of these cells can be a cumbersome job. The bioelectric samples were therefore provided in order to develop a procedure that automates the detection and segmentation of the cell regions, since the electric pulses are to be applied only to the cancer cells while ensuring that none of the healthy cells are touched and none of the cancer cells go unnoticed. Some more examples of input-output pairs obtained using the proposed algorithm are shown in Fig. 9. The algorithm gives good results for different backgrounds and under varied lighting conditions. As seen from the first pair of images in Fig. 9, there are a couple of twin cells and the metallic strip is also quite evident. Most of the cells have been detected well, while the metallic strip is not a part of the final segmentation. Similarly, from the other pairs of images (2nd, 3rd and 4th pairs) it can be seen that the cell regions have been picked up quite well, while the other non-cell regions such as the metallic plates and the tiny particles have not been segmented along with the cell regions. However, in the 5th input-output pair it is seen that there are a couple of false positives. Some of the non-cell regions on the left side of the image have also been picked up as cell regions. This is because the appearance of these regions is very close to that of the cell regions and their area is almost equal to the area of the cell regions. Another aspect that needs attention is picking up the cells that are very poorly lit, such as the ones pointed out by the white arrows in the output image of the 6th input-output pair. These cells are not illuminated enough to have a significant difference from the background. As a result, these cells go unnoticed even after performing the contrast enhancement.

The effectiveness of the proposed algorithm is shown with the help of the images in Fig. 10. The image in Fig. 10(a) is the input, whereas the image in Fig. 10(b) is the segmented image from the proposed algorithm. Figs. 10(c) and 10(d) show the segmentation results of other algorithms [2,7]. It can be seen from Fig. 10 that the proposed algorithm is able to detect more cell regions than the other two algorithms. As seen in Fig. 10(a), there is a cell very close to the metallic electrode. When the other algorithms are run, this cell is not detected: being so close to the metallic plate, it is considered to be a part of the plate and is removed from the final segmentation once the form-factor is applied. However, since the proposed algorithm does not depend on the form-factor to determine the circular shapes of the objects, and due to the erosion and dilation kernels used, such cells are also identified as cell regions and segmented along with the other cells.

4. CONCLUSIONS

The proposed segmentation algorithm has been successfully used for the segmentation of cell regions in bioelectric images with poor lighting environments and complex background conditions. The local contrast enhancement helps in improving the visual difference between the cell regions and the background and thereby improves the thresholding process. Twin and triplet cells are successfully detected. The metallic plates and other non-cell regions in the image are removed from the final segmentation based on template matching and regional properties of the connected regions such as area. With the help of the watershed transformation, connected cells are segmented as two or more separate cells.


Fig. 8 Final Segmented Image


Fig. 9 Input-output pairs for different mammalian cell images under varied lighting and background conditions. The images in the first column are the input images and the images in the second column are the output images.


Fig. 10 (a) Input image; (b) Segmentation result from the proposed algorithm; (c) Segmentation result with technique in [7] and (d) Segmentation result with technique in [2].


ACKNOWLEDGEMENTS

The authors of this paper would like to thank the Center for Bio-Electrics at Old Dominion University for providing us with the mammalian cell images used for testing the algorithm.

REFERENCES

[1] W. Frey, K. Baumung, J. F. Kolb, N. Chen, J. White, M. A. Morrison, S. J. Beebe, and K. H. Schoenbach, "Real-time imaging of the membrane charging of mammalian cells exposed to nanosecond pulsed electric fields," Conf. Rec. of the 26th Intern. Power Modulator Conf., PMC'04, San Francisco, CA, 216-219, (2004).
[2] P. Sankaran and K. V. Asari, "Adaptive thresholding based mammalian cell segmentation for cell-destruction activity verification," IEEE International Workshop on Applied Imagery and Pattern Recognition, AIPR-2006, Washington DC, (2006).
[3] N. R. Pal and S. K. Pal, "A review on image segmentation techniques," Pattern Recognition, 26, 1277-1294, (1993).
[4] C. A. Glasbey, "An analysis of histogram-based thresholding algorithms," CVGIP: Graphical Models and Image Processing, 55(6), 532-537, (1993).
[5] R. M. Haralick and L. G. Shapiro, "Survey: image segmentation techniques," Computer Vision, Graphics, and Image Processing, 29, 100-132, (1985).
[6] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man, and Cybernetics, 9, 62-66, (1979).
[7] L. Tao and K. V. Asari, "A heuristic approach for the extraction of region and boundary of mammalian cells in bio-electric images," Proceedings of the IS&T/SPIE Symposium on Electronic Imaging: Image Processing: Algorithms and Systems V, San Jose, CA, 6064, 60640C-1-12, (2006).
[8] S. Kumar, K. V. Asari, and D. Radhakrishnan, "Real-time automatic extraction of lumen region and boundary from endoscopic images," IEE Journal of Medical & Biological Engineering & Computing, 37(5), 600-604, (1999).
[9] H. Tian, T. Srikanthan, and K. V. Asari, "An automatic segmentation algorithm for the extraction of lumen region and boundary from endoscopic images," IEE Journal of Medical & Biological Engineering & Computing, 39, 8-14, (2001).
[10] G. J. G. Upton and B. Fingleton, Spatial Data by Example, Volume 1: Point Pattern and Quantitative Data, Wiley, (1985).
[11] A. Rogers, Statistical Analysis of Spatial Dispersion: The Quadrant Method, Methuen, (1974).
[12] A. R. Clapham, "Over dispersion in grassland communities and the use of statistical methods in plant ecology," Journal of Ecology, 24, 232-251, (1936).
[13] P. L. Rosin, "Thresholding for change detection," ICCV '98, 274-279, (1998).
[14] J. Serra, Image Analysis and Mathematical Morphology, Vol. 1, Academic Press, (1982).
[15] S. Beucher and F. Meyer, "The morphological approach to segmentation: the watershed transform," in Mathematical Morphology in Image Processing, E. R. Dougherty, Ed., New York: Marcel Dekker, 12, 433-481, (1993).
[16] V. Grau, A. U. J. Mewes, M. Alcañiz, R. Kikinis, and S. K. Warfield, "Improved watershed transform for medical image segmentation using prior information," IEEE Transactions on Medical Imaging, 23, 447-458, (2004).
