COMPLETELY AUTOMATED SEGMENTATION APPROACH FOR BREAST ULTRASOUND IMAGES USING MULTIPLE-DOMAIN FEATURES

JUAN SHAN, H. D. CHENG, and YUXUAN WANG

Department of Computer Science, Utah State University, Logan, UT, USA

(Received 18 November 2010; revised 29 September 2011; in final form 26 October 2011)

Ultrasound in Med. & Biol., Vol. 38, No. 2, pp. 262–275, 2012. © 2012 World Federation for Ultrasound in Medicine & Biology. doi:10.1016/j.ultrasmedbio.2011.10.022

(Address correspondence to: H. D. Cheng, Ph.D., Department of Computer Science, Utah State University, Logan, UT 84322 USA. E-mail: [email protected])

Abstract—Lesion segmentation is a challenging task for computer-aided diagnosis systems. In this article, we propose a novel and fully automated segmentation approach for breast ultrasound (BUS) images. The major contributions of this work are: an efficient region-of-interest (ROI) generation method is developed, and new features to characterize lesion boundaries are proposed. After a ROI is located automatically, two newly proposed lesion features (phase in max-energy orientation and radial distance), combined with a traditional intensity-and-texture feature, are utilized to detect the lesion by a trained artificial neural network. The proposed features are tested on a database of 120 images and the experimental results prove their strong distinguishing ability. Compared with other breast ultrasound segmentation methods, the proposed method improves the TP rate from 84.9% to 92.8% and the similarity rate from 79.0% to 83.1%, and reduces the FP rate from 14.1% to 12.0%, using the same database. In addition, sensitivity analysis demonstrates the robustness of the proposed method. (E-mail: [email protected]) © 2012 World Federation for Ultrasound in Medicine & Biology.

Key Words: Breast ultrasound segmentation, CAD (computer-aided diagnosis), Region-of-interest (ROI), Phase feature, Distance feature.

INTRODUCTION

Breast cancer is the second leading cause of death for women worldwide, and more than 8% of women will suffer from this disease during their lifetime (Cheng et al. 2010). In 2009, approximately 192,370 newly diagnosed cases and 40,170 deaths were reported in the United States (Jemal et al. 2009). Ultrasound is a useful diagnostic tool for breast cancer. It is widely used in the clinic because it is noninvasive, practically harmless and cost effective (Drukker et al. 2002). However, the number of unnecessary biopsies is still high. Less than 30% of the masses referred to biopsy are actually malignant (Taylor et al. 2002), i.e., more than 70% of the biopsies are unnecessary. Since an unnecessary biopsy hurts patients both mentally and physically, finding a method that can diagnose breast cancer with high accuracy is meaningful and urgent.

Segmentation is one of the most important components and most difficult tasks of a computer-aided diagnosis (CAD) system. Many features for discriminating benign and malignant tumors are based on the contour, shape and texture of the lesion (ACR BI-RADS lexicon (American College of Radiology 2000)). These features can be calculated only after segmentation has been done. Thus, an accurate and effective segmentation method is essential. However, breast ultrasound (BUS) images always have poor quality caused by speckle noise, low contrast and shadow effects. Many existing segmentation methods for BUS images employ model-based methods to detect the lesion boundary. Commonly used models are level sets (Chang et al. 2005; Liu et al. 2009), active contours (Chang et al. 2003; Chen et al. 2003; Madabhushi and Metaxas 2003) and Markov random fields (MRF) (Boukerroui et al. 2003; Xiao et al. 2002). In most model-based approaches, an energy function is formulated and the segmentation task is transformed into finding the minimum (or maximum) of that energy function. Since both image information and relationships between neighboring pixels can be integrated into the energy function, model-based approaches can handle noise while utilizing image information (Liu et al. 2009). However, speckle noise is not the only problem for BUS images. Other problems such as low contrast and blurred edges need to be considered as well. A major drawback of the existing model-based methods is that calculating


energy functions and reformulating the models are always time-consuming, especially for complicated BUS images. While image resolution keeps increasing, the efficiency of a segmentation method becomes more and more important. The second drawback is that the segmentation accuracy is not good enough for real clinical application. Also, pre-labeled regions-of-interest (ROIs) or manually initialized contours are required by most model-based methods, which impedes the complete automation of a CAD system. Finally, many segmentation methods do not provide a fair comparison with others on a common database. To solve the above problems, a novel segmentation method is proposed.

The newly proposed segmentation method for BUS images is composed of three steps: ROI generation, multi-domain feature extraction and classification using an artificial neural network (ANN). The first novelty and advantage of the proposed method is that it achieves complete automation and accurate performance at the same time. For medical image segmentation, there is always a trade-off between automation and accuracy: high accuracy usually requires more human intervention, and complete automation usually comes at the expense of accuracy. The proposed method successfully resolves this contradiction. Second, a novel automatic ROI generation method is developed. Since the ROI can accurately locate the lesion and cut off most of the background, both the efficiency and the accuracy of the proposed segmentation approach are improved. Third, we propose novel multi-domain features to characterize lesion boundaries. Intensity-and-texture-based features have been widely used to detect lesion contours; however, such features do not work well when higher segmentation accuracy is required. Considering the empirical domain knowledge, we propose two new features in the frequency domain and the spatial domain: PMO (phase in max-energy orientation) and RD (radial distance). Fourth, both low-level (region growing) and high-level (ANN) methods are incorporated: the output of the low-level method serves as the intermediate result, and the output of the high-level method is the final lesion contour. Fifth, without utilizing complex models or energy functions, the proposed method is relatively fast. The only time-consuming part is the training of the ANN. However, in real application, training needs to be carried out only once; after the ANN is trained, the processing speed is very fast.

MATERIALS AND METHODS

The database used in this study contains 120 BUS images. The images were collected consecutively, without excluding any data, by the Second Affiliated Hospital of Harbin Medical University (Harbin, China), using a VIVID 7 scanner (GE, Horten, Norway) with a 5–14 MHz linear probe. The aperture of the transducer is 4 cm. The average size of the images is 500 × 400 pixels. The size of the lesions ranged from 0.6 to 6.5 cm, with a median size of 1.0 cm. Among the cases, 58 are benign and 62 are malignant, as validated by pathology. Informed consent was obtained from all patients in this study, and the study protocol was approved by the Institutional Ethics Committee of the university. For each case, the manual delineation of the lesion boundary by an experienced radiologist served as the reference standard.

Automatic ROI generation

Since BUS images contain many different structures (tissues, fat, muscles, etc.) and the lesion area is usually a small part of the entire image, finding the region-of-interest (ROI) is quite helpful for improving the speed and accuracy of segmentation. There are two typical ROI definitions: some papers define the ROI as an initial contour of the lesion (Liu et al. 2009; Madabhushi and Metaxas 2003), while others define it as a rectangular region containing both the lesion and some background (Joo et al. 2004; Yap et al. 2008). In this article, we use the second definition and develop an automatic ROI generation method consisting of two parts: automatic seed point selection and region growing. Region growing is chosen for its simplicity and speed. Its accuracy is not high enough for BUS images; however, the aim of ROI generation is only to roughly locate the lesion rather than to find the accurate boundary, so region growing fits the purpose very well.

Automatic seed point selection. A seed point is the starting point for region growing, and its selection is important for the segmentation result: if the seed point is outside the lesion area, the final segmentation result will not be correct. Due to the low quality of BUS images, most region growing methods require the seed point to be selected manually. Several works have studied automatic seed point selection (Jung et al. 2005; Madabhushi and Metaxas 2003; Poonguzhali and Ravindran 2006). However, most of these methods considered mainly statistical texture features (e.g., the lesion is darker than the surrounding tissues and more homogeneous than other regions), while spatial features were often neglected. Neglecting spatial information makes the possibility of failed seed point selection quite high.

We adapt the automatic seed point selection method developed in our previous work (Shan et al. 2008), which considers both texture features and spatial characteristics. The five steps of the automatic seed point selection algorithm are:

Step 1: Speckle reduction. Speckle reducing anisotropic diffusion (SRAD) (Yu and Acton 2002) can process

Fig. 1. Flowchart of the automatic threshold selection scheme.


the noisy image iteratively with adaptive weighted filters, reducing noise while preserving edges. Given an image I_0(x, y), the output image I(x, y; t) evolves according to the PDE:

\partial I(x, y; t) / \partial t = \operatorname{div}\!\left[ c(q)\, \nabla I(x, y; t) \right], \qquad I(x, y; 0) = I_0(x, y) \quad (1)

where \nabla is the gradient operator and div is the divergence operator. A discrete form of eqn (1) is given by

I_s^{t + \Delta t} = I_s^t + \frac{\Delta t}{|\eta_s|} \sum_{p \in \eta_s} c(q) \left( I_p^t - I_s^t \right) \quad (2)

where I_s^t is the discretely sampled image, s denotes the pixel position in a discrete two-dimensional grid, \Delta t is the time step, \eta_s represents the spatial neighborhood of pixel s and |\eta_s| is the number of pixels in the window (usually four). The diffusion coefficient is determined by

c(q) = \frac{1}{1 + \left[ q^2(x, y; t) - q_0^2(t) \right] / \left[ q_0^2(t) \left( 1 + q_0^2(t) \right) \right]} \quad (3)

where q(x, y; t) is the instantaneous coefficient of variation, determined by

q(x, y; t) = \sqrt{ \frac{ (1/2) \left( |\nabla I| / I \right)^2 - (1/4^2) \left( \nabla^2 I / I \right)^2 }{ \left[ 1 + (1/4) \left( \nabla^2 I / I \right) \right]^2 } } \quad (4)

and the speckle scale function q0(t) is given by

q_0(t) = \frac{\sqrt{\operatorname{var}[z(t)]}}{\overline{z(t)}} \quad (5)

In the above equations, t is the iteration time and z(t) is the most homogeneous area in the image at iteration t. In this work, we set the iteration number to 5 to balance the speckle reduction effect and the efficiency of seed point selection.
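As a concrete illustration, one SRAD iteration of eqns (2)–(5) can be sketched as below. This is a simplified sketch, not the authors' implementation: it takes a fixed speckle scale q0 per call instead of estimating q_0(t) from the most homogeneous area, uses periodic boundaries via np.roll, and evaluates c(q) at the center pixel.

```python
import numpy as np

def srad_step(I, dt=0.05, q0=1.0, eps=1e-8):
    """One simplified SRAD diffusion step (sketch of eqns 1-5)."""
    I = I.astype(float)
    gy, gx = np.gradient(I)                      # central differences
    grad_mag2 = gx ** 2 + gy ** 2
    lap = (np.roll(I, 1, 0) + np.roll(I, -1, 0)
           + np.roll(I, 1, 1) + np.roll(I, -1, 1) - 4 * I)
    # Instantaneous coefficient of variation squared, eqn (4).
    num = 0.5 * grad_mag2 / (I + eps) ** 2 - (1 / 16.0) * (lap / (I + eps)) ** 2
    den = (1 + 0.25 * lap / (I + eps)) ** 2
    q2 = np.clip(num / (den + eps), 0, None)
    # Diffusion coefficient, eqn (3), clipped to [0, 1] for stability.
    c = 1.0 / (1 + (q2 - q0 ** 2) / (q0 ** 2 * (1 + q0 ** 2) + eps))
    c = np.clip(c, 0, 1)
    # Discrete update over the 4-neighbourhood, eqn (2).
    flux = sum(c * (np.roll(I, s, a) - I)
               for s, a in [(1, 0), (-1, 0), (1, 1), (-1, 1)])
    return I + (dt / 4.0) * flux
```

Applied iteratively (5 iterations in the paper's setting), the step smooths speckle while the coefficient c(q) suppresses diffusion across strong edges.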

Step 2: Iterative threshold selection. We develop an automatic threshold selection algorithm to separate the image I into background and foreground. Here, foreground means possible lesion regions, and background means the parts of the image other than lesions. The threshold selection algorithm needs no human intervention and is self-adapting for different images. The procedure is as follows. First, normalize the image I into [0, 1] and let I = 1 − I; calculate all the local minima of the image histogram. Second, let d equal the current local minimum of the histogram (from the smallest to the biggest of the local minima); binarize the de-speckled image using threshold d (the lesion becomes white and the background black) and save it as image I_b. Third, if the ratio of the number of foreground pixels to the number of background pixels is less than 0.1, let d equal the next local minimum; continue until the ratio is no less than 0.1 (a value chosen by experiments). Fourth, perform dilation and erosion on I_b to remove noise. Fifth, find all the connected components in I_b; if none of them intersects the image center region (a window about 1/2 the size of the entire image, located at the image center), let d equal the next local minimum and go to the second step. Sixth, continue until there is a connected component that intersects the center window, and return the current local minimum d as the final threshold. A block diagram of the above algorithm is given in Figure 1.
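The histogram-walking part of Step 2 can be sketched as follows. This is a partial sketch under assumptions: the connected-component test against the center window (the fifth and sixth parts of the procedure) is omitted, and the bin count and strict-minimum test are implementation choices not taken from the paper. The input is assumed to be the de-speckled image already inverted so the lesion is bright.

```python
import numpy as np

def select_threshold(img, min_ratio=0.1, bins=64):
    """Walk histogram local minima until the foreground/background
    pixel ratio reaches min_ratio (sketch of Step 2; the center-window
    connected-component check is omitted)."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    # Local minima: strictly below the left neighbour, not above the right.
    minima = [edges[i + 1] for i in range(1, bins - 1)
              if hist[i] < hist[i - 1] and hist[i] <= hist[i + 1]]
    for d in minima:                 # smallest to largest, as in the paper
        fg = np.count_nonzero(img > d)
        bg = img.size - fg
        if bg > 0 and fg / bg >= min_ratio:
            return d
    return minima[-1] if minima else 0.5   # fallback for this sketch
```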

Step 3: Delete the boundary-connected regions. After binarization, we find all the connected components; each one represents a possible lesion region. Besides the real lesion, there are some regions connected with the image boundary, and they always have large areas. If a boundary-connected region does not intersect the center window (a window about 1/2 the size of the entire image, located at the image center), it is deleted from the lesion candidate list.

Step 4: Rank the regions. Let the remaining connected components be the candidate regions. The following scoring formula is employed to rank all the candidate regions:

S_n = \frac{\sqrt{\mathrm{Area}_n}}{\operatorname{dis}(C_n, C_0) \cdot \operatorname{var}(C_n)}, \qquad n = 1, \ldots, k \quad (6)

where k is the number of candidate regions, Area_n is the number of pixels in region n, C_n is the center of the region, C_0 is the center of the image and dis(a, b) is the

Fig. 2. (a) Original image. (b) Seed point selection result. (c) Region growing result. (d) Rectangular region-of-interest (ROI).


Euclidean distance between points a and b. We formulated eqn (6) based on the knowledge of radiologists working with BUS imaging and the rules they use to detect breast lesions. The region with the highest score is considered the lesion region.
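The scoring rule of eqn (6) can be sketched with a hypothetical score_region helper. One assumption is labeled explicitly: the paper writes var(C_n) without elaborating in this passage, and this sketch interprets it as the intensity variance over the region's pixels (favoring homogeneous candidates).

```python
import numpy as np

def score_region(mask, img, c0):
    """Score one candidate region per eqn (6): sqrt(area) divided by
    (distance of region centre to image centre) times the region's
    intensity variance (the variance interpretation is an assumption).
    Higher scores mean a more lesion-like candidate."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    cn = np.array([ys.mean(), xs.mean()])          # region centroid
    dist = np.linalg.norm(cn - np.asarray(c0)) + 1e-8
    var = img[mask].var() + 1e-8                   # homogeneity term
    return np.sqrt(area) / (dist * var)
```

A large, homogeneous, centrally located region therefore outranks a small, noisy, off-center one, matching the radiologists' rules the formula encodes.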

Step 5: Determine the seed point. Suppose the minimum rectangle containing the winning region is [x_min, x_max; y_min, y_max]. In most cases, the center of the winning region, ((x_min + x_max)/2, (y_min + y_max)/2), can be taken as the seed point. However, there are cases where the lesion shape is irregular and the center point might fall outside the lesion. For such cases, a seed point is selected by the following rule to make sure that it is inside the lesion:

x_{\mathrm{seed}} = (x_{\min} + x_{\max})/2, \qquad y_{\mathrm{seed}} = c_y, \quad c_y \in \{\, y \mid (x_{\mathrm{seed}}, y) \in \text{lesion region} \,\} \quad (7)

Region growing. Region growing is a popular segmentation method that starts from a selected seed point and adds pixels into the region by comparing their intensities with global or local thresholds. The growing procedure stops when no more pixels can be added to the region. The method works well for noise-free images; however, for BUS images it cannot guarantee good performance. Here, we use region growing to obtain a preliminary contour serving as the intermediate result for ROI generation. How the final segmentation result is obtained will be described later.

Let R represent the set containing all the pixels in the region and p be a pixel in R. At the beginning of region growing, R contains only the seed point S_0. A pixel v can be included in R if \exists p \in R such that the following condition is satisfied:

G(v) \le \max\!\left( \frac{M}{b_2},\ \min(b_1 m,\, M) \right) \quad \text{and} \quad N(p) \cap N(v) \ne \varnothing \quad (8)

where G(v) is the intensity value of pixel v, m is the intensity mean of region R, M is the intensity mean of the whole image, and b_1 and b_2 are parameters tuning the thresholds. N denotes the neighborhood under the chosen type of connectivity; in this work, we use 8-connectivity. The above operations are repeated until no more pixels satisfy eqn (8). By experiments, we determined b_1 = 1.5 and b_2 = 1.6, respectively.
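The growing rule of eqn (8) can be sketched as a breadth-first search with 8-connectivity. This is a minimal sketch, not the authors' implementation; the BFS traversal order and the per-pixel recomputation of the region mean are implementation choices.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, b1=1.5, b2=1.6):
    """BFS region growing with the intensity test of eqn (8):
    a neighbour v joins R when G(v) <= max(M/b2, min(b1*m, M)),
    with m the current region mean and M the global mean."""
    img = img.astype(float)
    M = img.mean()
    R = np.zeros(img.shape, dtype=bool)
    R[seed] = True
    total, count = img[seed], 1
    queue = deque([seed])
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]     # 8-connectivity
    while queue:
        y, x = queue.popleft()
        m = total / count                         # current region mean
        thresh = max(M / b2, min(b1 * m, M))
        for dy, dx in nbrs:
            v = (y + dy, x + dx)
            if (0 <= v[0] < img.shape[0] and 0 <= v[1] < img.shape[1]
                    and not R[v] and img[v] <= thresh):
                R[v] = True
                total += img[v]
                count += 1
                queue.append(v)
    return R
```

On a dark lesion in a bright background the threshold settles between the two intensity levels, so the region stops at the lesion border.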

Based on the region growing result, a rectangular ROI is selected from the original image. To make sure that the lesion is completely covered by the ROI and a proper portion of the background is included, we let the rectangular region have a 50-pixel expansion, in both the vertical and horizontal directions, beyond the region growing result. For example, suppose the size of the original image is 500 × 600, as shown in Figure 2a. The seed point selection result is shown in Figure 2b. If the lesion area obtained from region growing lies within the region [150:200, 250:350] in Figure 2c, then the rectangular region [100:250, 200:400] is chosen as the ROI (shown as the white rectangle in Fig. 2d). The reason for such an expansion is to include some surrounding tissue, which will serve as background to outline the lesion in the following steps.
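The 50-pixel expansion amounts to padding the region-growing bounding box and clipping at the image border; a small sketch (the helper name and tuple layout are choices made here, not from the paper):

```python
def expand_roi(bbox, shape, margin=50):
    """Expand (ymin, ymax, xmin, xmax) by `margin` on every side,
    clipped to an image of size shape = (H, W)."""
    ymin, ymax, xmin, xmax = bbox
    h, w = shape
    return (max(ymin - margin, 0), min(ymax + margin, h - 1),
            max(xmin - margin, 0), min(xmax + margin, w - 1))

# The worked example above: [150:200, 250:350] in a 500 x 600 image
# expands to [100:250, 200:400].
```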

Fig. 4. The shortest distance to lesion boundary for pixel p(i, j).

Fig. 3. (a) Region-of-interest (ROI). (b) ROI after de-speckling and histogram equalization. (c) Phase in max-energy orientation (PMO) image. (d) PMO image after median filter.


Multi-domain feature extraction

It is supposed that each pixel in the ROI belongs to one of two classes: lesion or background. To distinguish them, two newly proposed features, phase in max-energy orientation (PMO) and radial distance (RD), and a frequently used texture-and-intensity feature, joint probability (JP), are extracted. The three features are discussed in detail below.

Traditional texture-and-intensity-based feature. Both texture and intensity have a high specificity for characterizing masses in BUS images (Madabhushi and Metaxas 2003). Since both malignant and benign tumors are darker than the glandular tissue or fat (Cheng et al. 2010; Leucht 2000; Stavros et al. 1995), we can use image processing techniques to distinguish lesions from the surrounding tissue and fat by the intensity difference. Besides, homogeneity is another important property of lesions (Madabhushi and Metaxas 2003), which can be described by the following texture feature:

T_v = G(v) - \frac{1}{N} \sum_{i=0}^{N-1} W_i(v) \quad (9)

where T_v is the local variance of pixel v, G(v) is the pixel intensity and W_i(v) are the intensities of the N nearest neighbors of v (N = 8).

We assign each pixel a probability of belonging to a lesion based on its intensity and texture. Assume that the intensity feature and the texture feature are independent (Madabhushi and Metaxas 2003); then the joint probability (JP) can be calculated by

\mathrm{JP} = P(\text{intensity}) \cdot P(\text{texture}) \quad (10)

where P(intensity) is the intensity probability and P(texture) is the texture probability. To obtain P(intensity) and P(texture), we randomly choose 10 images from the image database to generate the probability density functions (pdfs) for intensity and texture, respectively. For each image, the lesion is cropped according to the manual delineation by the radiologist, and the cropped lesion regions are used to generate the corresponding pdfs. These 10 images are not used for training or testing.
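A sketch of eqns (9)–(10) follows. The lesion pdfs are assumed to be given as pre-binned histograms (in the paper they come from the 10 held-out images); the bin count and the absolute value taken on the texture term (so it bins non-negatively) are choices made for this sketch.

```python
import numpy as np

def texture_feature(img):
    """Eqn (9) sketch: |G(v) - mean of the 8 neighbours| per pixel
    (periodic borders via np.roll; abs() is a sketch choice)."""
    img = img.astype(float)
    nbr_sum = sum(np.roll(np.roll(img, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dy, dx) != (0, 0))
    return np.abs(img - nbr_sum / 8.0)

def joint_probability(img, pdf_int, pdf_tex, bins=32):
    """Eqn (10): JP = P(intensity) * P(texture). pdf_int / pdf_tex are
    assumed inputs: lesion histograms of length `bins` over [0, 1]."""
    t = np.clip(texture_feature(img), 0, 1)
    i_idx = np.minimum((img * bins).astype(int), bins - 1)
    t_idx = np.minimum((t * bins).astype(int), bins - 1)
    return pdf_int[i_idx] * pdf_tex[t_idx]
```

With pdfs peaked at low intensity and low texture (dark, homogeneous lesions), a dark uniform patch receives a much higher JP than bright background tissue.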

Energy-based phase feature. Classical edge detection approaches consider edges to be points with high intensity gradients. The local phase provides an alternative way to characterize the structure of an image in the frequency domain: it describes intensity features in terms of the shape of the intensity profile rather than its derivative, and phase has been suggested as a more robust feature for acoustic boundary detection (Noble and Boukerroui 2006).

In this article, we propose a novel energy-based phase feature, PMO. Considering that edge direction varies within a BUS image, we filter the image in the frequency domain along six orientations (0°, 30°, 60°, 90°, 120°, 150°) and extract phase information in the orientation with the maximum energy. The six regions provide an even and thorough coverage of the whole spectrum (0–360°). We choose six orientations because this achieves a good balance between efficiency and accuracy: with more orientations the algorithm would be slow, and with fewer the accuracy could suffer.

To calculate the PMO matrix, the ROI is transferred to the frequency domain and filtered along the six orientations by the following angle filters:

F_n(u, v) = \exp\!\left( - \frac{ \left[ \operatorname{atan2}(u - u_0,\, v - v_0) - \theta_n \right]^2 }{ 2\, (\pi / 1.2N)^2 } \right) \quad (11)

Fig. 5. (a) Region-of-interest (ROI). (b) Binary image generated by artificial neural network (ANN). (c) Binary imageafter postprocessing.


where \theta_n (n = 1, \ldots, 6) corresponds to the six orientations (0°, 30°, 60°, 90°, 120°, 150°), (u_0, v_0) is the origin of the frequency domain and N is the number of orientations. The images are then transferred back to the spatial domain. We save the phase matrix as PH_k and the energy matrix as E_k along each orientation. PMO is obtained from the phase matrix whose corresponding energy is maximum, i.e., for every pixel (i, j),

\mathrm{PMO}(i, j) = \mathrm{PH}_k(i, j), \qquad k = \arg\max_{l = 1, \ldots, 6} E_l(i, j) \quad (12)

(12)

The above idea springs from the fact that when an edge is filtered along the six orientations, the phase information from the orientation nearest to the edge's direction provides the best description of the edge. For example, if an edge lies along the 32° orientation, the phase information from the 30° orientation will characterize the edge better than the phase information along the other five orientations. This makes the proposed phase feature characterize the lesion contour better than using the average phase over all directions or the phase along a single direction only.
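The PMO construction of eqns (11)–(12) can be sketched as below. This is a simplified sketch, not the authors' exact filters: the angular Gaussian is applied symmetrically at θ and θ+180° (conjugate symmetry), and the per-pixel energy is taken as the squared magnitude of each inverse-transformed band.

```python
import numpy as np

def pmo(img, n_orient=6, sigma=None):
    """Keep, per pixel, the phase from the orientation band with the
    maximum energy (sketch of eqns 11-12)."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    v, u = np.meshgrid(np.arange(w) - w // 2, np.arange(h) - h // 2)
    ang = np.arctan2(u, v)                      # angle of each frequency
    sigma = sigma or np.pi / (1.2 * n_orient)   # width as in eqn (11)
    best_energy = np.full((h, w), -1.0)
    out = np.zeros((h, w))
    for n in range(n_orient):
        theta = n * np.pi / n_orient            # 0, 30, ..., 150 degrees
        # Wrapped angular distance so theta and theta+180 both pass.
        d = np.angle(np.exp(1j * 2 * (ang - theta))) / 2
        Fn = F * np.exp(-(d ** 2) / (2 * sigma ** 2))
        band = np.fft.ifft2(np.fft.ifftshift(Fn))
        energy = np.abs(band) ** 2              # E_l of eqn (12)
        phase = np.angle(band)                  # PH_l of eqn (12)
        take = energy > best_energy
        out[take] = phase[take]
        best_energy[take] = energy[take]
    return out
```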

PMO is a robust feature for characterizing edges. However, it strengthens not only the lesion contour but also other edge information. Since we are interested only in the lesion contour, we need to fade out the information other than the lesion's boundary. We multiply the PMO feature image by the de-speckled ROI to reduce the edges inside the lesion, and process the PMO image with a 5 × 5 median filter to reduce the edges outside the lesion. Figure 3 shows an example of the above processing to obtain the final PMO image: Figure 3a is the ROI image; Figure 3b is the ROI after speckle reduction by SRAD (Yu and Acton 2002); Figure 3c is the PMO image; and Figure 3d is the image after median filtering.

Distance feature. The last feature extracted is the radial distance (RD) to the lesion contour. Consider two pixels with similar JP and PMO features: if one pixel is near the lesion region and the other is far away, the near one is more likely to belong to the lesion. However, neither the JP feature nor the PMO feature can differentiate such cases. Therefore, we propose a feature that weights pixels by their distance to the lesion contour: the nearer a pixel is to the boundary, the higher its weight. In this way, pixels on or near the lesion contour are emphasized, and pixels far from the boundary are faded out.

Here, the problem is how to locate the lesion's contour. We use the rough contour generated by the region growing method in the ROI generation section. Although this rough contour is only an intermediate result, it can be used for the RD feature calculation. For every pixel inside the lesion, we assign its distance d(i, j) as 0; for every pixel outside the lesion, we assign d(i, j) as the shortest distance to the lesion boundary. As shown in Figure 4, for pixel p, the shortest distance to the lesion boundary is the Euclidean distance between pixel p and pixel q, where q is the boundary pixel nearest to p. All the distances are normalized into the range [0, 1] and subtracted from 1. Mathematically, the RD feature is calculated by

For all p(i, j) \in image,

\mathrm{RD}(i, j) = 1 - \left( \frac{d(i, j)}{\max d(i, j)} \right)^{\beta} \quad (13)

d(i, j) = \begin{cases} 0 & \text{if } p(i, j) \in \text{lesion} \\ \lVert p(i, j) - q(i, j) \rVert & \text{if } p(i, j) \notin \text{lesion} \end{cases} \quad (14)

where the exponent \beta controls the decay rate and is set to \beta = 5 by experiment, q(i, j) is the pixel on the boundary nearest to p(i, j) and \lVert \cdot \rVert is the two-dimensional (2-D) Euclidean distance.
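Eqns (13)–(14) can be sketched directly from a rough lesion mask. This brute-force version computes the distance of every pixel to every boundary pixel; it is adequate for a sketch, though a distance transform would be the efficient choice in practice.

```python
import numpy as np

def radial_distance(lesion_mask, beta=5):
    """RD of eqns (13)-(14): d = 0 inside the rough mask, else the
    distance to the nearest boundary pixel; RD = 1 - (d/max d)**beta."""
    mask = lesion_mask.astype(bool)
    # Boundary: lesion pixels with at least one non-lesion 4-neighbour.
    pad = np.pad(mask, 1)
    nb = (pad[:-2, 1:-1] & pad[2:, 1:-1]
          & pad[1:-1, :-2] & pad[1:-1, 2:])
    boundary = mask & ~nb
    by, bx = np.nonzero(boundary)
    if by.size == 0:                       # degenerate mask: no boundary
        return np.ones(mask.shape)
    ys, xs = np.indices(mask.shape)
    d = np.sqrt((ys[..., None] - by) ** 2
                + (xs[..., None] - bx) ** 2).min(-1)
    d[mask] = 0.0                          # zero inside the lesion
    return 1 - (d / max(d.max(), 1e-8)) ** beta
```

Pixels inside the rough contour get weight 1, and the weight decays toward 0 with the fifth power of the normalized distance, quickly fading the far background.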

Classification

To identify whether a pixel belongs to the lesion or the background, we employ a machine learning method to

Fig. 6. Areas corresponding to true positive (TP), false positive(FP) and false negative (FN) regions.


classify the pixels based on the extracted features. Two types of popular classifiers, the artificial neural network (ANN) and the support vector machine (SVM), are utilized and compared. An artificial neural network (Freeman and Skapura 1992) is a mathematical model that imitates the properties of biologic nervous systems and the functions of adaptive biologic learning. A support vector machine (Cristianini and Shawe-Taylor 2000) is a supervised learning technique that seeks an optimal hyperplane to separate two classes of samples. We use the Matlab (The MathWorks, Inc., Natick, MA, USA) ANN toolbox and libsvm (Chang and Lin 2001). The performance comparison of these two methods is discussed in the results section.

Training and testing. Both the ANN and the SVM are trained using 10-fold cross-validation. The ANN is a general multilayer perceptron (MLP) neural network using the back-propagation learning rule. We choose the Matlab default transfer function, tan-sigmoid, for the hidden layers and the pure linear function (purelin) for the output layer. Training is stopped when the predefined maximum epoch (50) is reached. Ten images are randomly selected from the set of 120 images to generate the pdfs for the intensity and texture features. To make the pdfs independent of the training and testing sets, these 10 images are not used

Fig. 7. (a) Original image. (b) Region-of-interest (ROI). (c) Joint probability (JP) feature image. (d) Phase in max-energy orientation (PMO) feature image. (e) Radial distance (RD) feature image. (f) Automatic segmentation result. (g) Manually segmented result.

in the training and testing processes. The remaining 110 images are randomly and evenly divided into 10 subsets. In each round of the cross-validation, nine subsets are used for training and the remaining one for testing. This procedure is repeated 10 times so that each subset is used for testing once. The average performance over the 10 runs is reported for all the experiments in the next section.

We use each pixel of the image as a training sample. There are 99 images in every training set, and each image (actually, the ROI cropped from the original image) has about 200 × 200 pixels. We randomly pick 10% of the pixels in each image and combine the pixels from the 99 images to generate a training set. The same process is repeated for all the other training subsets.
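The fold construction described above can be sketched as follows (a minimal sketch: the shuffling seed and slicing scheme are choices made here, not from the paper).

```python
import random

def make_folds(n_images=110, n_folds=10, seed=0):
    """Split image indices into equal folds for 10-fold cross-validation
    (110 images remain after 10 are held out for the pdfs)."""
    idx = list(range(n_images))
    random.Random(seed).shuffle(idx)
    # Strided slicing gives 10 disjoint folds of 11 images each.
    return [idx[i::n_folds] for i in range(n_folds)]
```

Each round trains on nine folds (99 images, sampling 10% of each image's pixels) and tests on the remaining fold of 11 images.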

In the test process, every test subset contains 11 images. Every pixel of a test image is input into the trained network, and the output of the network indicates whether the pixel belongs to the lesion or the background. Thus, after all the pixels in a test image are classified, a binary image is obtained in which the lesion is white and the background is black.

Postprocessing. The generated binary image contains noise and needs postprocessing, which includes: (1) removing unconnected small regions around the lesion; (2) filling small holes inside the lesion (if there are any); and (3) refining the lesion contour by dilation and erosion. The contour is refined by eroding with a square structuring element (9 × 9) and then dilating twice with a disk structuring element (diameter = 3). This process removes little sharp branches from the contour. After postprocessing, the image contains only the lesion region and the contour becomes smoother. Figure 5 shows an example: Figure 5a is the ROI image, Figure 5b is the binary image generated by the trained ANN and Figure 5c is the binary image after postprocessing.
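The contour-refinement step (3) can be sketched with plain binary morphology. This is a sketch under assumptions: the exact disk shape for "diameter = 3" is interpreted here as a 3 × 3 plus-shaped element, and the roll-based operators use periodic borders.

```python
import numpy as np

def binary_dilate(mask, se):
    """Dilation of a boolean mask by a boolean structuring element."""
    out = np.zeros_like(mask)
    cy, cx = se.shape[0] // 2, se.shape[1] // 2
    for dy, dx in zip(*np.nonzero(se)):
        out |= np.roll(np.roll(mask, dy - cy, 0), dx - cx, 1)
    return out

def binary_erode(mask, se):
    """Erosion by duality: complement, dilate with reflected SE, complement."""
    return ~binary_dilate(~mask, se[::-1, ::-1])

def refine_contour(mask):
    """Erode with a 9x9 square, then dilate twice with a small disk,
    removing thin sharp branches (sketch of postprocessing step 3)."""
    square = np.ones((9, 9), bool)
    disk = np.array([[0, 1, 0],
                     [1, 1, 1],
                     [0, 1, 0]], bool)   # assumed "diameter = 3" disk
    out = binary_erode(mask, square)
    for _ in range(2):
        out = binary_dilate(out, disk)
    return out
```

A one-pixel-wide spur cannot survive the 9 × 9 erosion, while the lesion body shrinks and is then partly regrown by the two small dilations.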


Fig. 8. (a) Original image. (b) Region-of-interest (ROI). (c) Joint probability (JP) feature image. (d) Phase in max-energy orientation (PMO) feature image. (e) Radial distance (RD) feature image. (f) Automatic segmentation result. (g) Manually segmented result.

Segmentation approach for breast ultrasound images • J. SHAN et al. 269

Evaluation metrics

We compare the proposed method with a popular BUS segmentation method (Madabhushi and Metaxas 2003) and a recently published method (Liu et al. 2010). Madabhushi and Metaxas' method is a well-developed hybrid automatic segmentation method for BUS images, using the deformable snake model. The method of Liu et al. (2010) is a level-set based method combining both global statistical information and local edge information. Liu et al. compared their method with the geodesic/geometric active contour (GAC) model, the active contours without edges (ACWE) model and the method in Chang et al. (2005). Experiments show that the method of Liu et al. (2010) produces much better segmentation results than the other three methods, using comprehensive evaluation metrics. In this work, the comparison between different methods is conducted on the same database and the following evaluation metrics are employed.

Fig. 9. (a) Original image. (b) Region-of-interest (ROI). (c) Joint probability (JP) feature image. (d) Phase in max-energy orientation (PMO) feature image. (e) Radial distance (RD) feature image. (f) Automatic segmentation result. (g) Manually segmented result.

Area error metrics. Area error metrics evaluate what percentage of the lesion region is correctly covered by the generated lesion region and how much is wrongly covered. The true positive (TP) area ratio, the false positive (FP) area ratio, the false negative (FN) area ratio and the similarity (SI) (Udupa et al. 2002; Liu et al. 2009) will be calculated.

TP Area Ratio = |A_m ∩ A_a| / |A_m|   (15)

FP Area Ratio = |A_m ∪ A_a − A_m| / |A_m|   (16)

FN Area Ratio = 1 − TP Area Ratio = |A_m ∪ A_a − A_a| / |A_m|   (17)


Table 1. Ten-fold cross-validation results of different ANN topologies and SVMs

Type  Structure                    TP (%)  FP (%)  SI (%)  Avg. HD  Avg. MD
ANN   1st layer: 6, 2nd layer: 6    92.7    13.1    82.7     23.1      6.1
ANN   1st layer: 8, 2nd layer: 8    93.3    15.4    81.9     23.7      6.8
ANN   1st layer: 6, 2nd layer: 0    92.8    12.0    83.1     22.8      5.8
ANN   1st layer: 10, 2nd layer: 0   92.3    12.5    82.7     22.9      6.0
SVM   C = 10, g = 0.01              93.1    14.2    82.0     22.6      6.4
SVM   C = 10, g = 0.1               92.4    12.1    83.0     22.3      6.0
SVM   C = 5, g = 0.1                92.4    12.9    82.1     22.4      6.1
SVM   C = 2, g = 0.1                92.4    13.2    81.8     22.3      6.1

TP = true positive; FP = false positive; SI = similarity; HD = Hausdorff distance; MD = mean absolute distance; ANN = artificial neural network; SVM = support vector machine.

270 Ultrasound in Medicine and Biology Volume 38, Number 2, 2012

SI = |A_m ∩ A_a| / |A_m ∪ A_a|   (18)

A_a is the pixel set of the lesion region automatically generated by the proposed method and A_m is the pixel set of the lesion region manually outlined by the radiologist. Figure 6 shows the areas corresponding to TP, FP and FN. While TP and FP each evaluate a specific aspect, SI provides an overall evaluation of how similar the lesion region generated by the proposed approach is to the radiologist's delineation. Mathematically, there is no need to list the FN area ratio if the TP area ratio is given, since FN = 1 − TP.
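Under the set definitions above, the four area metrics of eqns (15)–(18) reduce to a few lines of NumPy (a sketch; `area_error_metrics` is a hypothetical helper name):

```python
import numpy as np

def area_error_metrics(a_m, a_a):
    """Area error metrics: a_m is the manually outlined lesion mask,
    a_a the automatically generated one (boolean arrays of equal shape)."""
    a_m = a_m.astype(bool)
    a_a = a_a.astype(bool)
    inter = np.logical_and(a_m, a_a).sum()
    union = np.logical_or(a_m, a_a).sum()
    m = a_m.sum()
    tp = inter / m           # eqn (15): |Am ∩ Aa| / |Am|
    fp = (union - m) / m     # eqn (16): |Am ∪ Aa − Am| / |Am|
    fn = 1.0 - tp            # eqn (17)
    si = inter / union       # eqn (18): |Am ∩ Aa| / |Am ∪ Aa|
    return tp, fp, fn, si
```

Note that FP normalizes the extra automatic area by |A_m|, so it can exceed 1 for a gross over-segmentation, while SI is always in [0, 1].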

Boundary error metrics. We use the Hausdorff distance (HD) and the mean absolute distance (MD) (Madabhushi and Metaxas 2003) to analyze the difference between the contour generated by the computer and the contour marked by the radiologist. We denote the manually delineated boundary as Q = {q_1, q_2, …, q_h} and the computer segmentation result as P = {p_1, p_2, …, p_a}, where each element of Q or P is a point on the corresponding contour. For every point p_j ∈ P, j = 1, …, a, we define the shortest distance of p_j to the contour Q as

d(p_j, Q) = min_w ‖p_j − q_w‖,  w = 1, …, h   (19)

Table 2. Ten-fold cross-validation results of different feature combinations

Features     TP (%)  FP (%)  SI (%)  Avg. HD  Avg. MD  Norm.HD  Norm.MD
JP            86.2    17.2    76.1     35.5      8.4      6.6      1.6
PMO           87.3    16.3    77.7     35.2      8.1      6.9      1.6
RD            91.7    13.3    81.2     24.4      6.3      4.4      1.2
RD&JP         92.1    13.1    82.4     23.2      6.0      4.3      1.1
RD&PMO        92.2    12.6    83.0     23.1      5.9      4.2      1.1
JP,RD&PMO     92.8    12.0    83.1     22.8      5.8      4.1      1.1

TP = true positive; FP = false positive; SI = similarity; HD = Hausdorff distance; MD = mean absolute distance; JP = joint probability; PMO = phase in max-energy orientation; RD = radial distance.

where ‖·‖ is the 2-D Euclidean distance, and a and h are the numbers of boundary pixels on contours P and Q, respectively. HD and MD are defined as

HD = max_j d(p_j, Q),  j = 1, …, a   (20)

MD = (1/a) Σ_{j=1}^{a} d(p_j, Q)   (21)

HD measures the worst possible disagreement between two contours, while MD measures the average disagreement over two outlines. The corresponding normalized errors Norm.HD and Norm.MD (Madabhushi and Metaxas 2003) are computed by

Norm.HD = HD / h,  Norm.MD = MD / h   (22)
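Eqns (19)–(21) can be computed directly from the two point sets (a NumPy sketch of the directed P → Q distances described in the text; the function name is illustrative):

```python
import numpy as np

def boundary_error_metrics(p, q):
    """Hausdorff distance (HD) and mean absolute distance (MD) from the
    computer contour P to the manual contour Q.

    p: (a, 2) array of contour points, q: (h, 2) array of contour points.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    # eqn (19): d(p_j, Q) = min_w ||p_j - q_w|| for every point of P
    d = np.sqrt(((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)).min(axis=1)
    hd = d.max()   # eqn (20): worst disagreement
    md = d.mean()  # eqn (21): average disagreement
    return hd, md
```

Dividing both values by h, the number of points on the manual contour, gives the normalized errors of eqn (22).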

RESULTS

We first show three examples of the segmentation results in Figures 7–9. For each figure, (a) is the original image, (b) is the ROI, (c) is the JP feature image, (d) is the PMO feature image, (e) is the RD feature image, (f) is the automatic segmentation result and (g) is the


Table 3. Performance of Madabhushi and Metaxas' method (2003), Liu's method (2010) and the proposed method

Methods                          TP (%)  FP (%)  SI (%)  Avg.HD  Avg.MD  Norm.HD  Norm.MD
Madabhushi and Metaxas' method    84.9    14.1    79.0    29.0     7.1     4.7      1.3
Liu's method                      92.0    18.2    81.7    39.0     7.7     7.5      1.5
Proposed method                   92.8    12.0    83.1    22.8     5.8     4.1      1.1

TP = true positive; FP = false positive; SI = similarity; HD = Hausdorff distance; MD = mean absolute distance.


manual delineation by the radiologist. Figure 7 shows a case with a small lesion in a noisy background. The proposed algorithm successfully locates the ROI and finds the contour precisely without including any non-lesion region. In Figure 8, a big lesion with an angular margin is segmented by the proposed method. The algorithm captures the detailed contour without smoothing away the angular information. Figure 9 shows a low-contrast BUS image in which the boundary of the lesion is blurry. Our method is able to deal with such cases well. The experiments demonstrate that the proposed algorithm can accurately segment lesions of various sizes and shapes, even when the background is quite complex, the lesion margin is tortuous and the image quality is poor.

Next, we carry out the statistical analysis and evaluation on the whole dataset.

Classifier determination

The first experiment compares the performances of the ANN and the SVM. Different topologies of the ANN are tested. Among the tested topologies, the ANN with three input nodes, six neurons in the hidden layer and one output node achieves the best performance. For the SVM, we utilize the implementation of libsvm (Chang and Lin 2001). A radial basis function (RBF) is chosen as the kernel and different parameter combinations (cost parameter C and g of the radial basis kernel) are tried, using the same training and testing data with all three features as for the ANN. Some results of the ANN and SVM are listed in Table 1. Both area error and boundary error metrics are used. There is no big difference between the performances of the two classifiers. However, in our experiments, the ANN (9.5 s) runs faster than the SVM (18.1 s). Therefore, we choose the ANN (three input nodes, six neurons in the hidden layer and one output node) as the classifier for the further study.

Fig. 10. ROC analysis of three methods.
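The classifier comparison could be reproduced with scikit-learn stand-ins (an assumption — the paper used libsvm and its own ANN implementation); the chosen topology and the RBF parameters mirror Table 1, while the data here is a synthetic placeholder for the sampled three-feature pixels:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the 3-feature pixel samples (JP, PMO, RD);
# the real training data comes from the sampled ROI pixels.
rng = np.random.default_rng(0)
x = rng.normal(size=(400, 3))
y = (x.sum(axis=1) > 0).astype(int)  # toy lesion/background label

# ANN: 3 input features, one hidden layer of 6 neurons, 1 output
ann = MLPClassifier(hidden_layer_sizes=(6,), max_iter=2000, random_state=0)
# SVM: RBF kernel with one of the (C, gamma) pairs tried in Table 1
svm = SVC(kernel="rbf", C=10, gamma=0.1)

for clf in (ann, svm):
    clf.fit(x, y)
    print(type(clf).__name__, round(clf.score(x, y), 3))
```

With comparable accuracy from both classifiers, the deciding factor in the paper was the ANN's faster per-case prediction time.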

Features discriminability test

The second experiment is designed to test the discriminability of the proposed features in separating lesion and background. We use stepwise feature selection to process the feature set and use both area error and boundary error metrics to evaluate the segmentation performance of different feature combinations.

Table 2 lists the segmentation performance using different feature combinations. The ranking of the distinguishing ability of the three features is RD > PMO > JP. Therefore, RD is added to the feature set first.

Then, we add the second feature. The improvement from adding the PMO feature is larger than that from adding JP: the TP rate increases to 92.2%, the FP rate drops to 12.6% and the similarity rate increases to 83.0%. This means that more lesion region (true positive area) and less non-lesion region (false positive area) are covered. The increased similarity rate indicates that the overall

Table 4. Running time of Madabhushi and Metaxas' method (2003), Liu's method (2010) and the proposed method

Method                          Average running time on one case
Madabhushi and Metaxas' method  22.1 s
Liu's method                    60.4 s
Proposed method                  9.5 s

Table 5. Performance for different values of b

b   TP (%)  FP (%)  SI (%)  Avg. HD  Avg. MD  Norm.HD  Norm.MD
3    92.6    11.9    83.0     22.9      6.0      4.2      1.1
4    92.7    12.1    82.8     22.7      6.1      4.1      1.1
5    92.8    12.0    83.1     22.8      5.8      4.1      1.1
6    91.0    11.5    82.0     24.8      6.6      4.6      1.3
7    92.4    12.3    82.2     23.0      6.2      4.3      1.2

TP = true positive; FP = false positive; SI = similarity; HD = Hausdorff distance; MD = mean absolute distance.

Table 6. Standard deviations of tuning parameters and structures

Parameters or structures in variation    HD   MD
b                                        0.9  0.3
Sampling percentage                      0.3  0.2
b1                                       6.0  2.7
b2                                       7.4  2.3
ANNs with different topologies           0.4  0.4
SVMs with different parameters           0.2  0.1
ANNs and SVMs                            0.5  0.3

HD = Hausdorff distance; MD = mean absolute distance; ANN = artificial neural network; SVM = support vector machine.


segmentation results are more accurate. All the boundary errors decrease (the average HD error is reduced to 23.1 and the average MD to 5.9), meaning that the automatic segmentation contours are closer to the manual delineations. Therefore, PMO is the second feature added to the feature set.

The last step uses all three features to train the network. The result shows that with the complete feature set, both area and boundary metrics improve: the TP rate increases to 92.8%, the FP rate drops to 12.0%, the similarity rate increases to 83.1%, the HD error is reduced to 22.8 and the MD error is reduced to 5.8. Since the performance is best with the complete feature set, all three features are utilized in the following experiments.
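The greedy stepwise selection described above can be sketched generically (a sketch; `forward_select` and `evaluate` are illustrative names — here `evaluate` simply replays the SI rates of Table 2):

```python
def forward_select(features, evaluate):
    """Greedy forward stepwise selection: start from the empty set and
    repeatedly add the feature that most improves evaluate(subset),
    stopping when no candidate improves the score."""
    chosen, remaining = [], list(features)
    best = float("-inf")
    while remaining:
        # score every candidate extension of the current subset
        scored = [(evaluate(chosen + [f]), f) for f in remaining]
        score, f = max(scored)
        if score <= best:
            break
        best = score
        chosen.append(f)
        remaining.remove(f)
    return chosen, best

# Replay the SI rates (%) of Table 2 as the selection criterion.
si = {
    frozenset(["JP"]): 76.1, frozenset(["PMO"]): 77.7, frozenset(["RD"]): 81.2,
    frozenset(["RD", "JP"]): 82.4, frozenset(["RD", "PMO"]): 83.0,
    frozenset(["JP", "RD", "PMO"]): 83.1,
}
order, best = forward_select(["JP", "PMO", "RD"], lambda s: si[frozenset(s)])
print(order, best)  # ['RD', 'PMO', 'JP'] 83.1
```

This reproduces the order reported in the text: RD first, then PMO, then JP.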

Comparison with other methods

The performances of the snake method (Madabhushi and Metaxas 2003), the level-set method (Liu et al. 2010) and the proposed method are listed in Table 3. The proposed method outperforms both the snake and level-set methods. In particular, the overall evaluation SI rate is greatly improved from 79.0% (snake) and 81.7% (level-set) to 83.1%, which means the average lesion contour generated by the proposed method is much closer to the radiologist's manual delineation than those generated by the other two methods. The Hausdorff distance (HD) is decreased from 29.0 pixels (snake) and 39.0 pixels (level-set) to 22.8 pixels and the mean absolute distance (MD) is decreased from 7.1 pixels and 7.7 pixels to 5.8 pixels, which also shows that the proposed method produces more accurate lesion contours. A receiver operating characteristic (ROC) analysis of the performance of the three methods is given in Figure 10.

Table 7. Performance for different sampling percentages

Sampling percentage  TP (%)  FP (%)  SI (%)  Avg. HD  Avg. MD  Norm.HD  Norm.MD
10%                   92.8    12.0    83.1     22.8      5.8      4.1      1.1
5%                    92.7    12.3    82.8     23.0      6.1      4.2      1.1
3%                    92.7    12.6    82.6     23.5      6.3      4.3      1.1
2%                    92.6    12.6    82.6     23.4      6.3      4.3      1.1

TP = true positive; FP = false positive; SI = similarity; HD = Hausdorff distance; MD = mean absolute distance.

We also compare the computational time of the three methods. For a machine learning method, the most time-consuming step is always the training. Fortunately, training can be performed off-line and only needs to be done once. In this work, the training time for the whole dataset is about 8 min, while the average processing time for each test case is only 9.5 s, using a 3.0 GHz Pentium processor. The computational times of the snake and level-set methods are 22.1 s and 60.4 s, respectively (Table 4). The faster processing time of the proposed method makes real-time application more feasible.

Sensitivity analysis

To evaluate segmentation methods, three aspects will be considered: precision (reproducibility), accuracy (agreement with truth) and efficiency (time taken) (Udupa et al. 2002). We have already evaluated the accuracy and efficiency of the proposed method in the above sections. To assess reproducibility, one needs to choose a figure of merit (FOM), repeat the segmentation under possible variations and determine the changes in the FOM via statistical analysis. In this section, we repeat the proposed segmentation method while varying its parameters and employ HD and MD as the FOMs. Standard deviations of the FOMs are calculated to measure the method's reproducibility.

Effect of different values for b. The first test is onparameter b for RD calculation. RD is calculated based


Table 8. Performance for various values of b1 when b2 = 1.6

b1   TP (%)  FP (%)  SI (%)  Avg. HD  Avg. MD  Norm.HD  Norm.MD
1.3   92.5    12.2    82.7     23.2      6.3      4.2      1.1
1.4   92.7    12.2    82.9     22.7      6.1      4.1      1.1
1.5   92.8    12.0    83.1     22.8      5.8      4.1      1.1
1.6   90.4    41.4    79.7     32.2     10.2      7.1      2.3
1.7   91.2    42.3    78.8     34.9     11.6      7.4      2.3

TP = true positive; FP = false positive; SI = similarity; HD = Hausdorff distance; MD = mean absolute distance.

Fig. 11. A failure case: (a) Original image. (b) The manual delineation of a radiologist. (c) The contour generated by the proposed method.


on the distance to the lesion boundary [see eqn (13)]. For pixels outside the lesion, their RD values decrease as their distances to the lesion boundary increase. Parameter b controls the decay rate. We vary b by up to 40% and compute the corresponding area error and boundary error metrics, which are listed in Table 5. The standard deviations of HD and MD are only 0.9 and 0.3 pixels (Table 6, 1st row). The small standard deviations show that when tuning different b values, the average change of the automatically detected contours is less than 1 pixel. This result shows that the method is robust to parameter b.

Effect of different sampling percentages. To generate the training sets, 10% of the pixels in every training image are randomly selected. Now, we want to test whether the method is insensitive to this percentage. By varying the percentage, the size of the training sets changes. Table 7 and Table 6 (2nd row) list the performance and standard deviations for different sampling percentages. The standard deviations are only 0.3 pixel for HD and 0.2 pixel for MD, meaning that when tuning different sampling percentages, the average change of the automatically detected contour is less than half a pixel. Thus, we can conclude that the method is very robust to the sampling percentage within the range 2%–10%. In addition, if the sampling percentage is reduced to 2%, both the training and testing will speed up.

Variations in b1 and b2. Parameters b1 and b2 are the thresholds for the region growing algorithm. Different region growing results will lead to different ROIs and different RD features, since the ROI and RD features are obtained

Table 9. Performance for various values of b2 when b1 = 1.5

b2   TP (%)  FP (%)  SI (%)  Avg. HD  Avg. MD  Norm.HD  Norm.MD
1.4   90.4    25.3    77.3     41.5     10.9      7.0      1.9
1.5   90.2    18.4    80.0     34.6      8.8      5.3      1.5
1.6   92.8    12.0    83.1     22.8      5.8      4.1      1.1
1.7   90.4    12.6    80.7     27.1      6.1      4.3      1.1
1.8   90.2    12.1    80.9     27.6      6.1      4.4      1.1

TP = true positive; FP = false positive; SI = similarity; HD = Hausdorff distance; MD = mean absolute distance.

based on the region growing result. By tuning b1 and b2, the size of the ROI and the values of RD may change as well. We calculate the corresponding area error and boundary error metrics (Table 8 for tuning b1 and Table 9 for tuning b2). The standard deviations of HD and MD are listed in Table 6 (3rd and 4th rows): 6.0 and 2.7 pixels when tuning b1, and 7.4 and 2.3 pixels when tuning b2. This result indicates that the method is sensitive to parameters b1 and b2. Therefore, choosing proper values of b1 and b2 is quite important for achieving good performance.

Sensitivity to classifiers with different topologies and parameters. In the Classifier determination section,


we list the experimental results using ANNs with different topologies and SVMs with different parameters (Table 1). The standard deviations of HD and MD are given in Table 6 (5th–7th rows). The small standard deviations demonstrate the robustness of the feature set: the 5th row of Table 6 indicates that the feature set is robust to ANNs with different topologies; the 6th row shows that the feature set is robust to SVMs with different parameters; and the last row shows that the feature set is robust to different classifiers (ANN and SVM).

Failure cases

Figure 11 gives a failure case of the proposed method. Figure 11a is the original image, Figure 11b is the manual delineation of a radiologist (reference standard) and Figure 11c is the output of the proposed method. The reason for such a failure is that the surrounding tissues or shadows have very similar intensity to that of the lesion. Therefore, the algorithm cannot distinguish them well. Failure cases caused by intensity similarity are challenging for automatic lesion boundary detection and CAD systems.

SUMMARY

In this article, we propose a novel segmentation method for BUS images. The proposed method is completely automatic, requiring no human intervention. At the same time, it achieves high segmentation accuracy. Compared with other segmentation algorithms for BUS images, the proposed method produces more accurate lesion contours (the SI rate is improved from 79.0% to 83.1%) with an efficient processing time (on average 9.5 s per case).

The better performance is due to two reasons, which are also the novelties and major contributions of this work. (1) The automatically generated ROI enables the algorithm to focus on the lesion region. Getting rid of the complicated background not only speeds up the process, but also makes the segmentation results more accurate. (2) The newly proposed features PMO and RD better distinguish the pixels inside and outside the lesion. PMO is a robust feature for characterizing edges. RD emphasizes pixels near the lesion boundary by assigning them a bigger weight based on their distance to the lesion contour. Experimental results show that PMO and RD have strong distinguishing ability to separate the lesion from the background, especially when they are combined.

Sensitivity analysis shows that the method is veryrobust to most of its parameters (standard deviations arewithin 1 pixel), except parameters b1 and b2; i.e., thechoice of b1 and b2 is important to the performance.

The overall experimental results demonstrate thatthe proposed automatic segmentation method for BUSimages is accurate, efficient and robust.

REFERENCES

American College of Radiology. ACR standards 2000–2001. Reston, VA: American College of Radiology; 2000.

Boukerroui D, Baskurt A, Noble JA, Basset O. Segmentation of ultrasound images—multiresolution 2D and 3D algorithm based on global and local statistics. Pattern Recog Lett 2003;24:779–790.

Chang CC, Lin CJ. LIBSVM: A library for support vector machines. 2001. Software available at: http://www.csie.ntu.edu.tw/~cjlin/libsvm. Accessed June 2010.

Chang RF, Wu WJ, Moon WK, Chen DR. Automatic ultrasound segmentation and morphology based diagnosis of solid breast tumors. Breast Cancer Res Treatment 2005;89:179–185.

Chang RF, Wu WJ, Moon WK, Chen WM, Lee W, Chen DR. Segmentation of breast tumor in three-dimensional ultrasound images using three-dimensional discrete active contour model. Ultrasound Med Biol 2003;29:1571–1581.

Chen DR, Chang RF, Wu WJ, Moon WK, Wu WJ. 3-D breast ultrasound segmentation using active contour model. Ultrasound Med Biol 2003;29:1017–1026.

Cheng HD, Shan J, Ju W, Guo YH, Zhang L. Automated breast cancer detection and classification using ultrasound images: A survey. Pattern Recog 2010;43:299–317.

Cristianini N, Taylor JS. An introduction to support vector machines. Cambridge, UK: Cambridge University Press; 2000.

Drukker K, Giger ML, Horsch K, Kupinski MA, Vyborny CJ, Mendelson EB. Computerized lesion detection on breast ultrasound. Med Phys 2002;29:1438–1446.

Freeman JA, Skapura DM. Neural networks: Algorithms, applications, and programming techniques. New York: Addison-Wesley; 1992.

Jemal A, Siegel R, Ward E, Hao Y, Xu J, Thun MJ. Cancer statistics, 2009. CA Cancer J Clin 2009;59:225–249.

Joo S, Yang YS, Moon WK, Kim HC. Computer-aided diagnosis of solid breast nodules: Use of an artificial neural network based on multiple sonographic features. IEEE Trans Med Imaging 2004;23:1292–1300.

Jung IS, Thapa D, Wang GN. Automatic segmentation and diagnosis of breast lesions using morphology method based on ultrasound. Fuzzy Syst Knowledge Discov 2005;3614:1079–1088.

Leucht W, Leucht D. Teaching atlas of breast ultrasound. New York: Thieme Medical; 2000:24–38.

Liu B, Cheng HD, Huang J, Tian JW, Liu J, Tang XL. Automated segmentation of ultrasonic breast lesions using statistical texture classification and active contour based on probability distance. Ultrasound Med Biol 2009;35:1309–1324.

Liu B, Cheng HD, Huang JH, Tian JW, Tang XL, Liu JF. Probability density difference-based active contour for ultrasound image segmentation. Pattern Recog 2010;43:2028–2042.

Madabhushi A, Metaxas DN. Combining low-, high-level and empirical domain knowledge for automated segmentation of ultrasonic breast lesions. IEEE Trans Med Imaging 2003;22:155–169.

Noble JA, Boukerroui D. Ultrasound image segmentation: A survey. IEEE Trans Med Imaging 2006;25:987–1010.

Poonguzhali S, Ravindran G. A complete automatic region growing method for segmentation of masses on ultrasound images. International Conference on Biomedical and Pharmaceutical Engineering. 2006:88–92.

Shan J, Cheng HD, Wang YX. A novel automatic seed point selection algorithm for breast ultrasound images. Proceedings of the IEEE International Conference on Pattern Recognition (ICPR'08), Dec. 8–11, Florida, USA, 2008.

Stavros AT, Thickman D, Rapp CL, Dennis MA, Parker SH, Sisney GA. Solid breast nodules: Use of sonography to distinguish between benign and malignant lesions. Radiology 1995;196:123–134.

Taylor KJ, Merritt C, Piccoli C, Schmidt R, Rouse G, Fornage B, Rubin E, Smith DG, Winsberg F, Goldberg B, Mendelson E. Ultrasound as a complement to mammography and breast examination to characterize breast masses. Ultrasound Med Biol 2002;28:19–26.

Udupa JK, LeBlanc VR, Schmidt H, Imielinska C, Saha PK, Grevera GJ, Zhuge Y, Currie LM, Moholt P, Jin Y. A methodology for evaluating image segmentation algorithms. Medical Imaging 2002: Image Processing. Proc SPIE 2002;4684:266–277.

Xiao GF, Brady M, Noble JA, Zhang YY. Segmentation of ultrasound B-mode images with intensity inhomogeneity correction. IEEE Trans Med Imaging 2002;21:48–57.

Yap MH, Edirisinghe EA, Bez HE. A novel algorithm for initial lesion detection in ultrasound breast images. J Appl Clin Med Phys 2008;9:2741.

Yu YJ, Acton ST. Speckle reducing anisotropic diffusion. IEEE Trans Image Processing 2002;11:1260–1270.