Multithreshold Segmentation by Using an Algorithm Based on the Behavior of Locust Swarms

Research Article

Erik Cuevas,1,2 Adrián González,1 Fernando Fausto,1 Daniel Zaldívar,1,2 and Marco Pérez-Cisneros3

1 Departamento de Electrónica, Universidad de Guadalajara, CUCEI, Avenida Revolución 1500, 44430 Guadalajara, JAL, Mexico
2 Centro Tapatío Educativo AC, Vicente Guerrero 232, Colonia Zapopan Centro, 45100 Zapopan, JAL, Mexico
3 Centro Universitario de Tonalá, Avenida Nuevo Periférico No. 555, Ejido San José Tatepozco, 48525 Tonalá, JAL, Mexico

Correspondence should be addressed to Erik Cuevas; [email protected]

Received 7 October 2014; Revised 25 May 2015; Accepted 10 June 2015

Academic Editor: Bogdan Dumitrescu

Copyright © 2015 Erik Cuevas et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Hindawi Publishing Corporation, Mathematical Problems in Engineering, Volume 2015, Article ID 805357, 25 pages. http://dx.doi.org/10.1155/2015/805357

Abstract. As an alternative to classical techniques, the problem of image segmentation has also been handled through evolutionary methods. Recently, several algorithms based on evolutionary principles have been successfully applied to image segmentation with interesting performances. However, most of them maintain two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance. This paper presents an algorithm for the automatic selection of pixel classes for image segmentation. The proposed method combines a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. The new evolutionary algorithm, called Locust Search (LS), is based on the behavior of swarms of locusts. Unlike most existing evolutionary algorithms, it explicitly avoids the concentration of individuals in the best positions, thereby avoiding critical flaws such as premature convergence to suboptimal solutions and a limited exploration-exploitation balance. Experimental tests over several benchmark functions and images validate the efficiency of the proposed technique with regard to accuracy and robustness.

1. Introduction

Image segmentation [1] consists in grouping image pixels based on some criteria such as intensity, color, and texture, and still represents a challenging problem within the field of image processing. Edge detection [2], region-based segmentation [3], and thresholding methods [4] are the most popular solutions for image segmentation problems. Among such algorithms, thresholding is the simplest method. It works by selecting threshold values that adequately separate distinct pixel regions within the image being processed. In general, thresholding methods are divided into two types depending on the number of threshold values, namely, bilevel and multilevel. In bilevel thresholding, only one threshold value is required to separate the two objects of an image (e.g., foreground and background). On the other hand, multilevel thresholding divides the pixels into more than two homogeneous classes, which requires several threshold values.

Thresholding methods use either a parametric or a nonparametric approach [5]. In parametric approaches [6, 7], it is necessary to estimate the parameters of a probability density function that is capable of modelling each class. A nonparametric technique [8–11] employs a given criterion, such as the between-class variance or the entropy and error rate, in order to determine optimal threshold values.

A common method to accomplish parametric thresholding is the modeling of the image histogram through a Gaussian mixture model [12] whose parameters define a set of pixel classes (threshold points). Therefore, each pixel that belongs to a determined class is labeled according to its corresponding threshold points, with several pixel groups gathering those pixels that share a homogeneous grayscale level.



The problem of estimating the parameters of a Gaussian mixture that better models an image histogram has been commonly solved through the Expectation Maximization (EM) algorithm [13, 14] or gradient-based methods such as Levenberg-Marquardt (LM) [15]. Unfortunately, EM algorithms are very sensitive to the choice of the initial values [16]; meanwhile, gradient-based methods are computationally expensive and may easily get stuck within local minima [17].

As an alternative to classical techniques, the problem of Gaussian mixture identification has also been handled through evolutionary methods. In general, they have demonstrated that they deliver better results than those based on classical approaches in terms of accuracy and robustness [18]. Under these methods, an individual is represented by a candidate Gaussian mixture model. As the evolution process unfolds, a set of evolutionary operators is applied in order to produce better individuals. The quality of each candidate solution is evaluated through an objective function whose final result represents the similarity between the mixture model and the histogram. Some examples of these approaches involve optimization methods such as Artificial Bee Colony (ABC) [19], Artificial Immune Systems (AIS) [20], Differential Evolution (DE) [21], Electromagnetism Optimization (EO) [22], Harmony Search (HS) [23], and Learning Automata (LA) [24]. Although these algorithms yield interesting results, they present two important limitations. (1) They frequently obtain suboptimal approximations as a consequence of a limited balance between exploration and exploitation in their search strategies. (2) They are based on the assumption that the number of Gaussians (classes) in the mixture is known in advance and fixed; otherwise, they cannot work. The cause of the first limitation is associated with the evolutionary operators employed to modify the individual positions. In such algorithms, during their evolution, the position of each agent for the next iteration is updated, yielding an attraction towards the position of the best particle seen so far or towards other promising individuals. Therefore, as the algorithm evolves, these behaviors cause the entire population to rapidly concentrate around the best particles, favoring premature convergence and damaging the appropriate exploration of the search space [25, 26]. The second limitation is produced as a consequence of the objective function that evaluates the similarity between the mixture model and the histogram. Under such an objective function, the number of Gaussian functions in the mixture is fixed. Since the number of threshold values (Gaussian functions) used for image segmentation varies depending on the image, the best threshold number and values are obtained by an exhaustive trial-and-error procedure.
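As a concrete illustration of how Gaussian mixture parameters define threshold points, the sketch below locates the gray level where two weighted class densities cross. This is only a hedged example: the two-class parameters are invented, and the crossing-point search shown here is an illustrative choice, not the estimation procedure (EM or gradient-based) that the paper discusses.

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Normal probability density evaluated at x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def threshold_between(mu1, s1, w1, mu2, s2, w2, levels=256):
    """Gray level between the two class means where the weighted pdfs cross.

    Each (mu, s, w) triple is one Gaussian class of the mixture; the crossing
    point acts as the threshold separating the two pixel classes.
    """
    x = np.arange(levels, dtype=float)
    mix1 = w1 * gaussian(x, mu1, s1)
    mix2 = w2 * gaussian(x, mu2, s2)
    lo, hi = int(min(mu1, mu2)), int(max(mu1, mu2))
    span = slice(lo, hi + 1)
    # The crossing is where the two weighted densities are (numerically) closest.
    return lo + int(np.argmin(np.abs(mix1[span] - mix2[span])))

# Two hypothetical classes: a dark background and a bright foreground.
t = threshold_between(60, 12, 0.6, 180, 15, 0.4)
```

For the symmetric case (equal weights and spreads), the threshold falls exactly between the two class means, which is a quick sanity check on the sketch.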

On the other hand, bioinspired algorithms represent a field of research that is concerned with the use of biology as a metaphor for producing optimization algorithms. Such approaches use our scientific understanding of biological systems as an inspiration that, at some level of abstraction, can be represented as optimization processes.

In the last decade, several optimization algorithms have been developed by a combination of deterministic rules and randomness, mimicking the behavior of natural phenomena. Such methods include the social behavior of bird flocking and fish schooling, as in the Particle Swarm Optimization (PSO) algorithm [27], and the emulation of differential evolution in species, as in Differential Evolution (DE) [28]. Although PSO and DE are the most popular algorithms for solving complex optimization problems, they present serious flaws, such as premature convergence and difficulty in overcoming local minima [29, 30]. The cause of such problems is associated with the operators that modify individual positions. In such algorithms, during the evolution, the position of each agent for the next iteration is updated, yielding an attraction towards the position of the best particle seen so far (in the case of PSO) or towards other promising individuals (in the case of DE). As the algorithm evolves, these behaviors cause the entire population to rapidly concentrate around the best particles, favoring premature convergence and damaging the appropriate exploration of the search space [31, 32].

Recently, the collective intelligent behavior of insect or animal groups in nature has attracted the attention of researchers. The intelligent behavior observed in these groups provides survival advantages, where insect aggregations of relatively simple and "unintelligent" individuals can accomplish very complex tasks using only limited local information and simple rules of behavior [33]. Locusts (Schistocerca gregaria) are a representative example of such collaborative insects [34]. The locust is a kind of grasshopper that can change reversibly between a solitary and a social phase, with clear behavioral differences between the two phases [35]. The two phases show many differences regarding the overall level of activity and the degree to which locusts are attracted or repulsed among themselves [36]. In the solitary phase, locusts avoid contact with each other (locust concentrations). As a consequence, they distribute throughout the space, exploring sufficiently over the plantation [36]. On the other hand, in the social phase, locusts frantically concentrate around those elements that have already found good food sources [37]. Under such behavior, locusts attempt to efficiently find better nutrients by devastating promising areas within the plantation.

This paper presents an algorithm for the automatic selection of pixel classes for image segmentation. The proposed method combines a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with regard to the number of classes. The new evolutionary algorithm, called Locust Search (LS), is based on the behavior presented in swarms of locusts. In the proposed algorithm, individuals emulate a group of locusts which interact with each other based on the biological laws of the cooperative swarm. The algorithm considers two different behaviors: solitary and social. Depending on the behavior, each individual is conducted by a set of evolutionary operators which mimic the different cooperative conducts that are typically found in the swarm. Unlike most existing evolutionary algorithms, the behavioral model in the proposed approach explicitly avoids the concentration of individuals in the current best positions. This fact allows avoiding critical flaws such as premature convergence to suboptimal solutions and an incorrect exploration-exploitation balance. In order to automatically define the optimal number of pixel classes (Gaussian functions in the mixture), a new objective function has also been incorporated. The new objective function is divided into two parts. The first part evaluates the quality of each candidate solution in terms of its similarity with regard to the image histogram. The second part penalizes the overlapped area among Gaussian functions (classes). Under these circumstances, Gaussian functions that do not "positively" participate in the histogram approximation can be easily eliminated from the final Gaussian mixture model.

In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments are divided into two sections. In the first part, the proposed LS method is compared to other well-known evolutionary techniques over a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm is compared to other segmentation methods which are also based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

This paper is organized as follows: in Section 2, basic biological issues of the algorithm analogy are introduced and explained. Section 3 describes the novel LS algorithm and its characteristics. A numerical study on different benchmark functions is presented in Section 4, while Section 5 presents the modelling of an image histogram through a Gaussian mixture. Section 6 exposes the LS segmentation algorithm, and Section 7 reports the performance of the proposed segmentation algorithm. Finally, Section 8 draws some conclusions.

2. Biological Fundamentals and Mathematical Models

Social insect societies are complex cooperative systems that self-organize within a set of constraints. Cooperative groups are good at manipulating and exploiting their environment, defending resources, and breeding, while still allowing task specialization among group members [38, 39]. A social insect colony functions as an integrated unit that not only possesses the ability to operate in a distributed manner but also undertakes huge construction of global projects [40]. It is important to acknowledge that global order for insects can arise as a result of internal interactions among members.

Locusts are a kind of grasshopper that exhibit two opposite behavioral phases: solitary and social (gregarious). Individuals in the solitary phase avoid contact with each other (locust concentrations). As a consequence, they distribute throughout the space while sufficiently exploring the plantation [36]. In contrast, locusts in the gregarious phase gather into several concentrations. Such congregations may contain up to 10^10 members, cover cross-sectional areas of up to 10 km^2, and have a travelling capacity of up to 10 km per day for a period of days or weeks as they feed, causing devastating crop loss [41]. The mechanism of switching from the solitary phase to the gregarious phase is complex and has been a subject of significant biological inquiry. Recently, a set of factors has been implicated, including the geometry of the vegetation landscape and the olfactory stimulus [42].

Only a few works [36, 37] that mathematically model the locust behavior have been published. Both approaches develop two different minimal models with the goal of reproducing the macroscopic structure and motion of a group of locusts. Considering that the method in [36] focuses on modelling the behavior of each locust in the group, its fundamentals have been employed to develop the algorithm that is proposed in this paper.

2.1. Solitary Phase. This section describes how each locust's position is modified as a result of its behavior under the solitary phase. Considering that x_i^k represents the current position of the ith locust in a group of N different elements, the new position x_i^(k+1) is calculated by using the following model:

  x_i^(k+1) = x_i^k + Δx_i,  (1)

with Δx_i corresponding to the change of position experimented by x_i^k as a consequence of its social interaction with all the other elements in the group.

Two locusts in the solitary phase exert forces on each other according to basic biological principles of attraction and repulsion (see, e.g., [36]). Repulsion operates quite strongly over a short length scale in order to avoid concentrations. Attraction is weaker and operates over a longer length scale, providing the social force that is required to maintain the group's cohesion. Therefore, the strength of such social forces can be modeled by the following function:

  s(r) = F·e^(−r/L) − e^(−r).  (2)

Here, r is a distance, F describes the strength of attraction, and L is the typical attractive length scale. We have scaled the time and space coordinates so that the repulsive strength and length scale are both represented by unity. We assume that F < 1 and L > 1, so that repulsion is stronger and features on a shorter scale, while attraction is weaker and applies on a longer scale; both facts are typical for social organisms [21]. The social force exerted by locust j over locust i is

  s_ij = s(r_ij)·d_ij,  (3)

where r_ij = ‖x_j − x_i‖ is the distance between the two locusts and d_ij = (x_j − x_i)/r_ij is the unit vector pointing from x_i to x_j.

The total social force on each locust can be modeled as the superposition of all of the pairwise interactions:

  S_i = Σ_{j=1, j≠i}^N s_ij.  (4)

The change of position Δx_i is modeled as the total social force experimented by x_i^k as the superposition of all of the pairwise interactions. Therefore, Δx_i is defined as follows:

  Δx_i = S_i.  (5)

In order to illustrate the behavioral model under the solitary phase, Figure 1 presents an example assuming a population of


Figure 1: Behavioral model under the solitary phase.

three different members (N = 3), which adopt a determined configuration in the current iteration k. As a consequence of the social forces, each element suffers an attraction or repulsion to other elements depending on the distance between them. Such forces are represented by s_12, s_13, s_21, s_23, s_31, and s_32. Since x_1 and x_2 are too close, the social forces s_12 and s_21 present a repulsive nature. On the other hand, as the distances ‖x_1 − x_3‖ and ‖x_2 − x_3‖ are quite long, the social forces s_13, s_23, s_31, and s_32 between x_1 ↔ x_3 and x_2 ↔ x_3 are all of an attractive nature. Therefore, the change of position Δx_1 is computed as the vector resultant of s_12 and s_13 (Δx_1 = s_12 + s_13), which is S_1. The values Δx_2 and Δx_3 are calculated accordingly.
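The solitary-phase model of (1)–(5) can be sketched directly in Python. This is an illustrative implementation, not the authors' code; the parameter values F = 0.6 and L = 2.0 (satisfying F < 1, L > 1) and the three test positions are arbitrary assumptions chosen to reproduce the repulsion/attraction pattern described above.

```python
import numpy as np

F, L = 0.6, 2.0  # attraction strength F < 1, attractive length scale L > 1

def social_force_strength(r):
    """Eq. (2): s(r) = F*exp(-r/L) - exp(-r); negative = repulsive, positive = attractive."""
    return F * np.exp(-r / L) - np.exp(-r)

def position_changes(X):
    """Eqs. (3)-(5): Delta x_i = S_i, the sum of all pairwise forces s(r_ij) * d_ij."""
    N = len(X)
    delta = np.zeros_like(X)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            diff = X[j] - X[i]          # vector from locust i to locust j
            r = np.linalg.norm(diff)
            delta[i] += social_force_strength(r) * (diff / r)
    return delta

# Two locusts close together (repel) and one distant locust (attract).
X = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 0.0]])
dX = position_changes(X)
```

With these values the close pair is pushed apart while the distant locust drifts toward the pair, matching the short-range repulsion and long-range attraction of (2).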

In addition to the presented model [36], some studies [43–45] suggest that the social force s_ij is also affected by the dominance of the individuals x_i and x_j involved in the pairwise process. Dominance is a property that relatively qualifies the capacity of an individual to survive in relation to other elements in a group. The locust's dominance is determined by several characteristics, such as size, chemical emissions, and location with regard to food sources. Under such circumstances, the social force is magnified or weakened depending on the most dominant individual involved in the repulsion-attraction process.

2.2. Social Phase. In this phase, locusts frantically concentrate around the elements that have already found good food sources. They attempt to efficiently find better nutrients by devastating promising areas within the plantation. In order to simulate the social phase, a food quality index Fq_i is assigned to each locust x_i of the group, such that the index reflects the quality of the food source where x_i is located.

Under this behavioral model, each of the N elements of the group is ranked according to its corresponding food quality index. Afterwards, the b elements featuring the best food quality indexes are selected (b ≪ N). Considering a concentration radius R_c that is created around each selected element, a set of c new locusts is randomly generated inside R_c. As a result, most of the locusts will be concentrated around the best b elements. Figure 2 shows a simple example of the behavioral model under the social phase. In the example, the configuration includes eight locusts (N = 8), just as illustrated in Figure 2(a), which also presents the food quality index for each locust. A food quality index near one indicates a better food source. Figure 2(b) presents the final configuration after the social phase, assuming b = 2.
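The social-phase concentration step can be sketched as follows. This is a hedged illustration, not the authors' implementation: the values b = 2, c = 3, and the concentration radius are assumptions, and for simplicity the c new locusts are sampled uniformly in the axis-aligned square inscribing R_c rather than in a disk.

```python
import random

def social_phase(positions, food_quality, b=2, c=3, radius=0.5):
    """Generate c new locusts around each of the b elements with the best food quality."""
    # Rank indexes by food quality (index near 1.0 marks a better food source).
    ranked = sorted(range(len(positions)), key=lambda i: food_quality[i], reverse=True)
    new_locusts = []
    for i in ranked[:b]:
        x, y = positions[i]
        for _ in range(c):
            # Uniform sample near the selected element (square of half-width `radius`).
            new_locusts.append((x + random.uniform(-radius, radius),
                                y + random.uniform(-radius, radius)))
    return new_locusts

random.seed(7)
pts = [(0, 0), (3, 3), (6, 1), (2, 5)]
fq = [0.2, 0.9, 0.4, 1.0]  # food quality index of each locust
swarm = social_phase(pts, fq, b=2, c=3, radius=0.5)
```

After the operation, all b·c new locusts lie near the two best food sources, mirroring the concentration shown in Figure 2(b).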

3. The Locust Search (LS) Algorithm

In this paper, some behavioral principles drawn from a swarm of locusts have been used as guidelines for developing a new swarm optimization algorithm. The LS assumes that the entire search space is a plantation, where all the locusts interact with each other. In the proposed approach, each solution within the search space represents a locust position inside the plantation. Every locust receives a food quality index according to the fitness value of the solution that is symbolized by the locust's position. As previously discussed, the algorithm implements two different behavioral schemes: solitary and social. Depending on the behavioral phase, each individual is conducted by a set of evolutionary operators which mimic the different cooperative operations that are typically found in the swarm. The proposed method formulates the optimization problem in the following form:

  maximize/minimize  f(l),  l = (l_1, ..., l_n) ∈ R^n,
  subject to  l ∈ S,  (6)

where f: R^n → R is a nonlinear function, whereas S = {l ∈ R^n | lb_d ≤ l_d ≤ ub_d, d = 1, ..., n} is a bounded feasible search space, constrained by the lower (lb_d) and upper (ub_d) limits.

In order to solve the problem formulated in (6), a population L^k = {l_1^k, l_2^k, ..., l_N^k} of N locusts (individuals) is evolved inside the LS operation, from the initial point (k = 0) to a total of gen iterations (k = gen). Each locust l_i^k (i ∈ [1, ..., N]) represents an n-dimensional vector {l_i1^k, l_i2^k, ..., l_in^k}, where each dimension corresponds to a decision variable of the optimization problem to be solved. The set of decision variables constitutes the feasible search space S = {l_i^k ∈ R^n | lb_d ≤ l_id^k ≤ ub_d}, where lb_d and ub_d correspond to the lower and upper bounds for dimension d, respectively. The food quality index associated with each locust l_i^k (candidate solution) is evaluated through an objective function f(l_i^k), whose final result represents the fitness value of l_i^k. In the LS algorithm, each iteration of the evolution process consists of two operators: (A) solitary and (B) social. Beginning with the solitary stage, the set of locusts is operated on in order to sufficiently explore the search space. On the other hand, the social operation refines existing solutions within a determined neighborhood (exploitation) subspace.

3.1. Solitary Operation (A). One of the most interesting features of the proposed method is the use of the solitary operator to modify the current locust positions. Under this approach, locusts are displaced as a consequence of the social forces produced by the positional relations among the elements of the swarm. Therefore, near individuals tend to repel each other, avoiding the concentration of elements in regions. On the other hand, distant individuals tend to attract each other, maintaining the cohesion of the swarm. A clear difference from the original model in [20] is that social forces are also magnified or weakened depending on the most dominant (best fitness values) individuals that are involved in the repulsion-attraction process.


Figure 2: Behavioral model under the social phase. (a) Initial configuration and food quality indexes (Fq1 = 0.2, Fq2 = 0.3, Fq3 = 0.1, Fq4 = 0.9, Fq5 = 0.2, Fq6 = 1.0, Fq7 = 0.3, Fq8 = 0.4); (b) final configuration after the operation of the social phase, with the concentration radius R_c around each selected element.

In the solitary phase, a new position p_i (i ∈ [1, ..., N]) is produced by perturbing the current locust position l_i^k with a change of position Δl_i (p_i = l_i^k + Δl_i). The change of position Δl_i is the result of the social interactions experimented by l_i^k as a consequence of its repulsion-attraction behavioral model. Such social interactions are pairwise computed between l_i^k and the other N − 1 individuals in the swarm. In the original model, social forces are calculated by using (3). However, in the proposed method, the model is modified to include the best fitness value (the most dominant) of the individuals involved in the repulsion-attraction process. Therefore, the social force exerted between l_j^k and l_i^k is calculated by using the following new model:

  s_ij^m = ρ(l_i^k, l_j^k)·s(r_ij)·d_ij + rand(1, −1),  (7)

where s(r_ij) is the social force strength defined in (2) and d_ij = (l_j^k − l_i^k)/r_ij is the unit vector pointing from l_i^k to l_j^k. Besides, rand(1, −1) is a number randomly generated between 1 and −1. Likewise, ρ(l_i^k, l_j^k) is the dominance function that calculates the dominance value of the most dominant individual from l_j^k and l_i^k. In order to operate ρ(l_i^k, l_j^k), all the individuals from L^k = {l_1^k, l_2^k, ..., l_N^k} are ranked according to their fitness values. The ranks are assigned so that the best individual receives the rank 0 (zero), whereas the worst individual obtains the rank N − 1. Therefore, the function ρ(l_i^k, l_j^k) is defined as follows:

  ρ(l_i^k, l_j^k) = { e^(−5·rank(l_i^k)/N),  if rank(l_i^k) < rank(l_j^k),
                    { e^(−5·rank(l_j^k)/N),  if rank(l_i^k) > rank(l_j^k),  (8)

Figure 3: Behavior of ρ(l_i^k, l_j^k) considering 100 individuals, plotted against rank(l_i^k).

where the function rank(α) delivers the rank of the α-individual. According to (8), ρ(l_i^k, l_j^k) yields a value within the interval (0, 1]. Its maximum value of one is reached when either individual l_j^k or l_i^k is the best element of the population L^k regarding their fitness values. On the other hand, a value close to zero is obtained when both individuals l_j^k and l_i^k possess quite bad fitness values. Figure 3 shows the behavior of ρ(l_i^k, l_j^k) considering 100 individuals. In the figure, it is assumed that l_i^k represents one of the 99 individuals with ranks between 0 and 98, whereas l_j^k is fixed to the element with the worst fitness value (rank 99).

Under the incorporation of ρ(l_i^k, l_j^k) in (7), social forces are magnified or weakened depending on the best fitness value (the most dominant) of the individuals involved in the repulsion-attraction process.
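Since (8) always uses the rank of the better-ranked (more dominant) of the two individuals, the dominance function reduces to a single expression on the minimum rank. The sketch below is an illustrative reading of (8), not the authors' code; `dominance` is a hypothetical helper name.

```python
import math

def dominance(rank_i, rank_j, N):
    """rho(l_i, l_j) from Eq. (8): exp(-5 * best_rank / N), where rank 0 is the best."""
    best = min(rank_i, rank_j)  # (8) selects whichever individual has the lower rank
    return math.exp(-5.0 * best / N)

# With N = 100, pairing any individual with the best element (rank 0) yields rho = 1,
# while a pair of badly ranked individuals yields a value close to zero.
rho_best = dominance(0, 99, 100)
rho_worst_pair = dominance(98, 99, 100)
```

This reproduces the curve in Figure 3: ρ decays exponentially from 1 toward 0 as the better rank of the pair worsens.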


Figure 4: Examples of different distributions. (a) Initial condition; (b) distribution after applying 25, (c) 50, and (d) 100 operations. The green asterisk represents the minimum value so far.

Finally, the total social force on each individual l_i^k is modeled as the superposition of all of the pairwise interactions exerted over it:

  S_i^m = Σ_{j=1, j≠i}^N s_ij^m.  (9)

Therefore, the change of position Δl_i is considered the total social force experimented by l_i^k as the superposition of all of the pairwise interactions. Thus, Δl_i is defined as follows:

  Δl_i = S_i^m.  (10)

After calculating the new positions P = (p_1, p_2, ..., p_N) of the population L^k = (l_1^k, l_2^k, ..., l_N^k), the final positions F = (f_1, f_2, ..., f_N) must be calculated. The idea is to admit only the changes that guarantee an improvement in the search strategy. If the fitness value of p_i (f(p_i)) is better than that of l_i^k (f(l_i^k)), then p_i is accepted as the final solution; otherwise, l_i^k is retained. This procedure can be summarized by the following statement (considering a minimization problem):

f_i = { p_i,    if f(p_i) < f(l_i^k)
      { l_i^k,  otherwise.  (11)

In order to illustrate the performance of the solitary operator, Figure 4 presents a simple example with the solitary operator being iteratively applied. A population of 50 different members (N = 50) is assumed, which adopts a concentrated configuration as the initial condition (Figure 4(a)). As a consequence of the social forces, the set of elements tends to distribute throughout the search space. Examples of different distributions are shown in Figures 4(b), 4(c), and 4(d) after applying 25, 50, and 100 solitary operations, respectively.
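The solitary step of (9)-(11) can be sketched as follows. The pairwise force s_ij of (7) is assumed here to follow the standard attraction-repulsion magnitude s(r) = F·exp(-r/L) - exp(-r) scaled by the rank-based dominance of (8); this form, the rank convention (0 = best), and all names are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def solitary_operation(L_pop, f, F=0.6, Lc=1.0):
    """One solitary operation over an (N, n) population `L_pop` with
    objective `f` (minimization): superpose pairwise forces (Eq. (9)),
    move each locust by the total force (Eq. (10)), and accept a move
    only if it improves the fitness (greedy rule of Eq. (11))."""
    N, n = L_pop.shape
    fit = np.array([f(x) for x in L_pop])
    order = np.argsort(fit)
    rank = np.empty(N, dtype=int)
    rank[order] = np.arange(N)                        # 0 = best, N-1 = worst
    P = np.empty_like(L_pop)
    for i in range(N):
        S_i = np.zeros(n)                             # total social force, Eq. (9)
        for j in range(N):
            if j == i:
                continue
            d = L_pop[j] - L_pop[i]
            r = np.linalg.norm(d) + 1e-12
            rho = (N - min(rank[i], rank[j])) / N     # dominance of Eq. (8)
            s = F * np.exp(-r / Lc) - np.exp(-r)      # assumed force magnitude
            S_i += rho * s * d / r
        P[i] = L_pop[i] + S_i                         # candidate move, Eq. (10)
    # greedy acceptance, Eq. (11): keep a move only if it improves fitness
    improved = np.array([f(p) for p in P]) < fit
    return np.where(improved[:, None], P, L_pop)

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(50, 2))                # N = 50, as in Figure 4
sphere = lambda x: float(np.sum(x**2))
new_pop = solitary_operation(pop, sphere)
```

Because of the acceptance rule of (11), no individual ever gets worse after the operation.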

3.2. Social Operation (B). The social procedure represents the exploitation phase of the LS algorithm. Exploitation is the process of refining existent individuals within a small neighborhood in order to improve their solution quality.

The social procedure is a selective operation which is applied only to a subset E of the final positions F (where E ⊆ F). The operation starts by sorting F with respect to fitness values, storing the elements in a temporary population B = {b_1, b_2, ..., b_N}. The elements in B are sorted so that the best individual receives the position b_1, whereas the worst individual obtains the location b_N. Therefore, the subset E is integrated by only the first g locations of B (promising solutions). Under this operation, a subspace C_j is created around each selected particle f_j ∈ E. The size of C_j depends on the distance e_d, which is defined as follows:

e_d = (Σ_{q=1}^{n} (ub_q − lb_q) / n) · β,  (12)

where ub_q and lb_q are the upper and lower bounds in the q-th dimension, n is the number of dimensions of the optimization problem, and β ∈ [0, 1] is a tuning factor. Therefore, the limits of C_j can be modeled as follows:

uss_j^q = b_{j,q} + e_d,
lss_j^q = b_{j,q} − e_d,  (13)

where uss_j^q and lss_j^q are the upper and lower bounds of the q-th dimension for the subspace C_j, respectively.

Considering the subspace C_j around each element f_j ∈ E, a set of h new particles (M_j^h = {m_j^1, m_j^2, ..., m_j^h}) is randomly generated inside the bounds fixed by (13). Once the h samples are generated, the individual l_j^{k+1} of the next population L^{k+1} must be created. In order to calculate l_j^{k+1}, the best particle m_j^best, in terms of its fitness value among the h samples (where m_j^best ∈ {m_j^1, m_j^2, ..., m_j^h}), is compared to f_j. If m_j^best is better than f_j according to their fitness values, l_j^{k+1} is updated with m_j^best; otherwise, f_j is selected. The elements of F that have not been processed by the procedure (f_w ∉ E) transfer their corresponding values to L^{k+1} with no change.

The social operation is used to exploit only prominent solutions. According to the proposed method, inside each subspace C_j, h random samples are selected. Since the number of selected samples in each subspace is very small (typically h < 4), the use of this operator substantially reduces the number of fitness function evaluations.

In order to demonstrate the social operation, a numerical example has been set by applying the proposed process to a simple function. Such function considers the interval of −3 ≤ d1, d2 ≤ 3, whereas the function possesses one global maximum of value 8.1 at (0, 1.6). Notice that d1 and d2 correspond to the axis coordinates (commonly x and y). For this example, a final position population F of six 2-dimensional members (N = 6) is assumed. Figure 5 shows the initial configuration of the proposed example, with the black points representing half of the particles with the best fitness values (the first three elements of B, g = 3), whereas the grey points (f_2, f_4, f_6 ∉ E) correspond to the remaining individuals. From Figure 5, it

Figure 5: Operation of the social procedure.

can be seen that the social procedure is applied to all black particles (f_5 = b_1, f_3 = b_2, and f_1 = b_3; f_5, f_3, f_1 ∈ E), yielding two new random particles (h = 2) for each black point, characterized by the white points m_1^1, m_1^2, m_3^1, m_3^2, m_5^1, and m_5^2 inside their corresponding subspaces C_1, C_3, and C_5. Considering the particle f_3 in Figure 5, the particle m_3^2 corresponds to the best particle (m_3^best) of the two randomly generated particles (according to their fitness values) within C_3. Therefore, the particle m_3^best will substitute f_3 in the individual l_3^{k+1} for the next generation, since it holds a better fitness value than f_3 (f(f_3) < f(m_3^best)).

The LS optimization procedure is defined over a bounded search space S. Search points that do not belong to such an area are considered to be infeasible. However, during the evolution process, some candidate solutions could fall outside the search space. In the proposed approach, such infeasible solutions are arbitrarily placed at a random position inside the search space S.

3.3. Complete LS Algorithm. LS is a simple algorithm with only seven adjustable parameters: the strength of attraction F, the attractive length L, the number of promising solutions g, the population size N, the tuning factor β, the number of random samples h, and the number of generations gen. The operation of LS is divided into three parts: initialization, and the solitary and social operations. In the initialization (k = 0), the first population L^0 = (l_1^0, l_2^0, ..., l_N^0) is produced. The values l_{i,1}^0, l_{i,2}^0, ..., l_{i,n}^0 of each individual l_i^0 in each dimension d are randomly and uniformly distributed between the prespecified lower initial parameter bound lb_d and the upper initial parameter bound ub_d:

l_{i,d}^0 = lb_d + rand · (ub_d − lb_d),   i = 1, 2, ..., N; d = 1, 2, ..., n.  (14)

In the evolution process, the solitary (A) and social (B) operations are iteratively applied until the number of iterations k = gen has been reached. The complete LS procedure is illustrated in Algorithm 1.


(1) Input: F, L, g, N, gen, h, and β
(2) Initialize L^0 (k = 0)
(3) until (k = gen)
(4)   F ← SolitaryOperation(L^k)   {Solitary operator (Section 3.1)}
(5)   L^{k+1} ← SocialOperation(L^k, F)   {Social operator (Section 3.2)}
(6)   k = k + 1
(7) end until

Algorithm 1: Locust Search (LS) algorithm.
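Algorithm 1 can be mirrored by a minimal driver that initializes the population via (14) and alternates the two operators. The toy stand-in operators below only exercise the loop (the solitary stand-in jitters each locust and applies the greedy rule of (11); the social stand-in is the identity); they are our illustrative assumptions, not the operators of Sections 3.1 and 3.2.

```python
import numpy as np

def locust_search(f, lb, ub, solitary_op, social_op, N=40, gen=100, seed=0):
    """Skeleton of Algorithm 1: the solitary and social operators are passed
    in as callables so the loop mirrors lines (3)-(7); Eq. (14) initializes
    the population L0 uniformly inside [lb, ub] (minimization assumed)."""
    rng = np.random.default_rng(seed)
    n = len(lb)
    L = lb + rng.uniform(size=(N, n)) * (ub - lb)   # Eq. (14)
    for _ in range(gen):                            # until k = gen
        F_pos = solitary_op(L, f)                   # solitary operator (Section 3.1)
        L = social_op(F_pos, f)                     # social operator (Section 3.2)
    fits = [f(x) for x in L]
    return L[int(np.argmin(fits))]                  # best solution found

def toy_solitary(L, f):
    # jitter each locust and keep the move only if it improves the fitness
    P = L + np.random.default_rng(1).normal(0.0, 0.1, L.shape)
    better = np.array([f(p) for p in P]) < np.array([f(x) for x in L])
    return np.where(better[:, None], P, L)          # greedy rule of Eq. (11)

def toy_social(F_pos, f):
    return F_pos                                    # identity placeholder

lb, ub = np.full(2, -5.0), np.full(2, 5.0)
best = locust_search(lambda x: float(np.sum(x**2)), lb, ub,
                     toy_solitary, toy_social, N=40, gen=50)
```

Plugging in operators that implement Sections 3.1 and 3.2 yields the full LS method; the driver itself never worsens the best-so-far solution because of the greedy acceptance inside the solitary step.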

3.4. Discussion about the LS Algorithm. Evolutionary algorithms (EA) have been widely employed for solving complex optimization problems. These methods are found to be more powerful than conventional methods that are based on formal logics or mathematical programming [46]. In an EA, search agents have to decide whether to explore unknown search positions or to exploit already tested positions in order to improve their solution quality. Pure exploration degrades the precision of the evolutionary process but increases its capacity to find new potential solutions. On the other hand, pure exploitation allows refining existent solutions but adversely drives the process to local optimal solutions. Therefore, the ability of an EA to find a global optimal solution depends on its capacity to find a good balance between the exploitation of found-so-far elements and the exploration of the search space [47].

Most swarm algorithms and other evolutionary algorithms tend to exclusively concentrate the individuals in the current best positions. Under such circumstances, these algorithms seriously limit their exploration-exploitation capacities.

Unlike most existent evolutionary algorithms, the behavior modeled in the proposed approach explicitly avoids the concentration of individuals in the current best positions. Such a fact allows not only emulating the cooperative behavior of the locust colony in a realistic way but also incorporating a computational mechanism to avoid critical flaws that are commonly present in popular evolutionary algorithms, such as premature convergence and an incorrect exploration-exploitation balance.

It is important to emphasize that the proposed approach conducts two operators (solitary and social) within a single iteration. Such operators are similar to those used by other evolutionary methods, such as ABC (employed bees, onlooker bees, and scout bees), AIS (clonal proliferation operator, affinity maturation operator, and clonal selection operator), and DE (mutation, crossover, and selection), which are all executed in a single iteration.

4. Numerical Experiments over Benchmark Functions

A comprehensive set of 13 functions, all collected from [48-50], has been used to test the performance of the LS approach as an optimization method. Tables 11 and 12 present the benchmark functions used in our experimental study. Such functions are classified into two different categories: unimodal test functions (Table 11) and multimodal test functions (Table 12). In these tables, n is the function dimension and f_opt is the minimum value of the function, with S being a subset of R^n. The optimum locations (x_opt) for the functions in Tables 11 and 12 are in [0]^n, except for f_5, f_12, and f_13, with x_opt in [1]^n, and f_8, with x_opt in [420.96]^n. A detailed description of the optimum locations is given in Tables 11 and 12.

We have applied the LS algorithm to the 13 functions, and its results have been compared to those produced by the Particle Swarm Optimization (PSO) method [27] and the Differential Evolution (DE) algorithm [28], both considered among the most popular algorithms for many optimization applications. In all comparisons, the population has been set to 40 (N = 40) individuals. The maximum iteration number for all functions has been set to 1000. Such a stop criterion has been selected to maintain compatibility with similar works reported in the literature [48, 49].

The parameter setting for each of the algorithms in the comparison is described as follows:

(1) PSO: in the algorithm, c_1 = c_2 = 2, while the inertia factor (ω) is decreased linearly from 0.9 to 0.2.

(2) DE: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [28, 51]. The crossover probability is CR = 0.9 and the weighting factor is F = 0.8.

(3) In LS, F and L are set to 0.6 and 1, respectively. Besides, g is fixed to 20 (N/2), h = 2, and β = 0.6, whereas gen and N are configured to 1000 and 40, respectively. Once such parameters have been experimentally determined, they are kept for all experiments in this section.

In the comparison, three indexes are considered: the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). The first two indexes assess the accuracy of the solution, whereas the last one measures the computational cost. The average best-so-far solution (ABS) expresses the average value of the best function evaluations that have been obtained from 30 independent executions. The standard deviation (SD) indicates the dispersion of the ABS values. Evolutionary methods are, in general, complex pieces of software with several operators and stochastic branches. Therefore, it is difficult to conduct a complexity analysis from a deterministic perspective. Under such circumstances, it is more appropriate


Table 1: Minimization results from the benchmark functions test in Table 11 with n = 30. Maximum number of iterations = 1000.

          PSO           DE            LS
f1  ABS   1.66×10^-1    6.27×10^-3    4.55×10^-4
    SD    3.79×10^-1    1.68×10^-1    6.98×10^-4
    NFE   28,610        20,534        16,780
f2  ABS   4.83×10^-1    2.02×10^-1    5.41×10^-3
    SD    1.59×10^-1    0.66          1.45×10^-2
    NFE   28,745        21,112        16,324
f3  ABS   2.75          5.72×10^-1    1.61×10^-3
    SD    1.01          0.15          1.32×10^-3
    NFE   38,320        36,894        20,462
f4  ABS   1.84          0.11          1.05×10^-2
    SD    0.87          0.05          6.63×10^-3
    NFE   37,028        36,450        21,158
f5  ABS   3.07          2.39          4.11×10^-2
    SD    0.42          0.36          2.74×10^-3
    NFE   39,432        37,264        21,678
f6  ABS   6.36          6.51          5.88×10^-2
    SD    0.74          0.87          1.67×10^-2
    NFE   38,490        36,564        22,238
f7  ABS   6.14          0.12          2.71×10^-2
    SD    0.73          0.02          1.18×10^-2
    NFE   37,274        35,486        21,842

to use the number of function evaluations (NFE), just as it is used in the literature [52, 53], to evaluate and assess the computational effort (time) and the complexity among optimizers. It represents how many times an algorithm evaluates the objective (fitness) function until the best solution of a determined execution has been found. Since the experiments require 30 different executions, the NFE index corresponds to the averaged value obtained from these executions. A small NFE value indicates that less time is needed to reach the global optimum.

4.1. Unimodal Test Functions. Functions f_1 to f_7 are unimodal functions. The results for unimodal functions over 30 runs are reported in Table 1, considering the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). According to this table, LS provides better results than PSO and DE for all functions in terms of ABS and SD. In particular, the test yields the largest performance differences in functions f_4-f_7. Such functions maintain a narrow curving valley that is hard to optimize if the search space cannot be explored properly and the direction changes cannot be kept up with [54]. For this reason, the performance differences are directly related to a better trade-off between exploration and exploitation produced by the LS operators. In practice, a main goal of an optimization algorithm is to find a solution as good as possible within a small time window. The computational cost of the optimizer is represented by its NFE values. According to Table 1, the NFE values obtained by the proposed method are smaller than those of its

Table 2: p values produced by Wilcoxon's test that compares LS versus PSO and DE over the "average best-so-far" values from Table 1.

LS versus   PSO           DE
f1          1.83×10^-4    1.73×10^-2
f2          3.85×10^-3    1.83×10^-4
f3          1.73×10^-4    6.23×10^-3
f4          2.57×10^-4    5.21×10^-3
f5          4.73×10^-4    1.83×10^-3
f6          6.39×10^-5    2.15×10^-3
f7          1.83×10^-4    2.21×10^-3

counterparts. Lower NFE values are more desirable, since they correspond to less computational overhead and, therefore, faster results. From the results perspective, it is clear that PSO and DE need more than 1000 generations in order to produce better solutions. However, this number of generations is considered in the experiments aiming to produce a visible contrast among the approaches. If the number of generations had been set to an exaggerated value, then all methods would converge to the best solution without significant trouble.

A nonparametric statistical significance proof known as Wilcoxon's rank sum test for independent samples [55, 56] has been conducted at a 5% significance level over the "average best-so-far" data of Table 1. Table 2 reports the p values produced by Wilcoxon's test for the pairwise comparison of the "average best-so-far" values of two groups. Such groups are formed by LS versus PSO and LS versus DE. As a null hypothesis, it is assumed that there is no significant difference between the mean values of the two algorithms. The alternative hypothesis considers a significant difference between the "average best-so-far" values of both approaches. All p values reported in the table are less than 0.05 (5% significance level), which is strong evidence against the null hypothesis, indicating that the LS results are statistically significant and have not occurred by coincidence (i.e., due to the normal noise contained in the process).
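The rank-sum test used above can be reproduced with a few lines. The sketch below implements the two-sided large-sample normal approximation (ties ignored) and feeds it synthetic per-run samples; the data are illustrative stand-ins, not the actual values behind Table 1.

```python
import math
import random

def wilcoxon_ranksum(a, b):
    """Two-sided Wilcoxon rank-sum test using the large-sample normal
    approximation (no tie correction). Returns the p value."""
    n1, n2 = len(a), len(b)
    labeled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    # W = sum of the ranks occupied by sample `a` in the pooled ordering
    W = sum(rank for rank, (_, grp) in enumerate(labeled, start=1) if grp == 0)
    mean = n1 * (n1 + n2 + 1) / 2.0
    var = n1 * n2 * (n1 + n2 + 1) / 12.0
    z = (W - mean) / math.sqrt(var)
    return math.erfc(abs(z) / math.sqrt(2.0))   # two-sided tail probability

random.seed(7)
# illustrative samples standing in for 30 "average best-so-far" values per method
ls_runs = [random.gauss(4.5e-4, 7.0e-4) for _ in range(30)]
pso_runs = [random.gauss(1.66e-1, 5.0e-2) for _ in range(30)]
p = wilcoxon_ranksum(ls_runs, pso_runs)
print(p < 0.05)  # the difference is statistically significant
```

With 30 runs per group the normal approximation is adequate; `scipy.stats.ranksums` offers an equivalent ready-made implementation.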

4.2. Multimodal Test Functions. Multimodal functions possess many local minima, which makes optimization a difficult task to accomplish. In multimodal functions, the results reflect the algorithm's ability to escape from local optima. We have applied the algorithms over functions f_8 to f_13, where the number of local minima increases exponentially as the dimension of the function increases. The dimension of such functions is set to 30. The results are averaged over 30 runs, with the performance indexes reported in Table 3 as follows: the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). Likewise, p values of the Wilcoxon signed-rank test of 30 independent runs are listed in Table 4. From the results, it is clear that LS yields better solutions than the other algorithms for functions f_9, f_10, f_11, and f_12 in terms of the indexes ABS and SD. However, for functions f_8 and f_13, LS produces similar results to DE. The Wilcoxon rank test results presented in Table 4 confirm that


Table 3: Minimization results from the benchmark functions test in Table 12 with n = 30. Maximum number of iterations = 1000.

          PSO           DE            LS
f8  ABS   -6.7×10^3     -1.26×10^4    -1.26×10^4
    SD    6.3×10^2      3.7×10^2      1.1×10^2
    NFE   38,452        35,240        21,846
f9  ABS   1.48          4.01×10^-1    2.49×10^-3
    SD    1.39          5.1×10^-2     4.8×10^-4
    NFE   37,672        34,576        20,784
f10 ABS   1.47          4.66×10^-2    2.15×10^-3
    SD    1.44          1.27×10^-2    3.18×10^-4
    NFE   39,475        37,080        21,235
f11 ABS   12.01         1.15          1.47×10^-4
    SD    3.12          0.06          1.48×10^-5
    NFE   38,542        34,875        22,126
f12 ABS   6.87×10^-1    3.74×10^-1    5.58×10^-3
    SD    7.07×10^-1    1.55×10^-1    4.18×10^-4
    NFE   35,248        30,540        16,984
f13 ABS   1.87×10^-1    1.81×10^-2    1.78×10^-2
    SD    5.74×10^-1    1.66×10^-2    1.64×10^-3
    NFE   36,022        31,968        18,802

Table 4: p values produced by Wilcoxon's test comparing LS versus PSO and DE over the "average best-so-far" values from Table 3.

LS versus   PSO           DE
f8          1.83×10^-4    0.061
f9          1.17×10^-4    2.41×10^-4
f10         1.43×10^-4    3.12×10^-3
f11         6.25×10^-4    1.14×10^-3
f12         2.34×10^-5    7.15×10^-4
f13         4.73×10^-4    0.071

LS performed better than PSO and DE considering the four problems f_9-f_12, whereas, from a statistical viewpoint, there is no difference between the results from LS and DE for f_8 and f_13. According to Table 3, the NFE values obtained by the proposed method are smaller than those produced by the other optimizers. The reason for this remarkable performance is associated with its two operators: (i) the solitary operator allows a better particle distribution in the search space, increasing the algorithm's ability to find the global optima; and (ii) the use of the social operation provides a simple exploitation operator that intensifies the capacity of finding better solutions during the evolution process.

5. Gaussian Mixture Modelling

In this section, the modeling of image histograms through Gaussian mixture models is presented. Let one consider an image holding L gray levels [0, ..., L − 1] whose distribution is defined by a histogram h(g), represented by the following formulation:

h(g) = n_g / Np,   h(g) > 0,
Np = Σ_{g=0}^{L−1} n_g,   Σ_{g=0}^{L−1} h(g) = 1,  (15)

where n_g denotes the number of pixels with gray level g and Np the total number of pixels in the image. Under such circumstances, h(g) can be modeled by using a mixture p(x) of Gaussian functions of the form

p(x) = Σ_{i=1}^{K} [P_i / (√(2π) σ_i)] exp[−(x − μ_i)^2 / (2σ_i^2)],  (16)

where K symbolizes the number of Gaussian functions of the model, whereas P_i is the a priori probability of function i; μ_i and σ_i represent the mean and standard deviation of the i-th Gaussian function, respectively. Furthermore, the constraint Σ_{i=1}^{K} P_i = 1 must be satisfied by the model. In order to evaluate the similarity between the image histogram and a candidate mixture model, the mean square error can be used as follows:

J = (1/n) Σ_{j=1}^{L} [p(x_j) − h(x_j)]^2 + ω · |(Σ_{i=1}^{K} P_i) − 1|,  (17)

where ω represents the penalty associated with the constraint Σ_{i=1}^{K} P_i = 1. Therefore, J is considered the objective function which must be minimized in the estimation problem. In order to illustrate the histogram modeling through a Gaussian mixture, Figure 6 presents an example assuming three classes, that is, K = 3. Considering Figure 6(a) as the image histogram h(x), the Gaussian mixture p(x) shown in Figure 6(c) is produced by adding the Gaussian functions p_1(x), p_2(x), and p_3(x) in the configuration presented in Figure 6(b). Once the model parameters that better model the image histogram have been determined, the final step is to define the threshold values T_i (i ∈ [1, ..., K]), which can be calculated by simple standard methods, just as presented in [19-21].

6. Segmentation Algorithm Based on LS

In the proposed method, the segmentation process is approached as an optimization problem. Computational optimization generally involves two distinct elements: (1) a search strategy to produce candidate solutions (individuals, particles, insects, locusts, etc.) and (2) an objective function that evaluates the quality of each candidate solution. Several computational algorithms are available to perform the first element. The second element, where the objective function is designed, is unquestionably the most critical. The objective function is expected to embody the full


Figure 6: Histogram approximation through a Gaussian mixture. (a) Original histogram; (b) configuration of the Gaussian components p_1(x), p_2(x), and p_3(x); (c) final Gaussian mixture p(x).

complexity of the performance, biases, and restrictions of the problem to be solved. In the segmentation problem, candidate solutions represent Gaussian mixtures. The objective function J (17) is used as a fitness value to evaluate the similarity between the Gaussian mixture and the image histogram. Therefore, guided by the fitness values (J values), a set of encoded candidate solutions is evolved using the evolutionary operators until the best possible resemblance can be found.

Over the last decade, several algorithms based on evolutionary and swarm principles [19-22] have been proposed to solve the problem of segmentation through a Gaussian mixture model. Although these algorithms exhibit certain good performance indexes, they present two important limitations: (1) they frequently obtain suboptimal approximations as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) they are based on the assumption that the number of Gaussians (classes) in the mixture is known in advance and fixed; otherwise, they cannot work.

In order to eliminate such flaws, the proposed approach includes (A) a new search strategy and (B) the definition of a new objective function. For the search strategy, the LS method (Section 3) is adopted. Under LS, the concentration of individuals in the current best positions is explicitly avoided. Such a fact allows reducing critical problems, such as premature convergence to suboptimal solutions and an incorrect exploration-exploitation balance.

6.1. New Objective Function J_new. Previous segmentation algorithms based on evolutionary and swarm methods use (17) as the objective function. Under these circumstances, each candidate solution (Gaussian mixture) is evaluated only in terms of its approximation to the image histogram.

Since the proposed approach aims to automatically select the number of Gaussian functions K in the final mixture p(x), the objective function must be modified. The new objective function J_new is defined as follows:

J_new = J + λ · Q,  (18)

where λ is a scaling constant. The new objective function is divided into two parts. The first part, J, evaluates the quality of each candidate solution in terms of its similarity with regard to the image histogram (17). The second part, Q, penalizes the overlapped area among Gaussian functions (classes), with Q being defined as follows:

Q = Σ_{i=1}^{K} Σ_{j=1, j≠i}^{K} Σ_{l=1}^{L} min(P_i · p_i(l), P_j · p_j(l)),  (19)

where K and L represent the number of classes and the gray levels, respectively. The terms p_i(l) and p_j(l) symbolize the Gaussian functions i and j, respectively, evaluated at the point l (gray level), whereas the elements P_i and P_j represent the a priori probabilities of the Gaussian functions i and j, respectively. Under such circumstances, mixtures with Gaussian functions that do not "positively" participate in the histogram approximation are severely penalized.

Figure 7 illustrates the effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). From the image histogram (Figure 7(a)), it is evident that two Gaussian functions are enough to accurately


Figure 7: Effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). (a) Original histogram; (b) Gaussian mixture considering four classes; (c) penalization areas; (d) Gaussian mixture of a better quality solution.

approximate the original histogram. However, if the Gaussian mixture is modeled by using a greater number of functions (e.g., four, as shown in Figure 7(b)), the original objective function J is unable to obtain a reasonable approximation. Under the new objective function J_new, the overlapped area among Gaussian functions (classes) is penalized. Such areas, in Figure 7(c), correspond to Q12, Q23, and Q34, where Q12 represents the penalization value produced between the Gaussian functions p_1(x) and p_2(x). Therefore, due to the penalization, the Gaussian mixture shown in Figures 7(b) and 7(c) provides a solution of low quality. On the other hand, the Gaussian mixture presented in Figure 7(d) maintains a low penalty; thus, it represents a solution of high quality. From Figure 7(d), it is easy to see that the functions p_1(x) and p_4(x) can be removed from the final mixture. This elimination could be performed by a simple comparison with a threshold value θ, since p_1(x) and p_4(x) have a reduced amplitude (p_1(x) ≈ p_4(x) ≈ 0). Therefore, under J_new, it is possible to find the reduced Gaussian mixture model starting from a considerable number of functions.

Since the proposed segmentation method is conceived as an optimization problem, the overall operation can be reduced to solving the formulation of (20) by using the LS algorithm:

minimize   J_new(x) = J(x) + λ · Q(x),
           x = (P_1, μ_1, σ_1, ..., P_K, μ_K, σ_K) ∈ R^{3·K},
subject to 0 ≤ P_d ≤ 1,   d ∈ (1, ..., K),
           0 ≤ μ_d ≤ 255,
           0 ≤ σ_d ≤ 25,  (20)

where P_d, μ_d, and σ_d represent the probability, mean, and standard deviation of the class d. It is important to remark that the new objective function J_new allows the evaluation of a candidate solution in terms of the necessary number of Gaussian functions and its approximation quality. Under such circumstances, it can be used in combination with any other evolutionary method and not only with the proposed LS algorithm.

6.2. Complete Segmentation Algorithm. Once the new search strategy (LS) and objective function (J_new) have been defined, the proposed segmentation algorithm can be summarized by Algorithm 2. The new algorithm combines operators defined by LS and operations for calculating the threshold values.

(Line 1) The algorithm sets the operative parameters F, L, g, gen, N, K, λ, and θ. They rule the behavior of the segmentation algorithm. (Line 2) Afterwards, the population L^0 is initialized considering N different random Gaussian mixtures of K functions. The idea is to generate N random Gaussian mixtures subject to the constraints formulated in (20). The parameter K must hold a high value in order to correctly segment all images (recall that the algorithm is able to reduce the Gaussian mixture to its minimum expression). (Line 3) Then, the Gaussian mixtures are evolved by using the LS operators and the new objective function J_new; this process is repeated during gen cycles. (Line 8) After this procedure, the best Gaussian mixture l_best^gen is selected according to its objective function J_new. (Line 9) Then, the Gaussian mixture l_best^gen is reduced by eliminating those functions whose amplitudes are lower than θ (p_i(x) ≤ θ). (Line 10) Then, the threshold values T_i from the reduced model are calculated. (Line 11) Finally, the calculated T_i values are employed to segment the image. Figure 8 shows a flowchart of the complete segmentation procedure.


(1) Input: F, L, g, gen, N, K, λ, and θ
(2) Initialize L^0 (k = 0)
(3) until (k = gen)
(4)   F ← SolitaryOperation(L^k)    // Solitary operator (Section 3.1)
(5)   L^(k+1) ← SocialOperation(L^k, F)    // Social operator (Section 3.2)
(6)   k = k + 1
(7) end until
(8) Obtain l^gen_best
(9) Reduce l^gen_best
(10) Calculate the threshold values T_i from the reduced model
(11) Use T_i to segment the image

Algorithm 2: Segmentation LS algorithm.
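The structure of Algorithm 2 can be sketched in Python as follows. The solitary and social operators here are deliberately simplified stand-ins (a greedy random perturbation and a drift toward the best individual) rather than the full LS operators of Sections 3.1 and 3.2, and the objective is a plain squared error between the candidate mixture and a target histogram instead of J_new; all names and parameter ranges are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixture_pdf(params, x):
    # params is a flat array [P_1, mu_1, sigma_1, ..., P_K, mu_K, sigma_K]
    return sum(P * np.exp(-(x - m) ** 2 / (2.0 * s ** 2))
               for P, m, s in params.reshape(-1, 3))

def objective(params, hist, x):
    # Simplified stand-in for J_new: squared approximation error only
    return float(np.mean((mixture_pdf(params, x) - hist) ** 2))

def segment_ls(hist, K=4, N=20, gen=100, F=0.6):
    x = np.arange(hist.size, dtype=float)
    low = np.array([0.0, 0.0, 1.0] * K)                      # P, mu, sigma bounds
    high = np.array([hist.max(), float(hist.size), 30.0] * K)
    L = rng.uniform(low, high, size=(N, 3 * K))              # (2) initialize L^0
    history = []
    for _ in range(gen):                                     # (3)-(7) evolution loop
        for i in range(N):                                   # (4) "solitary" stand-in
            trial = L[i] + rng.normal(0.0, 1.0, 3 * K)
            trial[2::3] = np.clip(trial[2::3], 1.0, None)    # keep sigmas positive
            if objective(trial, hist, x) < objective(L[i], hist, x):
                L[i] = trial                                 # greedy acceptance
        best = L[np.argmin([objective(l, hist, x) for l in L])].copy()
        history.append(objective(best, hist, x))
        L += F * (best - L) * rng.random(L.shape)            # (5) "social" stand-in
    return best, history                                     # (8) best mixture found
```

Since the best individual is left untouched by the drift step and perturbations are only accepted when they improve the objective, the recorded best objective never increases from one generation to the next.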

The proposed segmentation algorithm permits automatic detection of the number of segmentation partitions (classes). Furthermore, due to its remarkable search capacities, the LS method maintains a better accuracy than previous algorithms based on evolutionary principles. However, the proposed method presents two disadvantages. The first is related to its implementation, which in general is more complex than that of most other segmentators based on evolutionary principles. The second refers to the segmentation procedure of the proposed approach, which does not consider any spatial pixel characteristics. As a consequence, pixels that should belong to a certain region due to their position may be labeled as part of another region due to their gray level intensity. Such a fact adversely affects the segmentation performance of the method.

7. Segmentation Results

This section analyses the performance of the proposed segmentation algorithm. The discussion is divided into three parts: the first shows the performance of the proposed LS segmentation algorithm, while the second presents a comparison between the proposed approach and other segmentation algorithms based on evolutionary and swarm methods. The comparison mainly considers the capacity of each algorithm to accurately and robustly approximate the image histogram. Finally, the third part presents an objective evaluation of the segmentation results produced by all algorithms that have been employed in the comparisons.

7.1. Performance of LS Algorithm in Image Segmentation. This section presents two experiments that analyze the performance of the proposed approach. Table 5 presents the algorithm's parameters, which have been experimentally determined and kept fixed for all the test images throughout all experiments.

First Image. The first test considers the histogram shown in Figure 9(b), while Figure 9(a) presents the original image. After applying the proposed algorithm, configured according to the parameters in Table 5, a minimum model of four classes emerges after starting from Gaussian

Table 5: Parameter setup for the LS segmentation algorithm.

F      L      g     gen     N     K     λ       θ
0.6    0.2    20    1000    40    10    0.01    0.0001

Table 6: Results of the reduced Gaussian mixture for the first and the second image.

Parameter    First image    Second image
P1           0.004          0.032
μ1           18.1           12.1
σ1           8.2            2.9
P2           0.0035         0.0015
μ2           69.9           45.1
σ2           18.4           24.9
P3           0.01           0.006
μ3           94.9           130.1
σ3           8.9            17.9
P4           0.007          0.02
μ4           163.1          167.2
σ4           29.9           10.1

mixtures of 10 classes. Considering 30 independent executions, the averaged parameters of the resultant Gaussian mixture are presented in Table 6. One final Gaussian mixture (of the original ten classes), as obtained by LS, is presented in Figure 10. Furthermore, the approximation of the reduced Gaussian mixture is also visually compared with the original histogram in Figure 10. On the other hand, Figure 11 presents the segmented image after calculating the threshold points.

Second Image. For the second experiment, the image in Figure 12 is tested. The method aims to segment the image by using a reduced Gaussian mixture model obtained by the LS approach. After executing the algorithm according to the parameters from Table 5, the resulting averaged parameters of the resultant Gaussian mixture are presented in Table 6. In order to assure consistency, the results are averaged over 30 independent executions. Figure 13 shows the approximation quality that is obtained by the reduced Gaussian mixture model in (a) and the segmented image in (b).


Figure 8: Flowchart of the complete segmentation procedure.

Figure 9: (a) Original first image used in the first experiment and (b) its histogram.


Figure 10: Gaussian mixture obtained by LS for the first image (histogram; classes 1-4; reduced Gaussian mixture; eliminated classes).

Figure 11: Image segmented with the reduced Gaussian mixture.

Figure 12: (a) Original second image used in the first experiment and (b) its histogram.

7.2. Histogram Approximation Comparisons. This section discusses the comparison between the proposed segmentation algorithm and other evolutionary segmentation methods that have been proposed in the literature. The set of methods used in the experiments includes J + ABC [19], J + AIS [20], and J + DE [21]. These algorithms consider the combination of the original objective function J (17) with an evolutionary technique, namely Artificial Bee Colony (ABC), Artificial Immune Systems (AIS), and Differential Evolution (DE) [21], respectively. Since the proposed segmentation approach considers the combination of the new objective function J_new (18) and the LS algorithm, it will be referred to as J_new + LS.

The comparison focuses mainly on the capacity of each algorithm to accurately and robustly approximate the image histogram.

In the experiments, the populations have been set to 40 individuals (N = 40). The maximum iteration number for all functions has been set to 1000. Such a stop criterion has been selected to maintain compatibility with similar experiments


Figure 13: (a) Approximation result obtained by LS for the second image and (b) the segmented image.

Figure 14: (a) Synthetic image used in the comparison and (b) its histogram.

that are reported in the literature [18]. The parameter settings for each of the segmentation algorithms in the comparison are described as follows:

(1) J + ABC [19]: its parameters are configured as the abandonment limit = 100, α = 0.05, and limit = 30.

(2) J + AIS [20]: it presents the following parameters: h = 120, N_c = 80, ρ = 10, P_r = 20, L = 22, and T_e = 0.01.

(3) J + DE [21]: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [21]. The crossover probability is CR = 0.9 and the weighting factor is F = 0.8.

(4) J_new + LS: the method is set according to the values described in Table 5.

In order to conduct the experiments, a synthetic image is designed to be used as a reference in the comparisons. The main idea is to know in advance the exact number of classes (and their parameters) that are contained in the image, so that the histogram can be considered as a ground truth. The

Table 7: Employed parameters for the design of the reference image.

Section    P_i      μ_i    σ_i
(1)        0.05     40     8
(2)        0.04     100    10
(3)        0.05     160    8
(4)        0.027    220    15

synthetic image is divided into four sections. Each section corresponds to a different class, which is produced by setting each gray level pixel Pv_i to a value determined by the following model:

Pv_i = e^(−(x − μ_i)² / (2σ_i²)),    (21)

where i represents the section, whereas μ_i and σ_i are the mean and the dispersion of the gray level pixels, respectively. The comparative study employs the 512 × 512 image shown in Figure 14(a) and the algorithm parameters presented in Table 7. Figure 14(b) illustrates the resultant histogram.
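Under one plausible reading of (21), in which each section's gray levels follow a Gaussian with the mean and dispersion listed in Table 7 (so that the global histogram becomes a known four-class mixture), the reference image can be sketched as follows; the equal-band layout and the random draw are assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Table 7 parameters: (P_i, mu_i, sigma_i) for each of the four sections
SECTIONS = [(0.05, 40, 8), (0.04, 100, 10), (0.05, 160, 8), (0.027, 220, 15)]

def synthetic_image(size=512):
    # Assumed layout: four equal horizontal bands; gray levels of band i are
    # drawn from N(mu_i, sigma_i) and clipped to the valid range [0, 255]
    band = size // len(SECTIONS)
    img = np.empty((size, size))
    for i, (_, mu, sigma) in enumerate(SECTIONS):
        img[i * band:(i + 1) * band] = rng.normal(mu, sigma, (band, size))
    return np.clip(img, 0, 255).astype(np.uint8)
```

Computing np.histogram(synthetic_image(), bins=256, range=(0, 256)) then yields a histogram with four well-separated modes near gray levels 40, 100, 160, and 220.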


Figure 15: Convergence results. (a) Convergence of the following methods: J + ABC, J + AIS, and J + DE, considering Gaussian mixtures of 8 classes. (b) Convergence of the proposed method (reduced Gaussian mixture).

Figure 16: Segmentation results obtained by (a) several methods including J + ABC, J + AIS, and J + DE, considering Gaussian mixtures of 8 classes, and (b) the proposed method (reduced Gaussian mixture).

In the comparison, the discussion focuses on the following issues: first, accuracy; second, convergence; and third, computational cost.

Convergence. This section analyzes the approximation convergence when the number of classes used by the algorithm during the evolution process is different from the actual number of classes in the image. Recall that the proposed approach automatically finds the reduced Gaussian mixture which best adapts to the image histogram.

In the experiment, the methods J + ABC, J + AIS, and J + DE are executed considering Gaussian mixtures composed of 8 functions. Under such circumstances, the number of classes to be detected is higher than the actual number of classes in the image. On the other hand, the proposed algorithm maintains the same configuration of Table 5; therefore, it can detect and calculate up to ten classes (K = 10).

As a result, the techniques J + ABC, J + AIS, and J + DE tend to overestimate the image histogram. This effect can be seen in Figure 15(a), where the resulting Gaussian functions are concentrated within the actual classes. Such behavior is a consequence of the evaluation considered by the original objective function J, which privileges only the approximation between the Gaussian mixture and the image histogram. This effect is graphically illustrated by Figure 16(a), which shows the pixel misclassification produced by the wrong segmentation of Figure 14(a). On the other hand, the proposed approach obtains a reduced Gaussian mixture model which allows the detection of each class from


Figure 17: Approximation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

the actual histogram (see Figure 15(b)). As a consequence, the segmentation is significantly improved by eliminating the pixel misclassification, as shown in Figure 16(b).

It is evident from Figure 15 that the techniques J + ABC, J + AIS, and J + DE all need a priori knowledge of the number of classes contained in the actual histogram in order to obtain a satisfactory result. On the other hand, the proposed algorithm is able to find a reduced Gaussian mixture whose classes coincide with the actual number of classes contained in the image histogram.

Accuracy. In this section, the comparison among the algorithms in terms of accuracy is reported. Most of the reported comparisons [19–26] are concerned with comparing the parameters of the resultant Gaussian mixtures by using real images. Under such circumstances, it is difficult to consider a clear reference in order to define a meaningful error. Therefore, the image defined in Figure 14 has been used in the experiments because its construction parameters are clearly defined in Table 7.

Since the parameter values from Table 7 act as ground truth, a simple averaged difference between them and the values computed by each algorithm could be used as a comparison error. However, as each parameter maintains a different active interval, it is necessary to express the differences in terms of percentage. Therefore, if Δβ is the parametric difference and β the ground truth parameter, the percentage error Δβ% can be defined as follows:

Δβ% = (Δβ / β) · 100.    (22)

In the segmentation problem, each Gaussian mixture represents a K-dimensional model, where each dimension corresponds to a Gaussian function of the optimization problem to be solved. Since each Gaussian function possesses three parameters, P_i, μ_i, and σ_i, the complete number of parameters is 3 · K dimensions. Therefore, the final error E produced by the final Gaussian mixture is

E = (1 / (3 · K)) · Σ_{v=1}^{3·K} Δβ_v%,    (23)

where β_v ∈ {P_i, μ_i, σ_i}.

In order to compare accuracy, the algorithms J + ABC, J + AIS, J + DE, and the proposed approach are all executed over the image shown in Figure 14(a). The experiment aims


Figure 18: Segmentation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

to estimate the Gaussian mixture that best approximates the actual image histogram. The methods J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, although the algorithm finds a reduced Gaussian mixture of four functions, it is initially set with ten functions (K = 10). Table 8 presents the final Gaussian mixture parameters and the final error E. The final Gaussian mixture parameters have been averaged over 30 independent executions in order to assure consistency. A close inspection of Table 8 reveals that the proposed method achieves the smallest error (E) in comparison to the other algorithms. Figure 17 presents the histogram approximations produced by each algorithm, whereas Figure 18 shows their corresponding segmented images. Both illustrations present the median case obtained throughout 30 runs. Figure 18 exhibits that J + ABC, J + AIS, and J + DE present different levels of misclassification, which are nearly absent in the case of the proposed approach.
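Equations (22) and (23) reduce to a short routine; the following is a hedged sketch in which the flattening of the (P_i, μ_i, σ_i) triplets into one parameter vector is an implementation choice, not the paper's code.

```python
import numpy as np

def mixture_error(ground_truth, estimated):
    # Eq. (22): percentage error of each parameter against its ground truth,
    # Eq. (23): averaged over the 3*K parameters (P_i, mu_i, sigma_i)
    g = np.asarray(ground_truth, dtype=float).ravel()
    e = np.asarray(estimated, dtype=float).ravel()
    return float(np.mean(np.abs(e - g) / g * 100.0))
```

Feeding Table 7 as ground truth against an algorithm's averaged mixture parameters produces an error of the same kind as the E values reported in Table 8.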

Computational Cost. The experiment aims to measure the complexity and the computing time spent by the J + ABC,

Table 8: Results of the reduced Gaussian mixture in terms of accuracy.

Algorithm     Function    P_i      μ_i       σ_i      E
J + ABC       (1)         0.052    44.5      6.4      11.79
              (2)         0.084    98.12     12.87
              (3)         0.058    163.50    8.94
              (4)         0.025    218.84    1.75
J + AIS       (1)         0.072    31.01     6.14     22.01
              (2)         0.054    88.52     12.21
              (3)         0.039    149.21    9.14
              (4)         0.034    248.41    13.84
J + DE        (1)         0.041    35.74     7.86     13.57
              (2)         0.036    90.57     11.97
              (3)         0.059    148.47    9.01
              (4)         0.020    201.34    13.02
J_new + LS    (1)         0.049    40.12     7.5      3.98
              (2)         0.041    102.04    10.4
              (3)         0.052    168.66    8.3
              (4)         0.025    110.92    15.2


Figure 19: Images employed in the computational cost analysis.

the J + AIS, the J + DE, and the J_new + LS algorithms while calculating the parameters of the Gaussian mixture in benchmark images (see Figures 19(a)–19(d)). J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, the algorithm finds a reduced Gaussian mixture of four functions despite being initialized with ten functions (K = 10). Table 9 shows the averaged measurements after 30 experiments. It is evident that J + ABC and J + DE are the slowest to converge (iterations), and J + AIS shows the highest computational cost (time elapsed) because it requires operators which demand long execution times. On the other hand, J_new + LS shows an acceptable trade-off between its convergence time and its computational cost. Therefore, although the implementation of J_new + LS in general requires more code than most other evolution-based segmentators, such a fact is not reflected in the execution time. Finally, Figure 19 shows the segmented images as they are


Figure 20: Experimental set used in the evaluation of the segmentation results.

Table 9: Iterations and time requirements of the J + ABC, J + AIS, J + DE, and J_new + LS algorithms as they are applied to segment the benchmark images (see Figure 19).

Algorithm     Iterations / time elapsed
              (a)            (b)            (c)            (d)
J + ABC       855 / 2.72 s   833 / 2.70 s   870 / 2.73 s   997 / 3.1 s
J + AIS       725 / 17.8 s   704 / 16.1 s   754 / 14.1 s   812 / 20.1 s
J + DE        657 / 1.25 s   627 / 1.12 s   694 / 1.45 s   742 / 1.88 s
J_new + LS    314 / 0.98 s   298 / 0.84 s   307 / 0.72 s   402 / 1.02 s

Table 10: Evaluation of the segmentation results in terms of the ROS index.

Image                (a)        (b)        (c)        (d)
Number of classes    N_R = 4    N_R = 3    N_R = 4    N_R = 4
J + ABC              0.534      0.328      0.411      0.457
J + AIS              0.522      0.321      0.427      0.437
J + DE               0.512      0.312      0.408      0.418
J_new + LS           0.674      0.401      0.514      0.527

generated by each algorithm. It can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by an incorrect pixel classification.

7.3. Performance Evaluation of the Segmentation Results. This section presents an objective evaluation of the segmentation results produced by all algorithms in the comparisons. The ill-defined nature of the segmentation problem makes the evaluation of a candidate algorithm difficult [57]. Traditionally, the evaluation has been conducted by using supervised criteria [58], which are based on the computation of a dissimilarity measure between a segmentation result and a ground truth image. Recently, the use of unsupervised measures has substituted supervised indexes for the objective evaluation of segmentation results [59]. They

Table 11: Unimodal test functions.

f1(x) = Σ_{i=1}^{n} x_i²;  S = [−100, 100]^n;  f_opt = 0

f2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|;  S = [−10, 10]^n;  f_opt = 0

f3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)²;  S = [−100, 100]^n;  f_opt = 0

f4(x) = max_i {|x_i|, 1 ≤ i ≤ n};  S = [−100, 100]^n;  f_opt = 0

f5(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²];  S = [−30, 30]^n;  f_opt = 0

f6(x) = Σ_{i=1}^{n} (x_i + 0.5)²;  S = [−100, 100]^n;  f_opt = 0

f7(x) = Σ_{i=1}^{n} i·x_i⁴ + rand(0, 1);  S = [−1.28, 1.28]^n;  f_opt = 0

enable the quantification of the quality of a segmentation result without a priori knowledge (a ground truth image).

Evaluation Criteria. In this paper, the unsupervised index ROS proposed by Chabrier et al. [60] has been used to objectively evaluate the performance of each candidate algorithm. This index evaluates the segmentation quality in terms of the homogeneity within segmented regions and the heterogeneity among the different regions. ROS can be computed as follows:

ROS = (D − D̄) / 2,    (24)

where D̄ quantifies the homogeneity within segmented regions. Similarly, D measures the disparity among the regions. A segmentation result S1 is considered better than S2 if ROS_S1 > ROS_S2. The within-region homogeneity, characterized by D̄, is calculated considering the following formulation:

D̄ = (1 / N_R) · Σ_{c=1}^{N_R} (R_c / I) · (σ_c / Σ_{l=1}^{N_R} σ_l),    (25)

where N_R represents the number of partitions into which the image has been segmented, R_c symbolizes the number of


Figure 21: Segmentation results used in the evaluation.

pixels contained in the partition c, whereas I considers the number of pixels that integrate the complete image. Similarly, σ_c represents the standard deviation of the partition c. On the other hand, the disparity among the regions, D, is computed as follows:

D = (1 / N_R) · Σ_{c=1}^{N_R} (R_c / I) · [(1 / (N_R − 1)) · Σ_{l=1}^{N_R} |μ_c − μ_l| / 255],    (26)

where μ_c is the average gray level in the partition c.
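The ROS computation of (24)-(26) can be sketched directly from the formulas; this assumes a gray-level image and an integer label map of N_R ≥ 2 partitions, and is an illustrative reading, not the reference implementation of [60].

```python
import numpy as np

def ros_index(image, labels):
    # Eq. (25): within-region homogeneity D-bar from region sizes and std devs
    # Eq. (26): inter-region disparity D from mean gray-level differences
    # Eq. (24): ROS = (D - D-bar) / 2; requires at least two partitions
    regions = np.unique(labels)
    NR, I = len(regions), image.size
    sizes = np.array([np.sum(labels == c) for c in regions])
    stds = np.array([np.std(image[labels == c]) for c in regions])
    means = np.array([np.mean(image[labels == c]) for c in regions])
    d_bar = np.sum(sizes / I * stds / stds.sum()) / NR
    disp = np.array([np.sum(np.abs(m - means)) / (NR - 1) / 255.0 for m in means])
    d = np.sum(sizes / I * disp) / NR
    return (d - d_bar) / 2.0
```

A labeling that matches the true regions scores higher than a random labeling of the same image, consistent with the rule that S1 is better than S2 if ROS_S1 > ROS_S2.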

Experimental Protocol. In the comparison of segmentation results, a set of four classical images has been chosen to integrate the experimental set (Figure 20). The segmentation methods used in the comparison are J + ABC [19], J + AIS [20], and J + DE [21].

Among all segmentation methods used in the comparison, the proposed J_new + LS algorithm is the only one that has the capacity to automatically detect the number of segmentation partitions (classes). In order to conduct a fair comparison, all algorithms have been evaluated by using the same number


Table 12: Multimodal test functions.

f8(x) = Σ_{i=1}^{n} −x_i sin(√|x_i|);  S = [−500, 500]^n;  f_opt = −418.98 · n

f9(x) = Σ_{i=1}^{n} [x_i² − 10 cos(2πx_i) + 10];  S = [−5.12, 5.12]^n;  f_opt = 0

f10(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^{n} x_i²)) − exp((1/n) Σ_{i=1}^{n} cos(2πx_i)) + 20 + e;  S = [−32, 32]^n;  f_opt = 0

f11(x) = (1/4000) Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos(x_i / √i) + 1;  S = [−600, 600]^n;  f_opt = 0

f12(x) = (π/n) {10 sin²(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)² [1 + 10 sin²(πy_{i+1})] + (y_n − 1)²} + Σ_{i=1}^{n} u(x_i, 10, 100, 4);  S = [−50, 50]^n;  f_opt = 0,
with y_i = 1 + (x_i + 1)/4 and
u(x_i, a, k, m) = k(x_i − a)^m if x_i > a;  0 if −a < x_i < a;  k(−x_i − a)^m if x_i < −a

f13(x) = 0.1 {sin²(3πx_1) + Σ_{i=1}^{n−1} (x_i − 1)² [1 + sin²(3πx_{i+1})] + (x_n − 1)² [1 + sin²(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4);  S = [−50, 50]^n;  f_opt = 0

of partitions. Therefore, in the experiments, the J_new + LS segmentation algorithm is first applied to detect the best possible number of partitions N_R. Once the number of partitions N_R is obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.

Figure 21 presents the segmentation results obtained by each algorithm considering the experimental set from Figure 20. On the other hand, Table 10 shows the evaluation of the segmentation results in terms of the ROS index. Such values represent the averaged measurements after 30 executions. From them, it can be seen that the proposed J_new + LS method obtains the best ROS indexes. Such values indicate that the proposed algorithm maintains the best balance between the homogeneity within segmented regions and the heterogeneity among the different regions. From Figure 21, it can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by an incorrect pixel classification.

8. Conclusions

Despite the fact that several evolutionary methods have been successfully applied to image segmentation with interesting results, most of them have exhibited two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance.

In this paper, a new swarm algorithm for automatic image segmentation, called Locust Search (LS), has been presented. The proposed method eliminates the typical flaws presented by previous evolutionary approaches by combining a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments have been divided into two parts. First, the proposed LS method has been compared to other well-known evolutionary techniques on a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm has been compared to other segmentation methods based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

Several research directions will be considered for future work, such as the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, n is the dimension of the function, f_opt is the minimum value of the function, and S is a subset of R^n. The optimum location (x_opt) for the functions in Table 11 is at [0]^n, except for f5, whose optimum is at [1]^n.

The optimum locations (x_opt) for the functions in Table 12 are at [0]^n, except for f8, at [420.96]^n, and f12-f13, at [1]^n.
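A few of the benchmark functions from Tables 11 and 12, written out as a sketch for checking the tabulated optima; the vectorized NumPy forms are an implementation choice.

```python
import numpy as np

def f1(x):   # sphere (Table 11)
    return np.sum(x ** 2)

def f5(x):   # Rosenbrock (Table 11)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def f9(x):   # Rastrigin (Table 12)
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def f10(x):  # Ackley (Table 12)
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

def f11(x):  # Griewank (Table 12)
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0
```

f1, f9, f10, and f11 all evaluate to (numerically) zero at x = [0]^n, and f5 vanishes at x = [1]^n, matching the f_opt column.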

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgment

The present research has been supported under Grant CONACYT CB 181053.

References

[1] H Zhang J E Fritts and S A Goldman ldquoImage segmentationevaluation a survey of unsupervised methodsrdquo ComputerVision and Image Understanding vol 110 no 2 pp 260ndash2802008

[2] T Uemura G Koutaki and K Uchimura ldquoImage segmentationbased on edge detection using boundary coderdquo InternationalJournal of Innovative Computing vol 7 pp 6073ndash6083 2011

[3] L Wang H Wu and C Pan ldquoRegion-based image segmen-tation with local signed difference energyrdquo Pattern RecognitionLetters vol 34 no 6 pp 637ndash645 2013

[4] N Otsu ldquoA thresholding selection method from gray-levelhistogramrdquo IEEETransactions on SystemsMan andCyberneticsvol 9 no 1 pp 62ndash66 1979

[5] R Peng and P K Varshney ldquoOn performance limits of imagesegmentation algorithmsrdquo Computer Vision and Image Under-standing vol 132 pp 24ndash38 2015

[6] M A Balafar ldquoGaussian mixture model based segmentationmethods for brain MRI imagesrdquo Artificial Intelligence Reviewvol 41 no 3 pp 429ndash439 2014

[7] G J McLachlan and S Rathnayake ldquoOn the number ofcomponents in a Gaussian mixture modelrdquo Data Mining andKnowledge Discovery vol 4 no 5 pp 341ndash355 2014

[8] D Oliva V Osuna-Enciso E Cuevas G Pajares M Perez-Cisneros and D Zaldıvar ldquoImproving segmentation velocityusing an evolutionary methodrdquo Expert Systems with Applica-tions vol 42 no 14 pp 5874ndash5886 2015

[9] Z-W Ye M-W Wang W Liu and S-B Chen ldquoFuzzy entropybased optimal thresholding using bat algorithmrdquo Applied SoftComputing vol 31 pp 381ndash395 2015

[10] S Sarkar S Das and S Sinha Chaudhuri ldquoA multilevel colorimage thresholding scheme based on minimum cross entropyand differential evolutionrdquo Pattern Recognition Letters vol 54pp 27ndash35 2015

[11] A K Bhandari A Kumar and G K Singh ldquoModified artificialbee colony based computationally efficient multilevel thresh-olding for satellite image segmentation using Kapurrsquos Otsu andTsallis functionsrdquo Expert Systems with Applications vol 42 no3 pp 1573ndash1601 2015

[12] H Permuter J Francos and I Jermyn ldquoA study of Gaussianmixture models of color and texture features for image classi-fication and segmentationrdquo Pattern Recognition vol 39 no 4pp 695ndash706 2006

[13] A P Dempster A P Laird and D B Rubin ldquoMaximumlikelihood from incomplete data via the EM algorithmrdquo Journalof the Royal Statistical Society Series B vol 39 no 1 pp 1ndash381977

[14] Z Zhang C Chen J Sun and K L Chan ldquoEM algorithmsfor Gaussian mixtures with split-and-merge operationrdquo PatternRecognition vol 36 no 9 pp 1973ndash1983 2003

[15] H Park S-I Amari and K Fukumizu ldquoAdaptive naturalgradient learning algorithms for various stochastic modelsrdquoNeural Networks vol 13 no 7 pp 755ndash764 2000

[16] H Park and T Ozeki ldquoSingularity and slow convergence of theem algorithm for gaussian mixturesrdquo Neural Processing Lettersvol 29 no 1 pp 45ndash59 2009

[17] L. Gupta and T. Sortrakul, "A Gaussian-mixture-based image segmentation algorithm," Pattern Recognition, vol. 31, no. 3, pp. 315–325, 1998.

[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213–1219, 2013.

[19] E. Cuevas, F. Sencion, D. Zaldivar, M. Perez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321–336, 2012.

[20] E. Cuevas, V. Osuna-Enciso, D. Zaldivar, M. Perez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.

[21] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.

[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldivar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357–381, 2014.

[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldivar, and M. Perez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.

[24] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805–818, 2011.

[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701–713, 2009.

[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661–673, 2009.

[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.

[28] R. Storn and K. Price, "Differential Evolution: a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.

[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515–4538, 2011.

[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing Journal, vol. 9, no. 3, pp. 1149–1155, 2009.

[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-s. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119–135, 2013.

[32] W. Gong, A. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364–5386, 2011.

[33] D. M. Gordon, "The organization of work in social insect colonies," Complexity, vol. 8, no. 1, pp. 43–46, 2002.

[34] S. Kizaki and M. Katori, "Stochastic lattice model for locust outbreak," Physica A: Statistical Mechanics and its Applications, vol. 266, no. 1–4, pp. 339–342, 1999.

Mathematical Problems in Engineering 25

[35] S. M. Rogers, D. A. Cullen, M. L. Anstey et al., "Rapid behavioural gregarization in the desert locust Schistocerca gregaria entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9–26, 2014.

[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal Special Topics, vol. 157, no. 1, pp. 93–109, 2008.

[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.

[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.

[39] B. Hölldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.

[40] B. Hölldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.

[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101–112, 2013.

[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196–205, 2012.

[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723–738, 2014.

[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.

[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskal)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.

[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.

[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256–272, 2014.

[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635–672, 2005.

[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.

[50] F. Herrera, M. Lozano, and A. M. Sánchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309–338, 2003.

[51] E. Cuevas, D. Zaldivar, M. Perez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93–107, 2011.

[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.

[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140–157, 2015.

[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.

[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.

[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.

[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.

[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.

[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335–1346, 1996.

[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.



The problem of estimating the parameters of a Gaussian mixture that better models an image histogram has commonly been solved through the Expectation Maximization (EM) algorithm [13, 14] or gradient-based methods such as Levenberg-Marquardt (LM) [15]. Unfortunately, EM algorithms are very sensitive to the choice of initial values [16]; meanwhile, gradient-based methods are computationally expensive and may easily get stuck within local minima [17].

As an alternative to classical techniques, the problem of Gaussian mixture identification has also been handled through evolutionary methods. In general, these have been shown to deliver better results than classical approaches in terms of accuracy and robustness [18]. Under these methods, an individual is represented by a candidate Gaussian mixture model. As the evolution process unfolds, a set of evolutionary operators is applied in order to produce better individuals. The quality of each candidate solution is evaluated through an objective function whose final result represents the similarity between the mixture model and the histogram. Some examples of these approaches involve optimization methods such as Artificial Bee Colony (ABC) [19], Artificial Immune Systems (AIS) [20], Differential Evolution (DE) [21], Electromagnetism Optimization (EO) [22], Harmony Search (HS) [23], and Learning Automata (LA) [24]. Although these algorithms yield interesting results, they present two important limitations. (1) They frequently obtain suboptimal approximations as a consequence of a limited balance between exploration and exploitation in their search strategies. (2) They are based on the assumption that the number of Gaussians (classes) in the mixture is known and fixed in advance; otherwise, they cannot work. The cause of the first limitation is associated with the evolutionary operators employed to modify the individual positions. In such algorithms, during the evolution, the position of each agent for the next iteration is updated, yielding an attraction towards the position of the best particle seen so far or towards other promising individuals. Therefore, as the algorithm evolves, these behaviors cause the entire population to concentrate rapidly around the best particles, favoring premature convergence and damaging the appropriate exploration of the search space [25, 26]. The second limitation is produced as a consequence of the objective function that evaluates the similarity between the mixture model and the histogram. Under such an objective function, the number of Gaussian functions in the mixture is fixed. Since the number of threshold values (Gaussian functions) used for image segmentation varies depending on the image, the best threshold number and values are obtained by an exhaustive trial-and-error procedure.
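To make this limitation concrete, the following sketch shows how such an objective function might measure the similarity between a candidate Gaussian mixture and a gray-level histogram. The flat parameter encoding, the toy bimodal histogram, and all numeric values are illustrative assumptions, not taken from the paper; the point is that the encoding fixes the number of components in advance.

```python
import numpy as np

def gaussian(x, mean, sigma, weight):
    """One weighted Gaussian component evaluated over gray levels x."""
    return weight * np.exp(-((x - mean) ** 2) / (2.0 * sigma ** 2)) \
        / (sigma * np.sqrt(2.0 * np.pi))

def mixture_error(params, histogram):
    """Mean absolute error between a candidate Gaussian mixture and a
    normalized gray-level histogram. params is a flat array
    [w1, m1, s1, w2, m2, s2, ...] encoding one candidate solution,
    so the number of classes is fixed by the encoding length."""
    x = np.arange(len(histogram))
    model = np.zeros(len(histogram))
    for w, m, s in np.asarray(params, dtype=float).reshape(-1, 3):
        model += gaussian(x, m, s, w)
    return np.mean(np.abs(model - histogram))

# Toy bimodal histogram over 256 gray levels
x = np.arange(256)
hist = gaussian(x, 60, 12, 0.5) + gaussian(x, 180, 20, 0.5)
good = [0.5, 60, 12, 0.5, 180, 20]   # close to the true mixture
bad = [0.5, 100, 30, 0.5, 140, 30]   # poor candidate
assert mixture_error(good, hist) < mixture_error(bad, hist)
```

A candidate matching the histogram scores a lower error, which is the quantity an evolutionary method would minimize.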

On the other hand, bioinspired algorithms represent a field of research concerned with the use of biology as a metaphor for producing optimization algorithms. Such approaches use our scientific understanding of biological systems as an inspiration that, at some level of abstraction, can be represented as an optimization process.

In the last decade, several optimization algorithms have been developed by combining deterministic rules and randomness, mimicking the behavior of natural phenomena. Such methods include the social behavior of bird flocking and fish schooling, as in the Particle Swarm Optimization (PSO) algorithm [27], and the emulation of differential evolution in species, as in Differential Evolution (DE) [28]. Although PSO and DE are the most popular algorithms for solving complex optimization problems, they present serious flaws, such as premature convergence and difficulty overcoming local minima [29, 30]. The cause of such problems is associated with the operators that modify individual positions. In such algorithms, during the evolution, the position of each agent for the next iteration is updated, yielding an attraction towards the position of the best particle seen so far (in the case of PSO) or towards other promising individuals (in the case of DE). As the algorithm evolves, these behaviors cause the entire population to concentrate rapidly around the best particles, favoring premature convergence and damaging the appropriate exploration of the search space [31, 32].

Recently, the collective intelligent behavior of insect or animal groups in nature has attracted the attention of researchers. The intelligent behavior observed in these groups provides survival advantages, where aggregations of relatively simple and "unintelligent" individuals can accomplish very complex tasks using only limited local information and simple rules of behavior [33]. Locusts (Schistocerca gregaria) are a representative example of such collaborative insects [34]. The locust is a kind of grasshopper that can change reversibly between a solitary and a social phase, with clear behavioral differences between the two phases [35]. The two phases show many differences regarding the overall level of activity and the degree to which locusts are attracted or repulsed among themselves [36]. In the solitary phase, locusts avoid contact with each other (locust concentrations). As a consequence, they distribute throughout the space, exploring sufficiently over the plantation [36]. On the other hand, in the social phase, locusts frantically concentrate around those elements that have already found good food sources [37]. Under such behavior, locusts attempt to efficiently find better nutrients by devastating promising areas within the plantation.

This paper presents an algorithm for the automatic selection of pixel classes for image segmentation. The proposed method combines a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with regard to the number of classes. The new evolutionary algorithm, called Locust Search (LS), is based on the behavior presented in swarms of locusts. In the proposed algorithm, individuals emulate a group of locusts which interact with each other based on the biological laws of the cooperative swarm. The algorithm considers two different behaviors: solitary and social. Depending on the behavior, each individual is conducted by a set of evolutionary operators which mimics different cooperative conducts typically found in the swarm. Unlike most existing evolutionary algorithms, the behavioral model in the proposed approach explicitly avoids the concentration of individuals in the current best positions. This helps to avoid critical flaws such as premature convergence to suboptimal solutions and an incorrect exploration-exploitation balance. In order to automatically define the optimal number of pixel classes (Gaussian functions in the mixture), a new objective function has also been incorporated. The new objective function is divided into two parts. The first part evaluates the quality of each candidate solution in terms of its similarity with regard to the image histogram. The second part penalizes the overlapped area among Gaussian functions (classes). Under these circumstances, Gaussian functions that do not "positively" participate in the histogram approximation can be easily eliminated from the final Gaussian mixture model.

In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments are divided into two sections. In the first part, the proposed LS method is compared to other well-known evolutionary techniques over a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm is compared to other segmentation methods which are also based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

This paper is organized as follows: in Section 2, the basic biological issues behind the algorithm's analogy are introduced and explained. Section 3 describes the novel LS algorithm and its characteristics. A numerical study on different benchmark functions is presented in Section 4, while Section 5 presents the modelling of an image histogram through a Gaussian mixture. Section 6 exposes the LS segmentation algorithm, and Section 7 presents the performance of the proposed segmentation algorithm. Finally, Section 8 draws some conclusions.

2. Biological Fundamentals and Mathematical Models

Social insect societies are complex cooperative systems that self-organize within a set of constraints. Cooperative groups are good at manipulating and exploiting their environment, defending resources, and breeding, while allowing task specialization among group members [38, 39]. A social insect colony functions as an integrated unit that not only possesses the ability to operate in a distributed manner but also undertakes enormous construction of global projects [40]. It is important to acknowledge that global order in insects can arise as a result of internal interactions among members.

Locusts are a kind of grasshopper that exhibit two opposite behavioral phases: solitary and social (gregarious). Individuals in the solitary phase avoid contact with each other (locust concentrations). As a consequence, they distribute throughout the space while sufficiently exploring the plantation [36]. In contrast, locusts in the gregarious phase gather into several concentrations. Such congregations may contain up to 10^10 members, cover cross-sectional areas of up to 10 km^2, and have a travelling capacity of up to 10 km per day for a period of days or weeks as they feed, causing devastating crop loss [41]. The mechanism for switching from the solitary phase to the gregarious phase is complex and has been a subject of significant biological inquiry. Recently, a set of factors has been implicated, including the geometry of the vegetation landscape and olfactory stimuli [42].

Only a few works [36, 37] that mathematically model the locust behavior have been published. Both approaches develop different minimal models with the goal of reproducing the macroscopic structure and motion of a group of locusts. Considering that the method in [36] focuses on modelling the behavior of each locust in the group, its fundamentals have been employed to develop the algorithm proposed in this paper.

2.1. Solitary Phase. This section describes how each locust's position is modified as a result of its behavior under the solitary phase. Considering that x_i^k represents the current position of the i-th locust in a group of N different elements, the new position x_i^{k+1} is calculated by using the following model:

\[ \mathbf{x}_i^{k+1} = \mathbf{x}_i^{k} + \Delta\mathbf{x}_i, \tag{1} \]

with Δx_i corresponding to the change of position experimented by x_i^k as a consequence of its social interaction with all the other elements in the group.

Two locusts in the solitary phase exert forces on each other according to basic biological principles of attraction and repulsion (see, e.g., [36]). Repulsion operates quite strongly over a short length scale in order to avoid concentrations. Attraction is weaker and operates over a longer length scale, providing the social force required to maintain the group's cohesion. Therefore, the strength of such social forces can be modeled by the following function:

\[ s(r) = F \cdot e^{-r/L} - e^{-r}. \tag{2} \]

Here, r is a distance, F describes the strength of attraction, and L is the typical attractive length scale. We have scaled the time and space coordinates so that the repulsive strength and length scale are both represented by unity. We assume that F < 1 and L > 1, so that repulsion is stronger and features on a shorter scale, while attraction is weaker and applied on a longer scale; both facts are typical for social organisms [21]. The social force exerted by locust j over locust i is

\[ \mathbf{s}_{ij} = s(r_{ij}) \cdot \mathbf{d}_{ij}, \tag{3} \]

where r_{ij} = ||x_j − x_i|| is the distance between the two locusts and d_{ij} = (x_j − x_i)/r_{ij} is the unit vector pointing from x_i to x_j.

The total social force on each locust can be modeled as the superposition of all of the pairwise interactions:

\[ \mathbf{S}_i = \sum_{j=1,\; j \neq i}^{N} \mathbf{s}_{ij}. \tag{4} \]

The change of position Δx_i is modeled as the total social force experimented by x_i^k, as the superposition of all of the pairwise interactions. Therefore, Δx_i is defined as follows:

\[ \Delta\mathbf{x}_i = \mathbf{S}_i. \tag{5} \]

In order to illustrate the behavioral model under the solitary phase, Figure 1 presents an example assuming a population of



Figure 1: Behavioral model under the solitary phase.

three different members (N = 3), which adopt a determined configuration in the current iteration k. As a consequence of the social forces, each element suffers an attraction or repulsion to other elements, depending on the distance among them. Such forces are represented by s_12, s_13, s_21, s_23, s_31, and s_32. Since x_1 and x_2 are too close, the social forces s_12 and s_21 are repulsive in nature. On the other hand, as the distances ||x_1 − x_3|| and ||x_2 − x_3|| are quite long, the social forces s_13, s_23, s_31, and s_32 between x_1 ↔ x_3 and x_2 ↔ x_3 are all attractive in nature. Therefore, the change of position Δx_1 is computed as the vector resultant of s_12 and s_13 (Δx_1 = s_12 + s_13 = S_1). The values Δx_2 and Δx_3 are calculated accordingly.
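The solitary-phase model of (1)–(5) can be sketched as follows. The values of F, L, and the three positions are illustrative choices mirroring the Figure 1 example, not parameters reported in the paper.

```python
import numpy as np

F, L = 0.6, 2.0   # attraction strength F < 1 and attractive length scale L > 1

def s(r):
    """Social force strength, equation (2): negative (repulsive) at short
    range, positive (attractive) at long range."""
    return F * np.exp(-r / L) - np.exp(-r)

def position_changes(X):
    """Change of position for every locust, equations (3)-(5).
    X is an (N, d) array of positions; positions are assumed distinct."""
    N = X.shape[0]
    delta = np.zeros_like(X)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            diff = X[j] - X[i]
            r = np.linalg.norm(diff)
            delta[i] += s(r) * diff / r   # s_ij = s(r_ij) * d_ij
    return delta                          # Delta x_i = S_i

# Three locusts as in Figure 1: x1 and x2 close together, x3 far away
X = np.array([[0.0, 0.0], [0.3, 0.0], [5.0, 0.0]])
d = position_changes(X)
# x1 is pushed away from the nearby x2: repulsion dominates at short range
assert d[0][0] < 0
```

With these values, s(0.3) is negative (repulsion between the close pair) while s(5.0) is positive (weak attraction toward the distant locust), so x_1 moves away from x_2.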

In addition to the presented model [36], some studies [43–45] suggest that the social force s_ij is also affected by the dominance of the involved individuals x_i and x_j in the pairwise process. Dominance is a property that relatively qualifies the capacity of an individual to survive in relation to other elements in a group. A locust's dominance is determined by several characteristics, such as size, chemical emissions, and location with regard to food sources. Under such circumstances, the social force is magnified or weakened depending on the most dominant individual involved in the repulsion-attraction process.

2.2. Social Phase. In this phase, locusts frantically concentrate around the elements that have already found good food sources. They attempt to efficiently find better nutrients by devastating promising areas within the plantation. In order to simulate the social phase, a food quality index Fq_i is assigned to each locust x_i of the group, such that the index reflects the quality of the food source where x_i is located.

Under this behavioral model, each of the N elements of the group is ranked according to its corresponding food quality index. Afterwards, the b elements featuring the best food quality indexes are selected (b ≪ N). Considering a concentration radius R_c created around each selected element, a set of c new locusts is randomly generated inside R_c. As a result, most of the locusts will be concentrated around the best b elements. Figure 2 shows a simple example of the behavioral model under the social phase. In the example, the configuration includes eight locusts (N = 8), just as illustrated in Figure 2(a), which also presents the food quality index of each locust. A food quality index near one indicates a better food source. Therefore, Figure 2(b) presents the final configuration after the social phase, assuming b = 2.
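A minimal sketch of this social phase follows, assuming illustrative values for b, c, and R_c and a uniform sampling rule inside the concentration region (the paper does not fix these implementation details here):

```python
import numpy as np

rng = np.random.default_rng(0)

def social_phase(X, food_quality, b=2, c=4, Rc=0.5):
    """Keep the b locusts with the best food quality index and generate
    c new locusts uniformly inside a concentration region of radius Rc
    (per coordinate) around each of them."""
    best = np.argsort(food_quality)[::-1][:b]        # indexes of the b best
    offspring = []
    for i in best:
        # c random positions inside the hypercube of half-width Rc around X[i]
        offspring.append(X[i] + rng.uniform(-Rc, Rc, size=(c, X.shape[1])))
    return X[best], np.vstack(offspring)

# Eight locusts as in Figure 2; locusts with Fq = 1.0 and 0.9 are the best
X = rng.uniform(-5, 5, size=(8, 2))
Fq = np.array([0.2, 0.3, 0.1, 0.9, 0.2, 1.0, 0.3, 0.4])
selected, new_locusts = social_phase(X, Fq)
assert len(selected) == 2 and len(new_locusts) == 8
# the first c offspring all lie near the best selected locust
assert np.all(np.abs(new_locusts[:4] - selected[0]) <= 0.5)
```

After the operation, the b·c new locusts are concentrated around the best food sources, as in Figure 2(b).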

3. The Locust Search (LS) Algorithm

In this paper, some behavioral principles drawn from a swarm of locusts have been used as guidelines for developing a new swarm optimization algorithm. The LS assumes that the entire search space is a plantation, where all the locusts interact with each other. In the proposed approach, each solution within the search space represents a locust position inside the plantation. Every locust receives a food quality index according to the fitness value of the solution symbolized by the locust's position. As previously discussed, the algorithm implements two different behavioral schemes: solitary and social. Depending on the behavioral phase, each individual is conducted by a set of evolutionary operators which mimics the different cooperative operations typically found in the swarm. The proposed method formulates the optimization problem in the following form:

\[ \text{maximize/minimize} \quad f(\mathbf{l}), \quad \mathbf{l} = (l_1, \ldots, l_n) \in \mathbb{R}^n, \quad \text{subject to } \mathbf{l} \in \mathbf{S}, \tag{6} \]

where f: R^n → R is a nonlinear function, whereas S = {l ∈ R^n | lb_d ≤ l_d ≤ ub_d, d = 1, ..., n} is a bounded feasible search space, constrained by the lower (lb_d) and upper (ub_d) limits.

In order to solve the problem formulated in (6), a population L^k ({l_1^k, l_2^k, ..., l_N^k}) of N locusts (individuals) is evolved inside the LS operation from the initial point (k = 0) to a total of gen iterations (k = gen). Each locust l_i^k (i ∈ [1, ..., N]) represents an n-dimensional vector {l_{i,1}^k, l_{i,2}^k, ..., l_{i,n}^k}, where each dimension corresponds to a decision variable of the optimization problem to be solved. The set of decision variables constitutes the feasible search space S = {l_i^k ∈ R^n | lb_d ≤ l_{i,d}^k ≤ ub_d}, where lb_d and ub_d correspond to the lower and upper bounds for dimension d, respectively. The food quality index associated with each locust l_i^k (candidate solution) is evaluated through an objective function f(l_i^k), whose final result represents the fitness value of l_i^k. In the LS algorithm, each iteration of the evolution process consists of two operators: (A) solitary and (B) social. Beginning with the solitary stage, the set of locusts is operated in order to sufficiently explore the search space. Afterwards, the social operation refines existent solutions within a determined neighborhood (exploitation) subspace.
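The two-operator iteration structure can be sketched as follows. The solitary and social steps below are deliberately simplified stand-ins (a greedy Gaussian perturbation and a resampling of the worst locusts near the current best) rather than the full operators of Sections 3.1 and 3.2; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def locust_search(f, lb, ub, N=20, gen=50):
    """Skeleton of the LS iteration structure: a population of N locusts
    is evolved for gen iterations, applying a solitary (exploration) and
    then a social (exploitation) step. Minimization is assumed."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    L = rng.uniform(lb, ub, size=(N, len(lb)))          # initial swarm
    for k in range(gen):
        # (A) solitary stand-in: perturb positions, keep only improvements
        P = np.clip(L + rng.normal(0, 0.1 * (ub - lb), L.shape), lb, ub)
        improved = np.apply_along_axis(f, 1, P) < np.apply_along_axis(f, 1, L)
        L[improved] = P[improved]
        # (B) social stand-in: resample the two worst locusts near the best
        order = np.argsort(np.apply_along_axis(f, 1, L))
        best = L[order[0]].copy()
        L[order[-2:]] = np.clip(
            best + rng.normal(0, 0.05 * (ub - lb), (2, len(lb))), lb, ub)
    return min(L, key=f)

# Minimize the sphere function over [-5, 5]^2
sol = locust_search(lambda l: np.sum(l ** 2), [-5, -5], [5, 5])
```

Since each locust only accepts improving moves, the best fitness in the population is monotonically non-increasing across iterations.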

3.1. Solitary Operation (A). One of the most interesting features of the proposed method is the use of the solitary operator to modify the current locust positions. Under this approach, locusts are displaced as a consequence of the social forces produced by the positional relations among the elements of the swarm. Therefore, nearby individuals tend to repel each other, avoiding the concentration of elements in regions, whereas distant individuals tend to attract each other, maintaining the cohesion of the swarm. A clear difference from the original model in [36] is that social forces are also magnified or weakened depending on the most dominant (best fitness value) individuals involved in the repulsion-attraction process.


Figure 2: Behavioral model under the social phase. (a) Initial configuration of eight locusts and their food quality indexes (Fq_1 = 0.2, Fq_2 = 0.3, Fq_3 = 0.1, Fq_4 = 0.9, Fq_5 = 0.2, Fq_6 = 1.0, Fq_7 = 0.3, Fq_8 = 0.4); (b) final configuration after the operation of the social phase, with the concentration radius R_c around the selected elements.

In the solitary phase, a new position p_i (i ∈ [1, ..., N]) is produced by perturbing the current locust position l_i^k with a change of position Δl_i (p_i = l_i^k + Δl_i). The change of position Δl_i is the result of the social interactions experimented by l_i^k as a consequence of its repulsion-attraction behavioral model. Such social interactions are pairwise computed between l_i^k and the other N − 1 individuals in the swarm. In the original model, social forces are calculated by using (3). However, in the proposed method, the force is modified to include the best fitness value (the most dominant) of the individuals involved in the repulsion-attraction process. Therefore, the social force exerted between l_j^k and l_i^k is calculated by using the following new model:

\[ \mathbf{s}_{ij}^{m} = \rho(\mathbf{l}_i^k, \mathbf{l}_j^k) \cdot s(r_{ij}) \cdot \mathbf{d}_{ij} + \mathrm{rand}(1, -1), \tag{7} \]

where s(r_{ij}) is the social force strength defined in (2) and d_{ij} = (l_j^k − l_i^k)/r_{ij} is the unit vector pointing from l_i^k to l_j^k. Besides, rand(1, −1) is a randomly generated number between 1 and −1. Likewise, ρ(l_i^k, l_j^k) is the dominance function that calculates the dominance value of the most dominant individual from l_j^k and l_i^k. In order to operate ρ(l_i^k, l_j^k), all the individuals from L^k ({l_1^k, l_2^k, ..., l_N^k}) are ranked according to their fitness values. The ranks are assigned so that the best individual receives rank 0 (zero), whereas the worst individual obtains rank N − 1. Therefore, the function ρ(l_i^k, l_j^k) is defined as follows:

\[ \rho(\mathbf{l}_i^k, \mathbf{l}_j^k) = \begin{cases} e^{-(5 \cdot \mathrm{rank}(\mathbf{l}_i^k)/N)} & \text{if } \mathrm{rank}(\mathbf{l}_i^k) < \mathrm{rank}(\mathbf{l}_j^k), \\ e^{-(5 \cdot \mathrm{rank}(\mathbf{l}_j^k)/N)} & \text{if } \mathrm{rank}(\mathbf{l}_i^k) > \mathrm{rank}(\mathbf{l}_j^k), \end{cases} \tag{8} \]

Figure 3: Behavior of ρ(l_i^k, l_j^k) considering 100 individuals; the horizontal axis is rank(l_i^k) from 0 to 100 and the vertical axis is ρ(l_i^k, l_j^k) from 0 to 1.

where the function rank(α) delivers the rank of the α-individual. According to (8), ρ(l_i^k, l_j^k) yields a value within the interval (0, 1]. Its maximum value of one is reached when either individual l_j^k or l_i^k is the best element of the population L^k regarding their fitness values. On the other hand, a value close to zero is obtained when both individuals l_j^k and l_i^k possess quite bad fitness values. Figure 3 shows the behavior of ρ(l_i^k, l_j^k) considering 100 individuals. In the figure, it is assumed that l_i^k represents one of the 99 individuals with ranks between 0 and 98, whereas l_j^k is fixed to the element with the worst fitness value (rank 99).

With the incorporation of ρ(l_i^k, l_j^k) in (7), social forces are magnified or weakened depending on the best fitness value (the most dominant) of the individuals involved in the repulsion-attraction process.
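Equations (7) and (8) can be sketched as follows. Note that (8) amounts to weighting the force by the rank of the better (lower-ranked) of the two locusts; F and L are illustrative values, and the equal-rank case (not covered by (8)) falls back to that same rank.

```python
import numpy as np

rng = np.random.default_rng(2)

def dominance(rank_i, rank_j, N):
    """Dominance function rho of equation (8): the rank of the better
    (lower-ranked) of the two individuals sets the weight."""
    return np.exp(-5.0 * min(rank_i, rank_j) / N)

def social_force(li, lj, rank_i, rank_j, N, F=0.6, L=2.0):
    """Modified pairwise force of equation (7)."""
    diff = lj - li
    r = np.linalg.norm(diff)
    strength = F * np.exp(-r / L) - np.exp(-r)   # equation (2)
    return dominance(rank_i, rank_j, N) * strength * diff / r \
        + rng.uniform(-1, 1)

N = 100
# If either locust is the best of the swarm (rank 0), rho is maximal (1.0)
assert dominance(0, 99, N) == 1.0
# Two badly ranked locusts interact with a force weight close to zero
assert dominance(98, 99, N) < 0.01
```

This is the mechanism that keeps poorly ranked locusts from clustering: their mutual forces are damped toward the random term alone.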


Figure 4: Examples of different distributions over the search space [−5, 5] × [−5, 5]. (a) Initial condition; (b) distribution after applying 25, (c) 50, and (d) 100 operations. The green asterisk represents the minimum value so far.

Finally, the total social force on each individual l_i^k is modeled as the superposition of all of the pairwise interactions exerted over it:

\[ \mathbf{S}_i^{m} = \sum_{j=1,\; j \neq i}^{N} \mathbf{s}_{ij}^{m}. \tag{9} \]

Therefore, the change of position Δl_i is considered as the total social force experimented by l_i^k, as the superposition of all of the pairwise interactions. Thus, Δl_i is defined as follows:

\[ \Delta\mathbf{l}_i = \mathbf{S}_i^{m}. \tag{10} \]

After calculating the new positions P = (p_1, p_2, ..., p_N) of the population L^k = (l_1^k, l_2^k, ..., l_N^k), the final positions F = (f_1, f_2, ..., f_N) must be calculated. The idea is to admit only the changes that guarantee an improvement in the search strategy. If the fitness value of p_i (f(p_i)) is better than that of l_i^k (f(l_i^k)), then p_i is accepted as the final solution; otherwise, l_i^k is retained. This procedure can be summarized by the following statement (considering a minimization problem):

f_i = { p_i,    if f(p_i) < f(l_i^k),
      { l_i^k,  otherwise. (11)
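As an illustration, the greedy selection of (11) can be sketched in a few lines of Python (a minimal sketch; the function and variable names are ours, not from the paper):

```python
def select_final_positions(fitness, population, candidates):
    """Element-wise greedy selection of (11): a candidate p_i replaces
    l_i^k only if it improves the fitness (minimization)."""
    return [p if fitness(p) < fitness(l) else l
            for l, p in zip(population, candidates)]

# Example on f(x) = x^2: the first candidate improves, the second does not.
f = lambda x: x * x
print(select_final_positions(f, [2.0, -1.0], [1.5, -3.0]))  # [1.5, -1.0]
```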

In order to illustrate the performance of the solitary operator, Figure 4 presents a simple example with the solitary operator being iteratively applied. A population of 50 different members (N = 50) is assumed, which adopt a concentrated configuration as the initial condition (Figure 4(a)). As a consequence of the social forces, the set of elements tends to distribute throughout the search space. Examples of the resulting distributions are shown in Figures 4(b), 4(c), and 4(d) after applying 25, 50, and 100 solitary operations, respectively.

3.2. Social Operation (B). The social procedure represents the exploitation phase of the LS algorithm. Exploitation is the process of refining existent individuals within a small neighborhood in order to improve their solution quality.

The social procedure is a selective operation which is applied only to a subset E of the final positions F (where E ⊆ F). The operation starts by sorting F with respect to fitness values, storing the elements in a temporary population B = {b_1, b_2, ..., b_N}. The elements in B are sorted so that the best individual receives the position b_1, whereas the worst individual obtains the location b_N. Therefore, the subset E is integrated by only the first g locations of B (promising solutions). Under this operation, a subspace C_j is created around each selected particle f_j ∈ E. The size of C_j depends on the distance e_d, which is defined as follows:

e_d = (Σ_{q=1}^{n} (ub_q − lb_q) / n) · β, (12)

where ub_q and lb_q are the upper and lower bounds in the q-th dimension, n is the number of dimensions of the optimization problem, and β ∈ [0, 1] is a tuning factor. Therefore, the limits of C_j can be modeled as follows:

uss_j^q = b_{j,q} + e_d,
lss_j^q = b_{j,q} − e_d, (13)

where uss_j^q and lss_j^q are the upper and lower bounds of the q-th dimension for the subspace C_j, respectively.

Considering the subspace C_j around each element f_j ∈ E, a set of h new particles (M_j^h = {m_j^1, m_j^2, ..., m_j^h}) is randomly generated inside the bounds fixed by (13). Once the h samples are generated, the individual l_j^{k+1} of the next population L^{k+1} must be created. In order to calculate l_j^{k+1}, the best particle m_j^best, in terms of its fitness value among the h samples (where m_j^best ∈ {m_j^1, m_j^2, ..., m_j^h}), is compared to f_j. If m_j^best is better than f_j according to their fitness values, l_j^{k+1} is updated with m_j^best; otherwise, f_j is selected. The elements of F that have not been processed by the procedure (f_w ∉ E) transfer their corresponding values to L^{k+1} with no change.

The social operation is used to exploit only prominent solutions. According to the proposed method, inside each subspace C_j, h random samples are selected. Since the number of selected samples at each subspace is very small (typically h < 4), the use of this operator substantially reduces the number of fitness function evaluations.
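The subspace construction of (12)-(13) and the best-of-h sampling can be sketched as follows (a hedged Python sketch under our own naming; `fitness` is assumed to be minimized):

```python
import random

def subspace_radius(bounds, beta):
    """e_d of (12): the mean search-range width scaled by beta."""
    return beta * sum(ub - lb for lb, ub in bounds) / len(bounds)

def social_refine(f_j, e_d, h, fitness):
    """Draw h random samples inside C_j = [f_j - e_d, f_j + e_d] per (13)
    and keep the best sample only if it improves on f_j."""
    samples = [[random.uniform(x - e_d, x + e_d) for x in f_j]
               for _ in range(h)]
    best = min(samples, key=fitness)
    return best if fitness(best) < fitness(f_j) else f_j
```

Since h is typically below 4, each refinement costs only a handful of extra fitness evaluations.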

In order to demonstrate the social operation, a numerical example has been set by applying the proposed process to a simple function. Such a function considers the interval −3 ≤ d1, d2 ≤ 3, whereas the function possesses one global maximum of value 8.1 at (0, 1.6). Notice that d1 and d2 correspond to the axis coordinates (commonly x and y). For this example, a final position population F of six 2-dimensional members (N = 6) is assumed. Figure 5 shows the initial configuration of the proposed example, with the black points representing half of the particles with the best fitness values (the first three elements of B, g = 3), whereas the grey points (f2, f4, f6 ∉ E) correspond to the remaining individuals. From Figure 5, it

Figure 5: Operation of the social procedure.

can be seen that the social procedure is applied to all black particles (f5 = b1, f3 = b2, and f1 = b3; f5, f3, f1 ∈ E), yielding two new random particles (h = 2), which are characterized by the white points m_1^1, m_1^2, m_3^1, m_3^2, m_5^1, and m_5^2 for each black point, inside their corresponding subspaces C1, C3, and C5. Considering the particle f3 in Figure 5, the particle m_3^2 corresponds to the best particle (m_3^best) of the two randomly generated particles (according to their fitness values) within C3. Therefore, the particle m_3^best will substitute f3 in the individual l_3^{k+1} for the next generation, since it holds a better fitness value than f3 (f(f3) < f(m_3^best)).

The LS optimization procedure is defined over a bounded search space S. Search points that do not belong to this area are considered infeasible. However, during the evolution process, some candidate solutions could fall outside the search space. In the proposed approach, such infeasible solutions are replaced with a random position inside the search space S.
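A minimal sketch of this re-randomization rule (our naming, not the paper's):

```python
import random

def repair(position, bounds):
    """Replace an infeasible candidate with a random point inside the
    bounded search space S, as the LS approach prescribes."""
    inside = all(lb <= x <= ub for x, (lb, ub) in zip(position, bounds))
    if inside:
        return position
    return [random.uniform(lb, ub) for lb, ub in bounds]
```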

3.3. Complete LS Algorithm. LS is a simple algorithm with only seven adjustable parameters: the strength of attraction F, the attractive length L, the number of promising solutions g, the population size N, the tuning factor β, the number of random samples h, and the number of generations gen. The operation of LS is divided into three parts: initialization and the solitary and social operations. In the initialization (k = 0), the first population L^0 = (l_1^0, l_2^0, ..., l_N^0) is produced. The values l_{i,1}^0, l_{i,2}^0, ..., l_{i,n}^0 of each individual l_i^0 are randomly and uniformly distributed between the prespecified lower initial parameter bound lb_d and the upper initial parameter bound ub_d of each dimension d:

l_{i,d}^0 = lb_d + rand · (ub_d − lb_d), i = 1, 2, ..., N; d = 1, 2, ..., n. (14)

In the evolution process, the solitary (A) and social (B) operations are iteratively applied until the number of iterations k = gen has been reached. The complete LS procedure is illustrated in Algorithm 1.

8 Mathematical Problems in Engineering

(1) Input: F, L, g, N, gen, h, and β
(2) Initialize L^0 (k = 0)
(3) until (k = gen)
(4)   F ← SolitaryOperation(L^k)    // Solitary operator (Section 3.1)
(5)   L^{k+1} ← SocialOperation(L^k, F)    // Social operator (Section 3.2)
(6)   k = k + 1
(7) end until

Algorithm 1: Locust Search (LS) algorithm.
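The control flow of Algorithm 1 can be condensed into a short driver (a sketch only; the two operators are passed in as callables whose internals follow Sections 3.1 and 3.2):

```python
def locust_search(initial_population, solitary_op, social_op, gen):
    """Skeleton of Algorithm 1: alternate the solitary (exploration)
    and social (exploitation) operators for `gen` generations."""
    L = initial_population
    for _ in range(gen):          # until (k = gen)
        F = solitary_op(L)        # Solitary operator (Section 3.1)
        L = social_op(L, F)       # Social operator (Section 3.2)
    return L
```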

3.4. Discussion about the LS Algorithm. Evolutionary algorithms (EA) have been widely employed for solving complex optimization problems. These methods are found to be more powerful than conventional methods based on formal logics or mathematical programming [46]. In an EA, search agents have to decide whether to explore unknown search positions or to exploit already tested positions in order to improve their solution quality. Pure exploration degrades the precision of the evolutionary process but increases its capacity to find new potential solutions. On the other hand, pure exploitation allows refining existent solutions but adversely drives the process to local optimal solutions. Therefore, the ability of an EA to find a global optimal solution depends on its capacity to find a good balance between the exploitation of found-so-far elements and the exploration of the search space [47].

Most swarm algorithms and other evolutionary algorithms tend to exclusively concentrate the individuals in the current best positions. Under such circumstances, these algorithms seriously limit their exploration-exploitation capacities.

Unlike most existent evolutionary algorithms, the behavior modeled in the proposed approach explicitly avoids the concentration of individuals in the current best positions. This not only emulates the cooperative behavior of the locust colony in a realistic way but also incorporates a computational mechanism to avoid critical flaws commonly present in popular evolutionary algorithms, such as premature convergence and an incorrect exploration-exploitation balance.

It is important to emphasize that the proposed approach conducts two operators (solitary and social) within a single iteration. Such operators are similar to those used by other evolutionary methods, such as ABC (employed bees, onlooker bees, and scout bees), AIS (clonal proliferation operator, affinity maturation operator, and clonal selection operator), and DE (mutation, crossover, and selection), which are all executed in a single iteration.

4. Numerical Experiments over Benchmark Functions

A comprehensive set of 13 functions, all collected from [48-50], has been used to test the performance of the LS approach as an optimization method. Tables 11 and 12 present the benchmark functions used in our experimental study. Such functions are classified into two different categories: unimodal test functions (Table 11) and multimodal test functions (Table 12). In these tables, n is the function dimension and f_opt is the minimum value of the function, with S being a subset of R^n. The optimum locations (x_opt) for the functions in Tables 11 and 12 are in [0]^n, except for f5, f12, and f13, with x_opt in [1]^n, and f8, with x_opt in [420.96]^n. A detailed description of the optimum locations is given in Tables 11 and 12.

We have applied the LS algorithm to the 13 functions, and the results have been compared to those produced by the Particle Swarm Optimization (PSO) method [27] and the Differential Evolution (DE) algorithm [28], both considered among the most popular algorithms for many optimization applications. In all comparisons, the population has been set to 40 (N = 40) individuals. The maximum iteration number for all functions has been set to 1000. Such a stop criterion has been selected to maintain compatibility with similar works reported in the literature [48, 49].

The parameter setting for each of the algorithms in the comparison is described as follows.

(1) PSO: in the algorithm, c1 = c2 = 2, while the inertia factor (ω) is decreased linearly from 0.9 to 0.2.

(2) DE: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [28, 51]. The crossover probability is CR = 0.9 and the weighting factor is F = 0.8.

(3) LS: F and L are set to 0.6 and 1, respectively. Besides, g is fixed to 20 (N/2), h = 2, and β = 0.6, whereas gen and N are configured to 1000 and 40, respectively. Once such parameters have been experimentally determined, they are kept for all the experiments in this section.

In the comparison, three indexes are considered: the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). The first two indexes assess the accuracy of the solution, whereas the last one measures the computational cost. The average best-so-far solution (ABS) expresses the average value of the best function evaluations obtained from 30 independent executions. The standard deviation (SD) indicates the dispersion of the ABS values. Evolutionary methods are, in general, complex pieces of software with several operators and stochastic branches; therefore, it is difficult to conduct a complexity analysis from a deterministic perspective. Under such circumstances, it is more appropriate


Table 1: Minimization results from the benchmark functions test in Table 11 with n = 30. Maximum number of iterations = 1000.

          PSO          DE           LS
f1  ABS   1.66×10^−1   6.27×10^−3   4.55×10^−4
    SD    3.79×10^−1   1.68×10^−1   6.98×10^−4
    NFE   28610        20534        16780
f2  ABS   4.83×10^−1   2.02×10^−1   5.41×10^−3
    SD    1.59×10^−1   0.66         1.45×10^−2
    NFE   28745        21112        16324
f3  ABS   2.75         5.72×10^−1   1.61×10^−3
    SD    1.01         0.15         1.32×10^−3
    NFE   38320        36894        20462
f4  ABS   1.84         0.11         1.05×10^−2
    SD    0.87         0.05         6.63×10^−3
    NFE   37028        36450        21158
f5  ABS   3.07         2.39         4.11×10^−2
    SD    0.42         0.36         2.74×10^−3
    NFE   39432        37264        21678
f6  ABS   6.36         6.51         5.88×10^−2
    SD    0.74         0.87         1.67×10^−2
    NFE   38490        36564        22238
f7  ABS   6.14         0.12         2.71×10^−2
    SD    0.73         0.02         1.18×10^−2
    NFE   37274        35486        21842

to use the number of function evaluations (NFE), just as it is used in the literature [52, 53], to evaluate and assess the computational effort (time) and the complexity among optimizers. It represents how many times an algorithm evaluates the objective (fitness) function until the best solution of a determined execution has been found. Since the experiments require 30 different executions, the NFE index corresponds to the value averaged over these executions. A small NFE value indicates that less time is needed to reach the global optimum.

4.1. Unimodal Test Functions. Functions f1 to f7 are unimodal functions. The results for the unimodal functions over 30 runs are reported in Table 1, considering the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). According to this table, LS provides better results than PSO and DE for all functions in terms of ABS and SD. In particular, the test yields the largest performance differences for functions f4-f7. Such functions maintain a narrow curving valley that is hard to optimize when the search space cannot be explored properly and the direction changes cannot be kept up with [54]. For this reason, the performance differences are directly related to the better trade-off between exploration and exploitation produced by the LS operators. In practice, a main goal of an optimization algorithm is to find a solution as good as possible within a small time window. The computational cost of the optimizer is represented by its NFE values. According to Table 1, the NFE values obtained by the proposed method are smaller than its

Table 2: p values produced by Wilcoxon's test comparing LS versus PSO and DE over the "average best-so-far" values from Table 1.

LS versus   PSO          DE
f1          1.83×10^−4   1.73×10^−2
f2          3.85×10^−3   1.83×10^−4
f3          1.73×10^−4   6.23×10^−3
f4          2.57×10^−4   5.21×10^−3
f5          4.73×10^−4   1.83×10^−3
f6          6.39×10^−5   2.15×10^−3
f7          1.83×10^−4   2.21×10^−3

counterparts. Lower NFE values are more desirable, since they correspond to less computational overhead and, therefore, faster results. From the results' perspective, it is clear that PSO and DE need more than 1000 generations in order to produce better solutions. However, this number of generations is used in the experiments to produce a visible contrast among the approaches. If the number of generations were set to an exaggerated value, then all methods would converge to the best solution with no significant trouble.

A nonparametric statistical significance proof known as Wilcoxon's rank sum test for independent samples [55, 56] has been conducted at a 5% significance level over the "average best-so-far" data of Table 1. Table 2 reports the p values produced by Wilcoxon's test for the pairwise comparison of the "average best-so-far" values of two groups. Such groups are formed by LS versus PSO and LS versus DE. As a null hypothesis, it is assumed that there is no significant difference between the mean values of the two algorithms. The alternative hypothesis considers a significant difference between the "average best-so-far" values of both approaches. All p values reported in the table are less than 0.05 (5% significance level), which is strong evidence against the null hypothesis, indicating that the LS results are statistically significant and have not occurred by coincidence (i.e., due to the normal noise contained in the process).
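For reference, the two-sided rank-sum test behind Tables 2 and 4 can be reproduced with the normal approximation (a plain-Python sketch of the test, not the authors' code; in practice a statistics library would be used):

```python
import math
from itertools import chain

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p value via the normal approximation."""
    data = sorted(chain(((x, 0) for x in a), ((y, 1) for y in b)))
    vals = [v for v, _ in data]
    ranks = {}
    i = 0
    while i < len(vals):
        j = i
        while j < len(vals) and vals[j] == vals[i]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + j + 1) / 2.0  # average rank over a tie run
        i = j
    w = sum(ranks[k] for k, (_, grp) in enumerate(data) if grp == 0)
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
```

Clearly separated samples yield p well below 0.05, rejecting the null hypothesis exactly as in the tables.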

4.2. Multimodal Test Functions. Multimodal functions possess many local minima, which makes optimization a difficult task to accomplish. For multimodal functions, the results reflect the algorithm's ability to escape from local optima. We have applied the algorithms to functions f8 to f13, where the number of local minima increases exponentially as the dimension of the function increases. The dimension of such functions is set to 30. The results are averaged over 30 runs, with the performance indexes reported in Table 3 as follows: the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). Likewise, p values of the Wilcoxon signed-rank test of 30 independent runs are listed in Table 4. From the results, it is clear that LS yields better solutions than the other algorithms for functions f9, f10, f11, and f12 in terms of the indexes ABS and SD. However, for functions f8 and f13, LS produces results similar to DE. The Wilcoxon rank test results presented in Table 4 confirm that


Table 3: Minimization results from the benchmark functions test in Table 12 with n = 30. Maximum number of iterations = 1000.

          PSO          DE           LS
f8  ABS   −6.7×10^3    −1.26×10^4   −1.26×10^4
    SD    6.3×10^2     3.7×10^2     1.1×10^2
    NFE   38452        35240        21846
f9  ABS   1.48         4.01×10^−1   2.49×10^−3
    SD    1.39         5.1×10^−2    4.8×10^−4
    NFE   37672        34576        20784
f10 ABS   1.47         4.66×10^−2   2.15×10^−3
    SD    1.44         1.27×10^−2   3.18×10^−4
    NFE   39475        37080        21235
f11 ABS   12.01        1.15         1.47×10^−4
    SD    3.12         0.06         1.48×10^−5
    NFE   38542        34875        22126
f12 ABS   6.87×10^−1   3.74×10^−1   5.58×10^−3
    SD    7.07×10^−1   1.55×10^−1   4.18×10^−4
    NFE   35248        30540        16984
f13 ABS   1.87×10^−1   1.81×10^−2   1.78×10^−2
    SD    5.74×10^−1   1.66×10^−2   1.64×10^−3
    NFE   36022        31968        18802

Table 4: p values produced by Wilcoxon's test comparing LS versus PSO and DE over the "average best-so-far" values from Table 3.

LS versus   PSO          DE
f8          1.83×10^−4   0.061
f9          1.17×10^−4   2.41×10^−4
f10         1.43×10^−4   3.12×10^−3
f11         6.25×10^−4   1.14×10^−3
f12         2.34×10^−5   7.15×10^−4
f13         4.73×10^−4   0.071

LS performed better than PSO and DE on the four problems f9-f12, whereas, from a statistical viewpoint, there is no difference between the results from LS and DE for f8 and f13. According to Table 3, the NFE values obtained by the proposed method are smaller than those produced by the other optimizers. The reason for this remarkable performance is associated with its two operators: (i) the solitary operator allows a better particle distribution in the search space, increasing the algorithm's ability to find the global optima, and (ii) the use of the social operation provides a simple exploitation operator that intensifies the capacity of finding better solutions during the evolution process.

5. Gaussian Mixture Modelling

In this section, the modeling of image histograms through Gaussian mixture models is presented. Let one consider an image holding L gray levels [0, ..., L − 1] whose distribution is defined by a histogram h(g), represented by the following formulation:

h(g) = n_g / Np,   h(g) > 0,

Np = Σ_{g=0}^{L−1} n_g,   Σ_{g=0}^{L−1} h(g) = 1, (15)

where n_g denotes the number of pixels with gray level g and Np the total number of pixels in the image. Under such circumstances, h(g) can be modeled by using a mixture p(x) of Gaussian functions of the form:

p(x) = Σ_{i=1}^{K} (P_i / (√(2π) σ_i)) exp[−(x − μ_i)² / (2σ_i²)], (16)

where K symbolizes the number of Gaussian functions of the model, whereas P_i is the a priori probability of function i; μ_i and σ_i represent the mean and standard deviation of the i-th Gaussian function, respectively. Furthermore, the constraint Σ_{i=1}^{K} P_i = 1 must be satisfied by the model. In order to evaluate the similarity between the image histogram and a candidate mixture model, the mean square error can be used as follows:

J = (1/n) Σ_{j=1}^{L} [p(x_j) − h(x_j)]² + ω · |(Σ_{i=1}^{K} P_i) − 1|, (17)

where ω represents the penalty associated with the constraint Σ_{i=1}^{K} P_i = 1. Therefore, J is considered as the objective function which must be minimized in the estimation problem. In order to illustrate histogram modeling through a Gaussian mixture, Figure 6 presents an example assuming three classes, that is, K = 3. Considering Figure 6(a) as the image histogram h(x), the Gaussian mixture p(x) shown in Figure 6(c) is produced by adding the Gaussian functions p1(x), p2(x), and p3(x) in the configuration presented in Figure 6(b). Once the model parameters that better model the image histogram have been determined, the final step is to define the threshold values T_i (i ∈ [1, ..., K]), which can be calculated by simple standard methods, just as presented in [19-21].
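Equations (16) and (17) translate directly into code. The sketch below uses our own naming, with the averaging in (17) taken over the L gray levels, and evaluates a candidate mixture against a normalized histogram:

```python
import math

def mixture(x, params):
    """p(x) of (16); params is a list of (P_i, mu_i, sigma_i) triples."""
    return sum(P * math.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))
               / (math.sqrt(2.0 * math.pi) * sigma)
               for P, mu, sigma in params)

def objective_J(hist, params, omega=1.0):
    """Mean square error between p(x) and h(x), plus the penalty that
    enforces sum(P_i) = 1, as in (17)."""
    L = len(hist)
    mse = sum((mixture(g, params) - hist[g]) ** 2 for g in range(L)) / L
    return mse + omega * abs(sum(P for P, _, _ in params) - 1.0)
```

A histogram generated by a single Gaussian with P = 1 yields J ≈ 0 for the matching parameters, while any violation of the prior-sum constraint raises J through the ω term.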

6. Segmentation Algorithm Based on LS

In the proposed method, the segmentation process is approached as an optimization problem. Computational optimization generally involves two distinct elements: (1) a search strategy to produce candidate solutions (individuals, particles, insects, locusts, etc.) and (2) an objective function that evaluates the quality of each candidate solution. Several computational algorithms are available to perform the first element. The second element, where the objective function is designed, is unquestionably the most critical. The objective function is expected to embody the full


Figure 6: Histogram approximation through a Gaussian mixture. (a) Original histogram, (b) configuration of the Gaussian components p1(x), p2(x), and p3(x), and (c) final Gaussian mixture p(x).

complexity of the performance, biases, and restrictions of the problem to be solved. In the segmentation problem, candidate solutions represent Gaussian mixtures. The objective function J (17) is used as a fitness value to evaluate the similarity between the Gaussian mixture and the image histogram. Therefore, guided by the fitness values (J values), a set of encoded candidate solutions is evolved using the evolutionary operators until the best possible resemblance can be found.

Over the last decade, several algorithms based on evolutionary and swarm principles [19-22] have been proposed to solve the problem of segmentation through a Gaussian mixture model. Although these algorithms exhibit certain good performance indexes, they present two important limitations: (1) they frequently obtain suboptimal approximations as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) they are based on the assumption that the number of Gaussians (classes) in the mixture is known in advance and fixed; otherwise, they cannot work.

In order to eliminate such flaws, the proposed approach includes (A) a new search strategy and (B) the definition of a new objective function. For the search strategy, the LS method (Section 3) is adopted. Under LS, the concentration of individuals in the current best positions is explicitly avoided. This allows reducing critical problems, such as premature convergence to suboptimal solutions and an incorrect exploration-exploitation balance.

6.1. New Objective Function J_new. Previous segmentation algorithms based on evolutionary and swarm methods use (17) as the objective function. Under these circumstances, each candidate solution (Gaussian mixture) is only evaluated in terms of its approximation to the image histogram.

Since the proposed approach aims to automatically select the number of Gaussian functions K in the final mixture p(x), the objective function must be modified. The new objective function J_new is defined as follows:

J_new = J + λ · Q, (18)

where λ is a scaling constant. The new objective function is divided into two parts. The first part, J, evaluates the quality of each candidate solution in terms of its similarity with regard to the image histogram (17). The second part, Q, penalizes the overlapped area among the Gaussian functions (classes), with Q being defined as follows:

Q = Σ_{i=1}^{K} Σ_{j=1, j≠i}^{K} Σ_{l=1}^{L} min(P_i · p_i(l), P_j · p_j(l)), (19)

where K and L represent the number of classes and the number of gray levels, respectively. The terms p_i(l) and p_j(l) symbolize the Gaussian functions i and j, respectively, evaluated at the point l (gray level), whereas the elements P_i and P_j represent the a priori probabilities of the Gaussian functions i and j, respectively. Under such circumstances, mixtures with Gaussian functions that do not "positively" participate in the histogram approximation are severely penalized.
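The penalty of (19) can be computed directly over the gray levels (a sketch under our own naming; `gauss` is the unit-area Gaussian density):

```python
import math

def gauss(x, mu, sigma):
    """Unit-area Gaussian density."""
    return (math.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))
            / (math.sqrt(2.0 * math.pi) * sigma))

def overlap_Q(params, L=256):
    """Q of (19): pointwise overlap between every ordered pair of
    weighted components, summed over the L gray levels."""
    q = 0.0
    for i, (Pi, mi, si) in enumerate(params):
        for j, (Pj, mj, sj) in enumerate(params):
            if i == j:
                continue
            q += sum(min(Pi * gauss(l, mi, si), Pj * gauss(l, mj, sj))
                     for l in range(L))
    return q
```

Two well-separated components contribute almost nothing, whereas two coincident components are charged their full shared mass, which is what drives superfluous Gaussians out of the mixture.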

Figure 7 illustrates the effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). From the image histogram (Figure 7(a)), it is evident that two Gaussian functions are enough to accurately


Figure 7: Effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). (a) Original histogram, (b) Gaussian mixture considering four classes, (c) penalization areas, and (d) Gaussian mixture of a better quality solution.

approximate the original histogram. However, if the Gaussian mixture is modeled by using a greater number of functions (e.g., four, as shown in Figure 7(b)), the original objective function J is unable to obtain a reasonable approximation. Under the new objective function J_new, the overlapped area among the Gaussian functions (classes) is penalized. Such areas in Figure 7(c) correspond to Q12, Q23, and Q34, where Q12 represents the penalization value produced between the Gaussian functions p1(x) and p2(x). Therefore, due to the penalization, the Gaussian mixture shown in Figures 7(b) and 7(c) provides a solution of low quality. On the other hand, the Gaussian mixture presented in Figure 7(d) maintains a low penalty; thus, it represents a solution of high quality. From Figure 7(d), it is easy to see that the functions p1(x) and p4(x) can be removed from the final mixture. This elimination can be performed by a simple comparison with a threshold value θ, since p1(x) and p4(x) have a reduced amplitude (p1(x) ≈ p4(x) ≈ 0). Therefore, under J_new, it is possible to find the reduced Gaussian mixture model starting from a considerable number of functions.

Since the proposed segmentation method is conceived as an optimization problem, the overall operation can be reduced to solving the formulation of (20) by using the LS algorithm:

minimize    J_new(x) = J(x) + λ · Q(x),
            x = (P_1, μ_1, σ_1, ..., P_K, μ_K, σ_K) ∈ R^{3·K},

subject to  0 ≤ P_d ≤ 1, d ∈ (1, ..., K),
            0 ≤ μ_d ≤ 255,
            0 ≤ σ_d ≤ 25, (20)

where P_d, μ_d, and σ_d represent the probability, mean, and standard deviation of the class d. It is important to remark that the new objective function J_new allows the evaluation of a candidate solution in terms of the necessary number of Gaussian functions and its approximation quality. Under such circumstances, it can be used in combination with any other evolutionary method and not only with the proposed LS algorithm.

6.2. Complete Segmentation Algorithm. Once the new search strategy (LS) and objective function (J_new) have been defined, the proposed segmentation algorithm can be summarized by Algorithm 2. The new algorithm combines operators defined by LS and operations for calculating the threshold values.

(Line 1) The algorithm sets the operative parameters F, L, g, gen, N, K, λ, and θ. They rule the behavior of the segmentation algorithm. (Line 2) Afterwards, the population L^0 is initialized, considering N different random Gaussian mixtures of K functions. The idea is to generate N random Gaussian mixtures subject to the constraints formulated in (20). The parameter K must hold a high value in order to correctly segment all images (recall that the algorithm is able to reduce the Gaussian mixture to its minimum expression). (Line 3) Then, the Gaussian mixtures are evolved by using the LS operators and the new objective function J_new. This process is repeated during gen cycles. (Line 8) After this procedure, the best Gaussian mixture l_best^gen is selected according to its objective function J_new. (Line 9) Then, the Gaussian mixture l_best^gen is reduced by eliminating those functions whose amplitudes are lower than θ (p_i(x) ≤ θ). (Line 10) Then, the threshold values T_i are calculated from the reduced model. (Line 11) Finally, the calculated T_i values are employed to segment the image. Figure 8 shows a flowchart of the complete segmentation procedure.


(1) Input: F, L, g, gen, N, K, λ, and θ
(2) Initialize L^0 (k = 0)
(3) until (k = gen)
(4)   F ← SolitaryOperation(L^k)    // Solitary operator (Section 3.1)
(5)   L^{k+1} ← SocialOperation(L^k, F)    // Social operator (Section 3.2)
(6)   k = k + 1
(7) end until
(8) Obtain l_best^gen
(9) Reduce l_best^gen
(10) Calculate the threshold values T_i from the reduced model
(11) Use T_i to segment the image

Algorithm 2: Segmentation LS algorithm.
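For Lines 10-11, one simple standard choice (in the spirit of the methods referred to in [19-21], not the authors' exact procedure) is to place each threshold T_i at the gray level where the two adjacent weighted components intersect. A hedged sketch:

```python
import math

def gauss(x, mu, sigma):
    """Unit-area Gaussian density."""
    return (math.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))
            / (math.sqrt(2.0 * math.pi) * sigma))

def thresholds(params):
    """Pick T_i between each pair of adjacent means as the gray level
    where the two weighted Gaussians are closest to intersecting."""
    comps = sorted(params, key=lambda p: p[1])  # order components by mean
    ts = []
    for (Pa, ma, sa), (Pb, mb, sb) in zip(comps, comps[1:]):
        t = min(range(int(ma), int(mb) + 1),
                key=lambda g: abs(Pa * gauss(g, ma, sa)
                                  - Pb * gauss(g, mb, sb)))
        ts.append(t)
    return ts
```

For a symmetric pair of equal-weight components, the threshold lands exactly midway between the means.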

The proposed segmentation algorithm permits automatically detecting the number of segmentation partitions (classes). Furthermore, due to its remarkable search capacities, the LS method maintains a better accuracy than previous algorithms based on evolutionary principles. However, the proposed method presents two disadvantages. The first is related to its implementation, which in general is more complex than that of most other segmentators based on evolutionary principles. The second refers to the segmentation procedure of the proposed approach, which does not consider any spatial pixel characteristics. As a consequence, pixels that should belong to a determined region due to their position may be labeled as part of another region due to their gray level intensity. Such a fact adversely affects the segmentation performance of the method.

7. Segmentation Results

This section analyses the performance of the proposed segmentation algorithm. The discussion is divided into three parts: the first shows the performance of the proposed LS segmentation algorithm, while the second presents a comparison between the proposed approach and other segmentation algorithms based on evolutionary and swarm methods. The comparison mainly considers the capacities of each algorithm to accurately and robustly approximate the image histogram. Finally, the third part presents an objective evaluation of the segmentation results produced by all algorithms that have been employed in the comparisons.

7.1. Performance of the LS Algorithm in Image Segmentation

This section presents two experiments that analyze the performance of the proposed approach. Table 5 presents the algorithm's parameters, which have been experimentally determined and kept fixed for all the test images through all experiments.

First Image. The first test considers the histogram shown in Figure 9(b), while Figure 9(a) presents the original image. After applying the proposed algorithm, configured according to the parameters in Table 5, a minimum model of four classes emerges after starting from Gaussian

Table 5: Parameter setup for the LS segmentation algorithm.

F     L     g    gen    N    K    λ      θ
0.6   0.2   20   1000   40   10   0.01   0.0001

Table 6: Results of the reduced Gaussian mixture for the first and the second image.

Parameter   First image   Second image
P1          0.004         0.032
μ1          18.1          12.1
σ1          8.2           2.9
P2          0.0035        0.0015
μ2          69.9          45.1
σ2          18.4          24.9
P3          0.01          0.006
μ3          94.9          130.1
σ3          8.9           17.9
P4          0.007         0.02
μ4          163.1         167.2
σ4          29.9          10.1

mixtures of 10 classes. Considering 30 independent executions, the averaged parameters of the resultant Gaussian mixture are presented in Table 6. One final Gaussian mixture (ten classes) obtained by LS is presented in Figure 10. Furthermore, the approximation of the reduced Gaussian mixture is also visually compared with the original histogram in Figure 10. On the other hand, Figure 11 presents the segmented image after calculating the threshold points.
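The averaged parameters of Table 6 define the reduced mixture directly. A small sketch evaluating the first-image model over the gray-level axis follows; the unnormalized Gaussian form p_i(x) = P_i·exp(−(x − μ_i)²/(2σ_i²)) is assumed here, matching the amplitude-based reduction step:

```python
from math import exp

# Averaged parameters of the reduced mixture for the first image (Table 6).
FIRST_IMAGE = [  # (P_i, mu_i, sigma_i)
    (0.004, 18.1, 8.2),
    (0.0035, 69.9, 18.4),
    (0.01, 94.9, 8.9),
    (0.007, 163.1, 29.9),
]

def mixture(x, params):
    """Evaluate the (unnormalized) Gaussian mixture at gray level x."""
    return sum(P * exp(-(x - mu) ** 2 / (2.0 * s * s)) for P, mu, s in params)

# Sampling the curve over [0, 255] reproduces the fitted histogram shape.
curve = [mixture(x, FIRST_IMAGE) for x in range(256)]
```

Since P3 is the largest amplitude, the global peak of the fitted curve sits near μ3 ≈ 94.9, consistent with the dominant mode visible in Figure 10.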

Second Image. For the second experiment, the image in Figure 12 is tested. The method aims to segment the image by using a reduced Gaussian mixture model obtained by the LS approach. After executing the algorithm according to the parameters from Table 5, the resulting averaged parameters of the resultant Gaussian mixture are presented in Table 6. In order to assure consistency, the results are averaged over 30 independent executions. Figure 13 shows the approximation quality that is obtained by the reduced Gaussian mixture model in (a) and the segmented image in (b).

Figure 8: Flowchart of the complete segmentation procedure. (Input parameters F, L, g, gen, N, K, λ, and θ; initialize L^0 with k = 0; apply the solitary and social operations of the LS method until k > gen; then reduce l^gen_best, calculate the threshold values T_i from the reduced model, and use T_i to segment the image.)

Figure 9: (a) Original first image used in the first experiment and (b) its histogram.

Figure 10: Gaussian mixture obtained by LS for the first image (histogram; classes 1-4; reduced Gaussian mixture; eliminated classes).

Figure 11: Image segmented with the reduced Gaussian mixture.

Figure 12: (a) Original second image used in the second experiment and (b) its histogram.

7.2. Histogram Approximation Comparisons

This section discusses the comparison between the proposed segmentation algorithm and other evolutionary segmentation methods that have been proposed in the literature. The set of methods used in the experiments includes J + ABC [19], J + AIS [20], and J + DE [21]. These algorithms consider the combination of the original objective function J (17) with an evolutionary technique: the Artificial Bee Colony (ABC), the Artificial Immune Systems (AIS), and Differential Evolution (DE) [21], respectively. Since the proposed segmentation approach considers the combination of the new objective function J_new (18) and the LS algorithm, it will be referred to as J_new + LS. The comparison focuses mainly on the capacities of each algorithm to accurately and robustly approximate the image histogram.

In the experiments, the populations have been set to 40 (N = 40) individuals. The maximum iteration number for all functions has been set to 1000. Such a stop criterion has been selected to maintain compatibility with similar experiments

Figure 13: (a) Approximation of the histogram obtained by the reduced Gaussian mixture for the second image (histogram; classes 1-4; reduced Gaussian mixture; eliminated classes) and (b) the segmented image.

Figure 14: (a) Synthetic image used in the comparison (sections (1)-(4)) and (b) its histogram.

that are reported in the literature [18]. The parameter settings for each of the segmentation algorithms in the comparison are described as follows.

(1) J + ABC [19]: the parameters are configured as follows: the abandonment limit = 100, α = 0.05, and limit = 30.

(2) J + AIS [20]: it presents the following parameters: h = 120, N_c = 80, ρ = 10, P_r = 20, L = 22, and T_e = 0.01.

(3) J + DE [21]: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [21]: the crossover probability is CR = 0.9 and the weighting factor is F = 0.8.

(4) J_new + LS: the method is set according to the values described in Table 5.

In order to conduct the experiments, a synthetic image is designed to be used as a reference in the comparisons. The main idea is to know in advance the exact number of classes (and their parameters) contained in the image, so that the histogram can be considered as a ground truth. The

Table 7: Parameters employed for the design of the reference image.

      P_i     μ_i   σ_i
(1)   0.05    40    8
(2)   0.04    100   10
(3)   0.05    160   8
(4)   0.027   220   15

synthetic image is divided into four sections. Each section corresponds to a different class, which is produced by setting each gray level pixel p_vi to a value determined by the following model:

    p_vi = exp(−(x − μ_i)² / (2σ_i²)),   (21)

where i represents the section, whereas μ_i and σ_i are the mean and the dispersion of the gray level pixels, respectively. The comparative study employs the 512 × 512 image shown in Figure 14(a) and the algorithm parameters presented in Table 7. Figure 14(b) illustrates the resultant histogram.
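Under the reading that each section's gray levels follow a Gaussian with the Table 7 mean and dispersion, a reference image of this kind can be sketched as follows. The concrete sampling scheme (horizontal bands of equal height, clipping to [0, 255]) is an assumption for illustration, not the paper's construction:

```python
import random

# (P_i, mu_i, sigma_i) per section, from Table 7; P_i is the histogram
# amplitude and is not used in this sketch.
SECTIONS = [(0.05, 40, 8), (0.04, 100, 10), (0.05, 160, 8), (0.027, 220, 15)]

def synthetic_image(size=512, seed=1):
    """Build a size x size image with four horizontal bands, one per class;
    gray levels in band i are drawn from N(mu_i, sigma_i), clipped to [0, 255]."""
    rng = random.Random(seed)
    rows_per_band = size // len(SECTIONS)
    img = []
    for _, mu, sigma in SECTIONS:
        for _ in range(rows_per_band):
            img.append([min(255, max(0, int(rng.gauss(mu, sigma))))
                        for _ in range(size)])
    return img

img = synthetic_image(64)  # small version for a quick check
```

Because every class's parameters are known exactly, the histogram of such an image serves as a ground truth against which the fitted mixtures can be measured.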

Figure 15: Convergence results. (a) Convergence of the methods J + ABC, J + AIS, and J + DE considering Gaussian mixtures of 8 classes. (b) Convergence of the proposed method (reduced Gaussian mixture).

Figure 16: Segmentation results obtained by (a) the methods J + ABC, J + AIS, and J + DE considering Gaussian mixtures of 8 classes and (b) the proposed method (reduced Gaussian mixture).

In the comparison, the discussion focuses on the following issues: first, convergence; second, accuracy; and third, computational cost.

Convergence. This section analyzes the approximation convergence when the number of classes used by the algorithm during the evolution process differs from the actual number of classes in the image. Recall that the proposed approach automatically finds the reduced Gaussian mixture that best adapts to the image histogram.

In the experiment, the methods J + ABC, J + AIS, and J + DE are executed considering Gaussian mixtures composed of 8 functions. Under such circumstances, the number of classes to be detected is higher than the actual number of classes in the image. On the other hand, the proposed algorithm maintains the same configuration of Table 5; therefore, it can detect and calculate up to ten classes (K = 10).

As a result, the techniques J + ABC, J + AIS, and J + DE tend to overestimate the image histogram. This effect can be seen in Figure 15(a), where the resulting Gaussian functions are concentrated within the actual classes. Such a behavior is a consequence of the evaluation considered by the original objective function J, which privileges only the approximation between the Gaussian mixture and the image histogram. This effect can be graphically illustrated by Figure 16(a), which shows the pixel misclassification produced by the wrong segmentation of Figure 14(a). On the other hand, the proposed approach obtains a reduced Gaussian mixture model which allows the detection of each class from

Figure 17: Approximation results in terms of accuracy: (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

the actual histogram (see Figure 15(b)). As a consequence, the segmentation is significantly improved by eliminating the pixel misclassification, as shown in Figure 16(b).

It is evident from Figure 15 that the techniques J + ABC, J + AIS, and J + DE all need a priori knowledge of the number of classes contained in the actual histogram in order to obtain a satisfactory result. On the other hand, the proposed algorithm is able to find a reduced Gaussian mixture whose classes coincide with the actual number of classes contained in the image histogram.

Accuracy. In this section, the comparison among the algorithms in terms of accuracy is reported. Most of the reported comparisons [19-26] are concerned with comparing the parameters of the resultant Gaussian mixtures by using real images. Under such circumstances, it is difficult to consider a clear reference in order to define a meaningful error. Therefore, the image defined in Figure 14 has been used in the experiments, because its construction parameters are clearly defined in Table 7.

Since the parameter values from Table 7 act as ground truth, a simple averaged difference between them and the values computed by each algorithm could be used as a comparison error. However, as each parameter maintains a different active interval, it is necessary to express the differences in terms of percentage. Therefore, if Δβ is the parametric difference and β the ground truth parameter, the percentage error Δβ% can be defined as follows:

    Δβ% = (Δβ / β) · 100.   (22)

In the segmentation problem, each Gaussian mixture represents a K-dimensional model where each dimension corresponds to a Gaussian function of the optimization problem to be solved. Since each Gaussian function possesses three parameters, P_i, μ_i, and σ_i, the complete number of parameters is 3·K dimensions. Therefore, the final error E produced by the final Gaussian mixture is

    E = (1 / (K · 3)) · Σ_{v=1}^{K·3} Δβ%_v,   (23)

where β_v ∈ {P_i, μ_i, σ_i}. In order to compare accuracy, the algorithms J + ABC, J + AIS, J + DE, and the proposed approach are all executed over the image shown in Figure 14(a). The experiment aims
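Equations (22)-(23) reduce to a short computation. The sketch below applies them to the Table 7 ground truth; the estimated mixture used in the example is hypothetical, chosen only to be close to the ground truth:

```python
GROUND_TRUTH = [  # (P_i, mu_i, sigma_i) from Table 7
    (0.05, 40, 8), (0.04, 100, 10), (0.05, 160, 8), (0.027, 220, 15),
]

def mixture_error(estimate, truth=GROUND_TRUTH):
    """Averaged percentage error E of Eqs. (22)-(23): each of the 3*K
    parameters contributes |delta| / ground_truth * 100 to the average."""
    diffs = []
    for (P, mu, s), (P0, mu0, s0) in zip(estimate, truth):
        for value, ref in ((P, P0), (mu, mu0), (s, s0)):
            diffs.append(abs(value - ref) / ref * 100.0)
    return sum(diffs) / len(diffs)

# Hypothetical estimate, close to the ground truth:
estimate = [(0.049, 40.1, 7.5), (0.041, 102.0, 10.4),
            (0.052, 168.7, 8.3), (0.025, 221.0, 15.2)]
```

Expressing every difference as a percentage, as in (22), is what makes amplitudes of order 0.05, means of order 100, and dispersions of order 10 commensurable in the single score E.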

Figure 18: Segmentation results in terms of accuracy: (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

to estimate the Gaussian mixture that best approximates the actual image histogram. The methods J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In case of the J_new + LS method, although the algorithm finds a reduced Gaussian mixture of four functions, it is initially set with ten functions (K = 10). Table 8 presents the final Gaussian mixture parameters and the final error E. The final Gaussian mixture parameters have been averaged over 30 independent executions in order to assure consistency. A close inspection of Table 8 reveals that the proposed method is able to achieve the smallest error (E) in comparison to the other algorithms. Figure 17 presents the histogram approximations produced by each algorithm, whereas Figure 18 shows the corresponding segmented images. Both illustrations present the median case obtained throughout 30 runs. Figure 18 exhibits that J + ABC, J + AIS, and J + DE present different levels of misclassification, which are nearly absent in the case of the proposed approach.

Computational Cost. The experiment aims to measure the complexity and the computing time spent by the J + ABC,

Table 8: Results of the reduced Gaussian mixture in terms of accuracy.

Algorithm    Gaussian function   P_i     μ_i      σ_i     E
J + ABC      (1)                 0.052   44.5     6.4     11.79
             (2)                 0.084   98.12    12.87
             (3)                 0.058   163.50   8.94
             (4)                 0.025   218.84   1.75
J + AIS      (1)                 0.072   31.01    6.14    22.01
             (2)                 0.054   88.52    12.21
             (3)                 0.039   149.21   9.14
             (4)                 0.034   248.41   13.84
J + DE       (1)                 0.041   35.74    7.86    13.57
             (2)                 0.036   90.57    11.97
             (3)                 0.059   148.47   9.01
             (4)                 0.020   201.34   13.02
J_new + LS   (1)                 0.049   40.12    7.5     3.98
             (2)                 0.041   102.04   10.4
             (3)                 0.052   168.66   8.3
             (4)                 0.025   110.92   15.2

Figure 19: Images employed in the computational cost analysis, with the segmentation results generated by J + ABC, J + AIS, J + DE, and J_new + LS for benchmark images (a)-(d).

the J + AIS, the J + DE, and the J_new + LS algorithms while calculating the parameters of the Gaussian mixtures for benchmark images (see Figures 19(a)-19(d)). J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). The J_new + LS method finds a reduced Gaussian mixture of four functions despite being initialized with ten functions (K = 10). Table 9 shows the averaged measurements after 30 experiments. It is evident that J + ABC and J + DE are the slowest to converge (iterations), and that J + AIS shows the highest computational cost (time elapsed) because it requires operators that demand long processing times. On the other hand, J_new + LS shows an acceptable trade-off between its convergence speed and its computational cost. Therefore, although the implementation of J_new + LS in general requires more code than most other evolution-based segmentators, such a fact is not reflected in the execution time. Finally, Figure 19 shows the segmented images as they are

Figure 20: Experimental set used in the evaluation of the segmentation results: images (a)-(d).

Table 9: Iterations and time requirements of the J + ABC, J + AIS, J + DE, and J_new + LS algorithms as they are applied to segment the benchmark images of Figure 19.

Algorithm             (a)      (b)      (c)      (d)
J + ABC   iterations  855      833      870      997
          time        2.72 s   2.70 s   2.73 s   3.1 s
J + AIS   iterations  725      704      754      812
          time        1.78 s   1.61 s   1.41 s   2.01 s
J + DE    iterations  657      627      694      742
          time        1.25 s   1.12 s   1.45 s   1.88 s
J_new+LS  iterations  314      298      307      402
          time        0.98 s   0.84 s   0.72 s   1.02 s

Table 10: Evaluation of the segmentation results in terms of the ROS index.

                    Image (a)   Image (b)   Image (c)   Image (d)
Number of classes   N_R = 4     N_R = 3     N_R = 4     N_R = 4
J + ABC             0.534       0.328       0.411       0.457
J + AIS             0.522       0.321       0.427       0.437
J + DE              0.512       0.312       0.408       0.418
J_new + LS          0.674       0.401       0.514       0.527

generated by each algorithm. It can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by an incorrect pixel classification.

7.3. Performance Evaluation of the Segmentation Results

This section presents an objective evaluation of the segmentation results produced by all algorithms in the comparisons. The ill-defined nature of the segmentation problem makes the evaluation of a candidate algorithm difficult [57]. Traditionally, the evaluation has been conducted by using supervised criteria [58], which are based on the computation of a dissimilarity measure between a segmentation result and a ground truth image. Recently, the use of unsupervised measures has substituted supervised indexes for the objective evaluation of segmentation results [59]. They

Table 11: Unimodal test functions.

Test function                                                    S                  f_opt
f1(x) = Σ_{i=1}^{n} x_i²                                         [−100, 100]^n      0
f2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|                    [−10, 10]^n        0
f3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)²                           [−100, 100]^n      0
f4(x) = max_i {|x_i|, 1 ≤ i ≤ n}                                 [−100, 100]^n      0
f5(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²]        [−30, 30]^n        0
f6(x) = Σ_{i=1}^{n} (x_i + 0.5)²                                 [−100, 100]^n      0
f7(x) = Σ_{i=1}^{n} i·x_i⁴ + rand(0, 1)                          [−1.28, 1.28]^n    0

enable the quantification of the quality of a segmentation result without a priori knowledge (a ground truth image).
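As a concrete reading of the notation in Table 11, a few of the unimodal benchmarks can be written out directly (f5 is the Rosenbrock function; the others are the sphere and max-abs functions):

```python
def f1(x):
    """Sphere function: f_opt = 0 at the origin."""
    return sum(v * v for v in x)

def f4(x):
    """Max of absolute coordinates: f_opt = 0 at the origin."""
    return max(abs(v) for v in x)

def f5(x):
    """Rosenbrock function: f_opt = 0 at x = (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))
```

Each function is evaluated on vectors drawn from the search domain S listed in the table, and an optimizer's quality is judged by how closely it approaches f_opt.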

Evaluation Criteria. In this paper, the unsupervised index ROS proposed by Chabrier et al. [60] has been used to objectively evaluate the performance of each candidate algorithm. This index evaluates the segmentation quality in terms of the homogeneity within segmented regions and the heterogeneity among the different regions. ROS can be computed as follows:

    ROS = (D̄ − D) / 2,   (24)

where D quantifies the homogeneity within segmented regions. Similarly, D̄ measures the disparity among the regions. A segmentation result S1 is considered better than S2 if ROS_S1 > ROS_S2. The homogeneity within regions, characterized by D, is calculated considering the following formulation:

    D = (1 / N_R) · Σ_{c=1}^{N_R} (R_c / I) · σ_c / (Σ_{l=1}^{N_R} σ_l),   (25)

where N_R represents the number of partitions in which the image has been segmented, R_c symbolizes the number of

Figure 21: Segmentation results used in the evaluation, obtained by J + ABC, J + AIS, J + DE, and J_new + LS for images (a)-(d).

pixels contained in the partition c, whereas I considers the number of pixels that integrate the complete image. Similarly, σ_c represents the standard deviation of the partition c. On the other hand, the disparity among the regions, D̄, is computed as follows:

    D̄ = (1 / N_R) · Σ_{c=1}^{N_R} (R_c / I) · [ (1 / (N_R − 1)) · Σ_{l=1}^{N_R} |μ_c − μ_l| / 255 ],   (26)

where μ_c is the average gray level in the partition c.
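Equations (24)-(26) can be checked with a short computation. The sketch below treats the image and its segmentation as flat, equal-length lists of gray levels and region labels; the two-region example is purely illustrative:

```python
from statistics import mean, pstdev

def ros_index(pixels, labels):
    """ROS of Eqs. (24)-(26): labels[i] names the partition of pixels[i]."""
    regions = sorted(set(labels))
    n_r = len(regions)
    total = len(pixels)
    by_region = {r: [p for p, l in zip(pixels, labels) if l == r]
                 for r in regions}
    sigmas = {r: pstdev(v) for r, v in by_region.items()}
    mus = {r: mean(v) for r, v in by_region.items()}
    sigma_sum = sum(sigmas.values()) or 1.0
    # Eq. (25): within-region homogeneity D (smaller is better).
    d = sum(len(by_region[r]) / total * sigmas[r] / sigma_sum
            for r in regions) / n_r
    # Eq. (26): disparity among regions D-bar (larger is better).
    d_bar = sum(len(by_region[r]) / total
                * sum(abs(mus[r] - mus[l]) for l in regions)
                / (n_r - 1) / 255.0
                for r in regions) / n_r
    return (d_bar - d) / 2.0   # Eq. (24)
```

A clean segmentation of two well-separated gray levels yields a higher ROS than a segmentation that mixes the two levels inside each region, which is exactly the ordering the index is meant to capture.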

Experimental Protocol. In the comparison of the segmentation results, a set of four classical images has been chosen to integrate the experimental set (Figure 20). The segmentation methods used in the comparison are J + ABC [19], J + AIS [20], and J + DE [21].

From all segmentation methods used in the comparison, the proposed J_new + LS algorithm is the only one that has the capacity to automatically detect the number of segmentation partitions (classes). In order to conduct a fair comparison, all algorithms have been tested by using the same number

Table 12: Multimodal test functions.

Test function                                                                                   S                  f_opt
f8(x) = Σ_{i=1}^{n} −x_i·sin(√|x_i|)                                                            [−500, 500]^n      −418.98·n
f9(x) = Σ_{i=1}^{n} [x_i² − 10·cos(2πx_i) + 10]                                                 [−5.12, 5.12]^n    0
f10(x) = −20·exp(−0.2·√((1/n)·Σ_{i=1}^{n} x_i²)) − exp((1/n)·Σ_{i=1}^{n} cos(2πx_i)) + 20       [−32, 32]^n        0
f11(x) = (1/4000)·Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos(x_i/√i) + 1                                [−600, 600]^n      0
f12(x) = (π/n)·{10·sin²(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)²·[1 + 10·sin²(πy_{i+1})] + (y_n − 1)²}  [−50, 50]^n        0
         + Σ_{i=1}^{n} u(x_i, 10, 100, 4),
         with y_i = 1 + (x_i + 1)/4 and
         u(x_i, a, k, m) = k·(x_i − a)^m if x_i > a; 0 if −a < x_i < a; k·(−x_i − a)^m if x_i < −a
f13(x) = 0.1·{sin²(3πx_1) + Σ_{i=1}^{n} (x_i − 1)²·[1 + sin²(3πx_i + 1)]                        [−50, 50]^n        0
         + (x_n − 1)²·[1 + sin²(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4)

of partitions. Therefore, in the experiments the J_new + LS segmentation algorithm is first applied to detect the best possible number of partitions, N_R. Once the number of partitions N_R is obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.

Figure 21 presents the segmentation results obtained by each algorithm considering the experimental set from Figure 20. On the other hand, Table 10 shows the evaluation of the segmentation results in terms of the ROS index. Such values represent the averaged measurements after 30 executions. From them, it can be seen that the proposed J_new + LS method obtains the best ROS indexes. Such values indicate that the proposed algorithm maintains the best balance between the homogeneity within segmented regions and the heterogeneity among the different regions. From Figure 21, it can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by an incorrect pixel classification.

8. Conclusions

Despite the fact that several evolutionary methods have been successfully applied to image segmentation with interesting results, most of them have exhibited two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance.

In this paper, a new swarm algorithm for automatic image segmentation, called Locust Search (LS), has been presented. The proposed method eliminates the typical flaws presented by previous evolutionary approaches by combining a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments have been divided into two parts. First, the proposed LS method has been compared to other well-known evolutionary techniques on a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm has been compared to other segmentation methods based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

Several research directions will be considered for future work, such as the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, n is the dimension of the function, f_opt is the minimum value of the function, and S is a subset of R^n. The optimum location x_opt for the functions in Table 11 is [0]^n, except for f5, whose optimum is [1]^n.

The optimum locations x_opt for the functions in Table 12 are [0]^n, except for f8, whose optimum is [420.96]^n, and f12-f13, whose optima are [1]^n.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgment

The present research has been supported under Grant CONACYT CB 181053.

References

[1] H Zhang J E Fritts and S A Goldman ldquoImage segmentationevaluation a survey of unsupervised methodsrdquo ComputerVision and Image Understanding vol 110 no 2 pp 260ndash2802008

[2] T Uemura G Koutaki and K Uchimura ldquoImage segmentationbased on edge detection using boundary coderdquo InternationalJournal of Innovative Computing vol 7 pp 6073ndash6083 2011

[3] L Wang H Wu and C Pan ldquoRegion-based image segmen-tation with local signed difference energyrdquo Pattern RecognitionLetters vol 34 no 6 pp 637ndash645 2013

[4] N Otsu ldquoA thresholding selection method from gray-levelhistogramrdquo IEEETransactions on SystemsMan andCyberneticsvol 9 no 1 pp 62ndash66 1979

[5] R Peng and P K Varshney ldquoOn performance limits of imagesegmentation algorithmsrdquo Computer Vision and Image Under-standing vol 132 pp 24ndash38 2015

[6] M A Balafar ldquoGaussian mixture model based segmentationmethods for brain MRI imagesrdquo Artificial Intelligence Reviewvol 41 no 3 pp 429ndash439 2014

[7] G J McLachlan and S Rathnayake ldquoOn the number ofcomponents in a Gaussian mixture modelrdquo Data Mining andKnowledge Discovery vol 4 no 5 pp 341ndash355 2014

[8] D Oliva V Osuna-Enciso E Cuevas G Pajares M Perez-Cisneros and D Zaldıvar ldquoImproving segmentation velocityusing an evolutionary methodrdquo Expert Systems with Applica-tions vol 42 no 14 pp 5874ndash5886 2015

[9] Z-W Ye M-W Wang W Liu and S-B Chen ldquoFuzzy entropybased optimal thresholding using bat algorithmrdquo Applied SoftComputing vol 31 pp 381ndash395 2015

[10] S Sarkar S Das and S Sinha Chaudhuri ldquoA multilevel colorimage thresholding scheme based on minimum cross entropyand differential evolutionrdquo Pattern Recognition Letters vol 54pp 27ndash35 2015

[11] A K Bhandari A Kumar and G K Singh ldquoModified artificialbee colony based computationally efficient multilevel thresh-olding for satellite image segmentation using Kapurrsquos Otsu andTsallis functionsrdquo Expert Systems with Applications vol 42 no3 pp 1573ndash1601 2015

[12] H Permuter J Francos and I Jermyn ldquoA study of Gaussianmixture models of color and texture features for image classi-fication and segmentationrdquo Pattern Recognition vol 39 no 4pp 695ndash706 2006

[13] A P Dempster A P Laird and D B Rubin ldquoMaximumlikelihood from incomplete data via the EM algorithmrdquo Journalof the Royal Statistical Society Series B vol 39 no 1 pp 1ndash381977

[14] Z Zhang C Chen J Sun and K L Chan ldquoEM algorithmsfor Gaussian mixtures with split-and-merge operationrdquo PatternRecognition vol 36 no 9 pp 1973ndash1983 2003

[15] H Park S-I Amari and K Fukumizu ldquoAdaptive naturalgradient learning algorithms for various stochastic modelsrdquoNeural Networks vol 13 no 7 pp 755ndash764 2000

[16] H Park and T Ozeki ldquoSingularity and slow convergence of theem algorithm for gaussian mixturesrdquo Neural Processing Lettersvol 29 no 1 pp 45ndash59 2009

[17] L Gupta and T Sortrakul ldquoA Gaussian-mixture-based imagesegmentation algorithmrdquo Pattern Recognition vol 31 no 3 pp315ndash325 1998

[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213–1219, 2013.

[19] E. Cuevas, F. Sencion, D. Zaldivar, M. Perez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321–336, 2012.

[20] E. Cuevas, V. Osuna-Enciso, D. Zaldivar, M. Perez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.

[21] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.

[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldivar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357–381, 2014.

[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldivar, and M. Perez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.

[24] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805–818, 2011.

[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701–713, 2009.

[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661–673, 2009.

[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.

[28] R. Storn and K. Price, "Differential Evolution—a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.

[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515–4538, 2011.

[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing Journal, vol. 9, no. 3, pp. 1149–1155, 2009.

[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-s. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119–135, 2013.

[32] W. Gong, A. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364–5386, 2011.

[33] D. M. Gordon, "The organization of work in social insect colonies," Complexity, vol. 8, no. 1, pp. 43–46, 2002.

[34] S. Kizaki and M. Katori, "Stochastic lattice model for locust outbreak," Physica A: Statistical Mechanics and its Applications, vol. 266, no. 1–4, pp. 339–342, 1999.

Mathematical Problems in Engineering 25

[35] S. M. Rogers, D. A. Cullen, M. L. Anstey et al., "Rapid behavioural gregarization in the desert locust Schistocerca gregaria entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9–26, 2014.

[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal Special Topics, vol. 157, no. 1, pp. 93–109, 2008.

[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.

[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.

[39] B. Holldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.

[40] B. Holldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.

[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101–112, 2013.

[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196–205, 2012.

[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723–738, 2014.

[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.

[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskal)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.

[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.

[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256–272, 2014.

[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635–672, 2005.

[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.

[50] F. Herrera, M. Lozano, and A. M. Sanchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309–338, 2003.

[51] E. Cuevas, D. Zaldivar, M. Perez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93–107, 2011.

[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.

[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140–157, 2015.

[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.

[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.

[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.

[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.

[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.

[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335–1346, 1996.

[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.


(Gaussian functions in the mixture), a new objective function has also been incorporated. The new objective function is divided into two parts. The first part evaluates the quality of each candidate solution in terms of its similarity with regard to the image histogram. The second part penalizes the overlapped area among Gaussian functions (classes). Under these circumstances, Gaussian functions that do not "positively" participate in the histogram approximation can be easily eliminated from the final Gaussian mixture model.

In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments are divided into two sections. In the first part, the proposed LS method is compared to other well-known evolutionary techniques over a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm is compared to other segmentation methods which are also based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

This paper is organized as follows: in Section 2, the basic biological issues of the algorithm analogy are introduced and explained. Section 3 describes the novel LS algorithm and its characteristics. A numerical study on different benchmark functions is presented in Section 4, while Section 5 presents the modelling of an image histogram through a Gaussian mixture. Section 6 presents the LS segmentation algorithm, and Section 7 the performance of the proposed segmentation algorithm. Finally, Section 8 draws some conclusions.

2. Biological Fundamentals and Mathematical Models

Social insect societies are complex cooperative systems that self-organize within a set of constraints. Cooperative groups are good at manipulating and exploiting their environment, defending resources and breeding, yet allowing task specialization among group members [38, 39]. A social insect colony functions as an integrated unit that not only possesses the ability to operate in a distributed manner but also undertakes huge construction of global projects [40]. It is important to acknowledge that global order for insects can arise as a result of internal interactions among members.

Locusts are a kind of grasshopper that exhibits two opposite behavioral phases: solitary and social (gregarious). Individuals in the solitary phase avoid contact with each other (locust concentrations). As a consequence, they distribute throughout the space, sufficiently exploring the plantation [36]. In contrast, locusts in the gregarious phase gather into several concentrations. Such congregations may contain up to $10^{10}$ members, cover cross-sectional areas of up to 10 km$^2$, and have a travelling capacity of up to 10 km per day for a period of days or weeks as they feed, causing devastating crop loss [41]. The mechanism to switch from the solitary phase to the gregarious phase is complex and has been a subject of significant biological inquiry. Recently, a set of factors has been implicated, including the geometry of the vegetation landscape and olfactory stimuli [42].

Only a few works [36, 37] that mathematically model the locust behavior have been published. Both approaches develop two different minimal models with the goal of reproducing the macroscopic structure and motion of a group of locusts. Considering that the method in [36] focuses on modelling the behavior of each locust in the group, its fundamentals have been employed to develop the algorithm that is proposed in this paper.

2.1. Solitary Phase. This section describes how each locust's position is modified as a result of its behavior under the solitary phase. Considering that $\mathbf{x}_i^k$ represents the current position of the $i$th locust in a group of $N$ different elements, the new position $\mathbf{x}_i^{k+1}$ is calculated by using the following model:

$$\mathbf{x}_i^{k+1} = \mathbf{x}_i^k + \Delta\mathbf{x}_i, \quad (1)$$

with $\Delta\mathbf{x}_i$ corresponding to the change of position experimented by $\mathbf{x}_i^k$ as a consequence of its social interaction with all other elements in the group.

Two locusts in the solitary phase exert forces on each other according to basic biological principles of attraction and repulsion (see, e.g., [36]). Repulsion operates quite strongly over a short length scale in order to avoid concentrations. Attraction is weaker and operates over a longer length scale, providing the social force that is required to maintain the group's cohesion. Therefore, the strength of such social forces can be modeled by the following function:

$$s(r) = F \cdot e^{-r/L} - e^{-r}. \quad (2)$$

Here, $r$ is a distance, $F$ describes the strength of attraction, and $L$ is the typical attractive length scale. The time and space coordinates have been scaled so that the repulsive strength and length scale are both represented by unity. We assume that $F < 1$ and $L > 1$, so that repulsion is stronger and features on a shorter scale, while attraction is weaker and applied on a longer scale; both facts are typical for social organisms [21]. The social force exerted by locust $j$ over locust $i$ is

$$\mathbf{s}_{ij} = s(r_{ij}) \cdot \mathbf{d}_{ij}, \quad (3)$$

where $r_{ij} = \|\mathbf{x}_j - \mathbf{x}_i\|$ is the distance between the two locusts and $\mathbf{d}_{ij} = (\mathbf{x}_j - \mathbf{x}_i)/r_{ij}$ is the unit vector pointing from $\mathbf{x}_i$ to $\mathbf{x}_j$. The total social force on each locust can be modeled as the superposition of all of the pairwise interactions:

$$\mathbf{S}_i = \sum_{j=1,\, j \neq i}^{N} \mathbf{s}_{ij}. \quad (4)$$

The change of position $\Delta\mathbf{x}_i$ is modeled as the total social force experimented by $\mathbf{x}_i^k$ as the superposition of all of the pairwise interactions. Therefore, $\Delta\mathbf{x}_i$ is defined as follows:

$$\Delta\mathbf{x}_i = \mathbf{S}_i. \quad (5)$$
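The repulsion-attraction model of (1)–(5) can be sketched in a few lines of Python. The values $F = 0.6$ and $L = 2.0$ below are illustrative choices satisfying $F < 1$ and $L > 1$, not parameters taken from the paper's experiments:

```python
import numpy as np

def social_force_strength(r, F=0.6, L=2.0):
    """Eq. (2): s(r) = F*exp(-r/L) - exp(-r). Negative (repulsive) at
    short range and weakly positive (attractive) at long range
    whenever F < 1 and L > 1."""
    return F * np.exp(-r / L) - np.exp(-r)

def solitary_step(X, F=0.6, L=2.0):
    """One solitary-phase update x_i <- x_i + S_i, Eqs. (1)-(5).

    X is an (N, n) array of locust positions; returns the new positions."""
    N, n = X.shape
    new_X = X.copy()
    for i in range(N):
        S_i = np.zeros(n)
        for j in range(N):
            if j == i:
                continue
            diff = X[j] - X[i]
            r = np.linalg.norm(diff)
            S_i += social_force_strength(r, F, L) * (diff / r)  # s_ij = s(r_ij) d_ij
        new_X[i] = X[i] + S_i                                   # Eq. (1) with (5)
    return new_X
```

For a pair of locusts, the sign of $s(r)$ alone decides the motion: a pair at distance 0.5 drifts apart, while a pair at distance 3 drifts together.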

In order to illustrate the behavioral model under the solitary phase, Figure 1 presents an example assuming a population of three different members ($N = 3$) which adopt a determined configuration in the current iteration $k$. As a consequence of the social forces, each element suffers an attraction or repulsion to other elements, depending on the distance among them. Such forces are represented by $\mathbf{s}_{12}$, $\mathbf{s}_{13}$, $\mathbf{s}_{21}$, $\mathbf{s}_{23}$, $\mathbf{s}_{31}$, and $\mathbf{s}_{32}$. Since $\mathbf{x}_1$ and $\mathbf{x}_2$ are too close, the social forces $\mathbf{s}_{12}$ and $\mathbf{s}_{21}$ present a repulsive nature. On the other hand, as the distances $\|\mathbf{x}_1 - \mathbf{x}_3\|$ and $\|\mathbf{x}_2 - \mathbf{x}_3\|$ are quite long, the social forces $\mathbf{s}_{13}$, $\mathbf{s}_{23}$, $\mathbf{s}_{31}$, and $\mathbf{s}_{32}$ between $\mathbf{x}_1 \leftrightarrow \mathbf{x}_3$ and $\mathbf{x}_2 \leftrightarrow \mathbf{x}_3$ are all of an attractive nature. Therefore, the change of position $\Delta\mathbf{x}_1$, computed as the vector resultant of $\mathbf{s}_{12}$ and $\mathbf{s}_{13}$ ($\Delta\mathbf{x}_1 = \mathbf{s}_{12} + \mathbf{s}_{13}$), is $\mathbf{S}_1$. The values $\Delta\mathbf{x}_2$ and $\Delta\mathbf{x}_3$ are also calculated accordingly.

Figure 1: Behavioral model under the solitary phase.

In addition to the presented model [36], some studies [43–45] suggest that the social force $\mathbf{s}_{ij}$ is also affected by the dominance of the involved individuals $\mathbf{x}_i$ and $\mathbf{x}_j$ in the pairwise process. Dominance is a property that relatively qualifies the capacity of an individual to survive in relation to other elements in a group. The locust's dominance is determined by several characteristics, such as size, chemical emissions, and location with regard to food sources. Under such circumstances, the social force is magnified or weakened depending on the most dominant individual involved in the repulsion-attraction process.

2.2. Social Phase. In this phase, locusts frantically concentrate around the elements that have already found good food sources. They attempt to efficiently find better nutrients by devastating promising areas within the plantation. In order to simulate the social phase, a food quality index $Fq_i$ is assigned to each locust $\mathbf{x}_i$ of the group; such an index reflects the quality of the food source where $\mathbf{x}_i$ is located.

Under this behavioral model, each of the $N$ elements of the group is ranked according to its corresponding food quality index. Afterwards, the $b$ elements featuring the best food quality indexes are selected ($b \ll N$). Considering a concentration radius $R_c$ that is created around each selected element, a set of $c$ new locusts is randomly generated inside $R_c$. As a result, most of the locusts will be concentrated around the best $b$ elements. Figure 2 shows a simple example of the behavioral model under the social phase. In the example, the configuration includes eight locusts ($N = 8$), just as illustrated by Figure 2(a), which also presents the food quality index for each locust. A food quality index near to one indicates a better food source. Therefore, Figure 2(b) presents the final configuration after the social phase, assuming $b = 2$.
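The concentration mechanism described above can be sketched as follows. Sampling uniformly inside an axis-aligned box of half-width $R_c$ (rather than a ball) and all parameter values are illustrative assumptions:

```python
import numpy as np

def social_concentration(X, food_quality, b=2, c=4, Rc=0.5, rng=None):
    """Gregarious-phase model of Section 2.2: c new locusts are generated
    around each of the b locusts with the best food quality indexes.

    X is an (N, n) array of positions; food_quality holds one index per
    locust (values near one are better). Returns the (b*c, n) newcomers."""
    rng = np.random.default_rng() if rng is None else rng
    order = np.argsort(food_quality)[::-1]   # best (highest) quality first
    new_locusts = []
    for idx in order[:b]:
        # uniform samples in a box of half-width Rc around the selected locust
        offsets = rng.uniform(-Rc, Rc, size=(c, X.shape[1]))
        new_locusts.append(X[idx] + offsets)
    return np.vstack(new_locusts)
```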

3. The Locust Search (LS) Algorithm

In this paper, some behavioral principles drawn from a swarm of locusts have been used as guidelines for developing a new swarm optimization algorithm. The LS assumes that the entire search space is a plantation, where all the locusts interact with each other. In the proposed approach, each solution within the search space represents a locust position inside the plantation. Every locust receives a food quality index according to the fitness value of the solution that is symbolized by the locust's position. As previously discussed, the algorithm implements two different behavioral schemes: solitary and social. Depending on the behavioral phase, each individual is conducted by a set of evolutionary operators which mimic the different cooperative operations that are typically found in the swarm. The proposed method formulates the optimization problem in the following form:

$$\text{maximize/minimize } f(\mathbf{l}), \quad \mathbf{l} = (l_1, \ldots, l_n) \in \mathbb{R}^n,$$
$$\text{subject to } \mathbf{l} \in \mathbf{S}, \quad (6)$$

where $f: \mathbb{R}^n \rightarrow \mathbb{R}$ is a nonlinear function, whereas $\mathbf{S} = \{\mathbf{l} \in \mathbb{R}^n \mid lb_d \le l_d \le ub_d,\ d = 1, \ldots, n\}$ is a bounded feasible search space, constrained by the lower ($lb_d$) and upper ($ub_d$) limits.

In order to solve the problem formulated in (6), a population $\mathbf{L}^k$ ($\{\mathbf{l}_1^k, \mathbf{l}_2^k, \ldots, \mathbf{l}_N^k\}$) of $N$ locusts (individuals) is evolved inside the LS operation, from the initial point ($k = 0$) to a total $gen$ number of iterations ($k = gen$). Each locust $\mathbf{l}_i^k$ ($i \in [1, \ldots, N]$) represents an $n$-dimensional vector $\{l_{i,1}^k, l_{i,2}^k, \ldots, l_{i,n}^k\}$, where each dimension corresponds to a decision variable of the optimization problem to be solved. The set of decision variables constitutes the feasible search space $\mathbf{S} = \{\mathbf{l}_i^k \in \mathbb{R}^n \mid lb_d \le l_{i,d}^k \le ub_d\}$, where $lb_d$ and $ub_d$ correspond to the lower and upper bounds for dimension $d$, respectively. The food quality index associated with each locust $\mathbf{l}_i^k$ (candidate solution) is evaluated through an objective function $f(\mathbf{l}_i^k)$, whose final result represents the fitness value of $\mathbf{l}_i^k$. In the LS algorithm, each iteration of the evolution process consists of two operators: (A) solitary and (B) social. Beginning with the solitary stage, the set of locusts is operated on in order to sufficiently explore the search space. Then, the social operation refines existent solutions within a determined neighborhood (exploitation) subspace.

3.1. Solitary Operation (A). One of the most interesting features of the proposed method is the use of the solitary operator to modify the current locust positions. Under this approach, locusts are displaced as a consequence of the social forces produced by the positional relations among the elements of the swarm. Therefore, near individuals tend to repel each other, avoiding the concentration of elements in regions. On the other hand, distant individuals tend to attract each other, maintaining the cohesion of the swarm. A clear difference from the original model in [20] is that social forces are also magnified or weakened depending on the most dominant (best fitness values) individuals involved in the repulsion-attraction process.

Figure 2: Behavioral model under the social phase. (a) Initial configuration and food quality indexes; (b) final configuration after the operation of the social phase.

In the solitary phase, a new position $\mathbf{p}_i$ ($i \in [1, \ldots, N]$) is produced by perturbing the current locust position $\mathbf{l}_i^k$ with a change of position $\Delta\mathbf{l}_i$ ($\mathbf{p}_i = \mathbf{l}_i^k + \Delta\mathbf{l}_i$). The change of position $\Delta\mathbf{l}_i$ is the result of the social interactions experimented by $\mathbf{l}_i^k$ as a consequence of its repulsion-attraction behavioral model. Such social interactions are pairwise computed among $\mathbf{l}_i^k$ and the other $N - 1$ individuals in the swarm. In the original model, social forces are calculated by using (3). However, in the proposed method, the model is modified to include the best fitness value (the most dominant) of the individuals involved in the repulsion-attraction process. Therefore, the social force exerted between $\mathbf{l}_j^k$ and $\mathbf{l}_i^k$ is calculated by using the following new model:

$$\mathbf{s}_{ij}^m = \rho(\mathbf{l}_i^k, \mathbf{l}_j^k) \cdot s(r_{ij}) \cdot \mathbf{d}_{ij} + \text{rand}(1, -1), \quad (7)$$

where $s(r_{ij})$ is the social force strength defined in (2) and $\mathbf{d}_{ij} = (\mathbf{l}_j^k - \mathbf{l}_i^k)/r_{ij}$ is the unit vector pointing from $\mathbf{l}_i^k$ to $\mathbf{l}_j^k$. Besides, $\text{rand}(1, -1)$ is a randomly generated number between 1 and $-1$. Likewise, $\rho(\mathbf{l}_i^k, \mathbf{l}_j^k)$ is the dominance function that calculates the dominance value of the most dominant individual from $\mathbf{l}_j^k$ and $\mathbf{l}_i^k$. In order to operate $\rho(\mathbf{l}_i^k, \mathbf{l}_j^k)$, all the individuals from $\mathbf{L}^k$ ($\{\mathbf{l}_1^k, \mathbf{l}_2^k, \ldots, \mathbf{l}_N^k\}$) are ranked according to their fitness values. The ranks are assigned so that the best individual receives the rank 0 (zero), whereas the worst individual obtains the rank $N - 1$. Therefore, the function $\rho(\mathbf{l}_i^k, \mathbf{l}_j^k)$ is defined as follows:

$$\rho(\mathbf{l}_i^k, \mathbf{l}_j^k) = \begin{cases} e^{-(5 \cdot \text{rank}(\mathbf{l}_i^k)/N)} & \text{if } \text{rank}(\mathbf{l}_i^k) < \text{rank}(\mathbf{l}_j^k), \\ e^{-(5 \cdot \text{rank}(\mathbf{l}_j^k)/N)} & \text{if } \text{rank}(\mathbf{l}_i^k) > \text{rank}(\mathbf{l}_j^k), \end{cases} \quad (8)$$

Figure 3: Behavior of $\rho(\mathbf{l}_i^k, \mathbf{l}_j^k)$ considering 100 individuals.

where the function $\text{rank}(\alpha)$ delivers the rank of the $\alpha$-individual. According to (8), $\rho(\mathbf{l}_i^k, \mathbf{l}_j^k)$ yields a value within the interval (1, 0). Its maximum value of one is reached when either individual $\mathbf{l}_j^k$ or $\mathbf{l}_i^k$ is the best element of the population $\mathbf{L}^k$ regarding their fitness values. On the other hand, a value close to zero is obtained when both individuals $\mathbf{l}_j^k$ and $\mathbf{l}_i^k$ possess quite bad fitness values. Figure 3 shows the behavior of $\rho(\mathbf{l}_i^k, \mathbf{l}_j^k)$ considering 100 individuals. In the figure, it is assumed that $\mathbf{l}_i^k$ represents one of the 99 individuals with ranks between 0 and 98, whereas $\mathbf{l}_j^k$ is fixed to the element with the worst fitness value (rank 99).

With the incorporation of $\rho(\mathbf{l}_i^k, \mathbf{l}_j^k)$ in (7), social forces are magnified or weakened depending on the best fitness value (the most dominant) of the individuals involved in the repulsion-attraction process.
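A direct reading of (8) gives the following sketch; note that both branches reduce to evaluating the exponential at the rank of the better (lower-ranked) of the two individuals:

```python
import math

def dominance(rank_i, rank_j, N):
    """Dominance function rho of Eq. (8). Ranks are 0 (best) to N-1
    (worst); the value depends only on the better of the two ranks."""
    best_rank = min(rank_i, rank_j)
    return math.exp(-5.0 * best_rank / N)
```

The function equals one whenever one of the two locusts is the population best (rank 0) and decays toward $e^{-5} \approx 0.0067$ as both ranks worsen, reproducing the curve of Figure 3.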

Figure 4: Examples of different distributions. (a) Initial condition; (b) distribution after applying 25, (c) 50, and (d) 100 operations. The green asterisk represents the minimum value so far.

Finally, the total social force on each individual $\mathbf{l}_i^k$ is modeled as the superposition of all of the pairwise interactions exerted over it:

$$\mathbf{S}_i^m = \sum_{j=1,\, j \neq i}^{N} \mathbf{s}_{ij}^m. \quad (9)$$

Therefore, the change of position $\Delta\mathbf{l}_i$ is considered as the total social force experimented by $\mathbf{l}_i^k$ as the superposition of all of the pairwise interactions. Thus, $\Delta\mathbf{l}_i$ is defined as follows:

$$\Delta\mathbf{l}_i = \mathbf{S}_i^m. \quad (10)$$

After calculating the new positions $\mathbf{P}$ ($\{\mathbf{p}_1, \mathbf{p}_2, \ldots, \mathbf{p}_N\}$) of the population $\mathbf{L}^k$ ($\{\mathbf{l}_1^k, \mathbf{l}_2^k, \ldots, \mathbf{l}_N^k\}$), the final positions $\mathbf{F}$ ($\{\mathbf{f}_1, \mathbf{f}_2, \ldots, \mathbf{f}_N\}$) must be calculated. The idea is to admit only the changes that guarantee an improvement in the search strategy. If the fitness value of $\mathbf{p}_i$ ($f(\mathbf{p}_i)$) is better than that of $\mathbf{l}_i^k$ ($f(\mathbf{l}_i^k)$), then $\mathbf{p}_i$ is accepted as the final solution; otherwise, $\mathbf{l}_i^k$ is retained. This procedure can be summarized by the following statement (considering a minimization problem):

$$\mathbf{f}_i = \begin{cases} \mathbf{p}_i & \text{if } f(\mathbf{p}_i) < f(\mathbf{l}_i^k), \\ \mathbf{l}_i^k & \text{otherwise.} \end{cases} \quad (11)$$

In order to illustrate the performance of the solitary operator, Figure 4 presents a simple example with the solitary operator being iteratively applied. A population of 50 different members ($N = 50$) is assumed, which adopts a concentrated configuration as the initial condition (Figure 4(a)). As a consequence of the social forces, the set of elements tends to distribute throughout the search space. Examples of different distributions are shown in Figures 4(b), 4(c), and 4(d), after applying 25, 50, and 100 different solitary operations, respectively.
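Putting (7)–(11) together, one iteration of the solitary operator can be sketched as follows for a minimization problem. The function and parameter names are illustrative assumptions, and the small constant added to $r$ merely guards against division by zero for coincident locusts:

```python
import numpy as np

def solitary_operator(L, fitness, F=0.6, Lscale=2.0, rng=None):
    """One pass of the solitary operator, Eqs. (7)-(11) (minimization).

    L is an (N, n) array of locust positions; fitness maps a position
    to a scalar objective value."""
    rng = np.random.default_rng() if rng is None else rng
    N = L.shape[0]
    fit = np.array([fitness(x) for x in L])
    # rank 0 = best (lowest fitness), rank N-1 = worst
    ranks = np.empty(N, dtype=int)
    ranks[np.argsort(fit)] = np.arange(N)
    out = L.copy()
    for i in range(N):
        S = np.zeros(L.shape[1])
        for j in range(N):
            if j == i:
                continue
            diff = L[j] - L[i]
            r = np.linalg.norm(diff) + 1e-12
            s = F * np.exp(-r / Lscale) - np.exp(-r)          # Eq. (2)
            rho = np.exp(-5.0 * min(ranks[i], ranks[j]) / N)  # Eq. (8)
            S += rho * s * (diff / r) + rng.uniform(-1, 1)    # Eq. (7)
        p = L[i] + S                                          # candidate p_i
        if fitness(p) < fit[i]:                               # greedy survival, Eq. (11)
            out[i] = p
    return out
```

Because of the greedy acceptance rule (11), no individual (and hence the best-so-far solution) can ever worsen after the operator is applied.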

3.2. Social Operation (B). The social procedure represents the exploitation phase of the LS algorithm. Exploitation is the process of refining existent individuals within a small neighborhood in order to improve their solution quality.

The social procedure is a selective operation which is applied only to a subset $\mathbf{E}$ of the final positions $\mathbf{F}$ (where $\mathbf{E} \subseteq \mathbf{F}$). The operation starts by sorting $\mathbf{F}$ with respect to fitness values, storing the elements in a temporary population $\mathbf{B} = \{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_N\}$. The elements in $\mathbf{B}$ are sorted so that the best individual receives the position $\mathbf{b}_1$, whereas the worst individual obtains the location $\mathbf{b}_N$. Therefore, the subset $\mathbf{E}$ is integrated by only the first $g$ locations of $\mathbf{B}$ (promising solutions). Under this operation, a subspace $C_j$ is created around each selected particle $\mathbf{f}_j \in \mathbf{E}$. The size of $C_j$ depends on the distance $e_d$, which is defined as follows:

$$e_d = \frac{\sum_{q=1}^{n} (ub_q - lb_q)}{n} \cdot \beta, \quad (12)$$

where $ub_q$ and $lb_q$ are the upper and lower bounds in the $q$th dimension, $n$ is the number of dimensions of the optimization problem, and $\beta \in [0, 1]$ is a tuning factor. Therefore, the limits of $C_j$ can be modeled as follows:

$$uss_j^q = b_{j,q} + e_d,$$
$$lss_j^q = b_{j,q} - e_d, \quad (13)$$

where $uss_j^q$ and $lss_j^q$ are the upper and lower bounds of the $q$th dimension for the subspace $C_j$, respectively.

Considering the subspace $C_j$ around each element $\mathbf{f}_j \in \mathbf{E}$, a set of $h$ new particles ($\mathbf{M}_j^h = \{\mathbf{m}_j^1, \mathbf{m}_j^2, \ldots, \mathbf{m}_j^h\}$) is randomly generated inside the bounds fixed by (13). Once the $h$ samples are generated, the individual $\mathbf{l}_j^{k+1}$ of the next population $\mathbf{L}^{k+1}$ must be created. In order to calculate $\mathbf{l}_j^{k+1}$, the best particle $\mathbf{m}_j^{best}$, in terms of its fitness value from the $h$ samples (where $\mathbf{m}_j^{best} \in [\mathbf{m}_j^1, \mathbf{m}_j^2, \ldots, \mathbf{m}_j^h]$), is compared to $\mathbf{f}_j$. If $\mathbf{m}_j^{best}$ is better than $\mathbf{f}_j$ according to their fitness values, $\mathbf{l}_j^{k+1}$ is updated with $\mathbf{m}_j^{best}$; otherwise, $\mathbf{f}_j$ is selected. The elements of $\mathbf{F}$ that have not been processed by the procedure ($\mathbf{f}_w \notin \mathbf{E}$) transfer their corresponding values to $\mathbf{L}^{k+1}$ with no change.

The social operation is used to exploit only prominent solutions. According to the proposed method, inside each subspace $C_j$, $h$ random samples are selected. Since the number of selected samples at each subspace is very small (typically $h < 4$), the use of this operator substantially reduces the number of fitness function evaluations.
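The social operator of (12)–(13) can be sketched as follows for a minimization problem. Clipping each subspace $C_j$ to the feasible bounds and all default parameter values are illustrative choices, not prescriptions from the paper:

```python
import numpy as np

def social_operator(Fpos, fitness, lb, ub, g=2, h=3, beta=0.1, rng=None):
    """Social operator, Eqs. (12)-(13): refine the g best final positions
    with h random samples each, keeping a sample only if it improves.

    Fpos is the (N, n) array of final positions F."""
    rng = np.random.default_rng() if rng is None else rng
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    n = Fpos.shape[1]
    e_d = np.sum(ub - lb) / n * beta               # subspace half-width, Eq. (12)
    fit = np.array([fitness(x) for x in Fpos])
    next_L = Fpos.copy()
    for j in np.argsort(fit)[:g]:                  # the g most promising solutions
        low = np.maximum(Fpos[j] - e_d, lb)        # subspace C_j, Eq. (13),
        high = np.minimum(Fpos[j] + e_d, ub)       # clipped to the search space
        samples = rng.uniform(low, high, size=(h, n))
        best = min(samples, key=fitness)
        if fitness(best) < fit[j]:                 # greedy replacement
            next_L[j] = best
    return next_L
```

The remaining $N - g$ rows pass to the next population unchanged, matching the selective nature of the operator.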

In order to demonstrate the social operation, a numerical example has been set by applying the proposed process to a simple function. Such a function considers the interval $-3 \le d_1, d_2 \le 3$, whereas the function possesses one global maximum of value 8.1 at (0, 1.6). Notice that $d_1$ and $d_2$ correspond to the axis coordinates (commonly $x$ and $y$). For this example, a final position population $\mathbf{F}$ of six 2-dimensional members ($N = 6$) is assumed. Figure 5 shows the initial configuration of the proposed example, with the black points representing half of the particles with the best fitness values (the first three elements of $\mathbf{B}$, $g = 3$), whereas the grey points ($\mathbf{f}_2, \mathbf{f}_4, \mathbf{f}_6 \notin \mathbf{E}$) correspond to the remaining individuals.

Figure 5: Operation of the social procedure.

From Figure 5, it can be seen that the social procedure is applied to all black particles ($\mathbf{f}_5 = \mathbf{b}_1$, $\mathbf{f}_3 = \mathbf{b}_2$, and $\mathbf{f}_1 = \mathbf{b}_3$; $\mathbf{f}_5, \mathbf{f}_3, \mathbf{f}_1 \in \mathbf{E}$), yielding two new random particles ($h = 2$), which are characterized by the white points $\mathbf{m}_1^1$, $\mathbf{m}_1^2$, $\mathbf{m}_3^1$, $\mathbf{m}_3^2$, $\mathbf{m}_5^1$, and $\mathbf{m}_5^2$ for each black point inside of their corresponding subspaces $C_1$, $C_3$, and $C_5$. Considering the particle $\mathbf{f}_3$ in Figure 5, the particle $\mathbf{m}_3^2$ corresponds to the best particle ($\mathbf{m}_3^{best}$) of the two randomly generated particles (according to their fitness values) within $C_3$. Therefore, the particle $\mathbf{m}_3^{best}$ will substitute $\mathbf{f}_3$ in the individual $\mathbf{l}_3^{k+1}$ for the next generation, since it holds a better fitness value than $\mathbf{f}_3$ ($f(\mathbf{f}_3) < f(\mathbf{m}_3^{best})$).

The LS optimization procedure is defined over a bounded search space $\mathbf{S}$. Search points that do not belong to such an area are considered infeasible. However, during the evolution process, some candidate solutions could fall outside the search space. In the proposed approach, such infeasible solutions are arbitrarily placed at a random position inside the search space $\mathbf{S}$.
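That repair rule admits a minimal sketch; the function name is illustrative:

```python
import numpy as np

def repair_position(x, lb, ub, rng=None):
    """If a candidate leaves the bounded search space S, replace it with
    a fresh uniformly random position inside S; feasible candidates are
    returned unchanged."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, float)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    if np.any(x < lb) or np.any(x > ub):
        return rng.uniform(lb, ub)
    return x
```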

3.3. Complete LS Algorithm. LS is a simple algorithm with only seven adjustable parameters: the strength of attraction F, the attractive length L, the number of promising solutions g, the population size N, the tuning factor β, the number of random samples h, and the number of generations gen. The operation of LS is divided into three parts: initialization and the solitary and social operations. In the initialization (k = 0), the first population L^0 (l_1^0, l_2^0, ..., l_N^0) is produced. The values l_{i,1}^0, l_{i,2}^0, ..., l_{i,n}^0 of each individual l_i^0 and each dimension d are randomly and uniformly distributed between the prespecified lower initial parameter bound lb_d and the upper initial parameter bound ub_d:

l_{i,d}^0 = lb_d + rand · (ub_d − lb_d),
i = 1, 2, ..., N;  d = 1, 2, ..., n.   (14)

In the evolution process, the solitary (A) and social (B) operations are iteratively applied until the number of iterations k = gen has been reached. The complete LS procedure is illustrated in Algorithm 1.

8 Mathematical Problems in Engineering

(1) Input: F, L, g, N, gen, h, and β
(2) Initialize L^0 (k = 0)
(3) until (k = gen)
(4)     F ← SolitaryOperation(L^k)    // Solitary operator (Section 3.1)
(5)     L^(k+1) ← SocialOperation(L^k, F)    // Social operator (Section 3.2)
(6)     k = k + 1
(7) end until

Algorithm 1: Locust Search (LS) algorithm.
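As a rough illustration of Algorithm 1, its loop structure can be sketched in Python. The operator bodies below are only stand-ins (a bounded random perturbation for the solitary phase and a local resampling of h points around the g best individuals for the social phase); the actual operators are those of Sections 3.1 and 3.2, and the sphere function is merely a placeholder objective.

```python
import random

def initialize(N, n, lb, ub):
    """Eq. (14): l0_{i,d} = lb_d + rand * (ub_d - lb_d)."""
    return [[lb[d] + random.random() * (ub[d] - lb[d]) for d in range(n)]
            for _ in range(N)]

def solitary_operation(population, f, lb, ub):
    """Stand-in for Section 3.1: perturb each individual, keep the better one."""
    out = []
    for x in population:
        y = [min(ub[d], max(lb[d], v + random.gauss(0.0, 0.1)))
             for d, v in enumerate(x)]
        out.append(y if f(y) < f(x) else x)
    return out

def social_operation(population, F, f, lb, ub, g=5, h=2):
    """Stand-in for Section 3.2: resample h points near each of the g best."""
    new = [list(x) for x in F]
    order = sorted(range(len(new)), key=lambda i: f(new[i]))
    for i in order[:g]:
        cands = [new[i]] + [[min(ub[d], max(lb[d], v + random.gauss(0.0, 0.05)))
                             for d, v in enumerate(new[i])] for _ in range(h)]
        new[i] = min(cands, key=f)  # keep the best of the h+1 candidates
    return new

def locust_search(f, N=40, n=2, gen=200, lb=None, ub=None):
    lb = lb or [-5.0] * n
    ub = ub or [5.0] * n
    L = initialize(N, n, lb, ub)
    for _ in range(gen):            # lines (3)-(7) of Algorithm 1
        F = solitary_operation(L, f, lb, ub)
        L = social_operation(L, F, f, lb, ub)
    return min(L, key=f)

sphere = lambda x: sum(v * v for v in x)  # placeholder objective
best = locust_search(sphere)
```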

3.4. Discussion about the LS Algorithm. Evolutionary algorithms (EA) have been widely employed for solving complex optimization problems. These methods are found to be more powerful than conventional methods based on formal logic or mathematical programming [46]. In an EA, search agents have to decide whether to explore unknown search positions or to exploit already tested positions in order to improve their solution quality. Pure exploration degrades the precision of the evolutionary process but increases its capacity to find new potential solutions. On the other hand, pure exploitation allows refining existent solutions but adversely drives the process to local optima. Therefore, the ability of an EA to find a global optimal solution depends on its capacity to find a good balance between the exploitation of found-so-far elements and the exploration of the search space [47].

Most swarm algorithms and other evolutionary algorithms tend to exclusively concentrate the individuals in the current best positions. Under such circumstances, such algorithms seriously limit their exploration-exploitation capacities.

Unlike most existing evolutionary algorithms, the behavior modeled in the proposed approach explicitly avoids the concentration of individuals in the current best positions. Such a fact allows not only emulating the cooperative behavior of the locust colony in a realistic way but also incorporating a computational mechanism to avoid critical flaws commonly present in popular evolutionary algorithms, such as premature convergence and an incorrect exploration-exploitation balance.

It is important to emphasize that the proposed approach conducts two operators (solitary and social) within a single iteration. Such operators are similar to those used by other evolutionary methods, such as ABC (employed bees, onlooker bees, and scout bees), AIS (clonal proliferation operator, affinity maturation operator, and clonal selection operator), and DE (mutation, crossover, and selection), which are all executed in a single iteration.

4. Numerical Experiments over Benchmark Functions

A comprehensive set of 13 functions, all collected from [48–50], has been used to test the performance of the LS approach as an optimization method. Tables 11 and 12 present the benchmark functions used in our experimental study. Such functions are classified into two different categories: unimodal test functions (Table 11) and multimodal test functions (Table 12). In these tables, n is the function dimension and f_opt is the minimum value of the function, with S being a subset of R^n. The optimum locations (x_opt) for functions in Tables 11 and 12 are in [0]^n, except for f5, f12, and f13, with x_opt in [1]^n, and f8, with x_opt in [420.96]^n. A detailed description of the optimum locations is given in Tables 11 and 12.

We have applied the LS algorithm to the 13 functions, and the results have been compared to those produced by the Particle Swarm Optimization (PSO) method [27] and the Differential Evolution (DE) algorithm [28], both considered among the most popular algorithms for many optimization applications. In all comparisons, the population has been set to 40 (N = 40) individuals. The maximum iteration number for all functions has been set to 1000. Such a stop criterion has been selected to maintain compatibility with similar works reported in the literature [48, 49].

The parameter setting for each of the algorithms in the comparison is described as follows:

(1) PSO: in the algorithm, c1 = c2 = 2, while the inertia factor (ω) is decreased linearly from 0.9 to 0.2.

(2) DE: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [28, 51]. The crossover probability is CR = 0.9 and the weighting factor is F = 0.8.

(3) LS: F and L are set to 0.6 and 0.2, respectively. Besides, g is fixed to 20 (N/2), h = 2, and β = 0.6, whereas gen and N are configured to 1000 and 40, respectively. Once such parameters have been experimentally determined, they are kept for all experiments in this section.

In the comparison, three indexes are considered: the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). The first two indexes assess the accuracy of the solution, whereas the last one measures the computational cost. The average best-so-far solution (ABS) expresses the average value of the best function evaluations that have been obtained from 30 independent executions. The standard deviation (SD) indicates the dispersion of the ABS values. Evolutionary methods are in general complex pieces of software with several operators and stochastic branches. Therefore, it is difficult to conduct a complexity analysis from a deterministic perspective. Under such circumstances, it is more appropriate


Table 1: Minimization results from the benchmark functions test in Table 11 with n = 30. Maximum number of iterations = 1000.

          PSO            DE             LS
f1   ABS  1.66 × 10^-1   6.27 × 10^-3   4.55 × 10^-4
     SD   3.79 × 10^-1   1.68 × 10^-1   6.98 × 10^-4
     NFE  28,610         20,534         16,780
f2   ABS  4.83 × 10^-1   2.02 × 10^-1   5.41 × 10^-3
     SD   1.59 × 10^-1   0.66           1.45 × 10^-2
     NFE  28,745         21,112         16,324
f3   ABS  2.75           5.72 × 10^-1   1.61 × 10^-3
     SD   1.01           0.15           1.32 × 10^-3
     NFE  38,320         36,894         20,462
f4   ABS  1.84           0.11           1.05 × 10^-2
     SD   0.87           0.05           6.63 × 10^-3
     NFE  37,028         36,450         21,158
f5   ABS  3.07           2.39           4.11 × 10^-2
     SD   0.42           0.36           2.74 × 10^-3
     NFE  39,432         37,264         21,678
f6   ABS  6.36           6.51           5.88 × 10^-2
     SD   0.74           0.87           1.67 × 10^-2
     NFE  38,490         36,564         22,238
f7   ABS  6.14           0.12           2.71 × 10^-2
     SD   0.73           0.02           1.18 × 10^-2
     NFE  37,274         35,486         21,842

to use the number of function evaluations (NFE), just as is done in the literature [52, 53], to evaluate and assess the computational effort (time) and the complexity among optimizers. It represents how many times an algorithm evaluates the objective (fitness) function until the best solution of a determined execution has been found. Since the experiments require 30 different executions, the NFE index corresponds to the value averaged over these executions. A small NFE value indicates that less time is needed to reach the global optimum.

4.1. Unimodal Test Functions. Functions f1 to f7 are unimodal functions. The results for unimodal functions over 30 runs are reported in Table 1, considering the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). According to this table, LS provides better results than PSO and DE for all functions in terms of ABS and SD. In particular, the test yields the largest performance difference in functions f4–f7. Such functions maintain a narrow curving valley that is hard to optimize in case the search space cannot be explored properly and the direction changes cannot be kept up with [54]. For this reason, the performance differences are directly related to a better trade-off between exploration and exploitation that is produced by the LS operators. In practice, a main goal of an optimization algorithm is to find a solution as good as possible within a small time window. The computational cost for the optimizer is represented by its NFE values. According to Table 1, the NFE values that are obtained by the proposed method are smaller than its

Table 2: p values produced by Wilcoxon's test that compares LS versus PSO and DE over the "average best-so-far" values from Table 1.

      LS versus PSO   LS versus DE
f1    1.83 × 10^-4    1.73 × 10^-2
f2    3.85 × 10^-3    1.83 × 10^-4
f3    1.73 × 10^-4    6.23 × 10^-3
f4    2.57 × 10^-4    5.21 × 10^-3
f5    4.73 × 10^-4    1.83 × 10^-3
f6    6.39 × 10^-5    2.15 × 10^-3
f7    1.83 × 10^-4    2.21 × 10^-3

counterparts. Lower NFE values are more desirable, since they correspond to less computational overhead and therefore faster results. From the results perspective, it is clear that PSO and DE need more than 1000 generations in order to produce better solutions. However, this number of generations is used in the experiments aiming to produce a visible contrast among the approaches. If the number of generations were set to an exaggerated value, then all methods would converge to the best solution without significant trouble.

A nonparametric statistical significance test known as Wilcoxon's rank sum test for independent samples [55, 56] has been conducted with a 5% significance level over the "average best-so-far" data of Table 1. Table 2 reports the p values produced by Wilcoxon's test for the pairwise comparison of the "average best-so-far" values of two groups. Such groups are formed by LS versus PSO and LS versus DE. As a null hypothesis, it is assumed that there is no significant difference between the mean values of the two algorithms. The alternative hypothesis considers a significant difference between the "average best-so-far" values of both approaches. All p values reported in the table are less than 0.05 (5% significance level), which is strong evidence against the null hypothesis, indicating that the LS results are statistically significant and have not occurred by coincidence (i.e., due to the normal noise contained in the process).
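For reference, the rank-sum computation can be sketched as follows. This is a plain normal-approximation implementation without tie correction, not necessarily the exact procedure of [55, 56]; in practice a statistics package would be used.

```python
from statistics import NormalDist

def rank_sum_p(a, b):
    """Two-sided p value of Wilcoxon's rank-sum test for two independent
    samples, using the large-sample normal approximation (ties get average ranks)."""
    combined = sorted((v, src) for src, sample in ((0, a), (1, b)) for v in sample)
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        for k in range(i, j):            # average rank over the tie run i..j-1
            ranks[k] = (i + 1 + j) / 2
        i = j
    w = sum(r for r, (_, src) in zip(ranks, combined) if src == 0)
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2          # mean of the rank-sum statistic
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (w - mu) / sigma
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

p = rank_sum_p(list(range(1, 11)), list(range(11, 21)))  # clearly separated samples
```

A small p (here well below 0.05) rejects the null hypothesis of equal distributions, exactly as read off Tables 2 and 4.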

4.2. Multimodal Test Functions. Multimodal functions possess many local minima, which makes the optimization a difficult task to accomplish. In multimodal functions, the results reflect the algorithm's ability to escape from local optima. We have applied the algorithms to functions f8 to f13, where the number of local minima increases exponentially as the dimension of the function increases. The dimension of such functions is set to 30. The results are averaged over 30 runs, with the performance indexes reported in Table 3 as follows: the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). Likewise, p values of the Wilcoxon signed-rank test of 30 independent runs are listed in Table 4. From the results, it is clear that LS yields better solutions than the other algorithms for functions f9, f10, f11, and f12 in terms of the indexes ABS and SD. However, for functions f8 and f13, LS produces results similar to DE. The Wilcoxon rank test results presented in Table 4 confirm that


Table 3: Minimization results from the benchmark functions test in Table 12 with n = 30. Maximum number of iterations = 1000.

          PSO            DE             LS
f8   ABS  −6.7 × 10^3    −1.26 × 10^4   −1.26 × 10^4
     SD   6.3 × 10^2     3.7 × 10^2     1.1 × 10^2
     NFE  38,452         35,240         21,846
f9   ABS  1.48           4.01 × 10^-1   2.49 × 10^-3
     SD   1.39           5.1 × 10^-2    4.8 × 10^-4
     NFE  37,672         34,576         20,784
f10  ABS  1.47           4.66 × 10^-2   2.15 × 10^-3
     SD   1.44           1.27 × 10^-2   3.18 × 10^-4
     NFE  39,475         37,080         21,235
f11  ABS  12.01          1.15           1.47 × 10^-4
     SD   3.12           0.06           1.48 × 10^-5
     NFE  38,542         34,875         22,126
f12  ABS  6.87 × 10^-1   3.74 × 10^-1   5.58 × 10^-3
     SD   7.07 × 10^-1   1.55 × 10^-1   4.18 × 10^-4
     NFE  35,248         30,540         16,984
f13  ABS  1.87 × 10^-1   1.81 × 10^-2   1.78 × 10^-2
     SD   5.74 × 10^-1   1.66 × 10^-2   1.64 × 10^-3
     NFE  36,022         31,968         18,802

Table 4: p values produced by Wilcoxon's test comparing LS versus PSO and DE over the "average best-so-far" values from Table 3.

      LS versus PSO   LS versus DE
f8    1.83 × 10^-4    0.061
f9    1.17 × 10^-4    2.41 × 10^-4
f10   1.43 × 10^-4    3.12 × 10^-3
f11   6.25 × 10^-4    1.14 × 10^-3
f12   2.34 × 10^-5    7.15 × 10^-4
f13   4.73 × 10^-4    0.071

LS performed better than PSO and DE considering four problems, f9–f12, whereas from a statistical viewpoint there is no difference between the results from LS and DE for f8 and f13. According to Table 3, the NFE values obtained by the proposed method are smaller than those produced by the other optimizers. The reason for this remarkable performance is associated with its two operators: (i) the solitary operator allows a better particle distribution in the search space, increasing the algorithm's ability to find the global optima; and (ii) the use of the social operation provides a simple exploitation operator that intensifies the capacity of finding better solutions during the evolution process.

5. Gaussian Mixture Modelling

In this section, the modeling of image histograms through Gaussian mixture models is presented. Let one consider an image holding L gray levels [0, ..., L − 1] whose distribution is defined by a histogram h(g), represented by the following formulation:

h(g) = n_g / Np,  h(g) > 0,
Np = Σ_{g=0}^{L−1} n_g,
Σ_{g=0}^{L−1} h(g) = 1,   (15)

where n_g denotes the number of pixels with gray level g and Np the total number of pixels in the image. Under such circumstances, h(g) can be modeled by using a mixture p(x) of Gaussian functions of the form:

p(x) = Σ_{i=1}^{K} [P_i / (√(2π) σ_i)] exp[−(x − μ_i)^2 / (2σ_i^2)],   (16)

where K symbolizes the number of Gaussian functions of the model, whereas P_i is the a priori probability of function i; μ_i and σ_i represent the mean and standard deviation of the ith Gaussian function, respectively. Furthermore, the constraint Σ_{i=1}^{K} P_i = 1 must be satisfied by the model. In order to evaluate the similarity between the image histogram and a candidate mixture model, the mean square error can be used as follows:

J = (1/n) Σ_{j=1}^{L} [p(x_j) − h(x_j)]^2 + ω · |(Σ_{i=1}^{K} P_i) − 1|,   (17)

where ω represents the penalty associated with the constraint Σ_{i=1}^{K} P_i = 1. Therefore, J is considered as the objective function which must be minimized in the estimation problem. In order to illustrate the histogram modeling through a Gaussian mixture, Figure 6 presents an example assuming three classes, that is, K = 3. Considering Figure 6(a) as the image histogram h(x), the Gaussian mixture p(x) that is shown in Figure 6(c) is produced by adding the Gaussian functions p1(x), p2(x), and p3(x) in the configuration presented in Figure 6(b). Once the model parameters that better model the image histogram have been determined, the final step is to define the threshold values T_i (i ∈ [1, ..., K]), which can be calculated by simple standard methods, just as presented in [19–21].
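As a concrete sketch, (16) and (17) translate directly into Python. The two-class parameters below are invented purely for illustration, and the mean square error is normalized here by the number of gray levels (the paper's 1/n factor):

```python
import math

def gaussian_mixture(x, params):
    """Eq. (16): p(x) = sum_i P_i/(sqrt(2*pi)*sigma_i) * exp(-(x-mu_i)^2/(2*sigma_i^2))."""
    return sum(P * math.exp(-(x - mu) ** 2 / (2.0 * s * s)) / (math.sqrt(2.0 * math.pi) * s)
               for P, mu, s in params)

def objective_J(hist, params, omega=2.0):
    """Eq. (17): mean square error plus the penalty omega * |sum_i P_i - 1|."""
    L = len(hist)
    mse = sum((gaussian_mixture(g, params) - hist[g]) ** 2 for g in range(L)) / L
    return mse + omega * abs(sum(P for P, _, _ in params) - 1.0)

# Toy check: a histogram generated by a known two-class mixture
true_params = [(0.5, 60.0, 12.0), (0.5, 180.0, 20.0)]
hist = [gaussian_mixture(g, true_params) for g in range(256)]
exact = objective_J(hist, true_params)            # matching model, J ~ 0
wrong = objective_J(hist, [(1.0, 128.0, 30.0)])   # single misplaced class, J > 0
```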

6. Segmentation Algorithm Based on LS

In the proposed method, the segmentation process is approached as an optimization problem. Computational optimization generally involves two distinct elements: (1) a search strategy to produce candidate solutions (individuals, particles, insects, locusts, etc.) and (2) an objective function that evaluates the quality of each selected candidate solution. Several computational algorithms are available to perform the first element. The second element, where the objective function is designed, is unquestionably the most critical. The objective function is expected to embody the full


Figure 6: Histogram approximation through a Gaussian mixture. (a) Original histogram; (b) configuration of the Gaussian components p1(x), p2(x), and p3(x); and (c) final Gaussian mixture p(x).

complexity of the performance, biases, and restrictions of the problem to be solved. In the segmentation problem, candidate solutions represent Gaussian mixtures. The objective function J (17) is used as a fitness value to evaluate the similarity between the Gaussian mixture and the image histogram. Therefore, guided by the fitness values (J values), a set of encoded candidate solutions is evolved using the evolutionary operators until the best possible resemblance can be found.

Over the last decade, several algorithms based on evolutionary and swarm principles [19–22] have been proposed to solve the problem of segmentation through a Gaussian mixture model. Although these algorithms achieve certain good performance indexes, they present two important limitations: (1) they frequently obtain suboptimal approximations as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) they are based on the assumption that the number of Gaussians (classes) in the mixture is known in advance and fixed; otherwise, they cannot work.

In order to eliminate such flaws, the proposed approach includes (A) a new search strategy and (B) the definition of a new objective function. For the search strategy, the LS method (Section 4) is adopted. Under LS, the concentration of individuals in the current best positions is explicitly avoided. Such a fact allows reducing critical problems such as premature convergence to suboptimal solutions and an incorrect exploration-exploitation balance.

6.1. New Objective Function J_new. Previous segmentation algorithms based on evolutionary and swarm methods use (17) as the objective function. Under these circumstances, each candidate solution (Gaussian mixture) is only evaluated in terms of its approximation to the image histogram.

Since the proposed approach aims to automatically select the number of Gaussian functions K in the final mixture p(x), the objective function must be modified. The new objective function J_new is defined as follows:

J_new = J + λ · Q,   (18)

where λ is a scaling constant. The new objective function is divided into two parts. The first part, J, evaluates the quality of each candidate solution in terms of its similarity with regard to the image histogram (17). The second part, Q, penalizes the overlapped area among Gaussian functions (classes), with Q being defined as follows:

Q = Σ_{i=1}^{K} Σ_{j=1, j≠i}^{K} Σ_{l=1}^{L} min(P_i · p_i(l), P_j · p_j(l)),   (19)

where K and L represent the number of classes and the gray levels, respectively. The parameters p_i(l) and p_j(l) symbolize the Gaussian functions i and j, respectively, evaluated at the point l (gray level), whereas the elements P_i and P_j represent the a priori probabilities of the Gaussian functions i and j, respectively. Under such circumstances, mixtures with Gaussian functions that do not "positively" participate in the histogram approximation are severely penalized.
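A direct transcription of (19) shows how the penalty behaves; J_new of (18) is then simply J + λ·Q. The example mixtures below are invented: nearly coincident (redundant) classes are penalized far more heavily than well-separated ones.

```python
import math

def gauss(l, mu, sigma):
    """Normalized Gaussian density evaluated at gray level l."""
    return math.exp(-(l - mu) ** 2 / (2.0 * sigma * sigma)) / (math.sqrt(2.0 * math.pi) * sigma)

def overlap_Q(params, L=256):
    """Eq. (19): sum over all ordered class pairs (i, j), i != j, of the
    pointwise overlap min(P_i*p_i(l), P_j*p_j(l)) across the L gray levels."""
    total = 0.0
    for i, (Pi, mi, si) in enumerate(params):
        for j, (Pj, mj, sj) in enumerate(params):
            if i == j:
                continue
            total += sum(min(Pi * gauss(l, mi, si), Pj * gauss(l, mj, sj))
                         for l in range(L))
    return total

separated = [(0.5, 50.0, 8.0), (0.5, 200.0, 8.0)]    # distinct classes
coincident = [(0.5, 120.0, 8.0), (0.5, 125.0, 8.0)]  # redundant classes
q_sep, q_coin = overlap_Q(separated), overlap_Q(coincident)
```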

Figure 7 illustrates the effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). From the image histogram (Figure 7(a)), it is evident that two Gaussian functions are enough to accurately


Figure 7: Effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). (a) Original histogram; (b) Gaussian mixture considering four classes; (c) penalization areas; and (d) Gaussian mixture of better quality solution.

approximate the original histogram. However, if the Gaussian mixture is modeled by using a greater number of functions (e.g., four, as shown in Figure 7(b)), the original objective function J is unable to obtain a reasonable approximation. Under the new objective function J_new, the overlapped area among Gaussian functions (classes) is penalized. Such areas, in Figure 7(c), correspond to Q12, Q23, and Q34, where Q12 represents the penalization value produced between the Gaussian functions p1(x) and p2(x). Therefore, due to the penalization, the Gaussian mixture shown in Figures 7(b) and 7(c) provides a solution of low quality. On the other hand, the Gaussian mixture presented in Figure 7(d) maintains a low penalty; thus, it represents a solution of high quality. From Figure 7(d), it is easy to see that functions p1(x) and p4(x) can be removed from the final mixture. This elimination could be performed by a simple comparison with a threshold value θ, since p1(x) and p4(x) have a reduced amplitude (p1(x) ≈ p4(x) ≈ 0). Therefore, under J_new, it is possible to find the reduced Gaussian mixture model starting from a considerable number of functions.

Since the proposed segmentation method is conceived as an optimization problem, the overall operation can be reduced to solving the formulation of (20) by using the LS algorithm:

minimize  J_new(x) = J(x) + λ · Q(x),
          x = (P_1, μ_1, σ_1, ..., P_K, μ_K, σ_K) ∈ R^(3·K),
subject to  0 ≤ P_d ≤ 1,  d ∈ (1, ..., K),
            0 ≤ μ_d ≤ 255,
            0 ≤ σ_d ≤ 25,   (20)

where P_d, μ_d, and σ_d represent the probability, mean, and standard deviation of the class d. It is important to remark that the new objective function J_new allows the evaluation of a candidate solution in terms of the necessary number of Gaussian functions and its approximation quality. Under such circumstances, it can be used in combination with any other evolutionary method and not only with the proposed LS algorithm.
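Under (20), each candidate is simply a flat vector of 3·K real numbers. A minimal sketch of the encoding follows, using the paper's bounds; the `repair` step mirrors LS's random relocation of infeasible solutions, and the function names are ours, chosen for illustration.

```python
import random

# Per-class bounds of (20): (P_d, mu_d, sigma_d)
BOUNDS = [(0.0, 1.0), (0.0, 255.0), (0.0, 25.0)]

def random_candidate(K):
    """A flat vector x = (P_1, mu_1, sigma_1, ..., P_K, mu_K, sigma_K) in R^(3K)."""
    return [random.uniform(*BOUNDS[j % 3]) for j in range(3 * K)]

def decode(x):
    """Group the flat vector into (P, mu, sigma) triples, one per class."""
    return [tuple(x[3 * d: 3 * d + 3]) for d in range(len(x) // 3)]

def repair(x):
    """Relocate out-of-bounds entries randomly inside the search space,
    as LS does with infeasible solutions."""
    return [v if lo <= v <= hi else random.uniform(lo, hi)
            for v, (lo, hi) in zip(x, (BOUNDS[j % 3] for j in range(len(x))))]

x = random_candidate(10)   # K = 10 classes, as in Table 5
classes = decode(x)
```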

6.2. Complete Segmentation Algorithm. Once the new search strategy (LS) and the objective function (J_new) have been defined, the proposed segmentation algorithm can be summarized by Algorithm 2. The new algorithm combines operators defined by LS and operations for calculating the threshold values.

(Line 1) The algorithm sets the operative parameters F, L, g, gen, N, K, λ, and θ. They rule the behavior of the segmentation algorithm. (Line 2) Afterwards, the population L^0 is initialized considering N different random Gaussian mixtures of K functions. The idea is to generate N random Gaussian mixtures subject to the constraints formulated in (20). The parameter K must hold a high value in order to correctly segment all images (recall that the algorithm is able to reduce the Gaussian mixture to its minimum expression). (Lines 3–7) Then, the Gaussian mixtures are evolved by using the LS operators and the new objective function J_new; this process is repeated during gen cycles. (Line 8) After this procedure, the best Gaussian mixture l_best^gen is selected according to its objective function J_new. (Line 9) Then, the Gaussian mixture l_best^gen is reduced by eliminating those functions whose amplitudes are lower than θ (p_i(x) ≤ θ). (Line 10) Then, the threshold values T_i from the reduced model are calculated. (Line 11) Finally, the calculated T_i values are employed to segment the image. Figure 8 shows a flowchart of the complete segmentation procedure.


(1) Input: F, L, g, gen, N, K, λ, and θ
(2) Initialize L^0 (k = 0)
(3) until (k = gen)
(4)     F ← SolitaryOperation(L^k)    // Solitary operator (Section 3.1)
(5)     L^(k+1) ← SocialOperation(L^k, F)    // Social operator (Section 3.2)
(6)     k = k + 1
(7) end until
(8) Obtain l_best^gen
(9) Reduce l_best^gen
(10) Calculate the threshold values T_i from the reduced model
(11) Use T_i to segment the image

Algorithm 2: Segmentation LS algorithm.
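The reduction and threshold steps (Lines 9–10) admit a simple sketch. Components with amplitude below θ are dropped, and a threshold between two surviving adjacent classes is taken here, as one standard choice, at the gray level where the weighted densities cross; this crossing rule is one of the "simple standard methods" alluded to above, not necessarily the one used in [19–21].

```python
import math

def amplitude(P, mu, sigma):
    """Peak height P * p_i(mu) of a weighted Gaussian component."""
    return P / (math.sqrt(2.0 * math.pi) * sigma)

def reduce_mixture(params, theta=1e-4):
    """Line 9: drop components whose amplitude is below theta."""
    return [c for c in params if amplitude(*c) > theta]

def thresholds(params):
    """Line 10: between adjacent classes (sorted by mean), pick the gray level
    where the dominant weighted component changes."""
    comps = sorted(params, key=lambda c: c[1])
    def density(c, l):
        P, mu, s = c
        return P * math.exp(-(l - mu) ** 2 / (2.0 * s * s)) / (math.sqrt(2.0 * math.pi) * s)
    ts = []
    for a, b in zip(comps, comps[1:]):
        for l in range(int(a[1]), int(b[1]) + 1):
            if density(b, l) >= density(a, l):  # crossing point found
                ts.append(l)
                break
    return ts

# Invented three-component mixture: the tiny first component is eliminated
mix = [(0.004, 20.0, 8.0), (0.45, 80.0, 10.0), (0.55, 170.0, 12.0)]
kept = reduce_mixture(mix, theta=1e-3)
T = thresholds(kept)
```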

The proposed segmentation algorithm permits automatic detection of the number of segmentation partitions (classes). Furthermore, due to its remarkable search capacities, the LS method maintains better accuracy than previous algorithms based on evolutionary principles. However, the proposed method presents two disadvantages. The first is related to its implementation, which in general is more complex than most of the other segmentators based on evolutionary basics. The second refers to the segmentation procedure of the proposed approach, which does not consider any spatial pixel characteristics. As a consequence, pixels that may belong to a determined region due to their position are labeled as part of another region due to their gray level intensity. Such a fact adversely affects the segmentation performance of the method.

7. Segmentation Results

This section analyzes the performance of the proposed segmentation algorithm. The discussion is divided into three parts: the first one shows the performance of the proposed LS segmentation algorithm, while the second presents a comparison between the proposed approach and other segmentation algorithms based on evolutionary and swarm methods. The comparison mainly considers the capacities of each algorithm to accurately and robustly approximate the image histogram. Finally, the third part presents an objective evaluation of the segmentation results produced by all algorithms that have been employed in the comparisons.

7.1. Performance of the LS Algorithm in Image Segmentation. This section presents two experiments that analyze the performance of the proposed approach. Table 5 presents the algorithm's parameters, which have been experimentally determined and kept for all the test images through all experiments.

First Image. The first test considers the histogram shown in Figure 9(b), while Figure 9(a) presents the original image. After applying the proposed algorithm, configured according to the parameters in Table 5, a minimum model of four classes emerges after starting from Gaussian

Table 5: Parameter setup for the LS segmentation algorithm.

F    L    g    gen    N    K    λ      θ
0.6  0.2  20   1000   40   10   0.01   0.0001

Table 6: Results of the reduced Gaussian mixture for the first and the second image.

Parameters   First image   Second image
P1           0.004         0.032
μ1           18.1          12.1
σ1           8.2           2.9
P2           0.0035        0.0015
μ2           69.9          45.1
σ2           18.4          24.9
P3           0.01          0.006
μ3           94.9          130.1
σ3           8.9           17.9
P4           0.007         0.02
μ4           163.1         167.2
σ4           29.9          10.1

mixtures of 10 classes. Considering 30 independent executions, the averaged parameters of the resultant Gaussian mixture are presented in Table 6. One final Gaussian mixture (ten classes) obtained by LS is presented in Figure 10. Furthermore, the approximation of the reduced Gaussian mixture is also visually compared with the original histogram in Figure 10. On the other hand, Figure 11 presents the segmented image after calculating the threshold points.

Second Image. For the second experiment, the image in Figure 12 is tested. The method aims to segment the image by using a reduced Gaussian mixture model obtained by the LS approach. After executing the algorithm according to the parameters from Table 5, the resulting averaged parameters of the Gaussian mixture are presented in Table 6. In order to assure consistency, the results are averaged considering 30 independent executions. Figure 13 shows the approximation quality obtained by the reduced Gaussian mixture model in (a) and the segmented image in (b).


Figure 8: Flowchart of the complete segmentation procedure.

Figure 9: (a) Original first image used in the first experiment and (b) its histogram.


Figure 10: Gaussian mixture obtained by LS for the first image.

Figure 11: Image segmented with the reduced Gaussian mixture.

Figure 12: (a) Original second image used in the first experiment and (b) its histogram.

7.2. Histogram Approximation Comparisons. This section discusses the comparison between the proposed segmentation algorithm and other evolutionary segmentation methods that have been proposed in the literature. The set of methods used in the experiments includes J + ABC [19], J + AIS [20], and J + DE [21]. These algorithms consider the combination of the original objective function J (17) with an evolutionary technique, namely, Artificial Bee Colony (ABC), Artificial Immune Systems (AIS), and Differential Evolution (DE) [21], respectively.

Since the proposed segmentation approach considers the combination of the new objective function J_new (18) and the LS algorithm, it will be referred to as J_new + LS.

The comparison focuses mainly on the capacities of each algorithm to accurately and robustly approximate the image histogram.

In the experiments, the populations have been set to 40 (N = 40) individuals. The maximum iteration number for all functions has been set to 1000. Such a stop criterion has been considered to maintain compatibility with similar experiments


Figure 13: (a) Segmentation result obtained by LS for the first image and (b) the segmented image.

Figure 14: (a) Synthetic image used in the comparison and (b) its histogram.

that are reported in the literature [18]. The parameter setting for each of the segmentation algorithms in the comparison is described as follows:

(1) J + ABC [19]: in the algorithm, its parameters are configured as follows: the abandonment limit = 100, α = 0.05, and limit = 30.

(2) J + AIS [20]: it presents the following parameters: h = 120, N_c = 80, ρ = 10, P_r = 20, L = 22, and T_e = 0.01.

(3) 119869 + DE [21] the DERand1 scheme is employed Theparameter settings follow the instructions in [21]Thecrossover probability is CR = 09 and the weightingfactor is 119865 = 08

(4) In 119869new +LS the method is set according to the valuesdescribed in Table 5

In order to conduct the experiments, a synthetic image is designed to be used as a reference in the comparisons. The main idea is to know in advance the exact number of classes (and their parameters) contained in the image, so that its histogram can be considered as a ground truth. The

Table 7: Employed parameters for the design of the reference image.

Function   P_i     μ_i   σ_i
(1)        0.05    40    8
(2)        0.04    100   10
(3)        0.05    160   8
(4)        0.027   220   15

synthetic image is divided into four sections. Each section corresponds to a different class, which is produced by setting each gray level pixel P_vi to a value determined by the following model:

P_vi = e^(−(x − μ_i)² / (2σ_i²)),  (21)

where i represents the section, whereas μ_i and σ_i are the mean and the dispersion of the gray level pixels, respectively. The comparative study employs the 512 × 512 image shown in Figure 14(a) and the algorithm parameters presented in Table 7. Figure 14(b) illustrates the resultant histogram.
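A reference image of this kind can be reproduced approximately as follows. This is a minimal sketch: the assumption that the gray levels of each section are drawn from the normal distribution N(μ_i, σ_i) of (21) is ours, since the exact sampling scheme is not detailed in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters from Table 7: (P_i, mu_i, sigma_i) for the four classes.
CLASSES = [(0.05, 40, 8), (0.04, 100, 10), (0.05, 160, 8), (0.027, 220, 15)]

def synthetic_image(size=512):
    """Build a size x size reference image divided into four horizontal
    sections, one per class; section i gets gray levels near mu_i."""
    img = np.empty((size, size), dtype=np.uint8)
    band = size // 4
    for i, (_, mu, sigma) in enumerate(CLASSES):
        samples = rng.normal(mu, sigma, size=(band, size))
        img[i * band:(i + 1) * band] = np.clip(samples, 0, 255).round()
    return img

img = synthetic_image()
# Normalized histogram: the ground truth against which the Gaussian
# mixtures produced by each algorithm can be compared.
hist = np.bincount(img.ravel(), minlength=256) / img.size
```

Because the construction parameters are known, the resulting histogram can serve as an exact reference when measuring approximation errors.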



Figure 15: Convergence results. (a) Convergence of J + ABC, J + AIS, and J + DE considering Gaussian mixtures of 8 classes. (b) Convergence of the proposed method (reduced Gaussian mixture).


Figure 16: Segmentation results obtained by (a) several methods including J + ABC, J + AIS, and J + DE considering Gaussian mixtures of 8 classes and (b) the proposed method (reduced Gaussian mixture).

In the comparison, the discussion focuses on three issues: convergence, accuracy, and computational cost.

Convergence. This section analyzes the approximation convergence when the number of classes used by the algorithm during the evolution process differs from the actual number of classes in the image. Recall that the proposed approach automatically finds the reduced Gaussian mixture that best adapts to the image histogram.

In the experiment, the methods J + ABC, J + AIS, and J + DE are executed considering Gaussian mixtures composed of 8 functions. Under such circumstances, the number of classes to be detected is higher than the actual number of classes in the image. On the other hand, the proposed algorithm maintains the same configuration of Table 5; therefore, it can detect and calculate up to ten classes (K = 10).

As a result, the techniques J + ABC, J + AIS, and J + DE tend to overestimate the image histogram. This effect can be seen in Figure 15(a), where the resulting Gaussian functions are concentrated within the actual classes. Such behavior is a consequence of the evaluation considered by the original objective function J, which privileges only the approximation between the Gaussian mixture and the image histogram. This effect is graphically illustrated by Figure 16(a), which shows the pixel misclassification produced by the wrong segmentation of Figure 14(a). On the other hand, the proposed approach obtains a reduced Gaussian mixture model which allows the detection of each class from



Figure 17: Approximation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

the actual histogram (see Figure 15(b)). As a consequence, the segmentation is significantly improved by eliminating the pixel misclassification, as shown by Figure 16(b).

It is evident from Figure 15 that the techniques J + ABC, J + AIS, and J + DE all need a priori knowledge of the number of classes contained in the actual histogram in order to obtain a satisfactory result. On the other hand, the proposed algorithm is able to find a reduced Gaussian mixture whose classes coincide with the actual number of classes contained in the image histogram.

Accuracy. In this section, the comparison among the algorithms in terms of accuracy is reported. Most of the reported comparisons [19–26] are concerned with comparing the parameters of the resultant Gaussian mixtures by using real images. Under such circumstances, it is difficult to consider a clear reference in order to define a meaningful error. Therefore, the image defined in Figure 14 has been used in the experiments, because its construction parameters are clearly defined in Table 7.

Since the parameter values from Table 7 act as ground truth, a simple averaged difference between them and the values computed by each algorithm could be used as a comparison error. However, as each parameter maintains different active intervals, it is necessary to express the differences in terms of percentage. Therefore, if Δβ is the parametric difference and β the ground truth parameter, the percentage error Δβ% can be defined as follows:

Δβ% = (Δβ / β) · 100.  (22)

In the segmentation problem, each Gaussian mixture represents a K-dimensional model, where each dimension corresponds to a Gaussian function of the optimization problem to be solved. Since each Gaussian function possesses three parameters P_i, μ_i, and σ_i, the complete number of parameters is 3 · K dimensions. Therefore, the final error E produced by the final Gaussian mixture is

E = (1 / (3 · K)) Σ_{v=1}^{3·K} Δβ%_v,  (23)

where β_v ∈ {P_i, μ_i, σ_i}.

In order to compare accuracy, the algorithms J + ABC, J + AIS, J + DE, and the proposed approach are all executed over the image shown in Figure 14(a). The experiment aims



Figure 18: Segmentation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

to estimate the Gaussian mixture that best approximates the actual image histogram. The methods J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, although the algorithm finds a reduced Gaussian mixture of four functions, it is initially set with ten functions (K = 10). Table 8 presents the final Gaussian mixture parameters and the final error E. The final Gaussian mixture parameters have been averaged over 30 independent executions in order to assure consistency. A close inspection of Table 8 reveals that the proposed method is able to achieve the smallest error (E) in comparison to the other algorithms. Figure 17 presents the histogram approximations produced by each algorithm, whereas Figure 18 shows their corresponding segmented images. Both illustrations present the median case obtained throughout 30 runs. Figure 18 exhibits that J + ABC, J + AIS, and J + DE present different levels of misclassification, which are nearly absent in the case of the proposed approach.
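The error measure of (22) and (23) can be sketched as follows. The estimated parameter values below are hypothetical, standing in for the output of one of the compared algorithms; only the ground truth comes from Table 7.

```python
# Ground truth parameters (P_i, mu_i, sigma_i) from Table 7.
GROUND_TRUTH = [(0.05, 40, 8), (0.04, 100, 10), (0.05, 160, 8), (0.027, 220, 15)]

# Hypothetical estimates (illustrative values only).
ESTIMATED = [(0.049, 40.12, 7.5), (0.041, 102.04, 10.4),
             (0.052, 168.66, 8.3), (0.025, 210.0, 15.2)]

def mixture_error(truth, estimate):
    """Final error E of (23): mean of per-parameter percentage errors (22)."""
    errors = []
    for true_params, est_params in zip(truth, estimate):
        for beta, beta_est in zip(true_params, est_params):
            errors.append(abs(beta_est - beta) / beta * 100.0)  # (22)
    return sum(errors) / len(errors)                            # (23)

E = mixture_error(GROUND_TRUTH, ESTIMATED)
```

Expressing each difference as a percentage of its ground-truth value makes the three parameter types (weights, means, dispersions), which live on very different scales, commensurable before averaging.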

Computational Cost. The experiment aims to measure the complexity and the computing time spent by the J + ABC,

Table 8: Results of the reduced Gaussian mixture in terms of accuracy.

Algorithm    Gaussian function   P_i     μ_i      σ_i     E
J + ABC      (1)                 0.052   44.5     6.4     11.79
             (2)                 0.084   98.12    12.87
             (3)                 0.058   163.50   8.94
             (4)                 0.025   218.84   17.5
J + AIS      (1)                 0.072   31.01    6.14    22.01
             (2)                 0.054   88.52    12.21
             (3)                 0.039   149.21   9.14
             (4)                 0.034   248.41   13.84
J + DE       (1)                 0.041   35.74    7.86    13.57
             (2)                 0.036   90.57    11.97
             (3)                 0.059   148.47   9.01
             (4)                 0.020   201.34   13.02
J_new + LS   (1)                 0.049   40.12    7.5     3.98
             (2)                 0.041   102.04   10.4
             (3)                 0.052   168.66   8.3
             (4)                 0.025   110.92   15.2



Figure 19: Images employed in the computational cost analysis.

the J + AIS, the J + DE, and the J_new + LS algorithms while calculating the parameters of the Gaussian mixture in benchmark images (see Figures 19(a)–19(d)). J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, the algorithm finds a reduced Gaussian mixture of four functions despite being initialized with ten functions (K = 10). Table 9 shows the averaged measurements after 30 experiments. It is evident that J + ABC and J + DE are the slowest to converge (iterations), whereas J + AIS shows the highest computational cost (time elapsed) because it requires operators that demand long execution times. On the other hand, J_new + LS shows an acceptable trade-off between its convergence time and its computational cost. Therefore, although the implementation of J_new + LS in general requires more code than most other evolution-based segmentators, such a fact is not reflected in the execution time. Finally, Figure 19 shows the segmented images as they are



Figure 20: Experimental set used in the evaluation of the segmentation results.

Table 9: Iterations and time requirements of J + ABC, J + AIS, J + DE, and J_new + LS as they are applied to segment the benchmark images (see Figure 19).

Algorithm    Image (a)        Image (b)        Image (c)        Image (d)
J + ABC      855 it, 2.72 s   833 it, 2.70 s   870 it, 2.73 s   997 it, 3.1 s
J + AIS      725 it, 1.78 s   704 it, 1.61 s   754 it, 1.41 s   812 it, 2.01 s
J + DE       657 it, 1.25 s   627 it, 1.12 s   694 it, 1.45 s   742 it, 1.88 s
J_new + LS   314 it, 0.98 s   298 it, 0.84 s   307 it, 0.72 s   402 it, 1.02 s

Table 10: Evaluation of the segmentation results in terms of the ROS index.

Algorithm    Image (a), N_R = 4   Image (b), N_R = 3   Image (c), N_R = 4   Image (d), N_R = 4
J + ABC      0.534                0.328                0.411                0.457
J + AIS      0.522                0.321                0.427                0.437
J + DE       0.512                0.312                0.408                0.418
J_new + LS   0.674                0.401                0.514                0.527

generated by each algorithm. It can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by an incorrect pixel classification.

7.3. Performance Evaluation of the Segmentation Results. This section presents an objective evaluation of the segmentation results produced by all algorithms in the comparisons. The ill-defined nature of the segmentation problem makes the evaluation of a candidate algorithm difficult [57]. Traditionally, the evaluation has been conducted by using some supervised criteria [58], which are based on the computation of a dissimilarity measure between a segmentation result and a ground truth image. Recently, the use of unsupervised measures has substituted supervised indexes for the objective evaluation of segmentation results [59]. They

Table 11: Unimodal test functions.

f1(x) = Σ_{i=1}^{n} x_i²,  S = [−100, 100]^n,  f_opt = 0
f2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|,  S = [−10, 10]^n,  f_opt = 0
f3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)²,  S = [−100, 100]^n,  f_opt = 0
f4(x) = max_i {|x_i|, 1 ≤ i ≤ n},  S = [−100, 100]^n,  f_opt = 0
f5(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²],  S = [−30, 30]^n,  f_opt = 0
f6(x) = Σ_{i=1}^{n} (x_i + 0.5)²,  S = [−100, 100]^n,  f_opt = 0
f7(x) = Σ_{i=1}^{n} i · x_i⁴ + rand(0, 1),  S = [−1.28, 1.28]^n,  f_opt = 0

enable the quantification of the quality of a segmentation result without a priori knowledge (ground truth image).

Evaluation Criteria. In this paper, the unsupervised index ROS proposed by Chabrier et al. [60] has been used to objectively evaluate the performance of each candidate algorithm. This index evaluates the segmentation quality in terms of the homogeneity within segmented regions and the heterogeneity among the different regions. ROS can be computed as follows:

ROS = (D − D̄) / 2,  (24)

where D quantifies the homogeneity within segmented regions, and D̄ measures the disparity among the regions. A segmentation result S1 is considered better than S2 if ROS_S1 > ROS_S2. The within-region homogeneity, characterized by D, is calculated considering the following formulation:

D = (1/N_R) Σ_{c=1}^{N_R} (R_c / I) · (σ_c / Σ_{l=1}^{N_R} σ_l),  (25)

where N_R represents the number of partitions into which the image has been segmented, R_c symbolizes the number of



Figure 21: Segmentation results used in the evaluation.

pixels contained in the partition c, whereas I is the number of pixels of the complete image. Similarly, σ_c represents the standard deviation of the partition c. On the other hand, the disparity among the regions, D̄, is computed as follows:

D̄ = (1/N_R) Σ_{c=1}^{N_R} (R_c / I) · [(1/(N_R − 1)) Σ_{l=1}^{N_R} |μ_c − μ_l| / 255],  (26)

where μ_c is the average gray level in the partition c.
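The computation of (24)–(26) can be sketched as follows on a toy labeled image. This is a minimal illustration; the assignment of D to the homogeneity term (25) and D̄ to the disparity term (26) follows the reconstruction of the formulas above.

```python
import numpy as np

def ros_index(image, labels):
    """Evaluate a segmentation with the ROS index of (24)-(26).
    `image` holds gray levels; `labels` assigns each pixel a partition."""
    image = image.astype(float)
    regions = np.unique(labels)
    n_r, n_pix = len(regions), image.size
    sizes = np.array([(labels == c).sum() for c in regions])
    means = np.array([image[labels == c].mean() for c in regions])
    stds = np.array([image[labels == c].std() for c in regions])
    # (25): homogeneity term D (size-weighted, normalized standard deviations)
    d = np.sum((sizes / n_pix) * stds / stds.sum()) / n_r
    # (26): disparity term D-bar (size-weighted mean gray-level differences)
    disp = np.array([np.abs(means[c] - means).sum() / 255.0 / (n_r - 1)
                     for c in range(n_r)])
    d_bar = np.sum((sizes / n_pix) * disp) / n_r
    return (d - d_bar) / 2.0                                    # (24)

# Toy example: two partitions with distinct mean gray levels.
image = np.array([[0, 20], [0, 20], [235, 255], [235, 255]])
labels = np.array([[0, 0], [0, 0], [1, 1], [1, 1]])
score = ros_index(image, labels)
```

Being unsupervised, this evaluation needs only the segmentation itself, which is what allows the four classical test images to be scored without ground truth masks.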

Experimental Protocol. In the comparison of segmentation results, a set of four classical images has been chosen to integrate the experimental set (Figure 20). The segmentation methods used in the comparison are J + ABC [19], J + AIS [20], and J + DE [21].

From all the segmentation methods used in the comparison, the proposed J_new + LS algorithm is the only one that has the capacity to automatically detect the number of segmentation partitions (classes). In order to conduct a fair comparison, all algorithms have been tested by using the same number


Table 12: Multimodal test functions.

f8(x) = Σ_{i=1}^{n} −x_i sin(√|x_i|),  S = [−500, 500]^n,  f_opt = −418.98 · n
f9(x) = Σ_{i=1}^{n} [x_i² − 10 cos(2πx_i) + 10],  S = [−5.12, 5.12]^n,  f_opt = 0
f10(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^{n} x_i²)) − exp((1/n) Σ_{i=1}^{n} cos(2πx_i)) + 20 + e,  S = [−32, 32]^n,  f_opt = 0
f11(x) = (1/4000) Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos(x_i / √i) + 1,  S = [−600, 600]^n,  f_opt = 0
f12(x) = (π/n) {10 sin²(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)² [1 + 10 sin²(πy_{i+1})] + (y_n − 1)²} + Σ_{i=1}^{n} u(x_i, 10, 100, 4),  S = [−50, 50]^n,  f_opt = 0,
  where y_i = 1 + (x_i + 1)/4 and
  u(x_i, a, k, m) = k(x_i − a)^m if x_i > a;  0 if −a ≤ x_i ≤ a;  k(−x_i − a)^m if x_i < −a.
f13(x) = 0.1 {sin²(3πx_1) + Σ_{i=1}^{n} (x_i − 1)² [1 + sin²(3πx_{i+1})] + (x_n − 1)² [1 + sin²(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4),  S = [−50, 50]^n,  f_opt = 0

of partitions. Therefore, in the experiments, the J_new + LS segmentation algorithm is firstly applied to detect the best possible number of partitions, N_R. Once the number of partitions N_R has been obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.

Figure 21 presents the segmentation results obtained by each algorithm considering the experimental set from Figure 20. On the other hand, Table 10 shows the evaluation of the segmentation results in terms of the ROS index. Such values represent the averaged measurements after 30 executions. From them, it can be seen that the proposed J_new + LS method obtains the best ROS indexes. Such values indicate that the proposed algorithm maintains the best balance between the homogeneity within segmented regions and the heterogeneity among the different regions. From Figure 21, it can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by an incorrect pixel classification.

8. Conclusions

Despite the fact that several evolutionary methods have been successfully applied to image segmentation with interesting results, most of them have exhibited two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance.

In this paper, a new swarm algorithm for automatic image segmentation, called Locust Search (LS), has been presented. The proposed method eliminates the typical flaws presented by previous evolutionary approaches by combining a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments have been divided into two parts. First, the proposed LS method has been compared to other well-known evolutionary techniques on a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm has been compared to other segmentation methods based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

Several research directions will be considered for future work, such as the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, n is the dimension of the function, f_opt is the minimum value of the function, and S is a subset of R^n. The optimum location x_opt for the functions in Table 11 is [0]^n, except for f5, whose optimum is [1]^n.

The optimum locations x_opt for the functions in Table 12 are [0]^n, except for f8, with x_opt in [420.96]^n, and f12-f13, with x_opt in [1]^n.
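As a quick check of these definitions, three of the benchmark functions (the sphere f1 and Rosenbrock f5 from Table 11, and Rastrigin f9 from Table 12) can be written and evaluated at their stated optima as follows (a minimal sketch):

```python
import numpy as np

def f1(x):
    """Sphere: f_opt = 0 at x = [0]^n."""
    return np.sum(x**2)

def f5(x):
    """Rosenbrock: f_opt = 0 at x = [1]^n."""
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2)

def f9(x):
    """Rastrigin: f_opt = 0 at x = [0]^n."""
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

n = 30
assert f1(np.zeros(n)) == 0.0
assert f5(np.ones(n)) == 0.0
assert abs(f9(np.zeros(n))) < 1e-12
```

Unimodal functions such as f1 probe exploitation, while multimodal functions such as f9, with many local minima, probe the exploration capability of a search algorithm.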

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgment

The present research has been supported under Grant CONACYT CB 181053.

References

[1] H. Zhang, J. E. Fritts, and S. A. Goldman, "Image segmentation evaluation: a survey of unsupervised methods," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 260–280, 2008.

[2] T. Uemura, G. Koutaki, and K. Uchimura, "Image segmentation based on edge detection using boundary code," International Journal of Innovative Computing, vol. 7, pp. 6073–6083, 2011.

[3] L. Wang, H. Wu, and C. Pan, "Region-based image segmentation with local signed difference energy," Pattern Recognition Letters, vol. 34, no. 6, pp. 637–645, 2013.

[4] N. Otsu, "A thresholding selection method from gray-level histogram," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.

[5] R. Peng and P. K. Varshney, "On performance limits of image segmentation algorithms," Computer Vision and Image Understanding, vol. 132, pp. 24–38, 2015.

[6] M. A. Balafar, "Gaussian mixture model based segmentation methods for brain MRI images," Artificial Intelligence Review, vol. 41, no. 3, pp. 429–439, 2014.

[7] G. J. McLachlan and S. Rathnayake, "On the number of components in a Gaussian mixture model," Data Mining and Knowledge Discovery, vol. 4, no. 5, pp. 341–355, 2014.

[8] D. Oliva, V. Osuna-Enciso, E. Cuevas, G. Pajares, M. Pérez-Cisneros, and D. Zaldívar, "Improving segmentation velocity using an evolutionary method," Expert Systems with Applications, vol. 42, no. 14, pp. 5874–5886, 2015.

[9] Z.-W. Ye, M.-W. Wang, W. Liu, and S.-B. Chen, "Fuzzy entropy based optimal thresholding using bat algorithm," Applied Soft Computing, vol. 31, pp. 381–395, 2015.

[10] S. Sarkar, S. Das, and S. Sinha Chaudhuri, "A multilevel color image thresholding scheme based on minimum cross entropy and differential evolution," Pattern Recognition Letters, vol. 54, pp. 27–35, 2015.

[11] A. K. Bhandari, A. Kumar, and G. K. Singh, "Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur's, Otsu and Tsallis functions," Expert Systems with Applications, vol. 42, no. 3, pp. 1573–1601, 2015.

[12] H. Permuter, J. Francos, and I. Jermyn, "A study of Gaussian mixture models of color and texture features for image classification and segmentation," Pattern Recognition, vol. 39, no. 4, pp. 695–706, 2006.

[13] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1–38, 1977.

[14] Z. Zhang, C. Chen, J. Sun, and K. L. Chan, "EM algorithms for Gaussian mixtures with split-and-merge operation," Pattern Recognition, vol. 36, no. 9, pp. 1973–1983, 2003.

[15] H. Park, S.-I. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755–764, 2000.

[16] H. Park and T. Ozeki, "Singularity and slow convergence of the EM algorithm for Gaussian mixtures," Neural Processing Letters, vol. 29, no. 1, pp. 45–59, 2009.

[17] L. Gupta and T. Sortrakul, "A Gaussian-mixture-based image segmentation algorithm," Pattern Recognition, vol. 31, no. 3, pp. 315–325, 1998.

[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213–1219, 2013.

[19] E. Cuevas, F. Sención, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321–336, 2012.

[20] E. Cuevas, V. Osuna-Enciso, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.

[21] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.

[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357–381, 2014.

[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and M. Pérez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.

[24] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805–818, 2011.

[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701–713, 2009.

[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661–673, 2009.

[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.

[28] R. Storn and K. Price, "Differential Evolution - a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.

[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515–4538, 2011.

[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing Journal, vol. 9, no. 3, pp. 1149–1155, 2009.

[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119–135, 2013.

[32] W. Gong, A. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364–5386, 2011.

[33] D. M. Gordon, "The organization of work in social insect colonies," Complexity, vol. 8, no. 1, pp. 43–46, 2002.

[34] S. Kizaki and M. Katori, "Stochastic lattice model for locust outbreak," Physica A: Statistical Mechanics and Its Applications, vol. 266, no. 1–4, pp. 339–342, 1999.

[35] S. M. Rogers, D. A. Cullen, M. L. Anstey et al., "Rapid behavioural gregarization in the desert locust, Schistocerca gregaria, entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9–26, 2014.

[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal Special Topics, vol. 157, no. 1, pp. 93–109, 2008.

[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.

[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.

[39] B. Hölldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.

[40] B. Hölldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.

[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101–112, 2013.

[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196–205, 2012.

[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723–738, 2014.

[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.

[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskål)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.

[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.

[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256–272, 2014.

[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635–672, 2005.

[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.

[50] F. Herrera, M. Lozano, and A. M. Sánchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309–338, 2003.

[51] E. Cuevas, D. Zaldívar, M. Pérez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93–107, 2011.

[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.

[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140–157, 2015.

[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.

[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.

[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.

[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.

[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.

[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335–1346, 1996.

[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.


4 Mathematical Problems in Engineering

Figure 1: Behavioral model under the solitary phase (three locusts x_1, x_2, and x_3 and their pairwise social forces).

three different members (N = 3), which adopt a determined configuration in the current iteration k. As a consequence of the social forces, each element experiences an attraction or repulsion to the other elements, depending on the distance between them. Such forces are represented by s_12, s_13, s_21, s_23, s_31, and s_32. Since x_1 and x_2 are close to each other, the social forces s_12 and s_21 are repulsive. On the other hand, as the distances between x_1 and x_3 and between x_2 and x_3 are large, the social forces s_13, s_23, s_31, and s_32 (between x_1 ↔ x_3 and x_2 ↔ x_3) are attractive. Therefore, the change of position Δx_1 is computed as the vector resultant of s_12 and s_13 (Δx_1 = s_12 + s_13). The values Δx_2 and Δx_3 are calculated accordingly.

In addition to the presented model [36], some studies [43–45] suggest that the social force s_ij is also affected by the dominance of the individuals x_i and x_j involved in the pairwise process. Dominance is a property that relatively qualifies the capacity of an individual to survive in relation to other elements of a group. A locust's dominance is determined by several characteristics, such as size, chemical emissions, and location with regard to food sources. Under such circumstances, the social force is magnified or weakened depending on the most dominant individual involved in the repulsion-attraction process.

2.2. Social Phase. In this phase, locusts frantically concentrate around the elements that have already found good food sources. They attempt to efficiently find better nutrients by devastating promising areas within the plantation. In order to simulate the social phase, a food quality index Fq_i is assigned to each locust x_i of the group; such an index reflects the quality of the food source where x_i is located.

Under this behavioral model, each of the N elements of the group is ranked according to its corresponding food quality index. Afterwards, the b elements featuring the best food quality indexes are selected (b ≪ N). Considering a concentration radius R_c that is created around each selected element, a set of c new locusts is randomly generated inside R_c. As a result, most of the locusts will be concentrated around the best b elements. Figure 2 shows a simple example of the behavioral model under the social phase. In the example, the configuration includes eight locusts (N = 8), just as illustrated in Figure 2(a), which also presents the food quality index of each locust. A food quality index near one indicates a better food source. Figure 2(b) presents the final configuration after the social phase, assuming b = 2.
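As a rough sketch of this behavioral model (not of the full LS operator of Section 3.2), the ranking, the selection of the b best elements, and the generation of c new locusts inside the radius R_c can be written as follows; the function name and the uniform sampling inside R_c are illustrative assumptions, not part of the original formulation.

```python
import random

def social_phase(locusts, food_quality, b=2, c=3, radius=1.0):
    """Concentrate c new locusts around each of the b members with the
    best food quality indexes (an index near 1 means a better source)."""
    # Rank the N members by food quality and keep the best b (b << N).
    ranked = sorted(range(len(locusts)),
                    key=lambda i: food_quality[i], reverse=True)
    best = [locusts[i] for i in ranked[:b]]
    # Generate c new locusts inside the concentration radius R_c
    # of each selected element (uniform sampling is an assumption).
    new_locusts = [[xi + random.uniform(-radius, radius) for xi in x]
                   for x in best for _ in range(c)]
    return best, new_locusts

# Example with the eight locusts (N = 8) and the food quality indexes
# of Figure 2(a); b = 2 selects the members with Fq = 1.0 and Fq = 0.9.
random.seed(0)
swarm = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(8)]
fq = [0.2, 0.3, 0.1, 0.9, 0.2, 1.0, 0.3, 0.4]
best, new = social_phase(swarm, fq, b=2, c=3, radius=1.0)
```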

3. The Locust Search (LS) Algorithm

In this paper, some behavioral principles drawn from a swarm of locusts have been used as guidelines for developing a new swarm optimization algorithm. LS assumes that the entire search space is a plantation, where all the locusts interact with each other. In the proposed approach, each solution within the search space represents a locust position inside the plantation. Every locust receives a food quality index according to the fitness value of the solution that is symbolized by the locust's position. As previously discussed, the algorithm implements two different behavioral schemes: solitary and social. Depending on the behavioral phase, each individual is conducted by a set of evolutionary operators which mimic the different cooperative operations typically found in the swarm. The proposed method formulates the optimization problem in the following form:

maximize/minimize  f(l),   l = (l_1, ..., l_n) ∈ R^n,
subject to  l ∈ S,   (6)

where f: R^n → R is a nonlinear function, whereas S = {l ∈ R^n | lb_d ≤ l_d ≤ ub_d, d = 1, ..., n} is a bounded feasible search space, constrained by the lower (lb_d) and upper (ub_d) limits.

In order to solve the problem formulated in (6), a population L^k ({l_1^k, l_2^k, ..., l_N^k}) of N locusts (individuals) is evolved inside the LS operation from the initial point (k = 0) to a total gen number of iterations (k = gen). Each locust l_i^k (i ∈ [1, ..., N]) represents an n-dimensional vector {l_{i,1}^k, l_{i,2}^k, ..., l_{i,n}^k}, where each dimension corresponds to a decision variable of the optimization problem to be solved. The set of decision variables constitutes the feasible search space S = {l_i^k ∈ R^n | lb_d ≤ l_{i,d}^k ≤ ub_d}, where lb_d and ub_d correspond to the lower and upper bounds for dimension d, respectively. The food quality index associated with each locust l_i^k (candidate solution) is evaluated through an objective function f(l_i^k), whose final result represents the fitness value of l_i^k. In the LS algorithm, each iteration of the evolution process consists of two operators: (A) solitary and (B) social. Beginning with the solitary stage, the set of locusts is operated on in order to sufficiently explore the search space. Then, the social operation refines existent solutions within a determined neighborhood (exploitation) subspace.

3.1. Solitary Operation (A). One of the most interesting features of the proposed method is the use of the solitary operator to modify the current locust positions. Under this approach, locusts are displaced as a consequence of the social forces produced by the positional relations among the elements of the swarm. Therefore, near individuals tend to repel each other, avoiding the concentration of elements in regions, whereas distant individuals tend to attract each other, maintaining the cohesion of the swarm. A clear difference from the original model in [20] is that social forces are also magnified or weakened depending on the most dominant (best fitness) individuals involved in the repulsion-attraction process.


Figure 2: Behavioral model under the social phase. (a) Initial configuration and food quality indexes (Fq_1 = 0.2, Fq_2 = 0.3, Fq_3 = 0.1, Fq_4 = 0.9, Fq_5 = 0.2, Fq_6 = 1.0, Fq_7 = 0.3, Fq_8 = 0.4); (b) final configuration after the operation of the social phase.

In the solitary phase, a new position p_i (i ∈ [1, ..., N]) is produced by perturbing the current locust position l_i^k with a change of position Δl_i (p_i = l_i^k + Δl_i). The change of position Δl_i is the result of the social interactions experienced by l_i^k as a consequence of its repulsion-attraction behavioral model. Such social interactions are pairwise computed between l_i^k and each of the other N − 1 individuals in the swarm. In the original model, social forces are calculated by using (3). However, in the proposed method, the model is modified to include the best fitness value (the most dominant) of the individuals involved in the repulsion-attraction process. Therefore, the social force exerted between l_j^k and l_i^k is calculated by using the following new model:

s_ij^m = ρ(l_i^k, l_j^k) · s(r_ij) · d_ij + rand(1, −1),   (7)

where s(r_ij) is the social force strength defined in (2) and d_ij = (l_j^k − l_i^k)/r_ij is the unit vector pointing from l_i^k to l_j^k. Besides, rand(1, −1) is a randomly generated number between 1 and −1. Likewise, ρ(l_i^k, l_j^k) is the dominance function that returns the dominance value of the most dominant individual between l_j^k and l_i^k. In order to operate ρ(l_i^k, l_j^k), all the individuals from L^k ({l_1^k, l_2^k, ..., l_N^k}) are ranked according to their fitness values. The ranks are assigned so that the best individual receives the rank 0 (zero), whereas the worst individual obtains the rank N − 1. Therefore, the function ρ(l_i^k, l_j^k) is defined as follows:

ρ(l_i^k, l_j^k) = { e^(−(5·rank(l_i^k)/N))   if rank(l_i^k) < rank(l_j^k),
                  { e^(−(5·rank(l_j^k)/N))   if rank(l_i^k) > rank(l_j^k).   (8)

Figure 3: Behavior of ρ(l_i^k, l_j^k) considering 100 individuals.

where the function rank(α) delivers the rank of the α-individual. According to (8), ρ(l_i^k, l_j^k) yields a value within the interval (0, 1]. Its maximum value of one is reached when either individual l_j^k or l_i^k is the best element of the population L^k regarding their fitness values. On the other hand, a value close to zero is obtained when both individuals l_j^k and l_i^k possess quite bad fitness values. Figure 3 shows the behavior of ρ(l_i^k, l_j^k) considering 100 individuals. In the figure, it is assumed that l_i^k represents one of the 99 individuals with ranks between 0 and 98, whereas l_j^k is fixed to the element with the worst fitness value (rank 99).

Under the incorporation of ρ(l_i^k, l_j^k) in (7), social forces are magnified or weakened depending on the best fitness value (the most dominant) of the individuals involved in the repulsion-attraction process.
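A direct transcription of (8) is straightforward; the sketch below reproduces the curve of Figure 3 by fixing one individual at the worst rank (99) and sweeping the other over the remaining ranks.

```python
import math

def dominance(rank_i, rank_j, N):
    """Dominance function rho of (8): the exponent uses the rank of
    the better-ranked (most dominant) of the two individuals."""
    return math.exp(-5.0 * min(rank_i, rank_j) / N)

# Figure 3 setting: N = 100, l_j fixed at the worst rank (99),
# l_i sweeping the ranks 0..98.
N = 100
curve = [dominance(r, 99, N) for r in range(99)]
```

The curve starts at exactly one for the best-ranked individual and decays exponentially toward zero as both ranks worsen.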


Figure 4: Examples of different distributions. (a) Initial condition; distribution after applying (b) 25, (c) 50, and (d) 100 solitary operations. The green asterisk represents the minimum value found so far.

Finally, the total social force on each individual l_i^k is modeled as the superposition of all of the pairwise interactions exerted over it:

S_i^m = Σ_{j=1, j≠i}^{N} s_ij^m.   (9)

Therefore, the change of position Δl_i is given by the total social force experienced by l_i^k, that is, the superposition of all the pairwise interactions:

Δl_i = S_i^m.   (10)

After calculating the new positions P ({p_1, p_2, ..., p_N}) of the population L^k ({l_1^k, l_2^k, ..., l_N^k}), the final positions F ({f_1, f_2, ..., f_N}) must be computed. The idea is to admit only the changes that guarantee an improvement in the search strategy. If the fitness value of p_i (f(p_i)) is better than that of l_i^k (f(l_i^k)), then p_i is accepted as the final solution; otherwise, l_i^k is retained. This procedure can be summarized by the following statement (considering a minimization problem):

f_i = { p_i     if f(p_i) < f(l_i^k),
      { l_i^k   otherwise.   (11)

In order to illustrate the performance of the solitary operator, Figure 4 presents a simple example in which the solitary operator is iteratively applied. A population of 50 different members (N = 50) is assumed, which adopts a concentrated configuration as the initial condition (Figure 4(a)). As a consequence of the social forces, the set of elements tends to distribute itself throughout the search space. Examples of the resulting distributions are shown in Figures 4(b), 4(c), and 4(d) after applying 25, 50, and 100 solitary operations, respectively.
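The solitary operator of (7)–(11) can be sketched as below for a minimization problem. Two caveats: the social force strength s(r) of (2) is not reproduced in this section, so the classical form s(r) = F·e^(−r/L) − e^(−r) of the locust model is assumed here, and the random term rand(1, −1) of (7) is applied per dimension, which is one possible reading of the formula.

```python
import math
import random

def s_strength(r, F=0.6, L=1.0):
    # Assumed social force strength of (2): s(r) = F*exp(-r/L) - exp(-r).
    return F * math.exp(-r / L) - math.exp(-r)

def solitary_operation(swarm, fitness, F=0.6, L=1.0):
    """One solitary step: a change of position Delta-l is accumulated for
    every locust from the dominance-weighted pairwise forces of (7)-(10);
    a move is kept only if it improves the fitness, as in (11)."""
    N, n = len(swarm), len(swarm[0])
    fit = [fitness(x) for x in swarm]
    order = sorted(range(N), key=lambda i: fit[i])   # rank 0 = best fitness
    rank = {i: r for r, i in enumerate(order)}
    final = []
    for i in range(N):
        delta = [0.0] * n
        for j in range(N):
            if j == i:
                continue
            diff = [swarm[j][d] - swarm[i][d] for d in range(n)]
            r_ij = math.sqrt(sum(c * c for c in diff)) or 1e-12
            d_ij = [c / r_ij for c in diff]                   # unit vector l_i -> l_j
            rho = math.exp(-5.0 * min(rank[i], rank[j]) / N)  # dominance, (8)
            s = rho * s_strength(r_ij, F, L)                  # s_ij^m of (7)
            for d in range(n):
                delta[d] += s * d_ij[d] + random.uniform(-1.0, 1.0)
        p = [swarm[i][d] + delta[d] for d in range(n)]        # p_i = l_i^k + Delta-l_i
        final.append(p if fitness(p) < fit[i] else swarm[i])  # greedy test, (11)
    return final

# Usage: five solitary steps on the 2-D sphere function.
random.seed(1)
sphere = lambda x: sum(c * c for c in x)
swarm = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(10)]
for _ in range(5):
    swarm = solitary_operation(swarm, sphere)
```

Because of the greedy acceptance of (11), the best fitness value in the population can never worsen from one solitary step to the next.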

3.2. Social Operation (B). The social procedure represents the exploitation phase of the LS algorithm. Exploitation is the process of refining existent individuals within a small neighborhood in order to improve their solution quality.

The social procedure is a selective operation which is applied only to a subset E of the final positions F (where E ⊆ F). The operation starts by sorting F with respect to fitness values and storing the elements in a temporary population B = {b_1, b_2, ..., b_N}. The elements in B are sorted so that the best individual receives the position b_1, whereas the worst individual obtains the location b_N. Therefore, the subset E is integrated by only the first g locations of B (promising solutions). Under this operation, a subspace C_j is created around each selected particle f_j ∈ E. The size of C_j depends on the distance e_d, which is defined as follows:

e_d = ( Σ_{q=1}^{n} (ub_q − lb_q) / n ) · β,   (12)

where ub_q and lb_q are the upper and lower bounds in the qth dimension, n is the number of dimensions of the optimization problem, and β ∈ [0, 1] is a tuning factor. Therefore, the limits of C_j can be modeled as follows:

uss_j^q = b_{j,q} + e_d,
lss_j^q = b_{j,q} − e_d,   (13)

where uss_j^q and lss_j^q are the upper and lower bounds of the qth dimension for the subspace C_j, respectively.

Considering the subspace C_j around each element f_j ∈ E, a set of h new particles (M_j^h = {m_j^1, m_j^2, ..., m_j^h}) is randomly generated inside the bounds fixed by (13). Once the h samples are generated, the individual l_j^{k+1} of the next population L^{k+1} must be created. In order to calculate l_j^{k+1}, the best particle m_j^best, in terms of its fitness value among the h samples (where m_j^best ∈ [m_j^1, m_j^2, ..., m_j^h]), is compared to f_j. If m_j^best is better than f_j according to their fitness values, l_j^{k+1} is updated with m_j^best; otherwise, f_j is selected. The elements of F that have not been processed by the procedure (f_w ∉ E) transfer their corresponding values to L^{k+1} with no change.

The social operation is used to exploit only prominent solutions. According to the proposed method, inside each subspace C_j, h random samples are selected. Since the number of selected samples in each subspace is very small (typically h < 4), the use of this operator substantially reduces the number of fitness function evaluations.
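A minimal sketch of this operator ((12)–(13) plus the h-sample comparison), assuming minimization; the function and variable names are illustrative.

```python
import random

def social_operation(final_pos, fitness, bounds, g=3, h=2, beta=0.6):
    """Refine the g best final positions by sampling h random points
    inside a subspace C_j of half-width e_d around each of them."""
    n = len(bounds)
    # e_d of (12): average search-range width scaled by beta.
    e_d = sum(ub - lb for lb, ub in bounds) / n * beta
    # Sort F by fitness; the first g members form the subset E.
    order = sorted(range(len(final_pos)), key=lambda i: fitness(final_pos[i]))
    next_pop = [list(x) for x in final_pos]   # unprocessed members pass unchanged
    for idx in order[:g]:
        f_j = final_pos[idx]
        # h random samples inside the bounds (13) of the subspace C_j.
        samples = [[random.uniform(f_j[q] - e_d, f_j[q] + e_d) for q in range(n)]
                   for _ in range(h)]
        m_best = min(samples, key=fitness)
        if fitness(m_best) < fitness(f_j):    # replace only on improvement
            next_pop[idx] = m_best
    return next_pop

# Usage on the 2-D sphere function with N = 6, g = 3, h = 2.
random.seed(3)
sphere = lambda x: sum(c * c for c in x)
F_pop = [[random.uniform(-3, 3), random.uniform(-3, 3)] for _ in range(6)]
next_pop = social_operation(F_pop, sphere, [(-3.0, 3.0), (-3.0, 3.0)])
```

Only g members can change per call, and each change is accepted greedily, so the operator costs at most g·h extra fitness evaluations per iteration.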

In order to demonstrate the social operation, a numerical example has been set by applying the proposed process to a simple function. Such a function considers the interval −3 ≤ d_1, d_2 ≤ 3, whereas the function possesses one global maximum of value 8.1 at (0, 1.6). Notice that d_1 and d_2 correspond to the axis coordinates (commonly x and y). For this example, a final position population F of six 2-dimensional members (N = 6) is assumed. Figure 5 shows the initial configuration of the proposed example, with the black points representing half of the particles with the best fitness values (the first three elements of B, g = 3), whereas the grey points (f_2, f_4, f_6 ∉ E) correspond to the remaining individuals.

Figure 5: Operation of the social procedure.

From Figure 5, it can be seen that the social procedure is applied to all black particles (f_5 = b_1, f_3 = b_2, and f_1 = b_3; f_5, f_3, f_1 ∈ E), yielding two new random particles (h = 2), characterized by the white points m_1^1, m_1^2, m_3^1, m_3^2, m_5^1, and m_5^2, generated inside the corresponding subspaces C_1, C_3, and C_5. Considering the particle f_3 in Figure 5, the particle m_3^2 corresponds to the best particle (m_3^best) of the two randomly generated particles within C_3, according to their fitness values. Therefore, the particle m_3^best will substitute f_3 in the individual l_3^{k+1} for the next generation, since it holds a better fitness value than f_3 (f(f_3) < f(m_3^best)).

The LS optimization procedure is defined over a bounded search space S. Search points that do not belong to such an area are considered infeasible. However, during the evolution process, some candidate solutions could fall outside the search space. In the proposed approach, such infeasible solutions are relocated to a random position inside the search space S.

3.3. Complete LS Algorithm. LS is a simple algorithm with only seven adjustable parameters: the strength of attraction F, the attractive length L, the number of promising solutions g, the population size N, the tuning factor β, the number of random samples h, and the number of generations gen. The operation of LS is divided into three parts: initialization and the iterated solitary and social operations. In the initialization (k = 0), the first population L^0 ({l_1^0, l_2^0, ..., l_N^0}) is produced. The values {l_{i,1}^0, l_{i,2}^0, ..., l_{i,n}^0} of each individual l_i^0 are randomly and uniformly distributed between the prespecified lower initial parameter bound lb_d and the upper initial parameter bound ub_d:

l_{i,d}^0 = lb_d + rand · (ub_d − lb_d),
i = 1, 2, ..., N;  d = 1, 2, ..., n.   (14)

In the evolution process, the solitary (A) and social (B) operations are iteratively applied until the number of iterations k = gen has been reached. The complete LS procedure is illustrated in Algorithm 1.


(1) Input: F, L, g, N, gen, h, and β
(2) Initialize L^0 (k = 0)
(3) until (k = gen)
(4)     F ← SolitaryOperation(L^k)        ▷ Solitary operator (Section 3.1)
(5)     L^{k+1} ← SocialOperation(L^k, F)  ▷ Social operator (Section 3.2)
(6)     k = k + 1
(7) end until

Algorithm 1: Locust Search (LS) algorithm.
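The control flow of Algorithm 1, together with the initialization of (14), maps to the skeleton below; solitary_operation and social_operation stand for the operators of Sections 3.1 and 3.2 and are left as injectable callables rather than reimplemented here, so the skeleton is a sketch of the loop structure only.

```python
import random

def initialize(N, n, lb, ub):
    """Equation (14): l0_{i,d} = lb_d + rand * (ub_d - lb_d)."""
    return [[lb[d] + random.random() * (ub[d] - lb[d]) for d in range(n)]
            for _ in range(N)]

def locust_search(fitness, lb, ub, N=40, gen=1000,
                  solitary_operation=None, social_operation=None):
    """Steps (1)-(7) of Algorithm 1; when the operator callables are
    omitted, the loop degenerates to returning the best initial locust."""
    L = initialize(N, len(lb), lb, ub)
    for k in range(gen):
        F = solitary_operation(L) if solitary_operation else L   # Section 3.1
        L = social_operation(L, F) if social_operation else F    # Section 3.2
    return min(L, key=fitness)

# Usage with the default (identity) operators on a 2-D sphere function.
random.seed(5)
best = locust_search(lambda x: sum(c * c for c in x),
                     [-5.0, -5.0], [5.0, 5.0], N=10, gen=3)
```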

3.4. Discussion of the LS Algorithm. Evolutionary algorithms (EAs) have been widely employed for solving complex optimization problems. These methods are found to be more powerful than conventional methods based on formal logic or mathematical programming [46]. In an EA, search agents have to decide whether to explore unknown search positions or to exploit already tested positions in order to improve their solution quality. Pure exploration degrades the precision of the evolutionary process but increases its capacity to find potentially new solutions. On the other hand, pure exploitation allows refining existent solutions but adversely drives the process toward local optimal solutions. Therefore, the ability of an EA to find a global optimal solution depends on its capacity to find a good balance between the exploitation of found-so-far elements and the exploration of the search space [47].

Most swarm algorithms and other evolutionary algorithms tend to exclusively concentrate their individuals in the current best positions. Under such circumstances, these algorithms seriously limit their exploration-exploitation capacities.

In contrast to most existing evolutionary algorithms, the behavior modeled in the proposed approach explicitly avoids the concentration of individuals in the current best positions. This not only emulates the cooperative behavior of the locust colony in a realistic way but also incorporates a computational mechanism that avoids critical flaws commonly present in popular evolutionary algorithms, such as premature convergence and an incorrect exploration-exploitation balance.

It is important to emphasize that the proposed approach conducts two operators (solitary and social) within a single iteration. Such operators are similar to those used by other evolutionary methods, such as ABC (employed bees, onlooker bees, and scout bees), AIS (clonal proliferation operator, affinity maturation operator, and clonal selection operator), and DE (mutation, crossover, and selection), which are all executed in a single iteration.

4. Numerical Experiments over Benchmark Functions

A comprehensive set of 13 functions, collected from [48–50], has been used to test the performance of the LS approach as an optimization method. Tables 11 and 12 present the benchmark functions used in our experimental study. Such functions are classified into two different categories: unimodal test functions (Table 11) and multimodal test functions (Table 12). In these tables, n is the function dimension and f_opt is the minimum value of the function, with S being a subset of R^n. The optimum locations (x_opt) for the functions in Tables 11 and 12 are in [0]^n, except for f_5, f_12, and f_13, with x_opt in [1]^n, and f_8, with x_opt in [420.96]^n. A detailed description of the optimum locations is given in Tables 11 and 12.

We have applied the LS algorithm to the 13 functions, and its results have been compared to those produced by the Particle Swarm Optimization (PSO) method [27] and the Differential Evolution (DE) algorithm [28], both considered among the most popular algorithms in many optimization applications. In all comparisons, the population has been set to 40 (N = 40) individuals. The maximum iteration number for all functions has been set to 1000. Such a stop criterion has been selected to maintain compatibility with similar works reported in the literature [48, 49].

The parameter setting for each of the algorithms in the comparison is described as follows:

(1) PSO: in the algorithm, c_1 = c_2 = 2, while the inertia factor (ω) is decreased linearly from 0.9 to 0.2.

(2) DE: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [28, 51]. The crossover probability is CR = 0.9 and the weighting factor is F = 0.8.

(3) LS: F and L are set to 0.6 and 1.0, respectively. Besides, g is fixed to 20 (N/2), h = 2, and β = 0.6, whereas gen and N are configured to 1000 and 40, respectively. Once such parameters have been experimentally determined, they are kept for all the experiments in this section.

In the comparison, three indexes are considered: the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). The first two indexes assess the accuracy of the solution, whereas the last one measures the computational cost. The average best-so-far solution (ABS) expresses the average value of the best function evaluations that have been obtained from 30 independent executions. The standard deviation (SD) indicates the dispersion of the ABS values. Evolutionary methods are, in general, complex pieces of software with several operators and stochastic branches. Therefore, it is difficult to conduct a complexity analysis from a deterministic perspective. Under such circumstances, it is more appropriate


Table 1: Minimization results from the benchmark functions test in Table 11 with n = 30. Maximum number of iterations = 1000.

          PSO          DE           LS
f1  ABS   1.66×10⁻¹    6.27×10⁻³    4.55×10⁻⁴
    SD    3.79×10⁻¹    1.68×10⁻¹    6.98×10⁻⁴
    NFE   28,610       20,534       16,780
f2  ABS   4.83×10⁻¹    2.02×10⁻¹    5.41×10⁻³
    SD    1.59×10⁻¹    0.66         1.45×10⁻²
    NFE   28,745       21,112       16,324
f3  ABS   2.75         5.72×10⁻¹    1.61×10⁻³
    SD    1.01         0.15         1.32×10⁻³
    NFE   38,320       36,894       20,462
f4  ABS   1.84         0.11         1.05×10⁻²
    SD    0.87         0.05         6.63×10⁻³
    NFE   37,028       36,450       21,158
f5  ABS   3.07         2.39         4.11×10⁻²
    SD    0.42         0.36         2.74×10⁻³
    NFE   39,432       37,264       21,678
f6  ABS   6.36         6.51         5.88×10⁻²
    SD    0.74         0.87         1.67×10⁻²
    NFE   38,490       36,564       22,238
f7  ABS   6.14         0.12         2.71×10⁻²
    SD    0.73         0.02         1.18×10⁻²
    NFE   37,274       35,486       21,842

to use the number of function evaluations (NFE), just as it is used in the literature [52, 53], to evaluate and assess the computational effort (time) and the complexity among optimizers. It represents how many times an algorithm evaluates the objective (fitness) function until the best solution of a determined execution has been found. Since the experiments require 30 different executions, the NFE index corresponds to the value averaged over these executions. A small NFE value indicates that less time is needed to reach the global optimum.

4.1. Unimodal Test Functions. Functions f_1 to f_7 are unimodal functions. The results for the unimodal functions over 30 runs are reported in Table 1, considering the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). According to this table, LS provides better results than PSO and DE for all functions in terms of ABS and SD. In particular, the test yields the largest performance differences for functions f_4–f_7. Such functions maintain a narrow curving valley that is hard to optimize if the search space cannot be explored properly and the direction changes cannot be kept up with [54]. For this reason, the performance differences are directly related to the better trade-off between exploration and exploitation produced by the LS operators. In practice, a main goal of an optimization algorithm is to find a solution as good as possible within a small time window. The computational cost of an optimizer is represented by its NFE values. According to Table 1, the NFE values obtained by the proposed method are smaller than its

Table 2: p values produced by Wilcoxon's test comparing LS versus PSO and LS versus DE over the "average best-so-far" values from Table 1.

      LS versus PSO   LS versus DE
f1    1.83×10⁻⁴       1.73×10⁻²
f2    3.85×10⁻³       1.83×10⁻⁴
f3    1.73×10⁻⁴       6.23×10⁻³
f4    2.57×10⁻⁴       5.21×10⁻³
f5    4.73×10⁻⁴       1.83×10⁻³
f6    6.39×10⁻⁵       2.15×10⁻³
f7    1.83×10⁻⁴       2.21×10⁻³

counterparts. Lower NFE values are more desirable, since they correspond to less computational overhead and therefore faster results. From the results perspective, it is clear that PSO and DE need more than 1000 generations in order to produce better solutions. However, this number of generations is considered in the experiments aiming to produce a visible contrast among the approaches. If the number of generations were set to an exaggerated value, then all methods would converge to the best solution with no significant troubles.

A nonparametric statistical significance test known as Wilcoxon's rank sum test for independent samples [55, 56] has been conducted at a 5% significance level over the "average best-so-far" data of Table 1. Table 2 reports the p values produced by Wilcoxon's test for the pairwise comparison of the "average best-so-far" values of two groups. Such groups are formed by LS versus PSO and LS versus DE. As the null hypothesis, it is assumed that there is no significant difference between the mean values of the two algorithms. The alternative hypothesis considers a significant difference between the "average best-so-far" values of both approaches. All p values reported in the table are less than 0.05 (5% significance level), which is strong evidence against the null hypothesis, indicating that the LS results are statistically significant and have not occurred by coincidence (i.e., due to the normal noise contained in the process).
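The rank-sum comparison behind Tables 2 and 4 can be reproduced with any statistics package; as a self-contained sketch, the normal-approximation version below is adequate for samples of about 30 runs per group (it omits the tie correction in the variance).

```python
import math

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum test (normal approximation,
    average ranks for ties, no tie correction in the variance)."""
    pooled = sorted((v, g) for g, sample in enumerate((x, y)) for v in sample)
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):                 # assign average ranks to ties
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + 1 + j) / 2.0
        i = j
    W = sum(r for r, (v, g) in zip(ranks, pooled) if g == 0)
    n1, n2 = len(x), len(y)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p value

# Two synthetic 30-run samples: clearly separated "ABS" values
# give a p value far below the 0.05 significance level.
ls_runs = [4.55e-4 + 1.0e-5 * i for i in range(30)]
pso_runs = [0.166 + 1.0e-3 * i for i in range(30)]
p = rank_sum_p(ls_runs, pso_runs)
```

The run data above are synthetic placeholders, not the actual 30-run samples behind Table 1.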

4.2. Multimodal Test Functions. Multimodal functions possess many local minima, which makes optimization a difficult task to accomplish. In multimodal functions, the results reflect the algorithm's ability to escape from local optima. We have applied the algorithms to functions f_8 to f_13, where the number of local minima increases exponentially as the dimension of the function increases. The dimension of such functions is set to 30. The results are averaged over 30 runs, with the performance indexes reported in Table 3 as follows: the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). Likewise, p values of the Wilcoxon signed-rank test over 30 independent runs are listed in Table 4. From the results, it is clear that LS yields better solutions than the other algorithms for functions f_9, f_10, f_11, and f_12 in terms of the indexes ABS and SD. However, for functions f_8 and f_13, LS produces results similar to DE. The Wilcoxon rank test results presented in Table 4 confirm that


Table 3: Minimization results from the benchmark functions test in Table 12 with n = 30. Maximum number of iterations = 1000.

          PSO          DE           LS
f8  ABS   −6.7×10³     −1.26×10⁴    −1.26×10⁴
    SD    6.3×10²      3.7×10²      1.1×10²
    NFE   38,452       35,240       21,846
f9  ABS   1.48         4.01×10⁻¹    2.49×10⁻³
    SD    1.39         5.1×10⁻²     4.8×10⁻⁴
    NFE   37,672       34,576       20,784
f10 ABS   1.47         4.66×10⁻²    2.15×10⁻³
    SD    1.44         1.27×10⁻²    3.18×10⁻⁴
    NFE   39,475       37,080       21,235
f11 ABS   12.01        1.15         1.47×10⁻⁴
    SD    3.12         0.06         1.48×10⁻⁵
    NFE   38,542       34,875       22,126
f12 ABS   6.87×10⁻¹    3.74×10⁻¹    5.58×10⁻³
    SD    7.07×10⁻¹    1.55×10⁻¹    4.18×10⁻⁴
    NFE   35,248       30,540       16,984
f13 ABS   1.87×10⁻¹    1.81×10⁻²    1.78×10⁻²
    SD    5.74×10⁻¹    1.66×10⁻²    1.64×10⁻³
    NFE   36,022       31,968       18,802

Table 4: p values produced by Wilcoxon's test comparing LS versus PSO and LS versus DE over the "average best-so-far" values from Table 3.

      LS versus PSO   LS versus DE
f8    1.83×10⁻⁴       0.061
f9    1.17×10⁻⁴       2.41×10⁻⁴
f10   1.43×10⁻⁴       3.12×10⁻³
f11   6.25×10⁻⁴       1.14×10⁻³
f12   2.34×10⁻⁵       7.15×10⁻⁴
f13   4.73×10⁻⁴       0.071

LS performed better than PSO and DE for four problems (f_9–f_12), whereas, from a statistical viewpoint, there is no difference between the results of LS and DE for f_8 and f_13. According to Table 3, the NFE values obtained by the proposed method are smaller than those produced by the other optimizers. The reason for this remarkable performance is associated with its two operators: (i) the solitary operator allows a better particle distribution in the search space, increasing the algorithm's ability to find the global optima, and (ii) the social operation provides a simple exploitation operator that intensifies the capacity of finding better solutions during the evolution process.

5. Gaussian Mixture Modelling

In this section, the modeling of image histograms through Gaussian mixture models is presented. Consider an image holding L gray levels [0, ..., L − 1] whose distribution is defined by a histogram h(g), represented by the following formulation:

$$h(g) = \frac{n_g}{Np}, \quad h(g) > 0, \qquad Np = \sum_{g=0}^{L-1} n_g, \qquad \sum_{g=0}^{L-1} h(g) = 1, \quad (15)$$

where $n_g$ denotes the number of pixels with gray level g and Np the total number of pixels in the image. Under such circumstances, h(g) can be modeled by using a mixture p(x) of Gaussian functions of the form

$$p(x) = \sum_{i=1}^{K} \frac{P_i}{\sqrt{2\pi}\,\sigma_i} \exp\left[\frac{-(x-\mu_i)^2}{2\sigma_i^2}\right], \quad (16)$$

where K symbolizes the number of Gaussian functions of the model, whereas $P_i$ is the a priori probability of function i; $\mu_i$ and $\sigma_i$ represent the mean and standard deviation of the i-th Gaussian function, respectively. Furthermore, the constraint $\sum_{i=1}^{K} P_i = 1$ must be satisfied by the model. In order to evaluate the similarity between the image histogram and a candidate mixture model, the mean square error can be used as follows:

$$J = \frac{1}{n}\sum_{j=1}^{L}\left[p(x_j) - h(x_j)\right]^2 + \omega \cdot \left|\left(\sum_{i=1}^{K} P_i\right) - 1\right|, \quad (17)$$

where $\omega$ represents the penalty associated with the constraint $\sum_{i=1}^{K} P_i = 1$. Therefore, J is considered as the objective function which must be minimized in the estimation problem. In order to illustrate histogram modeling through a Gaussian mixture, Figure 6 presents an example assuming three classes, that is, K = 3. Considering Figure 6(a) as the image histogram h(x), the Gaussian mixture p(x) shown in Figure 6(c) is produced by adding the Gaussian functions p1(x), p2(x), and p3(x) in the configuration presented in Figure 6(b). Once the model parameters that best fit the image histogram have been determined, the final step is to define the threshold values $T_i$ (i ∈ [1, ..., K]), which can be calculated by simple standard methods, just as presented in [19–21].
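The histogram model and mixture objective of (15)–(17) can be sketched in a few lines of Python. This is a minimal illustration; the penalty weight ω = 0.001 is an arbitrary choice here, since the text does not fix its value:

```python
import numpy as np

def normalized_histogram(image):
    """h(g) = n_g / Np for an 8-bit image, Eq. (15)."""
    counts = np.bincount(image.ravel(), minlength=256)
    return counts / counts.sum()

def gaussian_mixture(x, P, mu, sigma):
    """Mixture p(x) of K weighted Gaussian functions, Eq. (16)."""
    x = np.asarray(x, dtype=float)[:, None]              # shape (L, 1)
    comp = P / (np.sqrt(2 * np.pi) * sigma) * np.exp(
        -(x - mu) ** 2 / (2 * sigma ** 2))               # shape (L, K)
    return comp.sum(axis=1)

def objective_J(h, P, mu, sigma, omega=0.001):
    """Mean-square error between mixture and histogram plus the
    penalty enforcing sum(P) = 1, Eq. (17)."""
    x = np.arange(h.size)
    p = gaussian_mixture(x, P, mu, sigma)
    return np.mean((p - h) ** 2) + omega * abs(P.sum() - 1.0)
```

A candidate mixture whose parameters reproduce the histogram exactly yields J ≈ 0; any parametric deviation increases the mean-square term.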

6. Segmentation Algorithm Based on LS

In the proposed method, the segmentation process is approached as an optimization problem. Computational optimization generally involves two distinct elements: (1) a search strategy to produce candidate solutions (individuals, particles, insects, locusts, etc.) and (2) an objective function that evaluates the quality of each candidate solution. Several computational algorithms are available to perform the first element. The second element, where the objective function is designed, is unquestionably the most critical. The objective function is expected to embody the full

Mathematical Problems in Engineering 11


Figure 6: Histogram approximation through a Gaussian mixture. (a) Original histogram, (b) configuration of the Gaussian components p1(x), p2(x), and p3(x), and (c) final Gaussian mixture p(x).

complexity of the performance, biases, and restrictions of the problem to be solved. In the segmentation problem, candidate solutions represent Gaussian mixtures. The objective function J (17) is used as a fitness value to evaluate the similarity between the Gaussian mixture and the image histogram. Therefore, guided by the fitness values (J values), a set of encoded candidate solutions is evolved using the evolutionary operators until the best possible resemblance is found.

Over the last decade, several algorithms based on evolutionary and swarm principles [19–22] have been proposed to solve the problem of segmentation through a Gaussian mixture model. Although these algorithms achieve acceptable performance indexes, they present two important limitations: (1) they frequently obtain suboptimal approximations as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) they are based on the assumption that the number of Gaussians (classes) in the mixture is known and fixed in advance; otherwise, they cannot work.

In order to eliminate such flaws, the proposed approach includes (A) a new search strategy and (B) the definition of a new objective function. For the search strategy, the LS method (Section 4) is adopted. Under LS, the concentration of individuals in the current best positions is explicitly avoided. This reduces critical problems such as premature convergence to suboptimal solutions and an incorrect exploration-exploitation balance.

6.1. New Objective Function J_new. Previous segmentation algorithms based on evolutionary and swarm methods use (17) as the objective function. Under these circumstances, each candidate solution (Gaussian mixture) is only evaluated in terms of its approximation to the image histogram.

Since the proposed approach aims to automatically select the number of Gaussian functions K in the final mixture p(x), the objective function must be modified. The new objective function J_new is defined as follows:

$$J^{\mathrm{new}} = J + \lambda \cdot Q, \quad (18)$$

where $\lambda$ is a scaling constant. The new objective function is divided into two parts. The first part, J, evaluates the quality of each candidate solution in terms of its similarity with regard to the image histogram (17). The second part, Q, penalizes the overlapped area among Gaussian functions (classes), with Q being defined as follows:

$$Q = \sum_{i=1}^{K} \sum_{\substack{j=1 \\ j \neq i}}^{K} \sum_{l=1}^{L} \min\left(P_i \cdot p_i(l),\; P_j \cdot p_j(l)\right), \quad (19)$$

where K and L represent the number of classes and the number of gray levels, respectively. The terms $p_i(l)$ and $p_j(l)$ symbolize the Gaussian functions i and j, respectively, evaluated at the point l (gray level), whereas the elements $P_i$ and $P_j$ represent the a priori probabilities of the Gaussian functions i and j, respectively. Under such circumstances, mixtures with Gaussian functions that do not "positively" participate in the histogram approximation are severely penalized.
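The overlap penalty of (19) can be evaluated directly over the L gray levels. Note that, as written, the double sum visits every pair (i, j) twice; the sketch below reproduces that convention:

```python
import numpy as np

def overlap_penalty(P, mu, sigma, L=256):
    """Pairwise overlapped area among weighted Gaussian classes, Eq. (19)."""
    l = np.arange(L, dtype=float)[:, None]                 # gray levels, (L, 1)
    g = np.exp(-(l - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    w = P * g                                              # (L, K): P_i * p_i(l)
    K = w.shape[1]
    Q = 0.0
    for i in range(K):
        for j in range(K):
            if j != i:                                     # both (i, j) and (j, i)
                Q += np.minimum(w[:, i], w[:, j]).sum()
    return Q
```

Well-separated classes contribute almost nothing to Q, while redundant, co-located classes are penalized by roughly twice their shared mass.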

Figure 7 illustrates the effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). From the image histogram (Figure 7(a)), it is evident that two Gaussian functions are enough to accurately


Figure 7: Effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). (a) Original histogram, (b) Gaussian mixture considering four classes, (c) penalization areas, and (d) Gaussian mixture of better quality solution.

approximate the original histogram. However, if the Gaussian mixture is modeled by using a greater number of functions (e.g., four, as shown in Figure 7(b)), the original objective function J is unable to obtain a reasonable approximation. Under the new objective function J_new, the overlapped area among Gaussian functions (classes) is penalized. Such areas in Figure 7(c) correspond to $Q_{12}$, $Q_{23}$, and $Q_{34}$, where $Q_{12}$ represents the penalization value produced between the Gaussian functions p1(x) and p2(x). Therefore, due to the penalization, the Gaussian mixture shown in Figures 7(b) and 7(c) provides a solution of low quality. On the other hand, the Gaussian mixture presented in Figure 7(d) maintains a low penalty; thus, it represents a solution of high quality. From Figure 7(d), it is easy to see that the functions p1(x) and p4(x) can be removed from the final mixture. This elimination can be performed by a simple comparison with a threshold value $\theta$, since p1(x) and p4(x) have a reduced amplitude ($p_1(x) \approx p_4(x) \approx 0$). Therefore, under J_new, it is possible to find the reduced Gaussian mixture model starting from a considerable number of functions.

Since the proposed segmentation method is conceived as an optimization problem, the overall operation can be reduced to solving the formulation of (20) by using the LS algorithm:

$$\begin{aligned}
\text{minimize} \quad & J^{\mathrm{new}}(\mathbf{x}) = J(\mathbf{x}) + \lambda \cdot Q(\mathbf{x}),\\
& \mathbf{x} = (P_1, \mu_1, \sigma_1, \ldots, P_K, \mu_K, \sigma_K) \in \mathbb{R}^{3\cdot K},\\
\text{subject to} \quad & 0 \le P_d \le 1, \quad d \in (1, \ldots, K),\\
& 0 \le \mu_d \le 255,\\
& 0 \le \sigma_d \le 25,
\end{aligned} \quad (20)$$

where $P_d$, $\mu_d$, and $\sigma_d$ represent the probability, mean, and standard deviation of the class d. It is important to remark that the new objective function J_new allows the evaluation of a candidate solution in terms of the necessary number of Gaussian functions and its approximation quality. Under such circumstances, it can be used in combination with any other evolutionary method and not only with the proposed LS algorithm.
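A compact sketch of the formulation in (20): a candidate $\mathbf{x} \in \mathbb{R}^{3K}$ is decoded into $(P_i, \mu_i, \sigma_i)$ triplets and scored with $J^{\mathrm{new}} = J + \lambda Q$. Here λ = 0.01 follows Table 5, while the constraint weight ω is an illustrative assumption:

```python
import numpy as np

def decode(x):
    """Split a candidate x = (P1, mu1, s1, ..., PK, muK, sK) in R^(3K)."""
    v = np.asarray(x, dtype=float).reshape(-1, 3)
    return v[:, 0], v[:, 1], v[:, 2]           # P, mu, sigma

def j_new(x, h, lam=0.01, omega=0.001):
    """Penalized objective J_new = J + lambda * Q, Eqs. (18) and (20)."""
    P, mu, sigma = decode(x)
    l = np.arange(h.size, dtype=float)[:, None]
    g = np.exp(-(l - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    w = P * g                                   # P_i * p_i(l), shape (L, K)
    J = np.mean((w.sum(axis=1) - h) ** 2) + omega * abs(P.sum() - 1.0)
    K = w.shape[1]
    Q = sum(np.minimum(w[:, i], w[:, j]).sum()
            for i in range(K) for j in range(K) if j != i)
    return J + lam * Q
```

Two candidates that fit the histogram equally well are now distinguished: the one that splits a class into redundant overlapping Gaussians receives the larger score.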

6.2. Complete Segmentation Algorithm. Once the new search strategy (LS) and objective function (J_new) have been defined, the proposed segmentation algorithm can be summarized by Algorithm 2. The new algorithm combines operators defined by LS and operations for calculating the threshold values.

(Line 1) The algorithm sets the operative parameters F, L, g, gen, N, K, λ, and θ. They rule the behavior of the segmentation algorithm. (Line 2) Afterwards, the population $\mathbf{L}^0$ is initialized considering N different random Gaussian mixtures of K functions. The idea is to generate N random Gaussian mixtures subject to the constraints formulated in (20). The parameter K must hold a high value in order to correctly segment all images (recall that the algorithm is able to reduce the Gaussian mixture to its minimum expression). (Line 3) Then, the Gaussian mixtures are evolved by using the LS operators and the new objective function J_new. This process is repeated during gen cycles. (Line 8) After this procedure, the best Gaussian mixture $\mathbf{l}^{gen}_{best}$ is selected according to its objective function J_new. (Line 9) Then, the Gaussian mixture $\mathbf{l}^{gen}_{best}$ is reduced by eliminating those functions whose amplitudes are lower than θ ($p_i(x) \le \theta$). (Line 10) Then, the threshold values $T_i$ from the reduced model are calculated. (Line 11) Finally, the calculated $T_i$ values are employed to segment the image. Figure 8 shows a flowchart of the complete segmentation procedure.
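The last two steps (Lines 10-11) can be sketched as follows. The rule used here to place each threshold $T_i$ (at the gray level where adjacent weighted Gaussians intersect) is one common convention and an assumption on our part, since the text defers to the standard methods of [19–21]; the helper name is likewise hypothetical:

```python
import numpy as np

def thresholds_and_labels(image, P, mu, sigma):
    """Pick one threshold T_i between each pair of adjacent classes
    (where the weighted Gaussian curves cross) and label every pixel."""
    order = np.argsort(mu)                      # sort classes by mean
    P, mu, sigma = P[order], mu[order], sigma[order]
    l = np.arange(256.0)
    w = P * np.exp(-(l[:, None] - mu) ** 2 / (2 * sigma ** 2)) \
        / (np.sqrt(2 * np.pi) * sigma)          # (256, K)
    T = []
    for i in range(len(mu) - 1):
        lo, hi = int(mu[i]), int(mu[i + 1]) + 1
        seg = np.abs(w[lo:hi, i] - w[lo:hi, i + 1])
        T.append(lo + int(seg.argmin()))        # level where the curves cross
    labels = np.digitize(image, T)              # class index per pixel
    return T, labels
```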


(1) Input: F, L, g, gen, N, K, λ, and θ
(2) Initialize L^0 (k = 0)
(3) until (k = gen)
(4)   F ← SolitaryOperation(L^k)   // Solitary operator (Section 3.1)
(5)   L^{k+1} ← SocialOperation(L^k, F)   // Social operator (Section 3.2)
(6)   k = k + 1
(7) end until
(8) Obtain l^{gen}_{best}
(9) Reduce l^{gen}_{best}
(10) Calculate the threshold values T_i from the reduced model
(11) Use T_i to segment the image

Algorithm 2: Segmentation LS algorithm.

The proposed segmentation algorithm automatically detects the number of segmentation partitions (classes). Furthermore, due to its remarkable search capacities, the LS method maintains a better accuracy than previous algorithms based on evolutionary principles. However, the proposed method presents two disadvantages. The first is related to its implementation, which in general is more complex than that of most other segmentators based on evolutionary principles. The second refers to the segmentation procedure of the proposed approach, which does not consider any spatial pixel characteristics. As a consequence, pixels that should belong to a particular region because of their position may be labeled as part of another region because of their gray level intensity. Such a fact adversely affects the segmentation performance of the method.

7. Segmentation Results

This section analyzes the performance of the proposed segmentation algorithm. The discussion is divided into three parts: the first shows the performance of the proposed LS segmentation algorithm, while the second presents a comparison between the proposed approach and other segmentation algorithms based on evolutionary and swarm methods. The comparison mainly considers the capacity of each algorithm to accurately and robustly approximate the image histogram. Finally, the third part presents an objective evaluation of the segmentation results produced by all algorithms employed in the comparisons.

7.1. Performance of LS Algorithm in Image Segmentation. This section presents two experiments that analyze the performance of the proposed approach. Table 5 presents the algorithm's parameters, which have been experimentally determined and kept for all the test images through all experiments.

First Image. The first test considers the histogram shown in Figure 9(b), while Figure 9(a) presents the original image. After applying the proposed algorithm, configured according to the parameters in Table 5, a minimum model of four classes emerges after starting from Gaussian mixtures of 10 classes. Considering 30 independent executions, the averaged parameters of the resultant Gaussian mixture are presented in Table 6. One final Gaussian mixture (ten classes) obtained by LS is presented in Figure 10. Furthermore, the approximation of the reduced Gaussian mixture is also visually compared with the original histogram in Figure 10. On the other hand, Figure 11 presents the segmented image after calculating the threshold points.

Table 5: Parameter setup for the LS segmentation algorithm.

| F   | L   | g  | gen  | N  | K  | λ    | θ      |
|-----|-----|----|------|----|----|------|--------|
| 0.6 | 0.2 | 20 | 1000 | 40 | 10 | 0.01 | 0.0001 |

Table 6: Results of the reduced Gaussian mixture for the first and the second image.

| Parameters | First image | Second image |
|------------|-------------|--------------|
| P1         | 0.004       | 0.032        |
| μ1         | 18.1        | 12.1         |
| σ1         | 8.2         | 2.9          |
| P2         | 0.0035      | 0.0015       |
| μ2         | 69.9        | 45.1         |
| σ2         | 18.4        | 24.9         |
| P3         | 0.01        | 0.006        |
| μ3         | 94.9        | 130.1        |
| σ3         | 8.9         | 17.9         |
| P4         | 0.007       | 0.02         |
| μ4         | 163.1       | 167.2        |
| σ4         | 29.9        | 10.1         |

Second Image. For the second experiment, the image in Figure 12 is tested. The method aims to segment the image by using a reduced Gaussian mixture model obtained by the LS approach. After executing the algorithm according to the parameters from Table 5, the resulting averaged parameters of the resultant Gaussian mixture are presented in Table 6. In order to assure consistency, the results are averaged over 30 independent executions. Figure 13 shows the approximation quality obtained by the reduced Gaussian mixture model in (a) and the segmented image in (b).


Figure 8 Flowchart of the complete segmentation procedure


Figure 9: (a) Original first image used in the first experiment and (b) its histogram.



Figure 10 Gaussian mixture obtained by LS for the first image

Figure 11 Image segmented with the reduced Gaussian mixture


Figure 12: (a) Original second image used in the second experiment and (b) its histogram.

7.2. Histogram Approximation Comparisons. This section discusses the comparison between the proposed segmentation algorithm and other evolutionary segmentation methods that have been proposed in the literature. The set of methods used in the experiments includes J + ABC [19], J + AIS [20], and J + DE [21]. These algorithms consider the combination of the original objective function J (17) with an evolutionary technique: the Artificial Bee Colony (ABC), the Artificial Immune Systems (AIS), and the Differential Evolution (DE) [21], respectively. Since the proposed segmentation approach considers the combination of the new objective function J_new (18) and the LS algorithm, it will be referred to as J_new + LS.

The comparison focuses mainly on the capacity of each algorithm to accurately and robustly approximate the image histogram.

In the experiments, the populations have been set to 40 (N = 40) individuals. The maximum iteration number for all functions has been set to 1000. Such a stop criterion has been considered to maintain compatibility with similar experiments



Figure 13: (a) Approximation result obtained by LS for the second image and (b) the segmented image.


Figure 14 (a) Synthetic image used in the comparison and (b) its histogram

that are reported in the literature [18]. The parameter settings for each of the segmentation algorithms in the comparison are described as follows:

(1) J + ABC [19]: the algorithm's parameters are configured as follows: the abandonment limit = 100, α = 0.05, and limit = 30.

(2) J + AIS [20]: it presents the following parameters: h = 120, N_c = 80, ρ = 10, P_r = 20, L = 22, and T_e = 0.01.

(3) J + DE [21]: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [21]. The crossover probability is CR = 0.9, and the weighting factor is F = 0.8.

(4) J_new + LS: the method is set according to the values described in Table 5.

In order to conduct the experiments, a synthetic image is designed to be used as a reference in the comparisons. The main idea is to know in advance the exact number of classes (and their parameters) contained in the image, so that the histogram can be considered as a ground truth. The synthetic image is divided into four sections. Each section corresponds to a different class, which is produced by setting each gray level pixel $Pv_i$ to a value determined by the following model:

$$Pv_i = e^{-\left((x-\mu_i)^2 / 2\sigma_i^2\right)}, \quad (21)$$

where i represents the section, whereas $\mu_i$ and $\sigma_i$ are the mean and the dispersion of the gray level pixels, respectively. The comparative study employs the 512 × 512 image shown in Figure 14(a) and the algorithm's parameters presented in Table 7. Figure 14(b) illustrates the resultant histogram.

Table 7: Employed parameters for the design of the reference image.

| Section | P_i   | μ_i | σ_i |
|---------|-------|-----|-----|
| (1)     | 0.05  | 40  | 8   |
| (2)     | 0.04  | 100 | 10  |
| (3)     | 0.05  | 160 | 8   |
| (4)     | 0.027 | 220 | 15  |
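Under the stated model, a synthetic test image with a known ground-truth histogram can be reproduced along these lines. The vertical-band layout is an assumption (the text only states that the image is divided into four sections), and only the means and dispersions of Table 7 are used:

```python
import numpy as np

def synthetic_image(params, size=512, seed=0):
    """Build a size x size test image with one vertical band per class,
    drawing gray levels from N(mu_i, sigma_i) for each section."""
    rng = np.random.default_rng(seed)
    k = len(params)
    img = np.empty((size, size))
    for i, (mu, sigma) in enumerate(params):
        cols = slice(i * size // k, (i + 1) * size // k)
        img[:, cols] = rng.normal(mu, sigma, (size, size // k))
    return np.clip(img, 0, 255).astype(np.uint8)

# (mu_i, sigma_i) for the four sections, taken from Table 7
img = synthetic_image([(40, 8), (100, 10), (160, 8), (220, 15)])
```

The histogram of `img` then exhibits four peaks centered near 40, 100, 160, and 220, which serves as the ground truth for the accuracy comparison.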



Figure 15: Convergence results. (a) Convergence of the methods J + ABC, J + AIS, and J + DE considering Gaussian mixtures of 8 classes; (b) convergence of the proposed method (reduced Gaussian mixture).


Figure 16: Segmentation results obtained by (a) the methods J + ABC, J + AIS, and J + DE considering Gaussian mixtures of 8 classes and (b) the proposed method (reduced Gaussian mixture).

In the comparison, the discussion focuses on the following issues: first, accuracy; second, convergence; and third, computational cost.

Convergence. This section analyzes the approximation convergence when the number of classes used by the algorithm during the evolution process differs from the actual number of classes in the image. Recall that the proposed approach automatically finds the reduced Gaussian mixture that best adapts to the image histogram.

In the experiment, the methods J + ABC, J + AIS, and J + DE are executed considering Gaussian mixtures composed of 8 functions. Under such circumstances, the number of classes to be detected is higher than the actual number of classes in the image. On the other hand, the proposed algorithm maintains the same configuration of Table 5; therefore, it can detect and calculate up to ten classes (K = 10).

As a result, the techniques J + ABC, J + AIS, and J + DE tend to overestimate the image histogram. This effect can be seen in Figure 15(a), where the resulting Gaussian functions are concentrated within the actual classes. Such behavior is a consequence of the evaluation considered by the original objective function J, which privileges only the approximation between the Gaussian mixture and the image histogram. This effect can be graphically illustrated by Figure 16(a), which shows the pixel misclassification produced by the wrong segmentation of Figure 14(a). On the other hand, the proposed approach obtains a reduced Gaussian mixture model which allows the detection of each class from



Figure 17: Approximation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

the actual histogram (see Figure 15(b)). As a consequence, the segmentation is significantly improved by eliminating the pixel misclassification, as shown in Figure 16(b).

It is evident from Figure 15 that the techniques J + ABC, J + AIS, and J + DE all need a priori knowledge of the number of classes contained in the actual histogram in order to obtain a satisfactory result. On the other hand, the proposed algorithm is able to find a reduced Gaussian mixture whose classes coincide with the actual number of classes contained in the image histogram.

Accuracy. In this section, the comparison among the algorithms in terms of accuracy is reported. Most of the reported comparisons [19–26] are concerned with comparing the parameters of the resultant Gaussian mixtures by using real images. Under such circumstances, it is difficult to consider a clear reference in order to define a meaningful error. Therefore, the image defined in Figure 14 has been used in the experiments, because its construction parameters are clearly defined in Table 7.

Since the parameter values from Table 7 act as ground truth, a simple averaged difference between them and the values computed by each algorithm could be used as a comparison error. However, as each parameter maintains a different active interval, it is necessary to express the differences in terms of percentages. Therefore, if $\Delta\beta$ is the parametric difference and $\beta$ the ground truth parameter, the percentage error can be defined as follows:

$$\Delta\beta\% = \frac{\Delta\beta}{\beta} \cdot 100. \quad (22)$$

In the segmentation problem, each Gaussian mixture represents a K-dimensional model, where each dimension corresponds to a Gaussian function of the optimization problem to be solved. Since each Gaussian function possesses three parameters, $P_i$, $\mu_i$, and $\sigma_i$, the complete number of parameters is $3 \cdot K$ dimensions. Therefore, the final error E produced by the final Gaussian mixture is

$$E = \frac{1}{K \cdot 3} \sum_{v=1}^{K \cdot 3} \Delta\beta_v, \quad (23)$$

where $\beta_v \in (P_i, \mu_i, \sigma_i)$.

In order to compare accuracy, the algorithms J + ABC, J + AIS, J + DE, and the proposed approach are all executed over the image shown in Figure 14(a). The experiment aims



Figure 18: Segmentation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

to estimate the Gaussian mixture that best approximates the actual image histogram. The methods J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, although the algorithm finds a reduced Gaussian mixture of four functions, it is initially set with ten functions (K = 10). Table 8 presents the final Gaussian mixture parameters and the final error E. The final Gaussian mixture parameters have been averaged over 30 independent executions in order to assure consistency. A close inspection of Table 8 reveals that the proposed method is able to achieve the smallest error (E) in comparison to the other algorithms. Figure 17 presents the histogram approximations produced by each algorithm, whereas Figure 18 shows the corresponding segmented images. Both illustrations present the median case obtained throughout 30 runs. Figure 18 exhibits that J + ABC, J + AIS, and J + DE present different levels of misclassification, which are nearly absent in the case of the proposed approach.
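The error measure of (22)-(23) reduces to an averaged per-parameter percentage deviation, which can be sketched as:

```python
import numpy as np

def mixture_error(est, truth):
    """Averaged percentage error E of Eqs. (22)-(23).
    Both arguments: arrays of shape (K, 3) holding (P_i, mu_i, sigma_i)."""
    est, truth = np.asarray(est, float), np.asarray(truth, float)
    rel = np.abs(est - truth) / np.abs(truth) * 100.0   # per-parameter %
    return rel.mean()                                   # (1 / 3K) * sum
```

For example, with the ground truth of Table 7, an estimate that misses only one mean by 10% yields E = 10/12 ≈ 0.83.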

Table 8: Results of the reduced Gaussian mixture in terms of accuracy.

| Algorithm  | Gaussian function | P_i   | μ_i    | σ_i   | E     |
|------------|-------------------|-------|--------|-------|-------|
| J + ABC    | (1)               | 0.052 | 44.5   | 6.4   | 11.79 |
|            | (2)               | 0.084 | 98.12  | 12.87 |       |
|            | (3)               | 0.058 | 163.50 | 8.94  |       |
|            | (4)               | 0.025 | 218.84 | 17.5  |       |
| J + AIS    | (1)               | 0.072 | 31.01  | 6.14  | 22.01 |
|            | (2)               | 0.054 | 88.52  | 12.21 |       |
|            | (3)               | 0.039 | 149.21 | 9.14  |       |
|            | (4)               | 0.034 | 248.41 | 13.84 |       |
| J + DE     | (1)               | 0.041 | 35.74  | 7.86  | 13.57 |
|            | (2)               | 0.036 | 90.57  | 11.97 |       |
|            | (3)               | 0.059 | 148.47 | 9.01  |       |
|            | (4)               | 0.020 | 201.34 | 13.02 |       |
| J_new + LS | (1)               | 0.049 | 40.12  | 7.5   | 3.98  |
|            | (2)               | 0.041 | 102.04 | 10.4  |       |
|            | (3)               | 0.052 | 168.66 | 8.3   |       |
|            | (4)               | 0.025 | 210.92 | 15.2  |       |

Computational Cost. The experiment aims to measure the complexity and the computing time spent by the J + ABC,



Figure 19 Images employed in the computational cost analysis

the J + AIS, the J + DE, and the J_new + LS algorithms while calculating the parameters of the Gaussian mixture in benchmark images (see Figures 19(a)–19(d)). J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, the algorithm finds a reduced Gaussian mixture of four functions despite being initialized with ten functions (K = 10). Table 9 shows the averaged measurements after 30 experiments. It is evident that J + ABC and J + DE are the slowest to converge (iterations) and that J + AIS shows the highest computational cost (time elapsed) because it requires operators which demand long times. On the other hand, J_new + LS shows an acceptable trade-off between its convergence time and its computational cost. Therefore, although the implementation of J_new + LS in general requires more code than most other evolution-based segmentators, such a fact is not reflected in the execution time. Finally, Figure 19 shows the segmented images as they are



Figure 20 Experimental set used in the evaluation of the segmentation results

Table 9: Iterations and time requirements of the J + ABC, J + AIS, J + DE, and J_new + LS algorithms as they are applied to segment the benchmark images (see Figure 19).

| Algorithm  |              | (a)    | (b)    | (c)    | (d)    |
|------------|--------------|--------|--------|--------|--------|
| J + ABC    | Iterations   | 855    | 833    | 870    | 997    |
|            | Time elapsed | 2.72 s | 2.70 s | 2.73 s | 3.1 s  |
| J + AIS    | Iterations   | 725    | 704    | 754    | 812    |
|            | Time elapsed | 1.78 s | 1.61 s | 1.41 s | 2.01 s |
| J + DE     | Iterations   | 657    | 627    | 694    | 742    |
|            | Time elapsed | 1.25 s | 1.12 s | 1.45 s | 1.88 s |
| J_new + LS | Iterations   | 314    | 298    | 307    | 402    |
|            | Time elapsed | 0.98 s | 0.84 s | 0.72 s | 1.02 s |

Table 10: Evaluation of the segmentation results in terms of the ROS index.

| Algorithm  | (a) N_R = 4 | (b) N_R = 3 | (c) N_R = 4 | (d) N_R = 4 |
|------------|-------------|-------------|-------------|-------------|
| J + ABC    | 0.534       | 0.328       | 0.411       | 0.457       |
| J + AIS    | 0.522       | 0.321       | 0.427       | 0.437       |
| J + DE     | 0.512       | 0.312       | 0.408       | 0.418       |
| J_new + LS | 0.674       | 0.401       | 0.514       | 0.527       |

generated by each algorithm. It can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by incorrect pixel classification.

7.3. Performance Evaluation of the Segmentation Results. This section presents an objective evaluation of the segmentation results produced by all algorithms in the comparisons. The ill-defined nature of the segmentation problem makes the evaluation of a candidate algorithm difficult [57]. Traditionally, the evaluation has been conducted by using supervised criteria [58], which are based on the computation of a dissimilarity measure between a segmentation result and a ground truth image. Recently, the use of unsupervised measures has substituted supervised indexes for the objective evaluation of segmentation results [59]. They enable the quantification of the quality of a segmentation result without a priori knowledge (ground truth image).

Table 11: Unimodal test functions.

| Test function | S | f_opt |
|---------------|---|-------|
| $f_1(\mathbf{x}) = \sum_{i=1}^{n} x_i^2$ | $[-100, 100]^n$ | 0 |
| $f_2(\mathbf{x}) = \sum_{i=1}^{n} \lvert x_i \rvert + \prod_{i=1}^{n} \lvert x_i \rvert$ | $[-10, 10]^n$ | 0 |
| $f_3(\mathbf{x}) = \sum_{i=1}^{n} \left(\sum_{j=1}^{i} x_j\right)^2$ | $[-100, 100]^n$ | 0 |
| $f_4(\mathbf{x}) = \max_i \{\lvert x_i \rvert, 1 \le i \le n\}$ | $[-100, 100]^n$ | 0 |
| $f_5(\mathbf{x}) = \sum_{i=1}^{n-1} \left[100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2\right]$ | $[-30, 30]^n$ | 0 |
| $f_6(\mathbf{x}) = \sum_{i=1}^{n} (x_i + 0.5)^2$ | $[-100, 100]^n$ | 0 |
| $f_7(\mathbf{x}) = \sum_{i=1}^{n} i x_i^4 + \mathrm{rand}(0, 1)$ | $[-1.28, 1.28]^n$ | 0 |

Evaluation Criteria. In this paper, the unsupervised index ROS proposed by Chabrier et al. [60] has been used to objectively evaluate the performance of each candidate algorithm. This index evaluates the segmentation quality in terms of the homogeneity within segmented regions and the heterogeneity among the different regions. ROS can be computed as follows:

$$\mathrm{ROS} = \frac{D - \overline{D}}{2}, \quad (24)$$

where $D$ measures the disparity among the regions, whereas $\overline{D}$ quantifies the homogeneity within the segmented regions. A segmentation result $S_1$ is considered better than $S_2$ if $\mathrm{ROS}_{S_1} > \mathrm{ROS}_{S_2}$. The intraregion homogeneity characterized by $\overline{D}$ is calculated considering the following formulation:

$$D = \frac{1}{N_R} \sum_{c=1}^{N_R} \frac{R_c}{I} \cdot \frac{\sigma_c}{\sum_{l=1}^{N_R} \sigma_l}, \quad (25)$$

where $N_R$ represents the number of partitions into which the image has been segmented, $R_c$ symbolizes the number of

22 Mathematical Problems in Engineering

Figure 21: Segmentation results produced by the J + ABC, J + AIS, J + DE, and Jnew + LS methods for the test images (a), (b), (c), and (d) used in the evaluation.

pixels contained in the partition $c$, whereas $I$ represents the number of pixels in the complete image. Similarly, $\sigma_c$ represents the standard deviation of the partition $c$. On the other hand, the disparity among the regions $\overline{D}$ is computed as follows:

$$\overline{D} = \frac{1}{N_R} \sum_{c=1}^{N_R} \frac{R_c}{I} \cdot \left[\frac{1}{N_R - 1} \sum_{l=1}^{N_R} \frac{|\mu_c - \mu_l|}{255}\right], \quad (26)$$

where $\mu_c$ is the average gray level in the partition $c$.
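As a concrete reading of (24)-(26), the following sketch (our own illustration, not the authors' code; the name `ros_index` is hypothetical) computes $D$, $\overline{D}$, and the ROS index from a gray-level image and its partition labels:

```python
import numpy as np

def ros_index(image, labels):
    """ROS criterion, ROS = (D - D_bar)/2, Eqs. (24)-(26).

    image  : 2-D array of gray levels in [0, 255]
    labels : 2-D array assigning each pixel to a partition 0..N_R-1
    Assumes at least one region is non-constant (nonzero std).
    """
    I = image.size                                      # total pixel count
    regions = np.unique(labels)
    N_R = len(regions)
    sizes = np.array([np.sum(labels == c) for c in regions])        # R_c
    stds = np.array([image[labels == c].std() for c in regions])    # sigma_c
    means = np.array([image[labels == c].mean() for c in regions])  # mu_c

    # Eq. (25): homogeneity D (normalized intraregion standard deviations)
    D = (1.0 / N_R) * np.sum((sizes / I) * (stds / stds.sum()))

    # Eq. (26): disparity D_bar (normalized differences of region means)
    disp = np.array([np.sum(np.abs(mc - means)) / (255.0 * (N_R - 1))
                     for mc in means])
    D_bar = (1.0 / N_R) * np.sum((sizes / I) * disp)

    return (D - D_bar) / 2.0                            # Eq. (24)
```

The index is only meaningful for comparing two segmentations of the same image: $S_1$ is preferred over $S_2$ whenever `ros_index` yields a larger value.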

Experimental Protocol. In the comparison of segmentation results, a set of four classical images has been chosen to integrate the experimental set (Figure 20). The segmentation methods used in the comparison are J + ABC [19], J + AIS [20], and J + DE [21].

From all the segmentation methods used in the comparison, the proposed Jnew + LS algorithm is the only one that has the capacity to automatically detect the number of segmentation partitions (classes). In order to conduct a fair comparison, all algorithms have been tested by using the same number


Table 12: Multimodal test functions.

$f_8(\mathbf{x}) = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{|x_i|}\right)$;  $S = [-500, 500]^n$;  $f_{\text{opt}} = -418.98 \cdot n$

$f_9(\mathbf{x}) = \sum_{i=1}^{n} \left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$;  $S = [-5.12, 5.12]^n$;  $f_{\text{opt}} = 0$

$f_{10}(\mathbf{x}) = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right) + 20 + e$;  $S = [-32, 32]^n$;  $f_{\text{opt}} = 0$

$f_{11}(\mathbf{x}) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$;  $S = [-600, 600]^n$;  $f_{\text{opt}} = 0$

$f_{12}(\mathbf{x}) = \frac{\pi}{n}\left\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1}(y_i - 1)^2\left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_n - 1)^2\right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$;  $S = [-50, 50]^n$;  $f_{\text{opt}} = 0$, where $y_i = 1 + \frac{x_i + 1}{4}$ and
$$u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a \\ 0, & -a < x_i < a \\ k(-x_i - a)^m, & x_i < -a \end{cases}$$

$f_{13}(\mathbf{x}) = 0.1\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{n}(x_i - 1)^2\left[1 + \sin^2(3\pi x_i + 1)\right] + (x_n - 1)^2\left[1 + \sin^2(2\pi x_n)\right]\right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$;  $S = [-50, 50]^n$;  $f_{\text{opt}} = 0$

of partitions. Therefore, in the experiments, the Jnew + LS segmentation algorithm is first applied to detect the best possible number of partitions $N_R$. Once the number of partitions $N_R$ has been obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.

Figure 21 presents the segmentation results obtained by each algorithm considering the experimental set from Figure 20. On the other hand, Table 10 shows the evaluation of the segmentation results in terms of the ROS index. Such values represent the averaged measurements after 30 executions. From them, it can be seen that the proposed Jnew + LS method obtains the best ROS indexes. Such values indicate that the proposed algorithm maintains the best balance between the homogeneity within segmented regions and the heterogeneity among the different regions. From Figure 21, it can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts that are produced by an incorrect pixel classification.

8. Conclusions

Despite the fact that several evolutionary methods have been successfully applied to image segmentation with interesting results, most of them have exhibited two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance.

In this paper, a new swarm algorithm for automatic image segmentation, called Locust Search (LS), has been presented. The proposed method eliminates the typical flaws presented by previous evolutionary approaches by combining a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments have been divided into two parts. First, the proposed LS method has been compared to other well-known evolutionary techniques on a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm has been compared to other segmentation methods based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

Several research directions will be considered for future work, such as the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, $n$ is the dimension of the function, $f_{\text{opt}}$ is the minimum value of the function, and $S$ is a subset of $R^n$. The optimum location ($\mathbf{x}_{\text{opt}}$) for the functions in Table 11 is at $[0]^n$, except for $f_5$, whose $\mathbf{x}_{\text{opt}}$ is at $[1]^n$.

The optimum locations ($\mathbf{x}_{\text{opt}}$) for the functions in Table 12 are at $[0]^n$, except for $f_8$, at $[420.96]^n$, and $f_{12}$-$f_{13}$, at $[1]^n$.
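As a quick sanity check of the optima listed above, the following sketch (an illustration; the function names are our own) evaluates two of the benchmark functions of Table 11 at their reported optimum locations:

```python
import numpy as np

def f1(x):
    """Sphere function, f1(x) = sum(x_i^2); optimum 0 at x = [0]^n."""
    return np.sum(x ** 2)

def f5(x):
    """Rosenbrock function, f5(x) = sum 100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2;
    optimum 0 at x = [1]^n."""
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

n = 30
print(f1(np.zeros(n)))  # 0.0
print(f5(np.ones(n)))   # 0.0
```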

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgment

The present research has been supported under Grant CONACYT CB 181053.

References

[1] H. Zhang, J. E. Fritts, and S. A. Goldman, "Image segmentation evaluation: a survey of unsupervised methods," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 260–280, 2008.

[2] T. Uemura, G. Koutaki, and K. Uchimura, "Image segmentation based on edge detection using boundary code," International Journal of Innovative Computing, vol. 7, pp. 6073–6083, 2011.

[3] L. Wang, H. Wu, and C. Pan, "Region-based image segmentation with local signed difference energy," Pattern Recognition Letters, vol. 34, no. 6, pp. 637–645, 2013.

[4] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.

[5] R. Peng and P. K. Varshney, "On performance limits of image segmentation algorithms," Computer Vision and Image Understanding, vol. 132, pp. 24–38, 2015.

[6] M. A. Balafar, "Gaussian mixture model based segmentation methods for brain MRI images," Artificial Intelligence Review, vol. 41, no. 3, pp. 429–439, 2014.

[7] G. J. McLachlan and S. Rathnayake, "On the number of components in a Gaussian mixture model," Data Mining and Knowledge Discovery, vol. 4, no. 5, pp. 341–355, 2014.

[8] D. Oliva, V. Osuna-Enciso, E. Cuevas, G. Pajares, M. Pérez-Cisneros, and D. Zaldívar, "Improving segmentation velocity using an evolutionary method," Expert Systems with Applications, vol. 42, no. 14, pp. 5874–5886, 2015.

[9] Z.-W. Ye, M.-W. Wang, W. Liu, and S.-B. Chen, "Fuzzy entropy based optimal thresholding using bat algorithm," Applied Soft Computing, vol. 31, pp. 381–395, 2015.

[10] S. Sarkar, S. Das, and S. Sinha Chaudhuri, "A multilevel color image thresholding scheme based on minimum cross entropy and differential evolution," Pattern Recognition Letters, vol. 54, pp. 27–35, 2015.

[11] A. K. Bhandari, A. Kumar, and G. K. Singh, "Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur's, Otsu and Tsallis functions," Expert Systems with Applications, vol. 42, no. 3, pp. 1573–1601, 2015.

[12] H. Permuter, J. Francos, and I. Jermyn, "A study of Gaussian mixture models of color and texture features for image classification and segmentation," Pattern Recognition, vol. 39, no. 4, pp. 695–706, 2006.

[13] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1–38, 1977.

[14] Z. Zhang, C. Chen, J. Sun, and K. L. Chan, "EM algorithms for Gaussian mixtures with split-and-merge operation," Pattern Recognition, vol. 36, no. 9, pp. 1973–1983, 2003.

[15] H. Park, S.-I. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755–764, 2000.

[16] H. Park and T. Ozeki, "Singularity and slow convergence of the EM algorithm for Gaussian mixtures," Neural Processing Letters, vol. 29, no. 1, pp. 45–59, 2009.

[17] L. Gupta and T. Sortrakul, "A Gaussian-mixture-based image segmentation algorithm," Pattern Recognition, vol. 31, no. 3, pp. 315–325, 1998.

[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213–1219, 2013.

[19] E. Cuevas, F. Sención, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321–336, 2012.

[20] E. Cuevas, V. Osuna-Enciso, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.

[21] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.

[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357–381, 2014.

[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and M. Pérez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.

[24] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805–818, 2011.

[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701–713, 2009.

[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661–673, 2009.

[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.

[28] R. Storn and K. Price, "Differential Evolution: a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.

[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515–4538, 2011.

[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing Journal, vol. 9, no. 3, pp. 1149–1155, 2009.

[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119–135, 2013.

[32] W. Gong, A. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364–5386, 2011.

[33] D. M. Gordon, "The organization of work in social insect colonies," Complexity, vol. 8, no. 1, pp. 43–46, 2002.

[34] S. Kizaki and M. Katori, "Stochastic lattice model for locust outbreak," Physica A: Statistical Mechanics and its Applications, vol. 266, no. 1–4, pp. 339–342, 1999.

[35] S. M. Rogers, D. A. Cullen, M. L. Anstey et al., "Rapid behavioural gregarization in the desert locust, Schistocerca gregaria, entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9–26, 2014.

[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal Special Topics, vol. 157, no. 1, pp. 93–109, 2008.

[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.

[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.

[39] B. Hölldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.

[40] B. Hölldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.

[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101–112, 2013.

[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196–205, 2012.

[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723–738, 2014.

[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.

[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskål)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.

[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.

[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256–272, 2014.

[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635–672, 2005.

[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.

[50] F. Herrera, M. Lozano, and A. M. Sánchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309–338, 2003.

[51] E. Cuevas, D. Zaldívar, M. Pérez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93–107, 2011.

[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.

[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140–157, 2015.

[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.

[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.

[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.

[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.

[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.

[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335–1346, 1996.

[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.



Figure 2: Behavioral model under the social phase: (a) initial configuration and food quality indexes (Fq1 = 0.2, Fq2 = 0.3, Fq3 = 0.1, Fq4 = 0.9, Fq5 = 0.2, Fq6 = 1.0, Fq7 = 0.3, Fq8 = 0.4); (b) final configuration after the operation of the social phase.

In the solitary phase, a new position $\mathbf{p}_i$ ($i \in [1, \ldots, N]$) is produced by perturbing the current locust position $\mathbf{l}_i^k$ with a change of position $\Delta \mathbf{l}_i$ ($\mathbf{p}_i = \mathbf{l}_i^k + \Delta \mathbf{l}_i$). The change of position $\Delta \mathbf{l}_i$ is the result of the social interactions experienced by $\mathbf{l}_i^k$ as a consequence of its repulsion-attraction behavioral model. Such social interactions are pairwise computed among $\mathbf{l}_i^k$ and the other $N - 1$ individuals in the swarm. In the original model, social forces are calculated by using (3). However, in the proposed method, the model is modified to include the best fitness value (the most dominant) of the individuals involved in the repulsion-attraction process. Therefore, the social force exerted between $\mathbf{l}_j^k$ and $\mathbf{l}_i^k$ is calculated by using the following new model:

$$\mathbf{s}_{ij}^m = \rho\left(\mathbf{l}_i^k, \mathbf{l}_j^k\right) \cdot s(r_{ij}) \cdot \mathbf{d}_{ij} + \text{rand}(1, -1), \quad (7)$$

where $s(r_{ij})$ is the social force strength defined in (2) and $\mathbf{d}_{ij} = (\mathbf{l}_j^k - \mathbf{l}_i^k)/r_{ij}$ is the unit vector pointing from $\mathbf{l}_i^k$ to $\mathbf{l}_j^k$. Besides, $\text{rand}(1, -1)$ is a randomly generated number between 1 and $-1$. Likewise, $\rho(\mathbf{l}_i^k, \mathbf{l}_j^k)$ is the dominance function that calculates the dominance value of the most dominant individual from $\mathbf{l}_j^k$ and $\mathbf{l}_i^k$. In order to operate $\rho(\mathbf{l}_i^k, \mathbf{l}_j^k)$, all the individuals from $\mathbf{L}^k$ ($\mathbf{l}_1^k, \mathbf{l}_2^k, \ldots, \mathbf{l}_N^k$) are ranked according to their fitness values. The ranks are assigned so that the best individual receives the rank 0 (zero), whereas the worst individual obtains the rank $N - 1$. Therefore, the function $\rho(\mathbf{l}_i^k, \mathbf{l}_j^k)$ is defined as follows:

$$\rho\left(\mathbf{l}_i^k, \mathbf{l}_j^k\right) = \begin{cases} e^{-\left(5 \cdot \text{rank}\left(\mathbf{l}_i^k\right)/N\right)}, & \text{if } \text{rank}\left(\mathbf{l}_i^k\right) < \text{rank}\left(\mathbf{l}_j^k\right) \\ e^{-\left(5 \cdot \text{rank}\left(\mathbf{l}_j^k\right)/N\right)}, & \text{if } \text{rank}\left(\mathbf{l}_i^k\right) > \text{rank}\left(\mathbf{l}_j^k\right) \end{cases} \quad (8)$$

Figure 3: Behavior of $\rho(\mathbf{l}_i^k, \mathbf{l}_j^k)$ considering 100 individuals (vertical axis: $\rho(\mathbf{l}_i^k, \mathbf{l}_j^k)$; horizontal axis: $\text{rank}(\mathbf{l}_i^k)$).

where the function $\text{rank}(\alpha)$ delivers the rank of the $\alpha$-individual. According to (8), $\rho(\mathbf{l}_i^k, \mathbf{l}_j^k)$ yields a value within the interval $(0, 1]$. Its maximum value of one is reached when either individual $\mathbf{l}_j^k$ or $\mathbf{l}_i^k$ is the best element of the population $\mathbf{L}^k$ regarding their fitness values. On the other hand, a value close to zero is obtained when both individuals $\mathbf{l}_j^k$ and $\mathbf{l}_i^k$ possess quite bad fitness values. Figure 3 shows the behavior of $\rho(\mathbf{l}_i^k, \mathbf{l}_j^k)$ considering 100 individuals. In the figure, it is assumed that $\mathbf{l}_i^k$ represents one of the 99 individuals with ranks between 0 and 98, whereas $\mathbf{l}_j^k$ is fixed to the element with the worst fitness value (rank 99).

Under the incorporation of $\rho(\mathbf{l}_i^k, \mathbf{l}_j^k)$ in (7), social forces are magnified or weakened depending on the best fitness value (the most dominant) of the individuals involved in the repulsion-attraction process.
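A direct reading of (8) can be sketched as follows (a minimal illustration, not the authors' implementation; the name `dominance` and the ranking via a double argsort are our own choices):

```python
import numpy as np

def dominance(i, j, fitness):
    """Dominance function rho(l_i, l_j) of Eq. (8) for minimization.

    fitness : 1-D array with the fitness value of each of the N individuals.
    Rank 0 is assigned to the best (lowest-fitness) individual and
    rank N-1 to the worst, so Eq. (8) reduces to using the better rank.
    """
    N = len(fitness)
    ranks = np.argsort(np.argsort(fitness))  # rank of each individual
    best_rank = min(ranks[i], ranks[j])      # rank of the dominant one
    return np.exp(-5.0 * best_rank / N)
```

For a pair that includes the best individual the value is $e^0 = 1$; for two poorly ranked individuals it approaches $e^{-5} \approx 0.0067$, reproducing the decay shown in Figure 3.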


Figure 4: Examples of different distributions: (a) initial condition; distributions after applying (b) 25, (c) 50, and (d) 100 operations. The green asterisk represents the minimum value so far.

Finally, the total social force on each individual $\mathbf{l}_i^k$ is modeled as the superposition of all of the pairwise interactions exerted over it:

$$\mathbf{S}_i^m = \sum_{j=1, j \neq i}^{N} \mathbf{s}_{ij}^m. \quad (9)$$

Therefore, the change of position $\Delta \mathbf{l}_i$ is considered as the total social force experienced by $\mathbf{l}_i^k$ as the superposition of all of the pairwise interactions. Thus, $\Delta \mathbf{l}_i$ is defined as follows:

$$\Delta \mathbf{l}_i = \mathbf{S}_i^m. \quad (10)$$

After calculating the new positions $\mathbf{P}$ ($\mathbf{p}_1, \mathbf{p}_2, \ldots, \mathbf{p}_N$) of the population $\mathbf{L}^k$ ($\mathbf{l}_1^k, \mathbf{l}_2^k, \ldots, \mathbf{l}_N^k$), the final positions $\mathbf{F}$ ($\mathbf{f}_1, \mathbf{f}_2, \ldots, \mathbf{f}_N$) must be calculated. The idea is to admit only the changes that guarantee an improvement in the search strategy. If the fitness value of $\mathbf{p}_i$ ($f(\mathbf{p}_i)$) is better than that of $\mathbf{l}_i^k$ ($f(\mathbf{l}_i^k)$), then $\mathbf{p}_i$ is accepted as the final solution; otherwise, $\mathbf{l}_i^k$ is retained. This procedure can be resumed by the following statement (considering a minimization problem):

$$\mathbf{f}_i = \begin{cases} \mathbf{p}_i, & \text{if } f(\mathbf{p}_i) < f\left(\mathbf{l}_i^k\right) \\ \mathbf{l}_i^k, & \text{otherwise.} \end{cases} \quad (11)$$

In order to illustrate the performance of the solitary operator, Figure 4 presents a simple example in which the solitary operator is iteratively applied. A population of 50 different members ($N = 50$) is assumed, which adopt a concentrated configuration as the initial condition (Figure 4(a)). As a consequence of the social forces, the set of elements tends to distribute itself throughout the search space. Examples of different distributions are shown in Figures 4(b), 4(c), and 4(d) after applying 25, 50, and 100 different solitary operations, respectively.
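The complete solitary operator, Eqs. (7)-(11), can be sketched as follows. This is an illustrative implementation for a minimization problem, not the reference code; in particular, it assumes the classical attraction-repulsion strength $s(r) = F e^{-r/L} - e^{-r}$ for the function $s(r)$ of (2), which is defined outside this excerpt, and all names are our own:

```python
import numpy as np

def solitary_operation(L, fitness_fn, F=0.6, Lc=1.0, rng=None):
    """Solitary operator (Section 3.1), Eqs. (7)-(11), for minimization.

    L          : (N, n) array of current locust positions l_i^k
    fitness_fn : objective function to minimize
    F, Lc      : parameters of the assumed strength s(r) = F*e^(-r/Lc) - e^(-r)
    Returns the array of final positions F (Eq. (11)).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    N = len(L)
    fit = np.array([fitness_fn(x) for x in L])
    ranks = np.argsort(np.argsort(fit))          # rank 0 = best individual

    F_pos = L.copy()
    for i in range(N):
        delta = np.zeros(L.shape[1])             # Delta l_i, Eqs. (9)-(10)
        for j in range(N):
            if j == i:
                continue
            r = np.linalg.norm(L[j] - L[i])
            if r == 0:
                continue
            d_ij = (L[j] - L[i]) / r                          # unit vector
            s = F * np.exp(-r / Lc) - np.exp(-r)              # assumed s(r)
            rho = np.exp(-5.0 * min(ranks[i], ranks[j]) / N)  # Eq. (8)
            delta += rho * s * d_ij + rng.uniform(-1, 1)      # Eq. (7)
        p_i = L[i] + delta                                    # new position
        if fitness_fn(p_i) < fit[i]:                          # Eq. (11)
            F_pos[i] = p_i
    return F_pos
```

Because of the acceptance rule of (11), no individual can end up with a worse fitness than it started with.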

3.2. Social Operation (B). The social procedure represents the exploitation phase of the LS algorithm. Exploitation is


the process of refining existent individuals within a small neighborhood in order to improve their solution quality.

The social procedure is a selective operation which is applied only to a subset $\mathbf{E}$ of the final positions $\mathbf{F}$ (where $\mathbf{E} \subseteq \mathbf{F}$). The operation starts by sorting $\mathbf{F}$ with respect to fitness values, storing the elements in a temporary population $\mathbf{B} = \{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_N\}$. The elements in $\mathbf{B}$ are sorted so that the best individual receives the position $\mathbf{b}_1$, whereas the worst individual obtains the location $\mathbf{b}_N$. Therefore, the subset $\mathbf{E}$ is integrated by only the first $g$ locations of $\mathbf{B}$ (promising solutions). Under this operation, a subspace $C_j$ is created around each selected particle $\mathbf{f}_j \in \mathbf{E}$. The size of $C_j$ depends on the distance $e_d$, which is defined as follows:

$$e_d = \frac{\sum_{q=1}^{n} \left(ub_q - lb_q\right)}{n} \cdot \beta, \quad (12)$$

where $ub_q$ and $lb_q$ are the upper and lower bounds in the $q$th dimension, $n$ is the number of dimensions of the optimization problem, and $\beta \in [0, 1]$ is a tuning factor. Therefore, the limits of $C_j$ can be modeled as follows:

$$uss_j^q = b_{j,q} + e_d, \qquad lss_j^q = b_{j,q} - e_d, \quad (13)$$

where $uss_j^q$ and $lss_j^q$ are the upper and lower bounds of the $q$th dimension for the subspace $C_j$, respectively.

Considering the subspace $C_j$ around each element $\mathbf{f}_j \in \mathbf{E}$, a set of $h$ new particles ($\mathbf{M}_j^h = \{\mathbf{m}_j^1, \mathbf{m}_j^2, \ldots, \mathbf{m}_j^h\}$) is randomly generated inside the bounds fixed by (13). Once the $h$ samples are generated, the individual $\mathbf{l}_j^{k+1}$ of the next population $\mathbf{L}^{k+1}$ must be created. In order to calculate $\mathbf{l}_j^{k+1}$, the best particle $\mathbf{m}_j^{best}$, in terms of its fitness value from the $h$ samples (where $\mathbf{m}_j^{best} \in [\mathbf{m}_j^1, \mathbf{m}_j^2, \ldots, \mathbf{m}_j^h]$), is compared to $\mathbf{f}_j$. If $\mathbf{m}_j^{best}$ is better than $\mathbf{f}_j$ according to their fitness values, $\mathbf{l}_j^{k+1}$ is updated with $\mathbf{m}_j^{best}$; otherwise, $\mathbf{f}_j$ is selected. The elements of $\mathbf{F}$ that have not been processed by the procedure ($\mathbf{f}_w \notin \mathbf{E}$) transfer their corresponding values to $\mathbf{L}^{k+1}$ with no change.

The social operation is used to exploit only prominent solutions. According to the proposed method, inside each subspace $C_j$, $h$ random samples are selected. Since the number of selected samples at each subspace is very small (typically $h < 4$), the use of this operator substantially reduces the number of fitness function evaluations.

In order to demonstrate the social operation, a numerical example has been set by applying the proposed process to a simple function. Such a function considers the interval of $-3 \le d_1, d_2 \le 3$, whereas the function possesses one global maximum of value 8.1 at $(0, 1.6)$. Notice that $d_1$ and $d_2$ correspond to the axis coordinates (commonly $x$ and $y$). For this example, a final position population $\mathbf{F}$ of six 2-dimensional members ($N = 6$) is assumed. Figure 5 shows the initial configuration of the proposed example, with the black points representing half of the particles with the best fitness values (the first three elements of $\mathbf{B}$, $g = 3$), whereas the grey points ($\mathbf{f}_2, \mathbf{f}_4, \mathbf{f}_6 \notin \mathbf{E}$) correspond to the remaining individuals. From Figure 5, it

Figure 5: Operation of the social procedure.

can be seen that the social procedure is applied to all black particles ($\mathbf{f}_5 = \mathbf{b}_1$, $\mathbf{f}_3 = \mathbf{b}_2$, and $\mathbf{f}_1 = \mathbf{b}_3$; $\mathbf{f}_5, \mathbf{f}_3, \mathbf{f}_1 \in \mathbf{E}$), yielding two new random particles ($h = 2$), which are characterized by the white points $\mathbf{m}_1^1$, $\mathbf{m}_1^2$, $\mathbf{m}_3^1$, $\mathbf{m}_3^2$, $\mathbf{m}_5^1$, and $\mathbf{m}_5^2$ for each black point inside of their corresponding subspaces $C_1$, $C_3$, and $C_5$. Considering the particle $\mathbf{f}_3$ in Figure 5, the particle $\mathbf{m}_3^2$ corresponds to the best particle ($\mathbf{m}_3^{best}$) of the two randomly generated particles (according to their fitness values) within $C_3$. Therefore, the particle $\mathbf{m}_3^{best}$ will substitute $\mathbf{f}_3$ in the individual $\mathbf{l}_3^{k+1}$ for the next generation, since it holds a better fitness value than $\mathbf{f}_3$ ($f(\mathbf{f}_3) < f(\mathbf{m}_3^{best})$).

The LS optimization procedure is defined over a bounded search space $\mathbf{S}$. Search points that do not belong to such an area are considered to be infeasible. However, during the evolution process, some candidate solutions could fall outside the search space. In the proposed approach, such infeasible solutions are arbitrarily placed at a random position inside the search space $\mathbf{S}$.

3.3. Complete LS Algorithm. LS is a simple algorithm with only seven adjustable parameters: the strength of attraction $F$, the attractive length $L$, the number of promising solutions $g$, the population size $N$, the tuning factor $\beta$, the number of random samples $h$, and the number of generations $gen$. The operation of LS is divided into three parts: initialization and the solitary and social operations. In the initialization ($k = 0$), the first population $\mathbf{L}^0$ ($\mathbf{l}_1^0, \mathbf{l}_2^0, \ldots, \mathbf{l}_N^0$) is produced. The values $l_{i,1}^0, l_{i,2}^0, \ldots, l_{i,n}^0$ of each individual $\mathbf{l}_i^0$ and each dimension $d$ are randomly and uniformly distributed between the prespecified lower initial parameter bound $lb_d$ and the upper initial parameter bound $ub_d$:

$$l_{i,d}^0 = lb_d + \text{rand} \cdot \left(ub_d - lb_d\right), \quad i = 1, 2, \ldots, N; \; d = 1, 2, \ldots, n. \quad (14)$$

In the evolution process, the solitary (A) and social (B) operations are iteratively applied until the number of iterations $k = gen$ has been reached. The complete LS procedure is illustrated in Algorithm 1.


(1) Input: F, L, g, N, gen, h, and β
(2) Initialize L^0 (k = 0)
(3) until (k = gen)
(4)   F ← SolitaryOperation(L^k)   (solitary operator, Section 3.1)
(5)   L^{k+1} ← SocialOperation(L^k, F)   (social operator, Section 3.2)
(6)   k = k + 1
(7) end until

Algorithm 1: Locust Search (LS) algorithm.
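Algorithm 1 maps directly onto a small driver loop. The sketch below is only a skeleton under stated assumptions: `solitary_operation` and `social_operation` are caller-supplied stand-ins for the operators of Sections 3.1 and 3.2 (they are not reproduced here), while initialization follows (14):

```python
import random

def locust_search(objective, lb, ub, solitary_operation, social_operation,
                  N=40, gen=1000):
    """Skeleton of the LS loop of Algorithm 1 with pluggable operators."""
    n = len(lb)
    # (2) Initialize L^0 per (14): l^0_{i,d} = lb_d + rand * (ub_d - lb_d)
    L = [[lb[d] + random.random() * (ub[d] - lb[d]) for d in range(n)]
         for _ in range(N)]
    for _ in range(gen):                       # (3) until k = gen
        F = solitary_operation(L, objective)   # (4) solitary operator
        L = social_operation(L, F, objective)  # (5) social operator
    return min(L, key=objective)               # best individual found
```

With a greedy stand-in for both operators the loop behaves like a simple (N + N) selection scheme; the actual LS operators replace the stubs.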

3.4. Discussion about the LS Algorithm. Evolutionary algorithms (EA) have been widely employed for solving complex optimization problems. These methods are found to be more powerful than conventional methods based on formal logic or mathematical programming [46]. In an EA, search agents have to decide whether to explore unknown search positions or to exploit already tested positions in order to improve their solution quality. Pure exploration degrades the precision of the evolutionary process but increases its capacity to find new potential solutions. On the other hand, pure exploitation allows refining existent solutions but adversely drives the process to local optimal solutions. Therefore, the ability of an EA to find a global optimal solution depends on its capacity to find a good balance between the exploitation of found-so-far elements and the exploration of the search space [47].

Most swarm algorithms and other evolutionary algorithms tend to exclusively concentrate the individuals in the current best positions. Under such circumstances, these algorithms seriously limit their exploration-exploitation capacities.

Different from most existing evolutionary algorithms, the behavior modeled in the proposed approach explicitly avoids the concentration of individuals in the current best positions. This fact allows not only emulating the cooperative behavior of the locust colony in a realistic way but also incorporating a computational mechanism to avoid critical flaws commonly present in popular evolutionary algorithms, such as premature convergence and an incorrect exploration-exploitation balance.

It is important to emphasize that the proposed approach conducts two operators (solitary and social) within a single iteration. Such operators are similar to those used by other evolutionary methods, such as ABC (employed bees, onlooker bees, and scout bees), AIS (clonal proliferation operator, affinity maturation operator, and clonal selection operator), and DE (mutation, crossover, and selection), which are all executed in a single iteration.

4. Numerical Experiments over Benchmark Functions

A comprehensive set of 13 functions, all collected from [48–50], has been used to test the performance of the LS approach as an optimization method. Tables 11 and 12 present the benchmark functions used in our experimental study. Such functions are classified into two different categories: unimodal test functions (Table 11) and multimodal test functions (Table 12). In these tables, n is the function dimension and f_opt is the minimum value of the function, with S being a subset of R^n. The optimum locations (x_opt) for the functions in Tables 11 and 12 are in [0]^n, except for f5, f12, and f13, with x_opt in [1]^n, and f8, with x_opt in [420.96]^n. A detailed description of the optimum locations is given in Tables 11 and 12.

We have applied the LS algorithm to the 13 functions, and the results have been compared to those produced by the Particle Swarm Optimization (PSO) method [27] and the Differential Evolution (DE) algorithm [28], both considered among the most popular algorithms for many optimization applications. In all comparisons, the population has been set to 40 (N = 40) individuals. The maximum iteration number for all functions has been set to 1000. Such a stop criterion has been selected to maintain compatibility with similar works reported in the literature [48, 49].

The parameter setting for each of the algorithms in the comparison is described as follows:

(1) PSO: in the algorithm, c1 = c2 = 2, while the inertia factor (ω) is decreased linearly from 0.9 to 0.2.

(2) DE: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [28, 51]. The crossover probability is CR = 0.9 and the weighting factor is F = 0.8.

(3) In LS, F and L are set to 0.6 and 0.2, respectively. Besides, g is fixed to 20 (N/2), h = 2, and β = 0.6, whereas gen and N are configured to 1000 and 40, respectively. Once such parameters have been experimentally determined, they are kept for all experiments in this section.

In the comparison, three indexes are considered: the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). The first two indexes assess the accuracy of the solution, whereas the last one measures the computational cost. The average best-so-far solution (ABS) expresses the average value of the best function evaluations that have been obtained from 30 independent executions. The standard deviation (SD) indicates the dispersion of the ABS values. Evolutionary methods are, in general, complex pieces of software with several operators and stochastic branches. Therefore, it is difficult to conduct a complexity analysis from a deterministic perspective. Under such circumstances, it is more appropriate


Table 1: Minimization results from the benchmark functions test in Table 11 with n = 30. Maximum number of iterations = 1000.

          PSO          DE           LS
f1  ABS   1.66×10^−1   6.27×10^−3   4.55×10^−4
    SD    3.79×10^−1   1.68×10^−1   6.98×10^−4
    NFE   28610        20534        16780
f2  ABS   4.83×10^−1   2.02×10^−1   5.41×10^−3
    SD    1.59×10^−1   0.66         1.45×10^−2
    NFE   28745        21112        16324
f3  ABS   2.75         5.72×10^−1   1.61×10^−3
    SD    1.01         0.15         1.32×10^−3
    NFE   38320        36894        20462
f4  ABS   1.84         0.11         1.05×10^−2
    SD    0.87         0.05         6.63×10^−3
    NFE   37028        36450        21158
f5  ABS   3.07         2.39         4.11×10^−2
    SD    0.42         0.36         2.74×10^−3
    NFE   39432        37264        21678
f6  ABS   6.36         6.51         5.88×10^−2
    SD    0.74         0.87         1.67×10^−2
    NFE   38490        36564        22238
f7  ABS   6.14         0.12         2.71×10^−2
    SD    0.73         0.02         1.18×10^−2
    NFE   37274        35486        21842

to use the number of function evaluations (NFE), just as it is used in the literature [52, 53], to evaluate and assess the computational effort (time) and the complexity among optimizers. It represents how many times an algorithm evaluates the objective (fitness) function until the best solution of a determined execution has been found. Since the experiments require 30 different executions, the NFE index corresponds to the averaged value obtained from these executions. A small NFE value indicates that less time is needed to reach the global optimum.

4.1. Unimodal Test Functions. Functions f1 to f7 are unimodal functions. The results for the unimodal functions over 30 runs are reported in Table 1, considering the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). According to this table, LS provides better results than PSO and DE for all functions in terms of ABS and SD. In particular, the test yields the largest performance difference in functions f4–f7. Such functions maintain a narrow curving valley that is hard to optimize when the search space cannot be explored properly and the direction changes cannot be kept up with [54]. For this reason, the performance differences are directly related to a better trade-off between exploration and exploitation produced by the LS operators. In practice, a main goal of an optimization algorithm is to find a solution as good as possible within a small time window. The computational cost of the optimizer is represented by its NFE values. According to Table 1, the NFE values obtained by the proposed method are smaller than its

Table 2: p values produced by Wilcoxon's test comparing LS versus PSO and LS versus DE over the "average best-so-far" values from Table 1.

     LS versus PSO   LS versus DE
f1   1.83×10^−4      1.73×10^−2
f2   3.85×10^−3      1.83×10^−4
f3   1.73×10^−4      6.23×10^−3
f4   2.57×10^−4      5.21×10^−3
f5   4.73×10^−4      1.83×10^−3
f6   6.39×10^−5      2.15×10^−3
f7   1.83×10^−4      2.21×10^−3

counterparts. Lower NFE values are more desirable since they correspond to less computational overhead and, therefore, faster results. From the results perspective, it is clear that PSO and DE need more than 1000 generations in order to produce better solutions. However, this number of generations is considered in the experiments aiming to produce a visible contrast among the approaches. If the number of generations were set to an exaggerated value, then all methods would converge to the best solution without significant trouble.

A nonparametric statistical significance test known as Wilcoxon's rank sum test for independent samples [55, 56] has been conducted at the 5% significance level over the "average best-so-far" data of Table 1. Table 2 reports the p values produced by Wilcoxon's test for the pairwise comparison of the "average best-so-far" values of two groups. Such groups are formed by LS versus PSO and LS versus DE. As the null hypothesis, it is assumed that there is no significant difference between the mean values of the two algorithms. The alternative hypothesis considers a significant difference between the "average best-so-far" values of both approaches. All p values reported in the table are less than 0.05 (5% significance level), which is strong evidence against the null hypothesis, indicating that the LS results are statistically significant and have not occurred by coincidence (i.e., due to the normal noise contained in the process).
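Such a test is straightforward to reproduce. The sketch below implements the rank-sum p value with the usual normal approximation; the sample data in the test are invented for illustration, not the paper's measurements:

```python
import math

def ranksum_p(x, y):
    """Two-sided p value of Wilcoxon's rank sum test (normal approximation)."""
    pooled = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):                  # assign average ranks over ties
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = (i + j) / 2.0 + 1.0
        i = j + 1
    n1, n2 = len(x), len(y)
    W = sum(ranks[:n1])                     # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return math.erfc(abs((W - mu) / sigma) / math.sqrt(2))
```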

4.2. Multimodal Test Functions. Multimodal functions possess many local minima, which makes optimization a difficult task to accomplish. In multimodal functions, the results reflect the algorithm's ability to escape from local optima. We have applied the algorithms to functions f8 to f13, where the number of local minima increases exponentially as the dimension of the function increases. The dimension of such functions is set to 30. The results are averaged over 30 runs, with the performance indexes reported in Table 3 as follows: the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). Likewise, p values of the Wilcoxon signed-rank test of 30 independent runs are listed in Table 4. From the results, it is clear that LS yields better solutions than the other algorithms for functions f9, f10, f11, and f12 in terms of the indexes ABS and SD. However, for functions f8 and f13, LS produces results similar to DE. The Wilcoxon rank test results presented in Table 4 confirm that


Table 3: Minimization results from the benchmark functions test in Table 12 with n = 30. Maximum number of iterations = 1000.

          PSO          DE           LS
f8  ABS   −6.7×10^3    −1.26×10^4   −1.26×10^4
    SD    6.3×10^2     3.7×10^2     1.1×10^2
    NFE   38452        35240        21846
f9  ABS   1.48         4.01×10^−1   2.49×10^−3
    SD    1.39         5.1×10^−2    4.8×10^−4
    NFE   37672        34576        20784
f10 ABS   1.47         4.66×10^−2   2.15×10^−3
    SD    1.44         1.27×10^−2   3.18×10^−4
    NFE   39475        37080        21235
f11 ABS   12.01        1.15         1.47×10^−4
    SD    3.12         0.06         1.48×10^−5
    NFE   38542        34875        22126
f12 ABS   6.87×10^−1   3.74×10^−1   5.58×10^−3
    SD    7.07×10^−1   1.55×10^−1   4.18×10^−4
    NFE   35248        30540        16984
f13 ABS   1.87×10^−1   1.81×10^−2   1.78×10^−2
    SD    5.74×10^−1   1.66×10^−2   1.64×10^−3
    NFE   36022        31968        18802

Table 4: p values produced by Wilcoxon's test comparing LS versus PSO and LS versus DE over the "average best-so-far" values from Table 3.

     LS versus PSO   LS versus DE
f8   1.83×10^−4      0.061
f9   1.17×10^−4      2.41×10^−4
f10  1.43×10^−4      3.12×10^−3
f11  6.25×10^−4      1.14×10^−3
f12  2.34×10^−5      7.15×10^−4
f13  4.73×10^−4      0.071

LS performed better than PSO and DE considering four problems, f9–f12, whereas, from a statistical viewpoint, there is no difference between the results from LS and DE for f8 and f13. According to Table 3, the NFE values obtained by the proposed method are smaller than those produced by the other optimizers. The reason for this remarkable performance is associated with its two operators: (i) the solitary operator allows a better particle distribution in the search space, increasing the algorithm's ability to find the global optima; and (ii) the use of the social operation provides a simple exploitation operator that intensifies the capacity of finding better solutions during the evolution process.

5. Gaussian Mixture Modelling

In this section, the modeling of image histograms through Gaussian mixture models is presented. Let one consider an image holding L gray levels [0, ..., L − 1] whose distribution is defined by a histogram h(g), represented by the following formulation:

h(g) = n_g / Np,  h(g) > 0,

Np = Σ_{g=0}^{L−1} n_g,  Σ_{g=0}^{L−1} h(g) = 1,  (15)

where n_g denotes the number of pixels with gray level g and Np the total number of pixels in the image. Under such circumstances, h(g) can be modeled by using a mixture p(x) of Gaussian functions of the form:

p(x) = Σ_{i=1}^{K} [P_i / (√(2π) σ_i)] exp[−(x − μ_i)² / (2σ_i²)],  (16)

where K symbolizes the number of Gaussian functions of the model, whereas P_i is the a priori probability of function i, and μ_i and σ_i represent the mean and standard deviation of the i-th Gaussian function, respectively. Furthermore, the constraint Σ_{i=1}^{K} P_i = 1 must be satisfied by the model. In order to evaluate the similarity between the image histogram and a candidate mixture model, the mean square error can be used as follows:

J = (1/n) Σ_{j=1}^{L} [p(x_j) − h(x_j)]² + ω · |(Σ_{i=1}^{K} P_i) − 1|,  (17)

where ω represents the penalty associated with the constraint Σ_{i=1}^{K} P_i = 1. Therefore, J is considered as the objective function which must be minimized in the estimation problem. In order to illustrate the histogram modeling through a Gaussian mixture, Figure 6 presents an example assuming three classes, that is, K = 3. Considering Figure 6(a) as the image histogram h(x), the Gaussian mixture p(x) shown in Figure 6(c) is produced by adding the Gaussian functions p1(x), p2(x), and p3(x) in the configuration presented in Figure 6(b). Once the model parameters that better model the image histogram have been determined, the final step is to define the threshold values T_i (i ∈ [1, ..., K]), which can be calculated by simple standard methods, just as presented in [19–21].
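Equations (15)–(17) translate almost line by line into code. The following sketch is ours (the function names are not the authors'; the 1/n factor of (17) is taken over the L evaluated gray levels) and evaluates J for a candidate mixture encoded as a flat parameter list:

```python
import math

def mixture(x, params):
    """p(x) of (16); params = (P1, mu1, sigma1, ..., PK, muK, sigmaK)."""
    total = 0.0
    for i in range(0, len(params), 3):
        P, mu, s = params[i:i + 3]
        total += P / (math.sqrt(2 * math.pi) * s) \
                 * math.exp(-(x - mu) ** 2 / (2 * s ** 2))
    return total

def objective_J(hist, params, omega=1.0):
    """Mean square error of (17) plus the penalty on sum(P_i) = 1."""
    L = len(hist)
    mse = sum((mixture(g, params) - hist[g]) ** 2 for g in range(L)) / L
    prob_sum = sum(params[i] for i in range(0, len(params), 3))
    return mse + omega * abs(prob_sum - 1.0)
```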

6. Segmentation Algorithm Based on LS

In the proposed method, the segmentation process is approached as an optimization problem. Computational optimization generally involves two distinct elements: (1) a search strategy to produce candidate solutions (individuals, particles, insects, locusts, etc.) and (2) an objective function that evaluates the quality of each candidate solution. Several computational algorithms are available to perform the first element. The second element, in which the objective function is designed, is unquestionably the most critical. The objective function is expected to embody the full


Figure 6: Histogram approximation through a Gaussian mixture. (a) Original histogram; (b) configuration of the Gaussian components p1(x), p2(x), and p3(x); (c) final Gaussian mixture p(x).

complexity of the performance, biases, and restrictions of the problem to be solved. In the segmentation problem, candidate solutions represent Gaussian mixtures. The objective function J (17) is used as a fitness value to evaluate the similarity between the Gaussian mixture and the image histogram. Therefore, guided by the fitness values (J values), a set of encoded candidate solutions is evolved using the evolutionary operators until the best possible resemblance can be found.

Over the last decade, several algorithms based on evolutionary and swarm principles [19–22] have been proposed to solve the problem of segmentation through a Gaussian mixture model. Although these algorithms own certain good performance indexes, they present two important limitations: (1) they frequently obtain suboptimal approximations as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) they are based on the assumption that the number of Gaussians (classes) in the mixture is preknown and fixed; otherwise, they cannot work.

In order to eliminate such flaws, the proposed approach includes (A) a new search strategy and (B) the definition of a new objective function. For the search strategy, the LS method (Section 3) is adopted. Under LS, the concentration of individuals in the current best positions is explicitly avoided. This fact allows reducing critical problems, such as premature convergence to suboptimal solutions and an incorrect exploration-exploitation balance.

6.1. New Objective Function Jnew. Previous segmentation algorithms based on evolutionary and swarm methods use (17) as the objective function. Under these circumstances, each candidate solution (Gaussian mixture) is evaluated only in terms of its approximation to the image histogram.

Since the proposed approach aims to automatically select the number of Gaussian functions K in the final mixture p(x), the objective function must be modified. The new objective function Jnew is defined as follows:

Jnew = J + λ · Q,  (18)

where λ is a scaling constant. The new objective function is divided into two parts. The first part, J, evaluates the quality of each candidate solution in terms of its similarity with regard to the image histogram (17). The second part, Q, penalizes the overlapped area among Gaussian functions (classes), with Q being defined as follows:

Q = Σ_{i=1}^{K} Σ_{j=1, j≠i}^{K} Σ_{l=1}^{L} min(P_i · p_i(l), P_j · p_j(l)),  (19)

where K and L represent the number of classes and the number of gray levels, respectively. The terms p_i(l) and p_j(l) symbolize the Gaussian functions i and j, respectively, evaluated at the point l (gray level), whereas the elements P_i and P_j represent the a priori probabilities of the Gaussian functions i and j, respectively. Under such circumstances, mixtures with Gaussian functions that do not "positively" participate in the histogram approximation are severely penalized.

Figure 7 illustrates the effect of the new objective function Jnew in the evaluation of Gaussian mixtures (candidate solutions). From the image histogram (Figure 7(a)), it is evident that two Gaussian functions are enough to accurately


Figure 7: Effect of the new objective function Jnew in the evaluation of Gaussian mixtures (candidate solutions). (a) Original histogram; (b) Gaussian mixture considering four classes; (c) penalization areas Q12, Q23, and Q34; (d) Gaussian mixture of better quality.

approximate the original histogram. However, if the Gaussian mixture is modeled by using a greater number of functions (e.g., four, as shown in Figure 7(b)), the original objective function J is unable to obtain a reasonable approximation. Under the new objective function Jnew, the overlapped area among Gaussian functions (classes) is penalized. Such areas, in Figure 7(c), correspond to Q12, Q23, and Q34, where Q12 represents the penalization value produced between the Gaussian functions p1(x) and p2(x). Therefore, due to the penalization, the Gaussian mixture shown in Figures 7(b) and 7(c) provides a solution of low quality. On the other hand, the Gaussian mixture presented in Figure 7(d) maintains a low penalty; thus, it represents a solution of high quality. From Figure 7(d), it is easy to see that functions p1(x) and p4(x) can be removed from the final mixture. This elimination could be performed by a simple comparison with a threshold value θ, since p1(x) and p4(x) have a reduced amplitude (p1(x) ≈ p4(x) ≈ 0). Therefore, under Jnew, it is possible to find the reduced Gaussian mixture model starting from a considerable number of functions.

Since the proposed segmentation method is conceived as an optimization problem, the overall operation can be reduced to solving the formulation of (20) by using the LS algorithm:

minimize   Jnew(x) = J(x) + λ · Q(x),  x = (P1, μ1, σ1, ..., PK, μK, σK) ∈ R^(3·K),

subject to  0 ≤ P_d ≤ 1,  d ∈ (1, ..., K),
            0 ≤ μ_d ≤ 255,
            0 ≤ σ_d ≤ 25,  (20)

where P_d, μ_d, and σ_d represent the probability, mean, and standard deviation of the class d. It is important to remark that the new objective function Jnew allows the evaluation of a candidate solution in terms of the necessary number of Gaussian functions and its approximation quality. Under such circumstances, it can be used in combination with any other evolutionary method and not only with the proposed LS algorithm.

6.2. Complete Segmentation Algorithm. Once the new search strategy (LS) and objective function (Jnew) have been defined, the proposed segmentation algorithm can be summarized by Algorithm 2. The new algorithm combines operators defined by LS and operations for calculating the threshold values.

(Line 1) The algorithm sets the operative parameters F, L, g, gen, N, K, λ, and θ. They rule the behavior of the segmentation algorithm. (Line 2) Afterwards, the population L^0 is initialized considering N different random Gaussian mixtures of K functions. The idea is to generate N random Gaussian mixtures subject to the constraints formulated in (20). The parameter K must hold a high value in order to correctly segment all images (recall that the algorithm is able to reduce the Gaussian mixture to its minimum expression). (Line 3) Then, the Gaussian mixtures are evolved by using the LS operators and the new objective function Jnew; this process is repeated during gen cycles. (Line 8) After this procedure, the best Gaussian mixture l^gen_best is selected according to its objective function Jnew. (Line 9) Then, the Gaussian mixture l^gen_best is reduced by eliminating those functions whose amplitudes are lower than θ (p_i(x) ≤ θ). (Line 10) Then, the threshold values T_i are calculated from the reduced model. (Line 11) Finally, the calculated T_i values are employed to segment the image. Figure 8 shows a flowchart of the complete segmentation procedure.


(1) Input: F, L, g, gen, N, K, λ, and θ
(2) Initialize L^0 (k = 0)
(3) until (k = gen)
(4)   F ← SolitaryOperation(L^k)   (solitary operator, Section 3.1)
(5)   L^{k+1} ← SocialOperation(L^k, F)   (social operator, Section 3.2)
(6)   k = k + 1
(7) end until
(8) Obtain l^gen_best
(9) Reduce l^gen_best
(10) Calculate the threshold values T_i from the reduced model
(11) Use T_i to segment the image

Algorithm 2: Segmentation LS algorithm.
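Lines 10 and 11 can be realized in several ways; the paper defers to standard methods [19–21] for the thresholds. As one simple illustrative choice (an assumption, not necessarily the rule used by the authors), the sketch below places each T_i at the gray level between adjacent means where the weighted Gaussians intersect, and then labels pixels by interval:

```python
import math

def gauss(x, mu, s):
    return math.exp(-(x - mu) ** 2 / (2 * s ** 2)) / (math.sqrt(2 * math.pi) * s)

def thresholds(params):
    """T_i: gray level between adjacent means where weighted classes intersect."""
    classes = sorted((params[i + 1], params[i], params[i + 2])
                     for i in range(0, len(params), 3))  # sorted by mean
    T = []
    for (m1, P1, s1), (m2, P2, s2) in zip(classes, classes[1:]):
        t = min(range(int(m1), int(m2) + 1),
                key=lambda g: abs(P1 * gauss(g, m1, s1) - P2 * gauss(g, m2, s2)))
        T.append(t)
    return T

def segment(image, T):
    """Label each pixel with the index of the interval its gray level falls in."""
    return [[sum(px > t for t in T) for px in row] for row in image]
```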

The proposed segmentation algorithm permits automatically detecting the number of segmentation partitions (classes). Furthermore, due to its remarkable search capacities, the LS method maintains a better accuracy than previous algorithms based on evolutionary principles. However, the proposed method presents two disadvantages. The first is related to its implementation, which in general is more complex than that of most other segmentators based on evolutionary basics. The second refers to the segmentation procedure of the proposed approach, which does not consider any spatial pixel characteristics. As a consequence, pixels that may belong to a determined region due to their position are labeled as part of another region due to their gray level intensity. Such a fact adversely affects the segmentation performance of the method.

7. Segmentation Results

This section analyzes the performance of the proposed segmentation algorithm. The discussion is divided into three parts: the first one shows the performance of the proposed LS segmentation algorithm, while the second presents a comparison between the proposed approach and other segmentation algorithms based on evolutionary and swarm methods. The comparison mainly considers the capacities of each algorithm to accurately and robustly approximate the image histogram. Finally, the third part presents an objective evaluation of the segmentation results produced by all the algorithms employed in the comparisons.

7.1. Performance of the LS Algorithm in Image Segmentation. This section presents two experiments that analyze the performance of the proposed approach. Table 5 presents the algorithm's parameters, which have been experimentally determined and kept for all the test images through all experiments.

First Image. The first test considers the histogram shown in Figure 9(b), while Figure 9(a) presents the original image. After applying the proposed algorithm, configured according to the parameters in Table 5, a minimum model of four classes emerges after the start from Gaussian

Table 5: Parameter setup for the LS segmentation algorithm.

F    L    g    gen    N    K    λ     θ
0.6  0.2  20   1000   40   10   0.01  0.0001

Table 6: Results of the reduced Gaussian mixture for the first and the second image.

Parameter   First image   Second image
P1          0.004         0.032
μ1          18.1          12.1
σ1          8.2           2.9
P2          0.0035        0.0015
μ2          69.9          45.1
σ2          18.4          24.9
P3          0.01          0.006
μ3          94.9          130.1
σ3          8.9           17.9
P4          0.007         0.02
μ4          163.1         167.2
σ4          29.9          10.1

mixtures of 10 classes. Considering 30 independent executions, the averaged parameters of the resultant Gaussian mixture are presented in Table 6. One final Gaussian mixture (ten classes), as obtained by LS, is presented in Figure 10. Furthermore, the approximation of the reduced Gaussian mixture is also visually compared with the original histogram in Figure 10. On the other hand, Figure 11 presents the segmented image after calculating the threshold points.

Second Image. For the second experiment, the image in Figure 12 is tested. The method aims to segment the image by using a reduced Gaussian mixture model obtained by the LS approach. After executing the algorithm according to the parameters from Table 5, the resulting averaged parameters of the resultant Gaussian mixture are presented in Table 6. In order to assure consistency, the results are averaged considering 30 independent executions. Figure 13 shows the approximation quality obtained by the reduced Gaussian mixture model in (a) and the segmented image in (b).


Figure 8: Flowchart of the complete segmentation procedure.

Figure 9: (a) Original first image used in the first experiment and (b) its histogram.


Figure 10: Gaussian mixture obtained by LS for the first image (legend: histogram, classes 1–4, reduced Gaussian mixture, and eliminated classes).

Figure 11: Image segmented with the reduced Gaussian mixture.

Figure 12: (a) Original second image used in the first experiment and (b) its histogram.

7.2. Histogram Approximation Comparisons. This section discusses the comparison between the proposed segmentation algorithm and other evolutionary segmentation methods that have been proposed in the literature. The set of methods used in the experiments includes J + ABC [19], J + AIS [20], and J + DE [21]. These algorithms consider the combination of the original objective function J (17) with an evolutionary technique, namely the Artificial Bee Colony (ABC), the Artificial Immune Systems (AIS), and the Differential Evolution (DE) [21], respectively.

Since the proposed segmentation approach considers the combination of the new objective function Jnew (18) and the LS algorithm, it will be referred to as Jnew + LS.

The comparison focuses mainly on the capacities of each algorithm to accurately and robustly approximate the image histogram.

In the experiments, the populations have been set to 40 (N = 40) individuals. The maximum iteration number for all functions has been set to 1000. Such a stop criterion has been considered to maintain compatibility with similar experiments


Figure 13: (a) Segmentation result obtained by LS for the second image (legend: histogram, classes 1–4, reduced Gaussian mixture, and eliminated classes) and (b) the segmented image.

Figure 14: (a) Synthetic image (sections (1)–(4)) used in the comparison and (b) its histogram.

that are reported in the literature [18]. The parameter setting for each of the segmentation algorithms in the comparison is described as follows:

(1) J + ABC [19]: its parameters are configured as follows: the abandonment limit = 100, α = 0.05, and limit = 30.

(2) J + AIS [20]: it presents the following parameters: h = 120, Nc = 80, ρ = 10, Pr = 20, L = 22, and Te = 0.01.

(3) J + DE [21]: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [21]. The crossover probability is CR = 0.9 and the weighting factor is F = 0.8.

(4) In Jnew + LS, the method is set according to the values described in Table 5.

In order to conduct the experiments, a synthetic image is designed to be used as a reference in the comparisons. The main idea is to know in advance the exact number of classes (and their parameters) contained in the image, so that the histogram can be considered as a ground truth. The

Table 7: Employed parameters for the design of the reference image.

     P_i     μ_i    σ_i
(1)  0.05    40     8
(2)  0.04    100    10
(3)  0.05    160    8
(4)  0.027   220    15

synthetic image is divided into four sections. Each section corresponds to a different class, which is produced by setting each gray-level pixel Pv_i to a value determined by the following model:

Pv_i = e^{-((x - μ_i)^2 / 2σ_i^2)},  (21)

where i represents the section, whereas μ_i and σ_i are the mean and the dispersion of the gray-level pixels, respectively. The comparative study employs the 512 × 512 image shown in Figure 14(a) and the algorithm's parameters presented in Table 7. Figure 14(b) illustrates the resultant histogram.
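Under the assumption that the gray levels of each section follow the class Gaussian of (21), the reference image and its ground-truth histogram can be sketched as follows (the decimal placement of the Table 7 parameters is assumed from the extracted text):

```python
import numpy as np

# Class parameters from Table 7 (decimal placement assumed): mu_i and
# sigma_i define one Gaussian mode per section, as in eq. (21).
params = [(0.05, 40, 8), (0.04, 100, 10), (0.05, 160, 8), (0.027, 220, 15)]

rng = np.random.default_rng(0)
side = 512                       # the study uses a 512 x 512 image
section_px = side * side // 4    # four equally sized sections

# Sample each section's gray levels from N(mu_i, sigma_i^2); the P_i
# amplitudes of eq. (21) are reflected in the histogram mode heights.
sections = [
    np.clip(rng.normal(mu, sigma, section_px), 0, 255).astype(np.uint8)
    for _, mu, sigma in params
]
image = np.concatenate(sections).reshape(side, side)

# Normalized histogram: the ground-truth mixture shows four modes.
hist = np.bincount(image.ravel(), minlength=256) / image.size
print(image.shape, round(hist.sum(), 6))
```

Because the class parameters are known exactly, the resulting histogram serves as a ground truth against which the estimated mixtures can be scored.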



Figure 15: Convergence results. (a) Convergence of the methods J + ABC, J + AIS, and J + DE considering Gaussian mixtures of 8 classes; (b) convergence of the proposed method (reduced Gaussian mixture).


Figure 16: Segmentation results obtained by (a) several methods, including J + ABC, J + AIS, and J + DE, considering Gaussian mixtures of 8 classes and (b) the proposed method (reduced Gaussian mixture).

In the comparison, the discussion focuses on the following issues: first, accuracy; second, convergence; and third, computational cost.

Convergence. This section analyzes the approximation convergence when the number of classes used by the algorithm during the evolution process is different from the actual number of classes in the image. Recall that the proposed approach automatically finds the reduced Gaussian mixture which better adapts to the image histogram.

In the experiment, the methods J + ABC, J + AIS, and J + DE are executed considering Gaussian mixtures composed of 8 functions. Under such circumstances, the number of classes to be detected is higher than the actual number of classes in the image. On the other hand, the proposed algorithm maintains the same configuration of Table 5. Therefore, it can detect and calculate up to ten classes (K = 10).

As a result, the techniques J + ABC, J + AIS, and J + DE tend to overestimate the image histogram. This effect can be seen in Figure 15(a), where the resulting Gaussian functions are concentrated within the actual classes. Such a behavior is a consequence of the evaluation considered by the original objective function J, which privileges only the approximation between the Gaussian mixture and the image histogram. This effect is graphically illustrated by Figure 16(a), which shows the pixel misclassification produced by the wrong segmentation of Figure 14(a). On the other hand, the proposed approach obtains a reduced Gaussian mixture model which allows the detection of each class from



Figure 17: Approximation results in terms of accuracy: (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

the actual histogram (see Figure 15(b)). As a consequence, the segmentation is significantly improved by eliminating the pixel misclassification, as is shown in Figure 16(b).

It is evident from Figure 15 that the techniques J + ABC, J + AIS, and J + DE all need a priori knowledge of the number of classes contained in the actual histogram in order to obtain a satisfactory result. On the other hand, the proposed algorithm is able to find a reduced Gaussian mixture whose classes coincide with the actual number of classes contained in the image histogram.

Accuracy. In this section, the comparison among the algorithms in terms of accuracy is reported. Most of the reported comparisons [19-26] are concerned with comparing the parameters of the resultant Gaussian mixtures by using real images. Under such circumstances, it is difficult to consider a clear reference in order to define a meaningful error. Therefore, the image defined in Figure 14 has been used in the experiments, because its construction parameters are clearly defined in Table 7.

Since the parameter values from Table 7 act as ground truth, a simple averaged difference between them and the values computed by each algorithm could be used as a comparison error. However, as each parameter maintains different active intervals, it is necessary to express the differences in terms of percentage. Therefore, if Δβ is the parametric difference and β the ground truth parameter, the percentage error Δβ% can be defined as follows:

Δβ% = (Δβ / β) · 100.  (22)

In the segmentation problem, each Gaussian mixture represents a K-dimensional model, where each dimension corresponds to a Gaussian function of the optimization problem to be solved. Since each Gaussian function possesses three parameters P_i, μ_i, and σ_i, the complete number of parameters is 3 · K dimensions. Therefore, the final error E produced by the final Gaussian mixture is

E = (1 / (3 · K)) Σ_{v=1}^{3·K} Δβ_v%,  (23)

where β_v ∈ {P_i, μ_i, σ_i}.

In order to compare accuracy, the algorithms J + ABC, J + AIS, J + DE, and the proposed approach are all executed over the image shown in Figure 14(a). The experiment aims



Figure 18: Segmentation results in terms of accuracy: (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

to estimate the Gaussian mixture that better approximates the actual image histogram. The methods J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, the algorithm finds a reduced Gaussian mixture of four functions despite being initially set with ten functions (K = 10). Table 8 presents the final Gaussian mixture parameters and the final error E. The final Gaussian mixture parameters have been averaged over 30 independent executions in order to assure consistency. A close inspection of Table 8 reveals that the proposed method is able to achieve the smallest error (E) in comparison to the other algorithms. Figure 17 presents the histogram approximations produced by each algorithm, whereas Figure 18 shows their corresponding segmented images. Both illustrations present the median case obtained throughout 30 runs. Figure 18 exhibits that J + ABC, J + AIS, and J + DE present different levels of misclassification, which is nearly absent in the case of the proposed approach.
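As a concrete sketch of eqs. (22)-(23), the error E can be computed directly against the Table 7 ground truth; the estimated parameters below are hypothetical values chosen only to illustrate the calculation (they are not the reported Table 8 results):

```python
import numpy as np

# Ground truth from Table 7: rows are (P_i, mu_i, sigma_i).
truth = np.array([[0.05, 40, 8], [0.04, 100, 10],
                  [0.05, 160, 8], [0.027, 220, 15]])

# Hypothetical mixture estimates, for illustration only.
estimated = np.array([[0.049, 40.1, 7.5], [0.041, 102.0, 10.4],
                      [0.052, 168.7, 8.3], [0.025, 219.0, 15.2]])

def mixture_error(est, gt):
    """Eqs. (22)-(23): average percentage error over all 3*K parameters."""
    pct = np.abs(est - gt) / gt * 100.0   # eq. (22), per parameter
    return pct.mean()                      # eq. (23), mean of the 3*K values

print(round(mixture_error(estimated, truth), 2))  # 3.28
```

Expressing each difference as a percentage makes the three parameter families (weights, means, dispersions) commensurable despite their very different active intervals.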

Computational Cost. The experiment aims to measure the complexity and the computing time spent by the J + ABC,

Table 8: Results of the reduced Gaussian mixture in terms of accuracy.

Algorithm    Function   P_i     μ_i      σ_i     E
J + ABC      (1)        0.052   44.50    6.40    11.79
             (2)        0.084   98.12    12.87
             (3)        0.058   163.50   8.94
             (4)        0.025   218.84   17.50
J + AIS      (1)        0.072   31.01    6.14    22.01
             (2)        0.054   88.52    12.21
             (3)        0.039   149.21   9.14
             (4)        0.034   248.41   13.84
J + DE       (1)        0.041   35.74    7.86    13.57
             (2)        0.036   90.57    11.97
             (3)        0.059   148.47   9.01
             (4)        0.020   201.34   13.02
J_new + LS   (1)        0.049   40.12    7.50    3.98
             (2)        0.041   102.04   10.40
             (3)        0.052   168.66   8.30
             (4)        0.025   110.92   15.20



Figure 19: Images employed in the computational cost analysis.

the J + AIS, the J + DE, and the J_new + LS algorithms while calculating the parameters of the Gaussian mixture in benchmark images (see Figures 19(a)-19(d)). J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, the algorithm finds a reduced Gaussian mixture of four functions despite being initialized with ten functions (K = 10). Table 9 shows the averaged measurements after 30 experiments. It is evident that J + ABC and J + DE are the slowest to converge (iterations) and that J + AIS shows the highest computational cost (time elapsed), because it requires operators which demand long times. On the other hand, J_new + LS shows an acceptable trade-off between its convergence time and its computational cost. Therefore, although the implementation of J_new + LS in general requires more code than most other evolution-based segmentators, such a fact is not reflected in the execution time. Finally, Figure 19 below shows the segmented images as they are



Figure 20: Experimental set used in the evaluation of the segmentation results.

Table 9: Iterations and time requirements of the J + ABC, the J + AIS, the J + DE, and the J_new + LS algorithms as they are applied to segment the benchmark images (see Figure 19). Each entry reports the iterations to converge, with the elapsed time in parentheses.

Algorithm    (a)            (b)            (c)            (d)
J + ABC      855 (2.72 s)   833 (2.70 s)   870 (2.73 s)   997 (3.1 s)
J + AIS      725 (1.78 s)   704 (1.61 s)   754 (1.41 s)   812 (2.01 s)
J + DE       657 (1.25 s)   627 (1.12 s)   694 (1.45 s)   742 (1.88 s)
J_new + LS   314 (0.98 s)   298 (0.84 s)   307 (0.72 s)   402 (1.02 s)

Table 10: Evaluation of the segmentation results in terms of the ROS index.

                    Image (a)   Image (b)   Image (c)   Image (d)
Number of classes   N_R = 4     N_R = 3     N_R = 4     N_R = 4
J + ABC             0.534       0.328       0.411       0.457
J + AIS             0.522       0.321       0.427       0.437
J + DE              0.512       0.312       0.408       0.418
J_new + LS          0.674       0.401       0.514       0.527

generated by each algorithm. It can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by an incorrect pixel classification.

7.3. Performance Evaluation of the Segmentation Results. This section presents an objective evaluation of the segmentation results produced by all algorithms in the comparisons. The ill-defined nature of the segmentation problem makes the evaluation of a candidate algorithm difficult [57]. Traditionally, the evaluation has been conducted by using some supervised criteria [58], which are based on the computation of a dissimilarity measure between a segmentation result and a ground truth image. Recently, the use of unsupervised measures has substituted supervised indexes for the objective evaluation of segmentation results [59]. They

Table 11: Unimodal test functions.

f_1(x) = Σ_{i=1}^{n} x_i^2;  S = [-100, 100]^n;  f_opt = 0
f_2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|;  S = [-10, 10]^n;  f_opt = 0
f_3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)^2;  S = [-100, 100]^n;  f_opt = 0
f_4(x) = max_i {|x_i|, 1 ≤ i ≤ n};  S = [-100, 100]^n;  f_opt = 0
f_5(x) = Σ_{i=1}^{n-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2];  S = [-30, 30]^n;  f_opt = 0
f_6(x) = Σ_{i=1}^{n} (x_i + 0.5)^2;  S = [-100, 100]^n;  f_opt = 0
f_7(x) = Σ_{i=1}^{n} i x_i^4 + rand(0, 1);  S = [-1.28, 1.28]^n;  f_opt = 0
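As an illustration, a few of the Table 11 benchmarks translate directly into code; a sketch of vectorized versions of f_1 (sphere), f_3 (rotated hyper-ellipsoid), and f_5 (Rosenbrock) follows:

```python
import numpy as np

def f1(x):
    """Sphere function: sum of squares (Table 11)."""
    return np.sum(x**2)

def f3(x):
    """Rotated hyper-ellipsoid: squares of the partial sums (Table 11)."""
    return np.sum(np.cumsum(x)**2)

def f5(x):
    """Rosenbrock function (Table 11)."""
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2)

# Each attains its optimum f_opt = 0 at [0]^n, except f5 at [1]^n.
n = 30
print(f1(np.zeros(n)), f3(np.zeros(n)), f5(np.ones(n)))  # 0.0 0.0 0.0
```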

enable the quantification of the quality of a segmentation result without a priori knowledge (ground truth image).

Evaluation Criteria. In this paper, the unsupervised index ROS proposed by Chabrier et al. [60] has been used to objectively evaluate the performance of each candidate algorithm. This index evaluates the segmentation quality in terms of the homogeneity within segmented regions and the heterogeneity among the different regions. ROS can be computed as follows:

ROS = (D̄ - D) / 2,  (24)

where D quantifies the homogeneity within segmented regions, while D̄ measures the disparity among the regions. A segmentation result S_1 is considered better than S_2 if ROS_{S_1} > ROS_{S_2}. The intraregion homogeneity, characterized by D, is calculated considering the following formulation:

D = (1 / N_R) Σ_{c=1}^{N_R} (R_c / I) · (σ_c / Σ_{l=1}^{N_R} σ_l),  (25)

where N_R represents the number of partitions in which the image has been segmented, R_c symbolizes the number of



Figure 21: Segmentation results used in the evaluation.

pixels contained in the partition c, whereas I considers the number of pixels that integrate the complete image. Similarly, σ_c represents the standard deviation of the partition c. On the other hand, the disparity among the regions, D̄, is computed as follows:

D̄ = (1 / N_R) Σ_{c=1}^{N_R} (R_c / I) · [(1 / (N_R - 1)) Σ_{l=1}^{N_R} |μ_c - μ_l| / 255],  (26)

where μ_c is the average gray level in the partition c.
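Under the reading that ROS compares the between-region disparity with the size-weighted within-region dispersion term, a sketch of the computation for a gray-level image and a label map with values 0..N_R - 1 is (all names hypothetical; regions must not be perfectly uniform, or the dispersion normalizer vanishes):

```python
import numpy as np

def ros_index(img, labels):
    """Sketch of eqs. (24)-(26) for label values 0..N_R-1."""
    ids = np.unique(labels)
    n_r = ids.size
    I = img.size
    sizes  = np.array([np.sum(labels == c) for c in ids])     # R_c
    sigmas = np.array([img[labels == c].std() for c in ids])  # sigma_c
    means  = np.array([img[labels == c].mean() for c in ids]) # mu_c

    # Within-region term, eq. (25): size-weighted share of total dispersion.
    d_within = np.sum((sizes / I) * (sigmas / sigmas.sum())) / n_r

    # Between-region disparity, eq. (26): normalized mean gray-level gaps.
    gaps = np.array([np.sum(np.abs(means[c] - means)) / (n_r - 1) / 255.0
                     for c in range(n_r)])
    d_between = np.sum((sizes / I) * gaps) / n_r

    # Eq. (24): larger is better (homogeneous, well-separated regions).
    return (d_between - d_within) / 2.0

# Toy example: two nearly uniform, well-separated regions of equal size.
img = np.concatenate([20.0 + np.linspace(-1, 1, 100),
                      200.0 + np.linspace(-1, 1, 100)])
labels = np.concatenate([np.zeros(100, int), np.ones(100, int)])
print(round(ros_index(img, labels), 4))  # 0.0515
```

Because the criterion needs no ground truth image, it can rank the four algorithms' outputs directly, as done in Table 10.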

Experimental Protocol. In the comparison of segmentation results, a set of four classical images has been chosen to integrate the experimental set (Figure 20). The segmentation methods used in the comparison are J + ABC [19], J + AIS [20], and J + DE [21].

From all segmentation methods used in the comparison, the proposed J_new + LS algorithm is the only one that has the capacity to automatically detect the number of segmentation partitions (classes). In order to conduct a fair comparison, all algorithms have been tested by using the same number


Table 12: Multimodal test functions.

f_8(x) = Σ_{i=1}^{n} -x_i sin(√|x_i|);  S = [-500, 500]^n;  f_opt = -418.98 · n
f_9(x) = Σ_{i=1}^{n} [x_i^2 - 10 cos(2πx_i) + 10];  S = [-5.12, 5.12]^n;  f_opt = 0
f_10(x) = -20 exp(-0.2 √((1/n) Σ_{i=1}^{n} x_i^2)) - exp((1/n) Σ_{i=1}^{n} cos(2πx_i)) + 20 + e;  S = [-32, 32]^n;  f_opt = 0
f_11(x) = (1/4000) Σ_{i=1}^{n} x_i^2 - Π_{i=1}^{n} cos(x_i / √i) + 1;  S = [-600, 600]^n;  f_opt = 0
f_12(x) = (π/n) {10 sin^2(πy_1) + Σ_{i=1}^{n-1} (y_i - 1)^2 [1 + 10 sin^2(πy_{i+1})] + (y_n - 1)^2} + Σ_{i=1}^{n} u(x_i, 10, 100, 4);  S = [-50, 50]^n;  f_opt = 0,
  where y_i = 1 + (x_i + 1)/4 and
  u(x_i, a, k, m) = { k (x_i - a)^m if x_i > a;  0 if -a ≤ x_i ≤ a;  k (-x_i - a)^m if x_i < -a }
f_13(x) = 0.1 {sin^2(3πx_1) + Σ_{i=1}^{n-1} (x_i - 1)^2 [1 + sin^2(3πx_i + 1)] + (x_n - 1)^2 [1 + sin^2(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4);  S = [-50, 50]^n;  f_opt = 0
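Similarly, two of the Table 12 multimodal benchmarks and the penalty term u can be sketched as follows (the trailing "+ e" in the Ackley function f_10 is assumed from the standard definition, since it is required for f_opt = 0 but does not survive in the extracted table):

```python
import numpy as np

def f9(x):
    """Rastrigin function (Table 12)."""
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def f10(x):
    """Ackley function (Table 12); the + e term is assumed (standard form)."""
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

def u(x, a, k, m):
    """Piecewise boundary penalty used by f12 and f13 (Table 12)."""
    return np.where(x > a, k * (x - a)**m,
                    np.where(x < -a, k * (-x - a)**m, 0.0))

x0 = np.zeros(30)
print(f9(x0), abs(f10(x0)) < 1e-12, u(np.array([6.0]), 5, 100, 4)[0])
```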

of partitions. Therefore, in the experiments, the J_new + LS segmentation algorithm is firstly applied to detect the best possible number of partitions N_R. Once the number of partitions N_R is obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.

Figure 21 presents the segmentation results obtained by each algorithm considering the experimental set from Figure 20. On the other hand, Table 10 shows the evaluation of the segmentation results in terms of the ROS index. Such values represent the averaged measurements after 30 executions. From them, it can be seen that the proposed J_new + LS method obtains the best ROS indexes. Such values indicate that the proposed algorithm maintains the best balance between the homogeneity within segmented regions and the heterogeneity among the different regions. From Figure 21, it can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts that are produced by an incorrect pixel classification.

8. Conclusions

Despite the fact that several evolutionary methods have been successfully applied to image segmentation with interesting results, most of them have exhibited two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance.

In this paper, a new swarm algorithm for automatic image segmentation, called Locust Search (LS), has been presented. The proposed method eliminates the typical flaws presented by previous evolutionary approaches by combining a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments have been divided into two parts. First, the proposed LS method has been compared to other well-known evolutionary techniques on a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm has been compared to other segmentation methods based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

Several research directions will be considered for future work, such as the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, n is the dimension of the function, f_opt is the minimum value of the function, and S is a subset of R^n. The optimum location (x_opt) for the functions in Table 11 is in [0]^n, except for f_5, whose x_opt is in [1]^n.

The optimum locations (x_opt) for the functions in Table 12 are in [0]^n, except for f_8, in [420.96]^n, and f_12-f_13, in [1]^n.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgment

The present research has been supported under Grant CONACYT CB 181053.

References

[1] H. Zhang, J. E. Fritts, and S. A. Goldman, "Image segmentation evaluation: a survey of unsupervised methods," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 260-280, 2008.

[2] T. Uemura, G. Koutaki, and K. Uchimura, "Image segmentation based on edge detection using boundary code," International Journal of Innovative Computing, vol. 7, pp. 6073-6083, 2011.

[3] L. Wang, H. Wu, and C. Pan, "Region-based image segmentation with local signed difference energy," Pattern Recognition Letters, vol. 34, no. 6, pp. 637-645, 2013.

[4] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979.

[5] R. Peng and P. K. Varshney, "On performance limits of image segmentation algorithms," Computer Vision and Image Understanding, vol. 132, pp. 24-38, 2015.

[6] M. A. Balafar, "Gaussian mixture model based segmentation methods for brain MRI images," Artificial Intelligence Review, vol. 41, no. 3, pp. 429-439, 2014.

[7] G. J. McLachlan and S. Rathnayake, "On the number of components in a Gaussian mixture model," Data Mining and Knowledge Discovery, vol. 4, no. 5, pp. 341-355, 2014.

[8] D. Oliva, V. Osuna-Enciso, E. Cuevas, G. Pajares, M. Pérez-Cisneros, and D. Zaldívar, "Improving segmentation velocity using an evolutionary method," Expert Systems with Applications, vol. 42, no. 14, pp. 5874-5886, 2015.

[9] Z.-W. Ye, M.-W. Wang, W. Liu, and S.-B. Chen, "Fuzzy entropy based optimal thresholding using bat algorithm," Applied Soft Computing, vol. 31, pp. 381-395, 2015.

[10] S. Sarkar, S. Das, and S. Sinha Chaudhuri, "A multilevel color image thresholding scheme based on minimum cross entropy and differential evolution," Pattern Recognition Letters, vol. 54, pp. 27-35, 2015.

[11] A. K. Bhandari, A. Kumar, and G. K. Singh, "Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur's, Otsu and Tsallis functions," Expert Systems with Applications, vol. 42, no. 3, pp. 1573-1601, 2015.

[12] H. Permuter, J. Francos, and I. Jermyn, "A study of Gaussian mixture models of color and texture features for image classification and segmentation," Pattern Recognition, vol. 39, no. 4, pp. 695-706, 2006.

[13] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1-38, 1977.

[14] Z. Zhang, C. Chen, J. Sun, and K. L. Chan, "EM algorithms for Gaussian mixtures with split-and-merge operation," Pattern Recognition, vol. 36, no. 9, pp. 1973-1983, 2003.

[15] H. Park, S.-I. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755-764, 2000.

[16] H. Park and T. Ozeki, "Singularity and slow convergence of the EM algorithm for Gaussian mixtures," Neural Processing Letters, vol. 29, no. 1, pp. 45-59, 2009.

[17] L. Gupta and T. Sortrakul, "A Gaussian-mixture-based image segmentation algorithm," Pattern Recognition, vol. 31, no. 3, pp. 315-325, 1998.

[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213-1219, 2013.

[19] E. Cuevas, F. Sención, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321-336, 2012.

[20] E. Cuevas, V. Osuna-Enciso, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.

[21] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265-5271, 2010.

[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357-381, 2014.

[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and M. Pérez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.

[24] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805-818, 2011.

[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701-713, 2009.

[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661-673, 2009.

[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, December 1995.

[28] R. Storn and K. Price, "Differential Evolution - a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.

[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515-4538, 2011.

[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing Journal, vol. 9, no. 3, pp. 1149-1155, 2009.

[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119-135, 2013.

[32] W. Gong, A. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364-5386, 2011.

[33] D. M. Gordon, "The organization of work in social insect colonies," Complexity, vol. 8, no. 1, pp. 43-46, 2002.

[34] S. Kizaki and M. Katori, "Stochastic lattice model for locust outbreak," Physica A: Statistical Mechanics and its Applications, vol. 266, no. 1-4, pp. 339-342, 1999.

[35] S. M. Rogers, D. A. Cullen, M. L. Anstey, et al., "Rapid behavioural gregarization in the desert locust Schistocerca gregaria entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9-26, 2014.

[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal Special Topics, vol. 157, no. 1, pp. 93-109, 2008.

[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.

[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.

[39] B. Hölldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.

[40] B. Hölldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.

[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101-112, 2013.

[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196-205, 2012.

[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723-738, 2014.

[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.

[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskål)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.

[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.

[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256-272, 2014.

[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635-672, 2005.

[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191-213, 2000.

[50] F. Herrera, M. Lozano, and A. M. Sánchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309-338, 2003.

[51] E. Cuevas, D. Zaldívar, M. Pérez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93-107, 2011.

[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58-71, 2015.

[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140-157, 2015.

[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508-3531, 2011.

[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80-83, 1945.

[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617-644, 2009.

[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929-944, 2007.

[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.

[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335-1346, 1996.

[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.


6 Mathematical Problems in Engineering


Figure 4: Examples of different distributions. (a) Initial condition; distribution after applying (b) 25, (c) 50, and (d) 100 operations. The green asterisk represents the minimum value found so far.

Finally, the total social force on each individual l_i^k is modeled as the superposition of all of the pairwise interactions exerted over it:

$$\mathbf{S}^{m}_{i} = \sum_{\substack{j=1 \\ j \neq i}}^{N} \mathbf{s}^{m}_{ij}. \quad (9)$$

Therefore, the change of position Δl_i is considered as the total social force experienced by l_i^k as the superposition of all of the pairwise interactions. Thus, Δl_i is defined as follows:

$$\Delta\mathbf{l}_{i} = \mathbf{S}^{m}_{i}. \quad (10)$$

After calculating the new positions P = (p_1, p_2, ..., p_N) of the population L^k = (l_1^k, l_2^k, ..., l_N^k), the final positions F = (f_1, f_2, ..., f_N) must be calculated. The idea is to admit only the changes that guarantee an improvement in the search strategy. If the fitness value of p_i (f(p_i)) is better than that of l_i^k (f(l_i^k)), then p_i is accepted as the final solution; otherwise, l_i^k is retained. This procedure can be summarized by the following statement (considering a minimization problem):

$$\mathbf{f}_{i} = \begin{cases} \mathbf{p}_{i}, & \text{if } f(\mathbf{p}_{i}) < f(\mathbf{l}^{k}_{i}), \\ \mathbf{l}^{k}_{i}, & \text{otherwise.} \end{cases} \quad (11)$$

In order to illustrate the performance of the solitary operator, Figure 4 presents a simple example with the solitary operator being iteratively applied. A population of 50 different members (N = 50) is assumed, which adopt a concentrated configuration as the initial condition (Figure 4(a)). As a consequence of the social forces, the set of elements tends to distribute throughout the search space. Examples of the resulting distributions are shown in Figures 4(b), 4(c), and 4(d) after applying 25, 50, and 100 solitary operations, respectively.
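As a rough sketch, the solitary step of (9)-(11) can be written as follows. The pairwise force used here is a simplified attraction-repulsion stand-in (with strength and length parameters F and L), since the exact social-force model of the earlier equations lies outside this excerpt; minimization is assumed, and all names are illustrative.

```python
import numpy as np

def solitary_operation(pop, fitness, F=0.6, Lr=1.0):
    """Sketch of the solitary operator: each individual moves under the
    superposition of pairwise social forces (9)-(10), and the move is
    kept only if it improves the fitness (11); minimization assumed."""
    N = len(pop)
    final = pop.copy()
    for i in range(N):
        S = np.zeros_like(pop[i])            # total social force S_i^m (9)
        for j in range(N):
            if j == i:
                continue
            d = pop[j] - pop[i]
            r = np.linalg.norm(d) + 1e-12
            # Simplified attraction-repulsion magnitude (stand-in for s_ij^m).
            s = F * np.exp(-r / Lr) - np.exp(-r)
            S += s * d / r
        cand = pop[i] + S                    # candidate position, Delta l_i = S_i^m (10)
        if fitness(cand) < fitness(pop[i]):  # greedy selection (11)
            final[i] = cand
    return final
```

Because of the greedy acceptance rule (11), the operator never degrades any individual's fitness.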

3.2. Social Operation (B). The social procedure represents the exploitation phase of the LS algorithm. Exploitation is the process of refining existent individuals within a small neighborhood in order to improve their solution quality.

The social procedure is a selective operation, which is applied only to a subset E of the final positions F (where E ⊆ F). The operation starts by sorting F with respect to fitness values, storing the elements in a temporary population B = {b_1, b_2, ..., b_N}. The elements in B are sorted so that the best individual receives the position b_1, whereas the worst individual obtains the location b_N. Therefore, the subset E is composed of only the first g locations of B (promising solutions). Under this operation, a subspace C_j is created around each selected particle f_j ∈ E. The size of C_j depends on the distance e_d, which is defined as follows:

$$e_{d} = \frac{\sum_{q=1}^{n} (ub_{q} - lb_{q})}{n} \cdot \beta, \quad (12)$$

where ub_q and lb_q are the upper and lower bounds in the q-th dimension and n is the number of dimensions of the optimization problem, whereas β ∈ [0, 1] is a tuning factor. Therefore, the limits of C_j can be modeled as follows:

$$uss^{q}_{j} = b_{j,q} + e_{d}, \qquad lss^{q}_{j} = b_{j,q} - e_{d}, \quad (13)$$

where uss_j^q and lss_j^q are the upper and lower bounds of the q-th dimension for the subspace C_j, respectively.

Considering the subspace C_j around each element f_j ∈ E, a set of h new particles (M_j^h = {m_j^1, m_j^2, ..., m_j^h}) is randomly generated inside the bounds fixed by (13). Once the h samples are generated, the individual l_j^{k+1} of the next population L^{k+1} must be created. In order to calculate l_j^{k+1}, the best particle m_j^best, in terms of its fitness value among the h samples (where m_j^best ∈ {m_j^1, m_j^2, ..., m_j^h}), is compared to f_j. If m_j^best is better than f_j according to their fitness values, l_j^{k+1} is updated with m_j^best; otherwise, f_j is selected. The elements of F that have not been processed by the procedure (f_w ∉ E) transfer their corresponding values to L^{k+1} with no change.

The social operation is used to exploit only prominent solutions. According to the proposed method, inside each subspace C_j, h random samples are selected. Since the number of selected samples at each subspace is very small (typically h < 4), the use of this operator substantially reduces the number of fitness function evaluations.
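Under the definitions above, the social operator of (12)-(13) can be sketched as follows; function and parameter names are illustrative, and minimization is assumed.

```python
import numpy as np

def social_operation(F_pop, fitness, lb, ub, g=3, h=2, beta=0.6, seed=0):
    """Sketch of the social operator: the g best final positions are
    refined by drawing h random samples inside a subspace C_j of
    half-width e_d (12)-(13) around each of them."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    n = len(lb)
    e_d = beta * np.sum(ub - lb) / n                 # subspace half-width (12)
    next_pop = F_pop.copy()
    order = np.argsort([fitness(f) for f in F_pop])  # b_1 = best individual
    for j in order[:g]:                              # subset E of promising solutions
        lo = np.maximum(F_pop[j] - e_d, lb)          # subspace limits (13),
        hi = np.minimum(F_pop[j] + e_d, ub)          # clipped to the search space
        samples = rng.uniform(lo, hi, size=(h, n))
        best = min(samples, key=fitness)             # m_j^best among the h samples
        if fitness(best) < fitness(F_pop[j]):        # keep only improvements
            next_pop[j] = best
    return next_pop
```

Only g · h extra fitness evaluations are spent per iteration, which is the source of the operator's low computational cost.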

In order to demonstrate the social operation, a numerical example has been set by applying the proposed process to a simple function. Such a function considers the interval −3 ≤ d1, d2 ≤ 3, whereas the function possesses one global maximum of value 8.1 at (0, 1.6). Notice that d1 and d2 correspond to the axis coordinates (commonly x and y). For this example, a final position population F of six 2-dimensional members (N = 6) is assumed. Figure 5 shows the initial configuration of the proposed example, with the black points representing half of the particles with the best fitness values (the first three elements of B, g = 3), whereas the grey points (f_2, f_4, f_6 ∉ E) correspond to the remaining individuals. From Figure 5, it


Figure 5: Operation of the social procedure.

can be seen that the social procedure is applied to all black particles (f_5 = b_1, f_3 = b_2, and f_1 = b_3; f_5, f_3, f_1 ∈ E), yielding two new random particles (h = 2) for each of them, characterized by the white points m_1^1, m_1^2, m_3^1, m_3^2, m_5^1, and m_5^2, inside their corresponding subspaces C_1, C_3, and C_5. Considering the particle f_3 in Figure 5, the particle m_3^2 corresponds to the best particle (m_3^best) of the two randomly generated particles (according to their fitness values) within C_3. Therefore, the particle m_3^best will substitute f_3 in the individual l_3^{k+1} for the next generation, since it holds a better fitness value than f_3 (f(f_3) < f(m_3^best)).

The LS optimization procedure is defined over a bounded search space S. Search points that do not belong to this area are considered infeasible. However, during the evolution process, some candidate solutions could fall outside the search space. In the proposed approach, such infeasible solutions are reassigned a random position inside the search space S.

3.3. Complete LS Algorithm. LS is a simple algorithm with only seven adjustable parameters: the strength of attraction F, the attractive length L, the number of promising solutions g, the population size N, the tuning factor β, the number of random samples h, and the number of generations gen. The operation of LS is divided into three parts: initialization and the solitary and social operations. In the initialization (k = 0), the first population L^0 = (l_1^0, l_2^0, ..., l_N^0) is produced. The values l_{i,1}^0, l_{i,2}^0, ..., l_{i,n}^0 of each individual l_i^0 in each dimension d are randomly and uniformly distributed between the prespecified lower initial parameter bound lb_d and the upper initial parameter bound ub_d:

$$l^{0}_{i,d} = lb_{d} + \text{rand} \cdot (ub_{d} - lb_{d}), \quad i = 1, 2, \ldots, N; \; d = 1, 2, \ldots, n. \quad (14)$$

In the evolution process, the solitary (A) and social (B) operations are iteratively applied until the number of iterations k = gen has been reached. The complete LS procedure is illustrated in Algorithm 1.


(1) Input: F, L, g, N, gen, h, and β
(2) Initialize L^0 (k = 0)
(3) until (k = gen)
(4)     F ← SolitaryOperation(L^k)  // Solitary operator (Section 3.1)
(5)     L^{k+1} ← SocialOperation(L^k, F)  // Social operator (Section 3.2)
(6)     k = k + 1
(7) end until

Algorithm 1: Locust Search (LS) algorithm.
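Algorithm 1 can be condensed into a self-contained sketch (minimization assumed). The pairwise social force below is a simplified attraction-repulsion stand-in rather than the paper's exact model; parameter names mirror the listing, and all function names are illustrative.

```python
import numpy as np

def locust_search(fitness, lb, ub, N=20, gen=30, F=0.6, Lr=1.0,
                  g=10, h=2, beta=0.6, seed=0):
    """Condensed sketch of Algorithm 1: initialization (14), then the
    solitary (A) and social (B) operations applied for gen iterations."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    n = len(lb)
    pop = lb + rng.random((N, n)) * (ub - lb)      # initialization (14)
    for _ in range(gen):
        # Solitary operation: pairwise forces plus greedy acceptance (11).
        final = pop.copy()
        for i in range(N):
            S = np.zeros(n)
            for j in range(N):
                if j == i:
                    continue
                d = pop[j] - pop[i]
                r = np.linalg.norm(d) + 1e-12
                S += (F * np.exp(-r / Lr) - np.exp(-r)) * d / r
            cand = np.clip(pop[i] + S, lb, ub)     # keep candidates feasible
            if fitness(cand) < fitness(pop[i]):
                final[i] = cand
        # Social operation: refine the g best with h samples each (12)-(13).
        e_d = beta * np.sum(ub - lb) / n
        order = np.argsort([fitness(f) for f in final])
        pop = final.copy()
        for j in order[:g]:
            lo = np.maximum(final[j] - e_d, lb)
            hi = np.minimum(final[j] + e_d, ub)
            samples = rng.uniform(lo, hi, size=(h, n))
            best = min(samples, key=fitness)
            if fitness(best) < fitness(final[j]):
                pop[j] = best
    i_best = min(range(N), key=lambda i: fitness(pop[i]))
    return pop[i_best], fitness(pop[i_best])
```

On a 2-dimensional sphere function over [-5, 5]^2, this sketch steadily drives the best fitness toward zero, since both operators accept only improving moves.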

3.4. Discussion about the LS Algorithm. Evolutionary algorithms (EA) have been widely employed for solving complex optimization problems. These methods are found to be more powerful than conventional methods based on formal logic or mathematical programming [46]. In an EA, search agents have to decide whether to explore unknown search positions or to exploit already tested positions in order to improve their solution quality. Pure exploration degrades the precision of the evolutionary process but increases its capacity to find new potential solutions. On the other hand, pure exploitation allows refining existent solutions but adversely drives the process to local optima. Therefore, the ability of an EA to find a global optimal solution depends on its capacity to find a good balance between the exploitation of found-so-far elements and the exploration of the search space [47].

Most swarm algorithms and other evolutionary algorithms tend to exclusively concentrate the individuals in the current best positions. Under such circumstances, these algorithms seriously limit their exploration-exploitation capacities.

Unlike most existent evolutionary algorithms, the behavior modeled in the proposed approach explicitly avoids the concentration of individuals in the current best positions. Such a fact allows not only emulating the cooperative behavior of the locust colony in a realistic way but also incorporating a computational mechanism to avoid critical flaws commonly present in popular evolutionary algorithms, such as premature convergence and an incorrect exploration-exploitation balance.

It is important to emphasize that the proposed approach conducts two operators (solitary and social) within a single iteration. Such operators are similar to those used by other evolutionary methods, such as ABC (employed bees, onlooker bees, and scout bees), AIS (clonal proliferation operator, affinity maturation operator, and clonal selection operator), and DE (mutation, crossover, and selection), which are all executed in a single iteration.

4. Numerical Experiments over Benchmark Functions

A comprehensive set of 13 functions, all collected from [48–50], has been used to test the performance of the LS approach as an optimization method. Tables 11 and 12 present the benchmark functions used in our experimental study. Such functions are classified into two different categories: unimodal test functions (Table 11) and multimodal test functions (Table 12). In these tables, n is the function dimension and f_opt is the minimum value of the function, with S being a subset of R^n. The optimum locations (x_opt) for the functions in Tables 11 and 12 are at [0]^n, except for f_5, f_12, and f_13, with x_opt at [1]^n, and f_8 at [420.96]^n. A detailed description of the optimum locations is given in Tables 11 and 12.

We have applied the LS algorithm to the 13 functions, and the results have been compared to those produced by the Particle Swarm Optimization (PSO) method [27] and the Differential Evolution (DE) algorithm [28], both considered among the most popular algorithms for many optimization applications. In all comparisons, the population has been set to 40 (N = 40) individuals. The maximum iteration number for all functions has been set to 1000. Such a stop criterion has been selected to maintain compatibility with similar works reported in the literature [48, 49].

The parameter setting for each of the algorithms in thecomparison is described as follows

(1) PSO: in the algorithm, c_1 = c_2 = 2, while the inertia factor (ω) is decreased linearly from 0.9 to 0.2.

(2) DE: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [28, 51]. The crossover probability is CR = 0.9, and the weighting factor is F = 0.8.

(3) LS: F and L are set to 0.6 and 1.0, respectively. Besides, g is fixed to 20 (N/2), h = 2, and β = 0.6, whereas gen and N are configured to 1000 and 40, respectively. Once such parameters have been experimentally determined, they are kept for all experiments in this section.

In the comparison, three indexes are considered: the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). The first two indexes assess the accuracy of the solution, whereas the last one measures the computational cost. The average best-so-far solution (ABS) expresses the average value of the best function evaluations obtained over 30 independent executions. The standard deviation (SD) indicates the dispersion of the ABS values. Evolutionary methods are, in general, complex pieces of software with several operators and stochastic branches. Therefore, it is difficult to conduct a complexity analysis from a deterministic perspective. Under such circumstances, it is more appropriate


Table 1: Minimization results for the benchmark functions of Table 11 with n = 30. Maximum number of iterations = 1000.

            PSO             DE              LS
f1   ABS    1.66 × 10^-1    6.27 × 10^-3    4.55 × 10^-4
     SD     3.79 × 10^-1    1.68 × 10^-1    6.98 × 10^-4
     NFE    28,610          20,534          16,780
f2   ABS    4.83 × 10^-1    2.02 × 10^-1    5.41 × 10^-3
     SD     1.59 × 10^-1    0.66            1.45 × 10^-2
     NFE    28,745          21,112          16,324
f3   ABS    2.75            5.72 × 10^-1    1.61 × 10^-3
     SD     1.01            0.15            1.32 × 10^-3
     NFE    38,320          36,894          20,462
f4   ABS    1.84            0.11            1.05 × 10^-2
     SD     0.87            0.05            6.63 × 10^-3
     NFE    37,028          36,450          21,158
f5   ABS    3.07            2.39            4.11 × 10^-2
     SD     0.42            0.36            2.74 × 10^-3
     NFE    39,432          37,264          21,678
f6   ABS    6.36            6.51            5.88 × 10^-2
     SD     0.74            0.87            1.67 × 10^-2
     NFE    38,490          36,564          22,238
f7   ABS    6.14            0.12            2.71 × 10^-2
     SD     0.73            0.02            1.18 × 10^-2
     NFE    37,274          35,486          21,842

to use the number of function evaluations (NFE), just as it is done in the literature [52, 53], to evaluate and assess the computational effort (time) and the complexity among optimizers. The NFE represents how many times an algorithm evaluates the objective (fitness) function until the best solution of a determined execution has been found. Since the experiments require 30 different executions, the NFE index corresponds to the value averaged over these executions. A small NFE value indicates that less time is needed to reach the global optimum.

4.1. Unimodal Test Functions. Functions f_1 to f_7 are unimodal functions. The results for unimodal functions over 30 runs are reported in Table 1, considering the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). According to this table, LS provides better results than PSO and DE for all functions in terms of ABS and SD. In particular, the test yields the largest performance difference in functions f_4–f_7. Such functions maintain a narrow curving valley that is hard to optimize if the search space cannot be explored properly and the direction changes cannot be kept up with [54]. For this reason, the performance differences are directly related to a better trade-off between exploration and exploitation produced by the LS operators. In practice, a main goal of an optimization algorithm is to find a solution as good as possible within a small time window. The computational cost of the optimizer is represented by its NFE values. According to Table 1, the NFE values obtained by the proposed method are smaller than its

Table 2: p values produced by Wilcoxon's test comparing LS versus PSO and DE over the "average best-so-far" values from Table 1.

LS versus    PSO             DE
f1           1.83 × 10^-4    1.73 × 10^-2
f2           3.85 × 10^-3    1.83 × 10^-4
f3           1.73 × 10^-4    6.23 × 10^-3
f4           2.57 × 10^-4    5.21 × 10^-3
f5           4.73 × 10^-4    1.83 × 10^-3
f6           6.39 × 10^-5    2.15 × 10^-3
f7           1.83 × 10^-4    2.21 × 10^-3

counterparts. Lower NFE values are more desirable, since they correspond to less computational overhead and therefore faster results. From the results perspective, it is clear that PSO and DE need more than 1000 generations to produce better solutions. However, this number of generations is used in the experiments to produce a visible contrast among the approaches. If the number of generations were set to an exaggerated value, then all methods would converge to the best solution without significant trouble.

A nonparametric statistical significance proof, known as Wilcoxon's rank-sum test for independent samples [55, 56], has been conducted at the 5% significance level over the "average best-so-far" data of Table 1. Table 2 reports the p values produced by Wilcoxon's test for the pairwise comparison of the "average best-so-far" values of two groups. Such groups are formed by LS versus PSO and LS versus DE. As a null hypothesis, it is assumed that there is no significant difference between the mean values of the two algorithms. The alternative hypothesis considers a significant difference between the "average best-so-far" values of both approaches. All p values reported in the table are less than 0.05 (5% significance level), which is strong evidence against the null hypothesis, indicating that the LS results are statistically significant and have not occurred by coincidence (i.e., due to the normal noise contained in the process).
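The rank-sum comparison described above can be reproduced with a short routine. The sketch below uses the standard normal approximation (reasonable for samples of size 30) and ignores tie corrections; the two samples in the usage are synthetic stand-ins, not the paper's actual run data.

```python
import numpy as np
from math import erfc, sqrt

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank-sum p value via the normal approximation
    (no tie correction; adequate for continuous data)."""
    data = np.concatenate([a, b])
    ranks = np.argsort(np.argsort(data)) + 1.0    # ranks 1..n1+n2
    n1, n2 = len(a), len(b)
    W = ranks[:n1].sum()                          # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2.0                 # mean of W under H0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)  # std of W under H0
    z = (W - mu) / sigma
    return erfc(abs(z) / sqrt(2.0))               # two-sided p value

# Synthetic stand-ins for 30 "average best-so-far" runs per method.
rng = np.random.default_rng(7)
ls_runs = rng.normal(1e-3, 2e-4, 30)
pso_runs = rng.normal(2e-1, 1e-2, 30)
print(ranksum_p(ls_runs, pso_runs) < 0.05)
```

With clearly separated samples such as these, the p value falls far below the 0.05 significance level, rejecting the null hypothesis of equal distributions.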

4.2. Multimodal Test Functions. Multimodal functions possess many local minima, which makes optimization a difficult task. For multimodal functions, the results reflect the algorithm's ability to escape from local optima. We have applied the algorithms to functions f_8 to f_13, where the number of local minima increases exponentially with the dimension of the function. The dimension of such functions is set to 30. The results are averaged over 30 runs, with the performance indexes reported in Table 3 as follows: the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). Likewise, p values of the Wilcoxon signed-rank test over 30 independent runs are listed in Table 4. From the results, it is clear that LS yields better solutions than the other algorithms for functions f_9, f_10, f_11, and f_12 in terms of the indexes ABS and SD. However, for functions f_8 and f_13, LS produces results similar to DE. The Wilcoxon rank test results presented in Table 4 confirm that


Table 3: Minimization results for the benchmark functions of Table 12 with n = 30. Maximum number of iterations = 1000.

            PSO             DE              LS
f8   ABS    −6.7 × 10^3     −1.26 × 10^4    −1.26 × 10^4
     SD     6.3 × 10^2      3.7 × 10^2      1.1 × 10^2
     NFE    38,452          35,240          21,846
f9   ABS    1.48            4.01 × 10^-1    2.49 × 10^-3
     SD     1.39            5.1 × 10^-2     4.8 × 10^-4
     NFE    37,672          34,576          20,784
f10  ABS    1.47            4.66 × 10^-2    2.15 × 10^-3
     SD     1.44            1.27 × 10^-2    3.18 × 10^-4
     NFE    39,475          37,080          21,235
f11  ABS    12.01           1.15            1.47 × 10^-4
     SD     3.12            0.06            1.48 × 10^-5
     NFE    38,542          34,875          22,126
f12  ABS    6.87 × 10^-1    3.74 × 10^-1    5.58 × 10^-3
     SD     7.07 × 10^-1    1.55 × 10^-1    4.18 × 10^-4
     NFE    35,248          30,540          16,984
f13  ABS    1.87 × 10^-1    1.81 × 10^-2    1.78 × 10^-2
     SD     5.74 × 10^-1    1.66 × 10^-2    1.64 × 10^-3
     NFE    36,022          31,968          18,802

Table 4: p values produced by Wilcoxon's test comparing LS versus PSO and DE over the "average best-so-far" values from Table 3.

LS versus    PSO             DE
f8           1.83 × 10^-4    0.061
f9           1.17 × 10^-4    2.41 × 10^-4
f10          1.43 × 10^-4    3.12 × 10^-3
f11          6.25 × 10^-4    1.14 × 10^-3
f12          2.34 × 10^-5    7.15 × 10^-4
f13          4.73 × 10^-4    0.071

LS performed better than PSO and DE on four problems (f_9–f_12), whereas from a statistical viewpoint there is no difference between the results of LS and DE for f_8 and f_13. According to Table 3, the NFE values obtained by the proposed method are smaller than those produced by the other optimizers. The reason for this remarkable performance is associated with its two operators: (i) the solitary operator allows a better particle distribution in the search space, increasing the algorithm's ability to find the global optima; and (ii) the social operation provides a simple exploitation operator that intensifies the capacity of finding better solutions during the evolution process.

5. Gaussian Mixture Modelling

In this section, the modeling of image histograms through Gaussian mixture models is presented. Let one consider an image holding L gray levels [0, ..., L − 1] whose distribution is defined by a histogram h(g), represented by the following formulation:

$$h(g) = \frac{n_g}{Np}, \quad h(g) > 0, \qquad Np = \sum_{g=0}^{L-1} n_g, \qquad \sum_{g=0}^{L-1} h(g) = 1, \quad (15)$$

where n_g denotes the number of pixels with gray level g and Np the total number of pixels in the image. Under such circumstances, h(g) can be modeled by using a mixture p(x) of Gaussian functions of the form

$$p(x) = \sum_{i=1}^{K} \frac{P_i}{\sqrt{2\pi}\,\sigma_i} \exp\left[ \frac{-(x - \mu_i)^2}{2\sigma_i^2} \right], \quad (16)$$

where K symbolizes the number of Gaussian functions of the model, whereas P_i is the a priori probability of function i; μ_i and σ_i represent the mean and standard deviation of the i-th Gaussian function, respectively. Furthermore, the constraint Σ_{i=1}^{K} P_i = 1 must be satisfied by the model. In order to evaluate the similarity between the image histogram and a candidate mixture model, the mean square error can be used as follows:

$$J = \frac{1}{n} \sum_{j=1}^{L} \left[ p(x_j) - h(x_j) \right]^2 + \omega \cdot \left| \left( \sum_{i=1}^{K} P_i \right) - 1 \right|, \quad (17)$$

where ω represents the penalty associated with the constraint Σ_{i=1}^{K} P_i = 1. Therefore, J is considered the objective function, which must be minimized in the estimation problem. In order to illustrate histogram modeling through a Gaussian mixture, Figure 6 presents an example assuming three classes, that is, K = 3. Considering Figure 6(a) as the image histogram h(x), the Gaussian mixture p(x) shown in Figure 6(c) is produced by adding the Gaussian functions p1(x), p2(x), and p3(x) in the configuration presented in Figure 6(b). Once the model parameters that best fit the image histogram have been determined, the final step is to define the threshold values T_i (i ∈ [1, K]), which can be calculated by simple standard methods, just as presented in [19–21].
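Under the definitions in (15)-(17), the objective J can be sketched directly; the histogram argument is assumed to be normalized as in (15), and function names are illustrative.

```python
import numpy as np

def gaussian_mixture(x, P, mu, sigma):
    """Mixture p(x) of (16) evaluated at the gray levels x."""
    x = np.asarray(x, float)[:, None]
    return np.sum(P / (np.sqrt(2 * np.pi) * sigma)
                  * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)), axis=1)

def J(hist, P, mu, sigma, omega=1.0):
    """Objective of (17): mean-square histogram error plus a penalty
    enforcing the constraint sum(P_i) = 1."""
    x = np.arange(len(hist), dtype=float)
    err = np.mean((gaussian_mixture(x, P, mu, sigma) - hist) ** 2)
    return err + omega * abs(np.sum(P) - 1.0)
```

A mixture that matches the histogram exactly and satisfies the constraint scores (near) zero, while a mismatched mixture scores higher.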

6. Segmentation Algorithm Based on LS

In the proposed method, the segmentation process is approached as an optimization problem. Computational optimization generally involves two distinct elements: (1) a search strategy to produce candidate solutions (individuals, particles, insects, locusts, etc.) and (2) an objective function that evaluates the quality of each candidate solution. Several computational algorithms are available to perform the first element. The second element, where the objective function is designed, is unquestionably the most critical. The objective function is expected to embody the full



Figure 6: Histogram approximation through a Gaussian mixture. (a) Original histogram, (b) configuration of the Gaussian components p1(x), p2(x), and p3(x), and (c) final Gaussian mixture p(x).

complexity of the performance, biases, and restrictions of the problem to be solved. In the segmentation problem, candidate solutions represent Gaussian mixtures. The objective function J (17) is used as a fitness value to evaluate the similarity between the Gaussian mixture and the image histogram. Therefore, guided by the fitness values (J values), a set of encoded candidate solutions is evolved using the evolutionary operators until the best possible resemblance is found.

Over the last decade, several algorithms based on evolutionary and swarm principles [19–22] have been proposed to solve the problem of segmentation through a Gaussian mixture model. Although these algorithms attain certain good performance indexes, they present two important limitations: (1) they frequently obtain suboptimal approximations as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) they are based on the assumption that the number of Gaussians (classes) in the mixture is known and fixed in advance; otherwise, they cannot work.

In order to eliminate such flaws, the proposed approach includes (A) a new search strategy and (B) the definition of a new objective function. For the search strategy, the LS method (Section 4) is adopted. Under LS, the concentration of individuals in the current best positions is explicitly avoided. Such a fact allows reducing critical problems, such as premature convergence to suboptimal solutions and an incorrect exploration-exploitation balance.

6.1. New Objective Function J_new. Previous segmentation algorithms based on evolutionary and swarm methods use (17) as the objective function. Under these circumstances, each candidate solution (Gaussian mixture) is evaluated only in terms of its approximation to the image histogram.

Since the proposed approach aims to automatically select the number of Gaussian functions K in the final mixture p(x), the objective function must be modified. The new objective function J_new is defined as follows:

$$J_{\text{new}} = J + \lambda \cdot Q, \quad (18)$$

where λ is a scaling constant. The new objective function is divided into two parts. The first part, J, evaluates the quality of each candidate solution in terms of its similarity with regard to the image histogram (17). The second part, Q, penalizes the overlapped area among Gaussian functions (classes), with Q being defined as follows:

$$Q = \sum_{i=1}^{K} \sum_{\substack{j=1 \\ j \neq i}}^{K} \sum_{l=1}^{L} \min\left( P_i \cdot p_i(l),\; P_j \cdot p_j(l) \right), \quad (19)$$

where K and L represent the number of classes and the number of gray levels, respectively. The terms p_i(l) and p_j(l) symbolize the Gaussian functions i and j, respectively, evaluated at the point l (gray level), whereas the elements P_i and P_j represent the a priori probabilities of the Gaussian functions i and j, respectively. Under such circumstances, mixtures with Gaussian functions that do not "positively" participate in the histogram approximation are severely penalized.

Figure 7 illustrates the effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). From the image histogram (Figure 7(a)), it is evident that two Gaussian functions are enough to accurately



Figure 7: Effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). (a) Original histogram, (b) Gaussian mixture considering four classes, (c) penalization areas, and (d) Gaussian mixture of a better quality solution.

approximate the original histogram. However, if the Gaussian mixture is modeled by using a greater number of functions (e.g., four, as shown in Figure 7(b)), the original objective function J is unable to obtain a reasonable approximation. Under the new objective function J_new, the overlapped area among Gaussian functions (classes) is penalized. Such areas, in Figure 7(c), correspond to Q_{1,2}, Q_{2,3}, and Q_{3,4}, where Q_{1,2} represents the penalization value produced between the Gaussian functions p1(x) and p2(x). Therefore, due to the penalization, the Gaussian mixture shown in Figures 7(b) and 7(c) provides a solution of low quality. On the other hand, the Gaussian mixture presented in Figure 7(d) maintains a low penalty; thus, it represents a solution of high quality. From Figure 7(d), it is easy to see that functions p1(x) and p4(x) can be removed from the final mixture. This elimination could be performed by a simple comparison with a threshold value θ, since p1(x) and p4(x) have a reduced amplitude (p1(x) ≈ p4(x) ≈ 0). Therefore, under J_new, it is possible to find the reduced Gaussian mixture model starting from a considerable number of functions.

Since the proposed segmentation method is conceived as an optimization problem, the overall operation can be reduced to solving the formulation of (20) by using the LS algorithm:

minimize    J_new(x) = J(x) + λ · Q(x),    x = (P_1, μ_1, σ_1, ..., P_K, μ_K, σ_K) ∈ R^{3·K},
subject to  0 ≤ P_d ≤ 1,  0 ≤ μ_d ≤ 255,  0 ≤ σ_d ≤ 25,  d ∈ (1, ..., K),   (20)

where P_d, μ_d, and σ_d represent the probability, mean, and standard deviation of the class d. It is important to remark that the new objective function J_new allows the evaluation of a candidate solution in terms of the necessary number of Gaussian functions and its approximation quality. Under such circumstances, it can be used in combination with any other evolutionary method and not only with the proposed LS algorithm.

6.2. Complete Segmentation Algorithm. Once the new search strategy (LS) and objective function (J_new) have been defined, the proposed segmentation algorithm can be summarized by Algorithm 2. The new algorithm combines operators defined by LS and operations for calculating the threshold values.

(Line 1) The algorithm sets the operative parameters F, L, g, gen, N, K, λ, and θ. They rule the behavior of the segmentation algorithm. (Line 2) Afterwards, the population L^0 is initialized considering N different random Gaussian mixtures of K functions. The idea is to generate N random Gaussian mixtures subject to the constraints formulated in (20). The parameter K must hold a high value in order to correctly segment all images (recall that the algorithm is able to reduce the Gaussian mixture to its minimum expression). (Line 3) Then, the Gaussian mixtures are evolved by using the LS operators and the new objective function J_new. This process is repeated during gen cycles. (Line 8) After this procedure, the best Gaussian mixture l^gen_best is selected according to its objective function J_new. (Line 9) Then, the Gaussian mixture l^gen_best is reduced by eliminating those functions whose amplitudes are lower than θ (p_i(x) ≤ θ). (Line 10) Then, the threshold values T_i are calculated from the reduced model. (Line 11) Finally, the calculated T_i values are employed to segment the image. Figure 8 shows a flowchart of the complete segmentation procedure.

Mathematical Problems in Engineering 13

(1) Input: F, L, g, gen, N, K, λ, and θ
(2) Initialize L⁰ (k = 0)
(3) until (k = gen)
(4)   F ← SolitaryOperation(L^k)          {Solitary operator (Section 3.1)}
(5)   L^(k+1) ← SocialOperation(L^k, F)   {Social operator (Section 3.2)}
(6)   k = k + 1
(7) end until
(8) Obtain l_best^gen
(9) Reduce l_best^gen
(10) Calculate the threshold values T_i from the reduced model
(11) Use T_i to segment the image

Algorithm 2: Segmentation LS algorithm.
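A runnable sketch of the control flow of Algorithm 2 is given below. The solitary and social operators of Sections 3.1 and 3.2 are replaced by simplified stand-ins (a random walk and an attraction toward the best individual), and the objective is an illustrative J_new; only the overall structure (initialize, evolve for gen cycles, select the best mixture, reduce it by θ) follows the listing:

```python
import numpy as np

def segment_ls(hist, N=40, K=10, gen=1000, lam=0.01, theta=1e-4, seed=0):
    """Control-flow sketch of Algorithm 2; the operators in the loop are
    simplified stand-ins, not the actual LS operators of the paper."""
    rng = np.random.default_rng(seed)
    x = np.arange(256, dtype=float)

    def mixture(ind):
        return sum(P * np.exp(-((x - mu) ** 2) / (2 * sg ** 2))
                   for P, mu, sg in ind.reshape(-1, 3))

    def clip(pop):
        """Enforce the box constraints of Eq. (20)."""
        pop = pop.reshape(len(pop), -1, 3).copy()
        pop[:, :, 0] = np.clip(pop[:, :, 0], 0.0, 1.0)     # 0 <= P_d <= 1
        pop[:, :, 1] = np.clip(pop[:, :, 1], 0.0, 255.0)   # 0 <= mu_d <= 255
        pop[:, :, 2] = np.clip(pop[:, :, 2], 1e-3, 25.0)   # 0 <  sigma_d <= 25
        return pop.reshape(len(pop), -1)

    def fitness(ind):  # illustrative J_new: fit error + component penalty
        return (np.mean((mixture(ind) - hist) ** 2)
                + lam * np.sum(ind.reshape(-1, 3)[:, 0] > theta))

    # (2) N random Gaussian mixtures of K functions each
    pop = clip(rng.uniform(0.0, 1.0, (N, 3 * K)) * np.tile([1.0, 255.0, 25.0], K))
    f = np.array([fitness(ind) for ind in pop])
    for _ in range(gen):                                    # (3)-(7)
        best = pop[np.argmin(f)]
        solitary = pop + rng.normal(0.0, 1.0, pop.shape)    # (4) stand-in
        social = pop + 0.5 * (best - pop)                   # (5) stand-in
        cand = clip(np.where(rng.random((N, 1)) < 0.5, solitary, social))
        fc = np.array([fitness(ind) for ind in cand])
        keep = fc < f                                       # greedy survival
        pop[keep], f[keep] = cand[keep], fc[keep]
    best = pop[np.argmin(f)].reshape(-1, 3)                 # (8) l_best
    return best[best[:, 0] > theta]                        # (9) reduction by theta
```

The returned array holds the (P, μ, σ) triples of the reduced mixture; steps (10)-(11) then derive the thresholds T_i from it and label the pixels.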

The proposed segmentation algorithm automatically detects the number of segmentation partitions (classes). Furthermore, due to its remarkable search capacities, the LS method maintains better accuracy than previous algorithms based on evolutionary principles. However, the proposed method presents two disadvantages. The first is related to its implementation, which in general is more complex than that of most other segmentators based on evolutionary principles. The second refers to the segmentation procedure of the proposed approach, which does not consider any spatial pixel characteristics. As a consequence, pixels that may belong to a determined region due to their position are labeled as part of another region due to their gray level intensity. Such a fact adversely affects the segmentation performance of the method.

7. Segmentation Results

This section analyzes the performance of the proposed segmentation algorithm. The discussion is divided into three parts: the first shows the performance of the proposed LS segmentation algorithm, while the second presents a comparison between the proposed approach and other segmentation algorithms based on evolutionary and swarm methods. The comparison mainly considers the capacities of each algorithm to accurately and robustly approximate the image histogram. Finally, the third part presents an objective evaluation of the segmentation results produced by all algorithms employed in the comparisons.

7.1. Performance of LS Algorithm in Image Segmentation. This section presents two experiments that analyze the performance of the proposed approach. Table 5 presents the algorithm's parameters, which have been experimentally determined and kept for all the test images through all experiments.

First Image. The first test considers the histogram shown in Figure 9(b), while Figure 9(a) presents the original image. After applying the proposed algorithm, configured according to the parameters in Table 5, a minimum model of four classes emerges after starting from Gaussian

Table 5: Parameter setup for the LS segmentation algorithm.

F     L     g    gen    N    K    λ      θ
0.6   0.2   20   1000   40   10   0.01   0.0001

Table 6: Results of the reduced Gaussian mixture for the first and the second image.

Parameter   First image   Second image
P₁          0.004         0.032
μ₁          18.1          12.1
σ₁          8.2           2.9
P₂          0.0035        0.0015
μ₂          69.9          45.1
σ₂          18.4          24.9
P₃          0.01          0.006
μ₃          94.9          130.1
σ₃          8.9           17.9
P₄          0.007         0.02
μ₄          163.1         167.2
σ₄          29.9          10.1

mixtures of 10 classes. Considering 30 independent executions, the averaged parameters of the resultant Gaussian mixture are presented in Table 6. One final Gaussian mixture (ten classes) obtained by LS is presented in Figure 10. Furthermore, the approximation of the reduced Gaussian mixture is visually compared with the original histogram in Figure 10. On the other hand, Figure 11 presents the segmented image after calculating the threshold points.

Second Image. For the second experiment, the image in Figure 12 is tested. The method aims to segment the image by using a reduced Gaussian mixture model obtained by the LS approach. After executing the algorithm according to the parameters from Table 5, the resulting averaged parameters of the Gaussian mixture are presented in Table 6. In order to assure consistency, the results are averaged over 30 independent executions. Figure 13 shows the approximation quality obtained by the reduced Gaussian mixture model in (a) and the segmented image in (b).
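In both experiments the threshold points T_i are placed between adjacent classes of the reduced mixture. Placing each T_i at the gray level where two neighboring weighted Gaussians cross is the rule assumed in this sketch; the function names are illustrative:

```python
import numpy as np

def thresholds_from_mixture(components):
    """Numerically locate the threshold T_i between each pair of adjacent
    Gaussian classes as the gray level where the two weighted curves cross.
    `components` is a list of (P, mu, sigma) triples sorted by mu; the
    intersection rule is an assumption about how T_i is derived."""
    def g(x, P, mu, sg):
        return P * np.exp(-((x - mu) ** 2) / (2 * sg ** 2))
    x = np.linspace(0.0, 255.0, 2561)
    ts = []
    for (P1, m1, s1), (P2, m2, s2) in zip(components, components[1:]):
        band = (x >= m1) & (x <= m2)        # search between the two means
        diff = g(x[band], P1, m1, s1) - g(x[band], P2, m2, s2)
        ts.append(float(x[band][np.argmin(np.abs(diff))]))
    return ts

def apply_thresholds(image, ts):
    """Label each pixel with its class index according to thresholds ts."""
    return np.digitize(image, ts)
```

For two classes with equal amplitude and dispersion, the computed threshold falls halfway between the two means, as expected.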


Figure 8: Flowchart of the complete segmentation procedure.

Figure 9: (a) Original first image used in the first experiment and (b) its histogram.


Figure 10: Gaussian mixture obtained by LS for the first image (histogram, classes 1-4, reduced Gaussian mixture, and eliminated classes).

Figure 11: Image segmented with the reduced Gaussian mixture.

Figure 12: (a) Original second image used in the first experiment and (b) its histogram.

7.2. Histogram Approximation Comparisons. This section discusses the comparison between the proposed segmentation algorithm and other evolutionary segmentation methods that have been proposed in the literature. The set of methods used in the experiments includes J + ABC [19], J + AIS [20], and J + DE [21]. These algorithms combine the original objective function J (17) with an evolutionary technique: Artificial Bee Colony (ABC), Artificial Immune Systems (AIS), and Differential Evolution (DE) [21], respectively. Since the proposed segmentation approach combines the new objective function J_new (18) and the LS algorithm, it will be referred to as J_new + LS.

The comparison focuses mainly on the capacities of each algorithm to accurately and robustly approximate the image histogram.

In the experiments, the populations have been set to 40 (N = 40) individuals, and the maximum iteration number has been set to 1000. Such a stop criterion has been selected to maintain compatibility with similar experiments


Figure 13: (a) Approximation result obtained by LS for the second image and (b) the segmented image.

Figure 14: (a) Synthetic image used in the comparison (sections (1)-(4)) and (b) its histogram.

that are reported in the literature [18]. The parameter setting for each segmentation algorithm in the comparison is described as follows:

(1) J + ABC [19]: its parameters are configured as the abandonment limit = 100, α = 0.05, and limit = 30.

(2) J + AIS [20]: it presents the following parameters: h = 120, N_c = 80, ρ = 10, P_r = 20, L = 22, and T_e = 0.01.

(3) J + DE [21]: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [21]; the crossover probability is CR = 0.9 and the weighting factor is F = 0.8.

(4) J_new + LS: the method is set according to the values described in Table 5.

In order to conduct the experiments, a synthetic image has been designed to be used as a reference in the comparisons. The main idea is to know in advance the exact number of classes (and their parameters) contained in the image, so that its histogram can be considered as ground truth. The

Table 7: Employed parameters for the design of the reference image.

      P_i     μ_i   σ_i
(1)   0.05    40    8
(2)   0.04    100   10
(3)   0.05    160   8
(4)   0.027   220   15

synthetic image is divided into four sections. Each section corresponds to a different class, which is produced by setting each gray level pixel Pv_i to a value determined by the following model:

$$Pv_i = e^{-\left((x-\mu_i)^2 / 2\sigma_i^2\right)}, \qquad (21)$$

where i represents the section, whereas μ_i and σ_i are the mean and the dispersion of the gray level pixels, respectively. The comparative study employs the 512 × 512 image shown in Figure 14(a) and the algorithm parameters presented in Table 7. Figure 14(b) illustrates the resultant histogram.
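A reference image of this kind can be reproduced along the lines of (21) and Table 7. In this sketch each section's pixels are drawn from a normal distribution with the mean and dispersion of its class, which is the reading assumed for the model; the function name and layout (horizontal bands) are illustrative:

```python
import numpy as np

def synthetic_image(params, size=512, seed=0):
    """Build a size x size reference image divided into len(params)
    horizontal sections; pixels of section i receive gray levels with
    mean mu_i and dispersion sigma_i (cf. Eq. (21) and Table 7).
    Normal sampling is the interpretation assumed here."""
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size))
    h = size // len(params)
    for i, (_, mu, sigma) in enumerate(params):
        img[i * h:(i + 1) * h, :] = rng.normal(mu, sigma, (h, size))
    return np.clip(img, 0, 255).astype(np.uint8)

# Table 7 parameters as (P_i, mu_i, sigma_i) triples
TABLE7 = [(0.05, 40, 8), (0.04, 100, 10), (0.05, 160, 8), (0.027, 220, 15)]
```

The histogram of the generated image then shows four modes centered near the gray levels 40, 100, 160, and 220, matching Figure 14(b).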


Figure 15: Convergence results. (a) Convergence of the methods J + ABC, J + AIS, and J + DE considering Gaussian mixtures of 8 classes; (b) convergence of the proposed method (reduced Gaussian mixture).

Figure 16: Segmentation results obtained by (a) the methods J + ABC, J + AIS, and J + DE considering Gaussian mixtures of 8 classes and (b) the proposed method (reduced Gaussian mixture).

In the comparison, the discussion focuses on the following issues: first, convergence; second, accuracy; and third, computational cost.

Convergence. This part analyzes the approximation convergence when the number of classes used by the algorithm during the evolution process is different from the actual number of classes in the image. Recall that the proposed approach automatically finds the reduced Gaussian mixture that best adapts to the image histogram.

In the experiment, the methods J + ABC, J + AIS, and J + DE are executed considering Gaussian mixtures composed of 8 functions. Under such circumstances, the number of classes to be detected is higher than the actual number of classes in the image. On the other hand, the proposed algorithm maintains the same configuration of Table 5; therefore, it can detect and calculate up to ten classes (K = 10).

As a result, the techniques J + ABC, J + AIS, and J + DE tend to overestimate the image histogram. This effect can be seen in Figure 15(a), where the resulting Gaussian functions are concentrated within the actual classes. Such a behavior is a consequence of the evaluation considered by the original objective function J, which privileges only the approximation between the Gaussian mixture and the image histogram. This effect is graphically illustrated by Figure 16(a), which shows the pixel misclassification produced by the wrong segmentation of Figure 14(a). On the other hand, the proposed approach obtains a reduced Gaussian mixture model which allows the detection of each class from


Figure 17: Approximation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

the actual histogram (see Figure 15(b)). As a consequence, the segmentation is significantly improved by eliminating the pixel misclassification, as shown in Figure 16(b).

It is evident from Figure 15 that the techniques J + ABC, J + AIS, and J + DE all need a priori knowledge of the number of classes contained in the actual histogram in order to obtain a satisfactory result. On the other hand, the proposed algorithm is able to find a reduced Gaussian mixture whose classes coincide with the actual number of classes contained in the image histogram.

Accuracy. In this part, the comparison among the algorithms in terms of accuracy is reported. Most of the reported comparisons [19-26] are concerned with comparing the parameters of the resultant Gaussian mixtures by using real images. Under such circumstances, it is difficult to consider a clear reference in order to define a meaningful error. Therefore, the image defined in Figure 14 has been used in the experiments, because its construction parameters are clearly defined in Table 7.

Since the parameter values from Table 7 act as ground truth, a simple averaged difference between them and the values computed by each algorithm could be used as the comparison error. However, as each parameter maintains a different active interval, it is necessary to express the differences in terms of percentage. Therefore, if Δβ is the parametric difference and β the ground truth parameter, the percentage error Δβ_% can be defined as follows:

$$\Delta\beta_{\%} = \frac{\Delta\beta}{\beta} \cdot 100. \qquad (22)$$

In the segmentation problem, each Gaussian mixture represents a K-dimensional model, where each dimension corresponds to a Gaussian function of the optimization problem to be solved. Since each Gaussian function possesses three parameters P_i, μ_i, and σ_i, the complete number of parameters is 3·K. Therefore, the final error E produced by the final Gaussian mixture is

$$E = \frac{1}{K \cdot 3} \sum_{v=1}^{K \cdot 3} \Delta\beta_{\%,v}, \qquad (23)$$

where β_v ∈ (P_i, μ_i, σ_i).

In order to compare accuracy, the algorithms J + ABC, J + AIS, J + DE, and the proposed approach are all executed over the image shown in Figure 14(a). The experiment aims


Figure 18: Segmentation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

to estimate the Gaussian mixture that best approximates the actual image histogram. The methods J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, although the algorithm finds a reduced Gaussian mixture of four functions, it is initially set with ten functions (K = 10). Table 8 presents the final Gaussian mixture parameters and the final error E. The final Gaussian mixture parameters have been averaged over 30 independent executions in order to assure consistency. A close inspection of Table 8 reveals that the proposed method achieves the smallest error (E) in comparison to the other algorithms. Figure 17 presents the histogram approximations produced by each algorithm, whereas Figure 18 shows the corresponding segmented images. Both illustrations present the median case obtained throughout 30 runs. Figure 18 exhibits that J + ABC, J + AIS, and J + DE present different levels of misclassification, which are nearly absent in the case of the proposed approach.
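The error of Eqs. (22)-(23) is straightforward to compute once the estimated components are matched to the ground-truth ones; matching by sorting each mixture on its means is the assumption made in this sketch:

```python
import numpy as np

def mixture_error(ground_truth, estimated):
    """Final error E of Eq. (23): the percentage deviations of Eq. (22),
    averaged over the 3*K parameters (P_i, mu_i, sigma_i). Components
    are paired by sorting each mixture on its means (an assumption)."""
    gt = np.array(sorted(ground_truth, key=lambda c: c[1]), dtype=float).ravel()
    est = np.array(sorted(estimated, key=lambda c: c[1]), dtype=float).ravel()
    return float(np.mean(np.abs(est - gt) / gt * 100.0))

# Ground truth of Table 7 as (P_i, mu_i, sigma_i) triples
TABLE7 = [(0.05, 40, 8), (0.04, 100, 10), (0.05, 160, 8), (0.027, 220, 15)]
```

For instance, an estimate that deviates only in one mean by 10% yields E = 10/12 ≈ 0.83%, since the deviation is averaged over the twelve parameters of the four-component mixture.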

Computational Cost. This experiment aims to measure the complexity and the computing time spent by the J + ABC,

Table 8: Results of the reduced Gaussian mixture in terms of accuracy.

Algorithm    Gaussian function   P_i     μ_i      σ_i     E (%)
J + ABC      (1)                 0.052   44.5     6.4     11.79
             (2)                 0.084   98.12    12.87
             (3)                 0.058   163.50   8.94
             (4)                 0.025   218.84   17.5
J + AIS      (1)                 0.072   31.01    6.14    22.01
             (2)                 0.054   88.52    12.21
             (3)                 0.039   149.21   9.14
             (4)                 0.034   248.41   13.84
J + DE       (1)                 0.041   35.74    7.86    13.57
             (2)                 0.036   90.57    11.97
             (3)                 0.059   148.47   9.01
             (4)                 0.020   201.34   13.02
J_new + LS   (1)                 0.049   40.12    7.5     3.98
             (2)                 0.041   102.04   10.4
             (3)                 0.052   168.66   8.3
             (4)                 0.025   110.92   15.2


Figure 19: Images employed in the computational cost analysis (rows: J + ABC, J + AIS, J + DE, and J_new + LS; columns: images (a)-(d)).

the J + AIS, the J + DE, and the J_new + LS algorithms while calculating the parameters of the Gaussian mixture for benchmark images (see Figures 19(a)-19(d)). J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, the algorithm finds a reduced Gaussian mixture of four functions despite being initialized with ten functions (K = 10). Table 9 shows the averaged measurements after 30 experiments. It is evident that J + ABC and J + DE are the slowest to converge (iterations), while J + AIS shows the highest computational cost (time elapsed) because it requires operators that demand long execution times. On the other hand, J_new + LS shows an acceptable trade-off between its convergence time and its computational cost. Therefore, although the implementation of J_new + LS in general requires more code than most other evolution-based segmentators, such a fact is not reflected in the execution time. Finally, Figure 19 shows the segmented images as they are


Figure 20: Experimental set used in the evaluation of the segmentation results ((a)-(d)).

Table 9: Iterations and time requirements of the J + ABC, J + AIS, J + DE, and J_new + LS algorithms as they are applied to segment the benchmark images (see Figure 19).

Algorithm                    (a)      (b)      (c)      (d)
J + ABC      iterations      855      833      870      997
             time elapsed    2.72 s   2.70 s   2.73 s   3.1 s
J + AIS      iterations      725      704      754      812
             time elapsed    1.78 s   1.61 s   1.41 s   2.01 s
J + DE       iterations      657      627      694      742
             time elapsed    1.25 s   1.12 s   1.45 s   1.88 s
J_new + LS   iterations      314      298      307      402
             time elapsed    0.98 s   0.84 s   0.72 s   1.02 s

Table 10: Evaluation of the segmentation results in terms of the ROS index.

Image (number of classes)   (a) N_R = 4   (b) N_R = 3   (c) N_R = 4   (d) N_R = 4
J + ABC                     0.534         0.328         0.411         0.457
J + AIS                     0.522         0.321         0.427         0.437
J + DE                      0.512         0.312         0.408         0.418
J_new + LS                  0.674         0.401         0.514         0.527

generated by each algorithm. It can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by an incorrect pixel classification.

7.3. Performance Evaluation of the Segmentation Results. This section presents an objective evaluation of the segmentation results produced by all algorithms in the comparisons. The ill-defined nature of the segmentation problem makes the evaluation of a candidate algorithm difficult [57]. Traditionally, the evaluation has been conducted by using supervised criteria [58], which are based on the computation of a dissimilarity measure between a segmentation result and a ground truth image. Recently, the use of unsupervised measures has substituted supervised indexes for the objective evaluation of segmentation results [59]. They

Table 11: Unimodal test functions.

Test function                                                                      S                  f_opt
$f_1(\mathbf{x}) = \sum_{i=1}^{n} x_i^2$                                           $[-100, 100]^n$    0
$f_2(\mathbf{x}) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$                   $[-10, 10]^n$      0
$f_3(\mathbf{x}) = \sum_{i=1}^{n} \bigl(\sum_{j=1}^{i} x_j\bigr)^2$                $[-100, 100]^n$    0
$f_4(\mathbf{x}) = \max_i \{|x_i|,\ 1 \le i \le n\}$                               $[-100, 100]^n$    0
$f_5(\mathbf{x}) = \sum_{i=1}^{n-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2]$       $[-30, 30]^n$      0
$f_6(\mathbf{x}) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2$                   $[-100, 100]^n$    0
$f_7(\mathbf{x}) = \sum_{i=1}^{n} i x_i^4 + \mathrm{rand}(0, 1)$                   $[-1.28, 1.28]^n$  0
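For reference, two of the functions in Table 11 written out in code (the sphere f1 and the Rosenbrock function f5); the function names follow the table:

```python
import numpy as np

def f1(x):
    """Sphere function of Table 11: minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def f5(x):
    """Rosenbrock function of Table 11: minimum 0 at x = (1, ..., 1)."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (x[:-1] - 1.0) ** 2))
```

Both return 0 exactly at their optimum locations, which makes them convenient checks when validating an optimizer implementation.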

enable the quantification of the quality of a segmentation result without a priori knowledge (a ground truth image).

Evaluation Criteria. In this paper, the unsupervised index ROS proposed by Chabrier et al. [60] has been used to objectively evaluate the performance of each candidate algorithm. This index evaluates the segmentation quality in terms of the homogeneity within segmented regions and the heterogeneity among the different regions. ROS can be computed as follows:

$$\mathrm{ROS} = \frac{\bar{D} - D}{2}, \qquad (24)$$

where D quantifies the homogeneity within segmented regions, and D̄ measures the disparity among the regions. A segmentation result S₁ is considered better than S₂ if ROS_{S₁} > ROS_{S₂}. The intraregion homogeneity characterized by D is calculated considering the following formulation:

$$D = \frac{1}{N_R} \sum_{c=1}^{N_R} \frac{R_c}{I} \cdot \frac{\sigma_c}{\sum_{l=1}^{N_R} \sigma_l}, \qquad (25)$$

where N_R represents the number of partitions into which the image has been segmented, R_c symbolizes the number of


Figure 21: Segmentation results used in the evaluation (rows: J + ABC, J + AIS, J + DE, and J_new + LS; columns: images (a)-(d)).

pixels contained in partition c, whereas I considers the number of pixels that integrate the complete image. Similarly, σ_c represents the standard deviation of partition c. On the other hand, the disparity among the regions, D̄, is computed as follows:

$$\bar{D} = \frac{1}{N_R} \sum_{c=1}^{N_R} \frac{R_c}{I} \cdot \left[ \frac{1}{(N_R - 1)} \sum_{l=1}^{N_R} \frac{|\mu_c - \mu_l|}{255} \right], \qquad (26)$$

where μ_c is the average gray level in partition c.
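Eqs. (24)-(26) can be computed directly from an image and its label map. A minimal sketch, assuming the label map shares the image's shape, gray levels lie in [0, 255], and at least one region has nonzero internal variance; the function name is illustrative:

```python
import numpy as np

def ros_index(image, labels):
    """ROS = (D_bar - D) / 2 following Eqs. (24)-(26): D averages the
    normalized intra-region standard deviations, D_bar the normalized
    pairwise distances between region gray-level means."""
    image = np.asarray(image, dtype=float)
    regions = np.unique(labels)
    NR, I = len(regions), image.size
    sizes = np.array([np.sum(labels == c) for c in regions], dtype=float)
    sigmas = np.array([image[labels == c].std() for c in regions])
    mus = np.array([image[labels == c].mean() for c in regions])
    D = np.sum(sizes / I * sigmas / sigmas.sum()) / NR            # Eq. (25)
    disp = np.array([np.sum(np.abs(m - mus)) / 255.0 / (NR - 1) for m in mus])
    D_bar = np.sum(sizes / I * disp) / NR                         # Eq. (26)
    return (D_bar - D) / 2.0
```

A segmentation that separates two well-defined gray-level populations scores higher than one that mixes them, which is the behavior the index is meant to reward.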

Experimental Protocol. For the comparison of segmentation results, a set of four classical images has been chosen to integrate the experimental set (Figure 20). The segmentation methods used in the comparison are J + ABC [19], J + AIS [20], and J + DE [21].

Of all the segmentation methods used in the comparison, the proposed J_new + LS algorithm is the only one that has the capacity to automatically detect the number of segmentation partitions (classes). In order to conduct a fair comparison, all algorithms have been evaluated by using the same number


Table 12: Multimodal test functions.

Test function                                                                      S                  f_opt
$f_8(\mathbf{x}) = \sum_{i=1}^{n} -x_i \sin\bigl(\sqrt{|x_i|}\bigr)$               $[-500, 500]^n$    $-418.98 \cdot n$
$f_9(\mathbf{x}) = \sum_{i=1}^{n} [x_i^2 - 10 \cos(2\pi x_i) + 10]$                $[-5.12, 5.12]^n$  0
$f_{10}(\mathbf{x}) = -20 \exp\Bigl(-0.2 \sqrt{\tfrac{1}{n} \sum_{i=1}^{n} x_i^2}\Bigr) - \exp\Bigl(\tfrac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i)\Bigr) + 20 + e$   $[-32, 32]^n$   0
$f_{11}(\mathbf{x}) = \tfrac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\bigl(\tfrac{x_i}{\sqrt{i}}\bigr) + 1$   $[-600, 600]^n$   0
$f_{12}(\mathbf{x}) = \tfrac{\pi}{n} \Bigl\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 [1 + 10 \sin^2(\pi y_{i+1})] + (y_n - 1)^2 \Bigr\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$   $[-50, 50]^n$   0
    where $y_i = 1 + \tfrac{x_i + 1}{4}$ and
    $u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k (-x_i - a)^m, & x_i < -a \end{cases}$
$f_{13}(\mathbf{x}) = 0.1 \Bigl\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 [1 + \sin^2(3\pi x_{i+1})] + (x_n - 1)^2 [1 + \sin^2(2\pi x_n)] \Bigr\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$   $[-50, 50]^n$   0

of partitions. Therefore, in the experiments, the J_new + LS segmentation algorithm is first applied to detect the best possible number of partitions N_R. Once the number of partitions N_R has been obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.

Figure 21 presents the segmentation results obtained by each algorithm considering the experimental set from Figure 20. On the other hand, Table 10 shows the evaluation of the segmentation results in terms of the ROS index. Such values represent the averaged measurements after 30 executions. From them, it can be seen that the proposed J_new + LS method obtains the best ROS indexes. Such values indicate that the proposed algorithm maintains the best balance between the homogeneity within segmented regions and the heterogeneity among the different regions. From Figure 21, it can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by an incorrect pixel classification.

8. Conclusions

Despite the fact that several evolutionary methods have been successfully applied to image segmentation with interesting results, most of them have exhibited two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance.

In this paper, a new swarm algorithm for automatic image segmentation, called Locust Search (LS), has been presented. The proposed method eliminates the typical flaws of previous evolutionary approaches by combining a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments have been divided into two parts. First, the proposed LS method has been compared to other well-known evolutionary techniques on a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm has been compared to other segmentation methods based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

Several research directions will be considered for future work: the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, n is the dimension of the function, f_opt is the minimum value of the function, and S is a subset of R^n. The optimum location (x_opt) for the functions in Table 11 is [0]^n, except for f_5, whose x_opt is [1]^n.

The optimum locations (x_opt) for the functions in Table 12 are [0]^n, except for f_8, with x_opt in [420.96]^n, and f_12-f_13, with x_opt in [1]^n.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgment

The present research has been supported under Grant CONACYT CB 181053.

References

[1] H. Zhang, J. E. Fritts, and S. A. Goldman, "Image segmentation evaluation: a survey of unsupervised methods," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 260-280, 2008.

[2] T. Uemura, G. Koutaki, and K. Uchimura, "Image segmentation based on edge detection using boundary code," International Journal of Innovative Computing, vol. 7, pp. 6073-6083, 2011.

[3] L. Wang, H. Wu, and C. Pan, "Region-based image segmentation with local signed difference energy," Pattern Recognition Letters, vol. 34, no. 6, pp. 637-645, 2013.

[4] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979.

[5] R. Peng and P. K. Varshney, "On performance limits of image segmentation algorithms," Computer Vision and Image Understanding, vol. 132, pp. 24-38, 2015.

[6] M. A. Balafar, "Gaussian mixture model based segmentation methods for brain MRI images," Artificial Intelligence Review, vol. 41, no. 3, pp. 429-439, 2014.

[7] G. J. McLachlan and S. Rathnayake, "On the number of components in a Gaussian mixture model," Data Mining and Knowledge Discovery, vol. 4, no. 5, pp. 341-355, 2014.

[8] D. Oliva, V. Osuna-Enciso, E. Cuevas, G. Pajares, M. Pérez-Cisneros, and D. Zaldívar, "Improving segmentation velocity using an evolutionary method," Expert Systems with Applications, vol. 42, no. 14, pp. 5874-5886, 2015.

[9] Z.-W. Ye, M.-W. Wang, W. Liu, and S.-B. Chen, "Fuzzy entropy based optimal thresholding using bat algorithm," Applied Soft Computing, vol. 31, pp. 381-395, 2015.

[10] S. Sarkar, S. Das, and S. Sinha Chaudhuri, "A multilevel color image thresholding scheme based on minimum cross entropy and differential evolution," Pattern Recognition Letters, vol. 54, pp. 27-35, 2015.

[11] A. K. Bhandari, A. Kumar, and G. K. Singh, "Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur's, Otsu and Tsallis functions," Expert Systems with Applications, vol. 42, no. 3, pp. 1573-1601, 2015.

[12] H. Permuter, J. Francos, and I. Jermyn, "A study of Gaussian mixture models of color and texture features for image classification and segmentation," Pattern Recognition, vol. 39, no. 4, pp. 695-706, 2006.

[13] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1-38, 1977.

[14] Z. Zhang, C. Chen, J. Sun, and K. L. Chan, "EM algorithms for Gaussian mixtures with split-and-merge operation," Pattern Recognition, vol. 36, no. 9, pp. 1973-1983, 2003.

[15] H. Park, S.-I. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755-764, 2000.

[16] H. Park and T. Ozeki, "Singularity and slow convergence of the EM algorithm for Gaussian mixtures," Neural Processing Letters, vol. 29, no. 1, pp. 45-59, 2009.

[17] L. Gupta and T. Sortrakul, "A Gaussian-mixture-based image segmentation algorithm," Pattern Recognition, vol. 31, no. 3, pp. 315-325, 1998.

[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213-1219, 2013.

[19] E. Cuevas, F. Sención, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321-336, 2012.

[20] E. Cuevas, V. Osuna-Enciso, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.

[21] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265-5271, 2010.

[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357-381, 2014.

[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and M. Pérez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.

[24] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805-818, 2011.

[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701-713, 2009.

[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661-673, 2009.

[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, December 1995.

[28] R. Storn and K. Price, "Differential Evolution - a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.

[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515-4538, 2011.

[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing, vol. 9, no. 3, pp. 1149-1155, 2009.

[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119-135, 2013.

[32] W. Gong, A. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364-5386, 2011.

[33] D M Gordon ldquoThe organization of work in social insectcoloniesrdquo Complexity vol 8 no 1 pp 43ndash46 2002

[34] S Kizaki and M Katori ldquoStochastic lattice model for locustoutbreakrdquo Physica A Statistical Mechanics and its Applicationsvol 266 no 1-4 pp 339ndash342 1999

Mathematical Problems in Engineering 25

[35] S. M. Rogers, D. A. Cullen, M. L. Anstey et al., "Rapid behavioural gregarization in the desert locust Schistocerca gregaria entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9–26, 2014.

[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal: Special Topics, vol. 157, no. 1, pp. 93–109, 2008.

[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.

[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.

[39] B. Hölldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.

[40] B. Hölldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.

[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101–112, 2013.

[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196–205, 2012.

[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723–738, 2014.

[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.

[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskål)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.

[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.

[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256–272, 2014.

[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635–672, 2005.

[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.

[50] F. Herrera, M. Lozano, and A. M. Sánchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309–338, 2003.

[51] E. Cuevas, D. Zaldivar, M. Pérez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93–107, 2011.

[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.

[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140–157, 2015.

[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.

[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.

[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.

[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.

[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.

[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335–1346, 1996.

[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.



the process of refining existent individuals within a small neighborhood in order to improve their solution quality.

The social procedure is a selective operation which is applied only to a subset E of the final positions F (where E ⊆ F). The operation starts by sorting F with respect to fitness values, storing the elements in a temporary population B = {b_1, b_2, ..., b_N}. The elements in B are sorted so that the best individual receives the position b_1, whereas the worst individual obtains the location b_N. Therefore, the subset E is integrated by only the first g locations of B (promising solutions). Under this operation, a subspace C_j is created around each selected particle f_j ∈ E. The size of C_j depends on the distance e_d, which is defined as follows:

e_d = \frac{\sum_{q=1}^{n} (ub_q - lb_q)}{n} \cdot \beta,    (12)

where ub_q and lb_q are the upper and lower bounds in the q-th dimension and n is the number of dimensions of the optimization problem, whereas β ∈ [0, 1] is a tuning factor. Therefore, the limits of C_j can be modeled as follows:

uss_j^q = b_{j,q} + e_d,    lss_j^q = b_{j,q} - e_d,    (13)

where uss_j^q and lss_j^q are the upper and lower bounds of the q-th dimension for the subspace C_j, respectively.

Considering the subspace C_j around each element f_j ∈ E, a set of h new particles (M_j^h = {m_j^1, m_j^2, ..., m_j^h}) is randomly generated inside the bounds fixed by (13). Once the h samples are generated, the individual l_j^{k+1} of the next population L^{k+1} must be created. In order to calculate l_j^{k+1}, the best particle m_j^best, in terms of its fitness value from the h samples (where m_j^best ∈ {m_j^1, m_j^2, ..., m_j^h}), is compared to f_j. If m_j^best is better than f_j according to their fitness values, l_j^{k+1} is updated with m_j^best; otherwise, f_j is selected. The elements of F that have not been processed by the procedure (f_w ∉ E) transfer their corresponding values to L^{k+1} with no change.

The social operation is used to exploit only prominent solutions. According to the proposed method, inside each subspace C_j, h random samples are selected. Since the number of selected samples at each subspace is very small (typically h < 4), the use of this operator substantially reduces the number of fitness function evaluations.
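The social operator described above can be sketched compactly in code. The following Python fragment is an illustrative sketch, not the authors' implementation: the names (`social_operation`, `fitness`) are assumptions, a maximization problem is assumed, and, for simplicity, each subspace C_j is clipped to the search space rather than relocating infeasible samples randomly.

```python
import numpy as np

def social_operation(F, fitness, lb, ub, g=3, h=2, beta=0.6,
                     rng=np.random.default_rng(0)):
    """Sketch of the LS social operator (maximization assumed).

    F       : (N, n) array of final positions from the solitary operator
    fitness : callable mapping a position vector to a scalar to maximize
    lb, ub  : (n,) arrays with the search-space bounds
    """
    N, n = F.shape
    e_d = beta * np.sum(ub - lb) / n              # subspace half-width, Eq. (12)
    order = np.argsort([-fitness(f) for f in F])  # best first (population B)
    L_next = F.copy()
    for j in order[:g]:                           # only the g promising solutions (subset E)
        lo = np.maximum(F[j] - e_d, lb)           # subspace C_j limits, Eq. (13),
        hi = np.minimum(F[j] + e_d, ub)           # clipped to the search space here
        samples = rng.uniform(lo, hi, size=(h, n))  # h random samples inside C_j
        best = max(samples, key=fitness)          # m_j^best
        if fitness(best) > fitness(F[j]):         # replace f_j only if it improves
            L_next[j] = best
    return L_next
```

For instance, with `fitness = lambda x: -np.sum(x**2)` the operator refines only the three best members of the population, leaving the remaining individuals untouched, exactly as the selective rule above prescribes.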

In order to demonstrate the social operation, a numerical example has been set by applying the proposed process to a simple function. Such function considers the interval of −3 ≤ d_1, d_2 ≤ 3, whereas the function possesses one global maximum of value 8.1 at (0, 1.6). Notice that d_1 and d_2 correspond to the axis coordinates (commonly x and y). For this example, a final position population F of six 2-dimensional members (N = 6) is assumed. Figure 5 shows the initial configuration of the proposed example, with the black points representing half of the particles with the best fitness values (the first three elements of B, g = 3), whereas the grey points (f_2, f_4, f_6 ∉ E) correspond to the remaining individuals.

[Figure 5: Operation of the social procedure.]

From Figure 5, it can be seen that the social procedure is applied to all black particles (f_5 = b_1, f_3 = b_2, and f_1 = b_3; f_5, f_3, f_1 ∈ E), yielding two new random particles for each (h = 2), characterized by the white points m_1^1, m_1^2, m_3^1, m_3^2, m_5^1, and m_5^2, generated inside their corresponding subspaces C_1, C_3, and C_5. Considering the particle f_3 in Figure 5, the particle m_3^2 corresponds to the best particle (m_3^best) of the two randomly generated particles (according to their fitness values) within C_3. Therefore, the particle m_3^best will substitute f_3 in the individual l_3^{k+1} for the next generation, since it holds a better fitness value than f_3 (f(f_3) < f(m_3^best)).

The LS optimization procedure is defined over a bounded search space S. Search points that do not belong to such area are considered to be infeasible. However, during the evolution process, some candidate solutions could fall outside the search space. In the proposed approach, such infeasible solutions are arbitrarily placed at a random position inside the search space S.

3.3. Complete LS Algorithm. LS is a simple algorithm with only seven adjustable parameters: the strength of attraction F, the attractive length L, the number of promising solutions g, the population size N, the tuning factor β, the number of random samples h, and the number of generations gen. The operation of LS is divided into three parts: initialization and the solitary and social operations. In the initialization (k = 0), the first population L^0 = {l_1^0, l_2^0, ..., l_N^0} is produced. The values {l_{i,1}^0, l_{i,2}^0, ..., l_{i,n}^0} of each individual l_i^0 in each dimension d are randomly and uniformly distributed between the prespecified lower initial parameter bound lb_d and the upper initial parameter bound ub_d:

l_{i,d}^0 = lb_d + \text{rand} \cdot (ub_d - lb_d),  i = 1, 2, ..., N;  d = 1, 2, ..., n.    (14)

In the evolution process, the solitary (A) and social (B) operations are iteratively applied until the number of iterations k = gen has been reached. The complete LS procedure is illustrated in Algorithm 1.


(1) Input: F, L, g, N, gen, h and β
(2) Initialize L^0 (k = 0)
(3) until (k = gen)
(4)     F ← SolitaryOperation(L^k)          // Solitary operator (Section 3.1)
(5)     L^{k+1} ← SocialOperation(L^k, F)   // Social operator (Section 3.2)
(6)     k = k + 1
(7) end until

Algorithm 1: Locust Search (LS) algorithm.
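A runnable skeleton of Algorithm 1 might look as follows. This is a hedged sketch, not the authors' implementation: the solitary operator of Section 3.1 lies outside this excerpt, so a simple bounded random perturbation stands in for it; only the initialization of (14), the social operator of Section 3.2, and the overall control flow follow the text, and all names and defaults are illustrative.

```python
import numpy as np

def locust_search(fitness, lb, ub, N=40, gen=100, g=20, h=2, beta=0.6, seed=0):
    """Skeleton of LS following Algorithm 1 (minimization here).
    The solitary step is a placeholder perturbation, NOT the
    attraction/repulsion model of the original paper."""
    rng = np.random.default_rng(seed)
    n = lb.size
    L = lb + rng.random((N, n)) * (ub - lb)        # initialization, Eq. (14)
    e_d = beta * np.sum(ub - lb) / n               # subspace half-width, Eq. (12)
    for k in range(gen):
        # (A) solitary operator -- placeholder: small random move, clipped to S
        Fpos = np.clip(L + rng.normal(0.0, 0.1, L.shape), lb, ub)
        # (B) social operator: refine the g best positions (Section 3.2)
        fvals = np.array([fitness(x) for x in Fpos])
        order = np.argsort(fvals)                  # best (lowest) first
        L = Fpos.copy()
        for j in order[:g]:
            lo = np.clip(Fpos[j] - e_d, lb, ub)    # subspace C_j, Eq. (13)
            hi = np.clip(Fpos[j] + e_d, lb, ub)
            M = rng.uniform(lo, hi, (h, n))        # h random samples
            m_best = min(M, key=fitness)
            if fitness(m_best) < fvals[j]:         # keep only improvements
                L[j] = m_best
    best = min(L, key=fitness)
    return best, fitness(best)
```

Called as `locust_search(lambda v: float(np.sum(v**2)), np.full(2, -5.0), np.full(2, 5.0))`, the skeleton returns the best member found together with its fitness, staying inside the bounded space S throughout.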

3.4. Discussion about the LS Algorithm. Evolutionary algorithms (EA) have been widely employed for solving complex optimization problems. These methods are found to be more powerful than conventional methods based on formal logic or mathematical programming [46]. In an EA, search agents have to decide whether to explore unknown search positions or to exploit already tested positions in order to improve their solution quality. Pure exploration degrades the precision of the evolutionary process but increases its capacity to find new potentially good solutions. On the other hand, pure exploitation allows refining existent solutions but adversely drives the process toward local optima. Therefore, the ability of an EA to find a globally optimal solution depends on its capacity to strike a good balance between the exploitation of found-so-far elements and the exploration of the search space [47].

Most swarm algorithms and other evolutionary algorithms tend to exclusively concentrate the individuals in the current best positions. Under such circumstances, these algorithms seriously limit their exploration-exploitation capacities.

Unlike most existing evolutionary algorithms, the behavior modeled in the proposed approach explicitly avoids the concentration of individuals in the current best positions. This fact allows not only emulating the cooperative behavior of the locust colony in a realistic way but also incorporating a computational mechanism to avoid critical flaws that are commonly present in popular evolutionary algorithms, such as premature convergence and an incorrect exploration-exploitation balance.

It is important to emphasize that the proposed approach conducts two operators (solitary and social) within a single iteration. Such operators are similar to those used by other evolutionary methods, such as ABC (employed bees, onlooker bees, and scout bees), AIS (clonal proliferation operator, affinity maturation operator, and clonal selection operator), and DE (mutation, crossover, and selection), which are all executed in a single iteration.

4. Numerical Experiments over Benchmark Functions

A comprehensive set of 13 functions, all collected from [48–50], has been used to test the performance of the LS approach as an optimization method. Tables 11 and 12 present the benchmark functions used in our experimental study. Such functions are classified into two different categories: unimodal test functions (Table 11) and multimodal test functions (Table 12). In these tables, n is the function dimension and f_opt is the minimum value of the function, with S being a subset of R^n. The optimum locations (x_opt) for the functions in Tables 11 and 12 are at [0]^n, except for f_5, f_12, and f_13, with x_opt at [1]^n, and f_8, at [420.96]^n. A detailed description of the optimum locations is given in Tables 11 and 12.

We have applied the LS algorithm to the 13 functions, and the results have been compared to those produced by the Particle Swarm Optimization (PSO) method [27] and the Differential Evolution (DE) algorithm [28], both considered among the most popular algorithms for many optimization applications. In all comparisons, the population has been set to 40 (N = 40) individuals. The maximum iteration number for all functions has been set to 1000. Such stop criterion has been selected to maintain compatibility with similar works reported in the literature [48, 49].

The parameter setting for each of the algorithms in the comparison is described as follows:

(1) PSO: in the algorithm, c_1 = c_2 = 2, while the inertia factor (ω) is decreased linearly from 0.9 to 0.2.

(2) DE: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [28, 51]. The crossover probability is CR = 0.9 and the weighting factor is F = 0.8.

(3) In LS, F and L are set to 0.6 and 1, respectively. Besides, g is fixed to 20 (N/2), h = 2, and β = 0.6, whereas gen and N are configured to 1000 and 40, respectively. Once such parameters have been experimentally determined, they are kept for all experiments in this section.

In the comparison, three indexes are considered: the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). The first two indexes assess the accuracy of the solution, whereas the last one measures the computational cost. The average best-so-far solution (ABS) expresses the average value of the best function evaluations that have been obtained from 30 independent executions. The standard deviation (SD) indicates the dispersion of the ABS values. Evolutionary methods are, in general, complex pieces of software with several operators and stochastic branches. Therefore, it is difficult to conduct a complexity analysis from a deterministic perspective. Under such circumstances, it is more appropriate


Table 1: Minimization results from the benchmark functions test in Table 11 with n = 30. Maximum number of iterations = 1000.

             PSO             DE              LS
f1    ABS    1.66 × 10^-1    6.27 × 10^-3    4.55 × 10^-4
      SD     3.79 × 10^-1    1.68 × 10^-1    6.98 × 10^-4
      NFE    28610           20534           16780
f2    ABS    4.83 × 10^-1    2.02 × 10^-1    5.41 × 10^-3
      SD     1.59 × 10^-1    0.66            1.45 × 10^-2
      NFE    28745           21112           16324
f3    ABS    2.75            5.72 × 10^-1    1.61 × 10^-3
      SD     1.01            0.15            1.32 × 10^-3
      NFE    38320           36894           20462
f4    ABS    1.84            0.11            1.05 × 10^-2
      SD     0.87            0.05            6.63 × 10^-3
      NFE    37028           36450           21158
f5    ABS    3.07            2.39            4.11 × 10^-2
      SD     0.42            0.36            2.74 × 10^-3
      NFE    39432           37264           21678
f6    ABS    6.36            6.51            5.88 × 10^-2
      SD     0.74            0.87            1.67 × 10^-2
      NFE    38490           36564           22238
f7    ABS    6.14            0.12            2.71 × 10^-2
      SD     0.73            0.02            1.18 × 10^-2
      NFE    37274           35486           21842

to use the number of function evaluations (NFE), just as is done in the literature [52, 53], to evaluate and assess the computational effort (time) and the complexity among optimizers. It represents how many times an algorithm uses the objective (fitness) function to evaluate a candidate solution until the best solution of a determined execution has been found. Since the experiments require 30 different executions, the NFE index corresponds to the averaged value obtained from these executions. A small NFE value indicates that less time is needed to reach the global optimum.

4.1. Unimodal Test Functions. Functions f_1 to f_7 are unimodal functions. The results for the unimodal functions over 30 runs are reported in Table 1, considering the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). According to this table, LS provides better results than PSO and DE for all functions in terms of ABS and SD. In particular, the test yields the largest performance differences for functions f_4–f_7. Such functions maintain a narrow curving valley that is hard to optimize if the search space cannot be explored properly and the direction changes cannot be kept up with [54]. For this reason, the performance differences are directly related to a better trade-off between exploration and exploitation produced by the LS operators. In practice, a main goal of an optimization algorithm is to find a solution as good as possible within a small time window. The computational cost of the optimizer is represented by its NFE values. According to Table 1, the NFE values that are obtained by the proposed method are smaller than its

Table 2: p values produced by Wilcoxon's test that compares LS versus PSO and DE over the "average best-so-far" values from Table 1.

LS versus    PSO             DE
f1           1.83 × 10^-4    1.73 × 10^-2
f2           3.85 × 10^-3    1.83 × 10^-4
f3           1.73 × 10^-4    6.23 × 10^-3
f4           2.57 × 10^-4    5.21 × 10^-3
f5           4.73 × 10^-4    1.83 × 10^-3
f6           6.39 × 10^-5    2.15 × 10^-3
f7           1.83 × 10^-4    2.21 × 10^-3

counterparts. Lower NFE values are more desirable, since they correspond to less computational overhead and, therefore, faster results. From the results' perspective, it is clear that PSO and DE would need more than 1000 generations in order to produce comparable solutions. However, this number of generations is considered in the experiments aiming to produce a visible contrast among the approaches. If the number of generations were set to an exaggerated value, then all methods would converge to the best solution with no significant trouble.

A nonparametric statistical significance proof known as Wilcoxon's rank sum test for independent samples [55, 56] has been conducted at the 5% significance level over the "average best-so-far" data of Table 1. Table 2 reports the p values produced by Wilcoxon's test for the pairwise comparison of the "average best-so-far" values of two groups. Such groups are formed by LS versus PSO and LS versus DE. As a null hypothesis, it is assumed that there is no significant difference between the mean values of the two algorithms. The alternative hypothesis considers a significant difference between the "average best-so-far" values of both approaches. All p values reported in the table are less than 0.05 (5% significance level), which is strong evidence against the null hypothesis, indicating that the LS results are statistically significant and have not occurred by coincidence (i.e., due to the normal noise contained in the process).

4.2. Multimodal Test Functions. Multimodal functions possess many local minima, which makes optimization a difficult task to accomplish. In multimodal functions, the results reflect the algorithm's ability to escape from local optima. We have applied the algorithms over functions f_8 to f_13, where the number of local minima increases exponentially as the dimension of the function increases. The dimension of such functions is set to 30. The results are averaged over 30 runs, with performance indexes being reported in Table 3 as follows: the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). Likewise, p values of the Wilcoxon signed-rank test of 30 independent runs are listed in Table 4. From the results, it is clear that LS yields better solutions than the other algorithms for functions f_9, f_10, f_11, and f_12 in terms of the indexes ABS and SD. However, for functions f_8 and f_13, LS produces results similar to DE. The Wilcoxon rank test results that are presented in Table 4 confirm that


Table 3: Minimization results from the benchmark functions test in Table 12 with n = 30. Maximum number of iterations = 1000.

             PSO             DE              LS
f8    ABS    -6.7 × 10^3     -1.26 × 10^4    -1.26 × 10^4
      SD     6.3 × 10^2      3.7 × 10^2      1.1 × 10^2
      NFE    38452           35240           21846
f9    ABS    1.48            4.01 × 10^-1    2.49 × 10^-3
      SD     1.39            5.1 × 10^-2     4.8 × 10^-4
      NFE    37672           34576           20784
f10   ABS    1.47            4.66 × 10^-2    2.15 × 10^-3
      SD     1.44            1.27 × 10^-2    3.18 × 10^-4
      NFE    39475           37080           21235
f11   ABS    12.01           1.15            1.47 × 10^-4
      SD     3.12            0.06            1.48 × 10^-5
      NFE    38542           34875           22126
f12   ABS    6.87 × 10^-1    3.74 × 10^-1    5.58 × 10^-3
      SD     7.07 × 10^-1    1.55 × 10^-1    4.18 × 10^-4
      NFE    35248           30540           16984
f13   ABS    1.87 × 10^-1    1.81 × 10^-2    1.78 × 10^-2
      SD     5.74 × 10^-1    1.66 × 10^-2    1.64 × 10^-3
      NFE    36022           31968           18802

Table 4: p values produced by Wilcoxon's test comparing LS versus PSO and DE over the "average best-so-far" values from Table 3.

LS versus    PSO             DE
f8           1.83 × 10^-4    0.061
f9           1.17 × 10^-4    2.41 × 10^-4
f10          1.43 × 10^-4    3.12 × 10^-3
f11          6.25 × 10^-4    1.14 × 10^-3
f12          2.34 × 10^-5    7.15 × 10^-4
f13          4.73 × 10^-4    0.071

LS performed better than PSO and DE for the four problems f_9–f_12, whereas, from a statistical viewpoint, there is no difference between the results from LS and DE for f_8 and f_13. According to Table 3, the NFE values obtained by the proposed method are smaller than those produced by the other optimizers. The reason for this remarkable performance is associated with its two operators: (i) the solitary operator allows a better particle distribution in the search space, increasing the algorithm's ability to find the global optima; and (ii) the use of the social operation provides a simple exploitation operator that intensifies the capacity of finding better solutions during the evolution process.

5. Gaussian Mixture Modelling

In this section, the modeling of image histograms through Gaussian mixture models is presented. Let one consider an image holding L gray levels [0, ..., L − 1] whose distribution is defined by a histogram h(g) represented by the following formulation:

h(g) = \frac{n_g}{Np},  h(g) > 0,    Np = \sum_{g=0}^{L-1} n_g,    \sum_{g=0}^{L-1} h(g) = 1,    (15)

where n_g denotes the number of pixels with gray level g and Np the total number of pixels in the image. Under such circumstances, h(g) can be modeled by using a mixture p(x) of Gaussian functions of the form:

p(x) = \sum_{i=1}^{K} \frac{P_i}{\sqrt{2\pi}\,\sigma_i} \exp\left[\frac{-(x - \mu_i)^2}{2\sigma_i^2}\right],    (16)

where K symbolizes the number of Gaussian functions of the model, whereas P_i is the a priori probability of function i; μ_i and σ_i represent the mean and standard deviation of the i-th Gaussian function, respectively. Furthermore, the constraint \sum_{i=1}^{K} P_i = 1 must be satisfied by the model. In order to evaluate the similarity between the image histogram and a candidate mixture model, the mean square error can be used as follows:

J = \frac{1}{n} \sum_{j=1}^{L} \left[p(x_j) - h(x_j)\right]^2 + \omega \cdot \left|\left(\sum_{i=1}^{K} P_i\right) - 1\right|,    (17)

where ω represents the penalty associated with the constraint \sum_{i=1}^{K} P_i = 1. Therefore, J is considered as the objective function which must be minimized in the estimation problem. In order to illustrate the histogram modeling through a Gaussian mixture, Figure 6 presents an example assuming three classes, that is, K = 3. Considering Figure 6(a) as the image histogram h(x), the Gaussian mixture p(x) that is shown in Figure 6(c) is produced by adding the Gaussian functions p_1(x), p_2(x), and p_3(x) in the configuration presented in Figure 6(b). Once the model parameters that better fit the image histogram have been determined, the final step is to define the threshold values T_i (i ∈ [1, ..., K]), which can be calculated by simple standard methods, just as presented in [19–21].
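Equations (15)–(17) translate almost directly into code. The sketch below is illustrative, not the authors' implementation; the function names and the candidate mixture parameters are assumptions, and the 1/n normalization of (17) is taken here as the mean over the L histogram bins.

```python
import numpy as np

def normalized_histogram(image, L=256):
    """h(g) = n_g / Np, Eq. (15); `image` is an integer array of gray levels."""
    n_g = np.bincount(image.ravel(), minlength=L)
    return n_g / image.size

def gaussian_mixture(x, P, mu, sigma):
    """p(x) of Eq. (16): sum of K weighted Gaussian densities."""
    x = np.asarray(x, dtype=float)[:, None]        # shape (L, 1) against (K,)
    return np.sum(P / (np.sqrt(2 * np.pi) * sigma)
                  * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)), axis=1)

def objective_J(h, P, mu, sigma, omega=1.0):
    """Mean square error plus penalty on sum(P) != 1, Eq. (17);
    the 1/n factor is realized as the mean over the L bins (an assumption)."""
    x = np.arange(h.size)
    p = gaussian_mixture(x, P, mu, sigma)
    mse = np.mean((p - h) ** 2)
    return mse + omega * abs(float(np.sum(P)) - 1.0)
```

With a valid mixture (weights summing to 1) the penalty term vanishes and J reduces to the histogram-fit error; inflating the weights immediately increases J, which is exactly the constraint mechanism the penalty ω enforces.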

6. Segmentation Algorithm Based on LS

In the proposed method, the segmentation process is approached as an optimization problem. Computational optimization generally involves two distinct elements: (1) a search strategy to produce candidate solutions (individuals, particles, insects, locusts, etc.) and (2) an objective function that evaluates the quality of each candidate solution. Several computational algorithms are available to perform the first element. The second element, where the objective function is designed, is unquestionably the most critical. The objective function is expected to embody the full


[Figure 6: Histogram approximation through a Gaussian mixture. (a) Original histogram; (b) configuration of the Gaussian components p_1(x), p_2(x), and p_3(x); (c) final Gaussian mixture p(x).]

complexity of the performance, biases, and restrictions of the problem to be solved. In the segmentation problem, candidate solutions represent Gaussian mixtures. The objective function J (17) is used as a fitness value to evaluate the similarity between the Gaussian mixture and the image histogram. Therefore, guided by the fitness values (J values), a set of encoded candidate solutions is evolved using the evolutionary operators until the best possible resemblance can be found.

Over the last decade, several algorithms based on evolutionary and swarm principles [19-22] have been proposed to solve the problem of segmentation through a Gaussian mixture model. Although these algorithms achieve good performance indexes, they present two important limitations: (1) they frequently obtain suboptimal approximations as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) they are based on the assumption that the number of Gaussians (classes) in the mixture is known in advance and fixed; otherwise, they cannot work.

In order to eliminate such flaws, the proposed approach includes (A) a new search strategy and (B) the definition of a new objective function. For the search strategy, the LS method (Section 4) is adopted. Under LS, the concentration of individuals around the current best positions is explicitly avoided. This reduces critical problems such as premature convergence to suboptimal solutions and an incorrect exploration-exploitation balance.

6.1. New Objective Function J_new. Previous segmentation algorithms based on evolutionary and swarm methods use (17) as the objective function. Under these circumstances, each candidate solution (Gaussian mixture) is evaluated only in terms of its approximation to the image histogram.

Since the proposed approach aims to automatically select the number of Gaussian functions K in the final mixture p(x), the objective function must be modified. The new objective function J_new is defined as follows:

J_new = J + λ · Q,    (18)

where λ is a scaling constant. The new objective function is divided into two parts. The first part, J, evaluates the quality of each candidate solution in terms of its similarity with regard to the image histogram (17). The second part, Q, penalizes the overlapped area among Gaussian functions (classes), with Q being defined as follows:

Q = Σ_{i=1}^{K} Σ_{j=1, j≠i}^{K} Σ_{l=1}^{L} min(P_i · p_i(l), P_j · p_j(l)),    (19)

where K and L represent the number of classes and the number of gray levels, respectively. The terms p_i(l) and p_j(l) symbolize the Gaussian functions i and j, respectively, evaluated at the point l (gray level), whereas the elements P_i and P_j represent the a priori probabilities of the Gaussian functions i and j, respectively. Under such circumstances, mixtures with Gaussian functions that do not "positively" participate in the histogram approximation are severely penalized.
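The penalty of (19) translates almost literally into code. In the sketch below, each class is a (P_i, mu_i, sigma_i) triplet; the Gaussian density form and the gray-level count L = 256 are the only assumptions:

```python
import math

def gauss(x, mu, sigma):
    # Gaussian density evaluated at gray level x
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def overlap_penalty(classes, L=256):
    """Q from Eq. (19): summed pointwise overlap between every ordered pair
    (i, j), i != j, of weighted Gaussian functions over the L gray levels."""
    Q = 0.0
    for i, (Pi, mui, si) in enumerate(classes):
        for j, (Pj, muj, sj) in enumerate(classes):
            if i == j:
                continue
            for l in range(L):
                Q += min(Pi * gauss(l, mui, si), Pj * gauss(l, muj, sj))
    return Q

# Two well-separated classes overlap almost nowhere ...
far = overlap_penalty([(0.5, 50, 5), (0.5, 200, 5)])
# ... while two nearly coincident classes overlap heavily.
near = overlap_penalty([(0.5, 100, 5), (0.5, 105, 5)])
```

The contrast between the two calls is exactly the mechanism the text describes: redundant, overlapping classes inflate Q and drag down the fitness of the candidate mixture.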

Figure 7 illustrates the effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). From the image histogram (Figure 7(a)), it is evident that two Gaussian functions are enough to accurately


Figure 7: Effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). (a) Original histogram h(x); (b) Gaussian mixture considering four classes p1(x)-p4(x); (c) penalization areas Q12, Q23, and Q34; (d) Gaussian mixture of better quality.

approximate the original histogram. However, if the Gaussian mixture is modeled by using a greater number of functions (e.g., four, as shown in Figure 7(b)), the original objective function J is unable to obtain a reasonable approximation. Under the new objective function J_new, the overlapped area among Gaussian functions (classes) is penalized. Such areas in Figure 7(c) correspond to Q12, Q23, and Q34, where Q12 represents the penalization value produced between the Gaussian functions p1(x) and p2(x). Therefore, due to the penalization, the Gaussian mixture shown in Figures 7(b) and 7(c) provides a solution of low quality. On the other hand, the Gaussian mixture presented in Figure 7(d) maintains a low penalty; thus, it represents a solution of high quality. From Figure 7(d), it is easy to see that functions p1(x) and p4(x) can be removed from the final mixture. This elimination can be performed by a simple comparison with a threshold value θ, since p1(x) and p4(x) have a reduced amplitude (p1(x) ≈ p4(x) ≈ 0). Therefore, under J_new, it is possible to find the reduced Gaussian mixture model starting from a considerable number of functions.
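The elimination step can be sketched as a simple amplitude filter. Here the amplitude of a weighted function P·p(x) is taken as its peak value at x = mu; θ plays the same role as in the text, and the numeric values are illustrative only:

```python
import math

def amplitude(P, sigma):
    # Peak value of the weighted Gaussian P * p(x), reached at x = mu
    return P / (sigma * math.sqrt(2.0 * math.pi))

def reduce_mixture(classes, theta):
    """Drop Gaussian functions whose peak amplitude does not exceed theta."""
    return [(P, mu, s) for (P, mu, s) in classes if amplitude(P, s) > theta]

# Hypothetical 4-function mixture: two significant classes, two negligible ones
mix = [(0.00001, 20, 10), (0.45, 90, 12), (0.50, 180, 9), (0.00002, 240, 8)]
kept = reduce_mixture(mix, theta=0.0001)  # only the two significant classes survive
```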

Since the proposed segmentation method is conceived as an optimization problem, the overall operation can be reduced to solving the formulation of (20) by using the LS algorithm:

minimize   J_new(x) = J(x) + λ · Q(x),
           x = (P_1, μ_1, σ_1, …, P_K, μ_K, σ_K) ∈ R^(3·K),
subject to 0 ≤ P_d ≤ 1, d ∈ (1, …, K),
           0 ≤ μ_d ≤ 255,
           0 ≤ σ_d ≤ 25,    (20)

where P_d, μ_d, and σ_d represent the probability, mean, and standard deviation of the class d. It is important to remark that the new objective function J_new allows the evaluation of a candidate solution in terms of the necessary number of Gaussian functions and its approximation quality. Under such circumstances, it can be used in combination with any other evolutionary method and not only with the proposed LS algorithm.

6.2. Complete Segmentation Algorithm. Once the new search strategy (LS) and objective function (J_new) have been defined, the proposed segmentation algorithm can be summarized by Algorithm 2. The new algorithm combines operators defined by LS and operations for calculating the threshold values.

(Line 1) The algorithm sets the operative parameters F, L, g, gen, N, K, λ, and θ. They rule the behavior of the segmentation algorithm. (Line 2) Afterwards, the population L^0 is initialized considering N different random Gaussian mixtures of K functions. The idea is to generate N random Gaussian mixtures subject to the constraints formulated in (20). The parameter K must hold a high value in order to correctly segment all images (recall that the algorithm is able to reduce the Gaussian mixture to its minimum expression). (Line 3) Then, the Gaussian mixtures are evolved by using the LS operators and the new objective function J_new. This process is repeated during gen cycles. (Line 8) After this procedure, the best Gaussian mixture l^gen_best is selected according to its objective function J_new. (Line 9) Then, the Gaussian mixture l^gen_best is reduced by eliminating those functions whose amplitudes are lower than θ (p_i(x) ≤ θ). (Line 10) Then, the threshold values T_i from the reduced model are calculated. (Line 11) Finally, the calculated T_i values are employed to segment the image. Figure 8 shows a flowchart of the complete segmentation procedure.


(1) Input: F, L, g, gen, N, K, λ, and θ
(2) Initialize L^0 (k = 0)
(3) until (k = gen)
(4)   F ← SolitaryOperation(L^k)    {Solitary operator (Section 3.1)}
(5)   L^(k+1) ← SocialOperation(L^k, F)    {Social operator (Section 3.2)}
(6)   k = k + 1
(7) end until
(8) Obtain l^gen_best
(9) Reduce l^gen_best
(10) Calculate the threshold values T_i from the reduced model
(11) Use T_i to segment the image

Algorithm 2: Segmentation LS algorithm.
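The control flow of Algorithm 2 can be mirrored in a short skeleton. The LS operators and the J_new fitness are mocked with placeholders (their real definitions live in Sections 3.1-3.2 and Eq. (18)), so this shows structure only, not the actual method:

```python
import random

def fitness(ind, histogram):
    # Placeholder for J_new of Eq. (18); a real implementation would compare
    # the mixture against the histogram and add the overlap penalty Q
    return abs(sum(P for P, _, _ in ind) - 1.0)

def segment_with_ls(histogram, n=40, k=10, gen=1000, theta=0.0001,
                    solitary=None, social=None):
    """Skeleton of Algorithm 2: evolve N Gaussian mixtures of K functions
    each, select the best one, and reduce it before threshold extraction."""
    # Line 2: random initial population; an individual is K (P, mu, sigma) triplets
    pop = [[(random.random(), random.uniform(0, 255), random.uniform(1, 25))
            for _ in range(k)] for _ in range(n)]
    for _ in range(gen):              # Lines 3-7: LS evolution loop
        if solitary and social:
            food = solitary(pop)      # solitary operator (Section 3.1)
            pop = social(pop, food)   # social operator (Section 3.2)
    best = min(pop, key=lambda ind: fitness(ind, histogram))   # Line 8
    # Line 9: drop functions with negligible weight (stand-in for p_i(x) <= theta)
    return [g for g in best if g[0] > theta]

random.seed(7)
mixture = segment_with_ls(histogram=[0.0] * 256, n=5, k=5, gen=1)
```

Lines 10-11 (threshold extraction and pixel labeling) would then operate on the returned reduced mixture.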

The proposed segmentation algorithm automatically detects the number of segmentation partitions (classes). Furthermore, due to its remarkable search capacities, the LS method maintains a better accuracy than previous algorithms based on evolutionary principles. However, the proposed method presents two disadvantages. The first is related to its implementation, which in general is more complex than that of most other segmentation methods based on evolutionary principles. The second refers to the segmentation procedure of the proposed approach, which does not consider any spatial pixel characteristics. As a consequence, pixels that should belong to a determined region because of their position are labeled as part of another region because of their gray level intensity. Such a fact adversely affects the segmentation performance of the method.

7. Segmentation Results

This section analyzes the performance of the proposed segmentation algorithm. The discussion is divided into three parts: the first shows the performance of the proposed LS segmentation algorithm, while the second presents a comparison between the proposed approach and other segmentation algorithms that are based on evolutionary and swarm methods. The comparison mainly considers the capacity of each algorithm to accurately and robustly approximate the image histogram. Finally, the third part presents an objective evaluation of the segmentation results produced by all algorithms employed in the comparisons.

7.1. Performance of the LS Algorithm in Image Segmentation. This section presents two experiments that analyze the performance of the proposed approach. Table 5 presents the algorithm's parameters, which have been experimentally determined and kept for all the test images through all experiments.

First Image. The first test considers the histogram shown by Figure 9(b), while Figure 9(a) presents the original image. After applying the proposed algorithm, configured according to the parameters in Table 5, a minimum model of four classes emerges after the start from Gaussian

Table 5: Parameter setup for the LS segmentation algorithm.

F      L      g     gen    N     K     λ      θ
0.6    0.2    20    1000   40    10    0.01   0.0001

Table 6: Results of the reduced Gaussian mixture for the first and the second image.

Parameter    First image    Second image
P1           0.004          0.032
μ1           18.1           12.1
σ1           8.2            2.9
P2           0.0035         0.0015
μ2           69.9           45.1
σ2           18.4           24.9
P3           0.01           0.006
μ3           94.9           130.1
σ3           8.9            17.9
P4           0.007          0.02
μ4           163.1          167.2
σ4           29.9           10.1

mixtures of 10 classes. Considering 30 independent executions, the averaged parameters of the resultant Gaussian mixture are presented in Table 6. One final Gaussian mixture (ten classes) obtained by LS is presented in Figure 10. Furthermore, the approximation of the reduced Gaussian mixture is also visually compared with the original histogram in Figure 10. On the other hand, Figure 11 presents the segmented image after calculating the threshold points.

Second Image. For the second experiment, the image in Figure 12 is tested. The method aims to segment the image by using a reduced Gaussian mixture model obtained by the LS approach. After executing the algorithm according to the parameters from Table 5, the resulting averaged parameters of the resultant Gaussian mixture are presented in Table 6. In order to assure consistency, the results are averaged over 30 independent executions. Figure 13 shows the approximation quality obtained by the reduced Gaussian mixture model in (a) and the segmented image in (b).


Figure 8: Flowchart of the complete segmentation procedure.

Figure 9: (a) Original first image used in the first experiment and (b) its histogram.


Figure 10: Gaussian mixture obtained by LS for the first image (histogram; classes 1-4; reduced Gaussian mixture; eliminated classes).

Figure 11: Image segmented with the reduced Gaussian mixture.

Figure 12: (a) Original second image used in the first experiment and (b) its histogram.

7.2. Histogram Approximation Comparisons. This section discusses the comparison between the proposed segmentation algorithm and other evolutionary segmentation methods that have been proposed in the literature. The set of methods used in the experiments includes J + ABC [19], J + AIS [20], and J + DE [21]. These algorithms consider the combination of the original objective function J (17) with an evolutionary technique: the Artificial Bee Colony (ABC), the Artificial Immune System (AIS), and Differential Evolution (DE) [21], respectively. Since the proposed segmentation approach considers the combination of the new objective function J_new (18) and the LS algorithm, it will be referred to as J_new + LS.

The comparison focuses mainly on the capacity of each algorithm to accurately and robustly approximate the image histogram.

In the experiments, the populations have been set to 40 (N = 40) individuals. The maximum iteration number for all functions has been set to 1000. Such a stop criterion has been considered to maintain compatibility with similar experiments


Figure 13: (a) Approximation of the reduced Gaussian mixture obtained by LS for the second image and (b) the segmented image.

Figure 14: (a) Synthetic image used in the comparison and (b) its histogram.

that are reported in the literature [18]. The parameter setting for each of the segmentation algorithms in the comparison is described as follows:

(1) J + ABC [19]: the parameters are configured as the abandonment limit = 100, α = 0.05, and limit = 30.

(2) J + AIS [20]: it presents the following parameters: h = 120, N_c = 80, ρ = 10, P_r = 20, L = 22, and T_e = 0.01.

(3) J + DE [21]: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [21]. The crossover probability is CR = 0.9 and the weighting factor is F = 0.8.

(4) J_new + LS: the method is set according to the values described in Table 5.

In order to conduct the experiments, a synthetic image is designed to be used as a reference in the comparisons. The main idea is to know in advance the exact number of classes (and their parameters) contained in the image, so that the histogram can be considered as a ground truth. The

Table 7: Employed parameters for the design of the reference image.

      P_i     μ_i    σ_i
(1)   0.05    40     8
(2)   0.04    100    10
(3)   0.05    160    8
(4)   0.027   220    15

synthetic image is divided into four sections. Each section corresponds to a different class, which is produced by setting each gray level pixel Pv_i to a value determined by the following model:

Pv_i = e^(−((x − μ_i)^2) / (2σ_i^2)),    (21)

where i represents the section, whereas μ_i and σ_i are the mean and the dispersion of the gray level pixels, respectively. The comparative study employs the 512 × 512 image shown in Figure 14(a) and the algorithm parameters presented in Table 7. Figure 14(b) illustrates the resultant histogram.
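A sketch of how such a reference image can be generated from the Table 7 parameters follows. The horizontal-band layout is an assumption (the text only states that the image is divided into four sections), and pixels are drawn around each section's mean with its dispersion:

```python
import random

# Table 7 parameters: one (mu_i, sigma_i) pair per image section (class)
SECTIONS = [(40, 8), (100, 10), (160, 8), (220, 15)]

def synthetic_image(size=512):
    """Reference image with four sections, laid out here as horizontal bands
    (layout is an assumption). Pixels of section i are spread around mu_i
    with dispersion sigma_i and clipped to the valid gray range [0, 255]."""
    band = size // len(SECTIONS)
    img = []
    for r in range(size):
        mu, sigma = SECTIONS[min(r // band, len(SECTIONS) - 1)]
        img.append([min(255, max(0, int(random.gauss(mu, sigma))))
                    for _ in range(size)])
    return img

random.seed(0)
img = synthetic_image(64)  # small size for illustration; the paper uses 512
```

The histogram of such an image shows four modes centered near 40, 100, 160, and 220, which is the ground truth the comparison relies on.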


Figure 15: Convergence results. (a) Convergence of the methods J + ABC, J + AIS, and J + DE considering Gaussian mixtures of 8 classes; (b) convergence of the proposed method (reduced Gaussian mixture).

Figure 16: Segmentation results obtained by (a) several methods, including J + ABC, J + AIS, and J + DE, considering Gaussian mixtures of 8 classes and (b) the proposed method (reduced Gaussian mixture).

In the comparison, the discussion focuses on the following issues: first, convergence; second, accuracy; and third, computational cost.

Convergence. This section analyzes the approximation convergence when the number of classes used by the algorithm during the evolution process differs from the actual number of classes in the image. Recall that the proposed approach automatically finds the reduced Gaussian mixture that best adapts to the image histogram.

In the experiment, the methods J + ABC, J + AIS, and J + DE are executed considering Gaussian mixtures composed of 8 functions. Under such circumstances, the number of classes to be detected is higher than the actual number of classes in the image. On the other hand, the proposed algorithm maintains the same configuration as in Table 5. Therefore, it can detect and calculate up to ten classes (K = 10).

As a result, the techniques J + ABC, J + AIS, and J + DE tend to overestimate the image histogram. This effect can be seen in Figure 15(a), where the resulting Gaussian functions are concentrated within the actual classes. Such behavior is a consequence of the evaluation considered by the original objective function J, which privileges only the approximation between the Gaussian mixture and the image histogram. This effect is graphically illustrated by Figure 16(a), which shows the pixel misclassification produced by the wrong segmentation of Figure 14(a). On the other hand, the proposed approach obtains a reduced Gaussian mixture model which allows the detection of each class from


Figure 17: Approximation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

the actual histogram (see Figure 15(b)). As a consequence, the segmentation is significantly improved by eliminating the pixel misclassification, as shown in Figure 16(b).

It is evident from Figure 15 that the techniques J + ABC, J + AIS, and J + DE all need a priori knowledge of the number of classes contained in the actual histogram in order to obtain a satisfactory result. On the other hand, the proposed algorithm is able to find a reduced Gaussian mixture whose classes coincide with the actual number of classes contained in the image histogram.

Accuracy. In this section, the comparison among the algorithms in terms of accuracy is reported. Most of the reported comparisons [19-26] are concerned with comparing the parameters of the resultant Gaussian mixtures by using real images. Under such circumstances, it is difficult to consider a clear reference in order to define a meaningful error. Therefore, the image defined in Figure 14 has been used in the experiments because its construction parameters are clearly defined in Table 7.

Since the parameter values from Table 7 act as ground truth, a simple averaged difference between them and the values computed by each algorithm could be used as a comparison error. However, as each parameter has a different active interval, it is necessary to express the differences in terms of percentage. Therefore, if Δβ is the parametric difference and β the ground truth parameter, the percentage error Δβ% can be defined as follows:

Δβ% = (Δβ / β) · 100.    (22)

In the segmentation problem, each Gaussian mixture represents a K-dimensional model, where each dimension corresponds to a Gaussian function of the optimization problem to be solved. Since each Gaussian function possesses three parameters, P_i, μ_i, and σ_i, the complete number of parameters is 3 · K dimensions. Therefore, the final error E produced by the final Gaussian mixture is

E = (1 / (K · 3)) · Σ_{v=1}^{K·3} Δβ_v%,    (23)

where β_v ∈ (P_i, μ_i, σ_i).

In order to compare accuracy, the algorithms J + ABC, J + AIS, J + DE, and the proposed approach are all executed over the image shown in Figure 14(a). The experiment aims


Figure 18: Segmentation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

to estimate the Gaussian mixture that best approximates the actual image histogram. The methods J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, although the algorithm finds a reduced Gaussian mixture of four functions, it is initially set with ten functions (K = 10). Table 8 presents the final Gaussian mixture parameters and the final error E. The final Gaussian mixture parameters have been averaged over 30 independent executions in order to assure consistency. A close inspection of Table 8 reveals that the proposed method is able to achieve the smallest error (E) in comparison to the other algorithms. Figure 17 presents the histogram approximations produced by each algorithm, whereas Figure 18 shows the corresponding segmented images. Both illustrations present the median case obtained throughout 30 runs. Figure 18 exhibits that J + ABC, J + AIS, and J + DE present different levels of misclassification, which are nearly absent in the proposed approach.
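The error measure of (22)-(23) amounts to a mean absolute percentage deviation over the 3·K mixture parameters; a direct transcription (illustrative only):

```python
def mixture_error(estimated, ground_truth):
    """E from Eqs. (22)-(23): mean percentage deviation over the 3*K
    parameters (P_i, mu_i, sigma_i) of the Gaussian mixture."""
    terms = [abs(est - ref) / ref * 100.0
             for est_fn, ref_fn in zip(estimated, ground_truth)
             for est, ref in zip(est_fn, ref_fn)]
    return sum(terms) / len(terms)

# Ground truth of Table 7; a perfect recovery yields E = 0
TRUTH = [(0.05, 40, 8), (0.04, 100, 10), (0.05, 160, 8), (0.027, 220, 15)]
```

Dividing each deviation by its own ground-truth value is what makes priors (order 0.01) and means (order 100) commensurable in a single index.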

Table 8: Results of the reduced Gaussian mixture in terms of accuracy.

Algorithm     Gaussian function    P_i      μ_i       σ_i      E
J + ABC       (1)                  0.052    44.5      6.4
              (2)                  0.084    98.12     12.87    11.79
              (3)                  0.058    163.50    8.94
              (4)                  0.025    218.84    1.75
J + AIS       (1)                  0.072    31.01     6.14
              (2)                  0.054    88.52     12.21    22.01
              (3)                  0.039    149.21    9.14
              (4)                  0.034    248.41    13.84
J + DE        (1)                  0.041    35.74     7.86
              (2)                  0.036    90.57     11.97    13.57
              (3)                  0.059    148.47    9.01
              (4)                  0.020    201.34    13.02
J_new + LS    (1)                  0.049    40.12     7.5
              (2)                  0.041    102.04    10.4     3.98
              (3)                  0.052    168.66    8.3
              (4)                  0.025    110.92    15.2

Computational Cost. The experiment aims to measure the complexity and the computing time spent by the J + ABC, J + AIS, J + DE, and J_new + LS algorithms while calculating the parameters of the Gaussian mixture for the benchmark images shown in Figures 19(a)-19(d). J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, the algorithm finds a reduced Gaussian mixture of four functions despite being initialized with ten functions (K = 10). Table 9 shows the averaged measurements after 30 experiments. It is evident that J + ABC and J + DE are the slowest to converge (iterations), whereas J + AIS shows the highest computational cost (time elapsed) because it requires operators which demand long processing times. On the other hand, J_new + LS shows an acceptable trade-off between its convergence speed and its computational cost. Therefore, although the implementation of J_new + LS in general requires more code than most other evolution-based segmentators, such a fact is not reflected in the execution time. Finally, Figure 19 below shows the segmented images as they are

Figure 19: Images employed in the computational cost analysis.


Figure 20: Experimental set used in the evaluation of the segmentation results.

Table 9: Iterations and time requirements of the J + ABC, J + AIS, J + DE, and J_new + LS algorithms as they are applied to segment the benchmark images (see Figure 19). Each entry reports iterations and, in parentheses, the time elapsed.

Algorithm     (a)            (b)            (c)            (d)
J + ABC       855 (2.72 s)   833 (2.70 s)   870 (2.73 s)   997 (3.1 s)
J + AIS       725 (1.78 s)   704 (1.61 s)   754 (1.41 s)   812 (2.01 s)
J + DE        657 (1.25 s)   627 (1.12 s)   694 (1.45 s)   742 (1.88 s)
J_new + LS    314 (0.98 s)   298 (0.84 s)   307 (0.72 s)   402 (1.02 s)

Table 10: Evaluation of the segmentation results in terms of the ROS index.

Image (number of classes)   (a) N_R = 4   (b) N_R = 3   (c) N_R = 4   (d) N_R = 4
J + ABC                     0.534         0.328         0.411         0.457
J + AIS                     0.522         0.321         0.427         0.437
J + DE                      0.512         0.312         0.408         0.418
J_new + LS                  0.674         0.401         0.514         0.527

generated by each algorithm. It can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by an incorrect pixel classification.

7.3. Performance Evaluation of the Segmentation Results. This section presents an objective evaluation of the segmentation results produced by all algorithms in the comparisons. The ill-defined nature of the segmentation problem makes the evaluation of a candidate algorithm difficult [57]. Traditionally, the evaluation has been conducted by using supervised criteria [58], which are based on the computation of a dissimilarity measure between a segmentation result and a ground truth image. Recently, unsupervised measures have substituted supervised indexes for the objective evaluation of segmentation results [59]. They

Table 11: Unimodal test functions.

Test function                                                     S                  f_opt
f1(x) = Σ_{i=1}^{n} x_i^2                                         [−100, 100]^n      0
f2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|                     [−10, 10]^n        0
f3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)^2                           [−100, 100]^n      0
f4(x) = max_i {|x_i|, 1 ≤ i ≤ n}                                  [−100, 100]^n      0
f5(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2]      [−30, 30]^n        0
f6(x) = Σ_{i=1}^{n} (⌊x_i + 0.5⌋)^2                               [−100, 100]^n      0
f7(x) = Σ_{i=1}^{n} i·x_i^4 + rand(0, 1)                          [−1.28, 1.28]^n    0

enable the quantification of the quality of a segmentation result without a priori knowledge (ground truth image).

Evaluation Criteria. In this paper, the unsupervised index ROS proposed by Chabrier et al. [60] has been used to objectively evaluate the performance of each candidate algorithm. This index evaluates the segmentation quality in terms of the homogeneity within segmented regions and the heterogeneity among the different regions. ROS can be computed as follows:

ROS = (D̄ − D) / 2,    (24)

where D̄ quantifies the homogeneity within the segmented regions and D measures the disparity among the regions. A segmentation result S1 is considered better than S2 if ROS_S1 > ROS_S2. The within-region homogeneity, characterized by D̄, is calculated considering the following formulation:

D̄ = (1/N_R) · Σ_{c=1}^{N_R} (R_c / I) · σ_c / (Σ_{l=1}^{N_R} σ_l),    (25)

where N_R represents the number of partitions into which the image has been segmented, R_c symbolizes the number of pixels contained in the partition c, and I is the number of pixels in the complete image. Similarly, σ_c represents the standard deviation of the partition c. On the other hand, the disparity among the regions, D, is computed as follows:

D = (1/N_R) · Σ_{c=1}^{N_R} (R_c / I) · [(1/(N_R − 1)) · Σ_{l=1}^{N_R} |μ_c − μ_l| / 255],    (26)

where μ_c is the average gray level in the partition c.

Figure 21: Segmentation results used in the evaluation.
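Equations (24)-(26) can be transcribed directly; the sketch below computes ROS from a gray-level image and a same-shaped label (partition) image, with region statistics gathered per partition. The tiny input at the bottom is hypothetical:

```python
def ros_index(gray, labels):
    """ROS of Eqs. (24)-(26): within-region term Dbar and between-region
    disparity D, computed from a gray image and its label image."""
    regions = sorted({v for row in labels for v in row})
    nr = len(regions)                          # N_R: number of partitions
    total = sum(len(row) for row in gray)      # I: pixels in the whole image
    stats = {}
    for c in regions:
        vals = [gray[r][k] for r in range(len(gray))
                for k in range(len(gray[r])) if labels[r][k] == c]
        mean = sum(vals) / len(vals)
        std = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
        stats[c] = (len(vals), mean, std)      # (R_c, mu_c, sigma_c)
    sigma_sum = sum(s for _, _, s in stats.values()) or 1.0
    dbar = sum(n / total * s / sigma_sum for n, _, s in stats.values()) / nr
    disp = sum(n / total
               * sum(abs(m - stats[l][1]) / 255.0 for l in regions) / (nr - 1)
               for n, m, _ in stats.values()) / nr
    return (dbar - disp) / 2.0

# Two perfectly homogeneous, well-separated regions (toy 2x4 image)
gray = [[10, 10, 200, 200], [10, 10, 200, 200]]
labels = [[0, 0, 1, 1], [0, 0, 1, 1]]
r = ros_index(gray, labels)
```

On real images every region carries some internal variance, so both terms of (24) are nonzero and the index lands in the ranges reported in Table 10.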

Experimental Protocol. In the comparison of segmentation results, a set of four classical images has been chosen to integrate the experimental set (Figure 20). The segmentation methods used in the comparison are J + ABC [19], J + AIS [20], and J + DE [21].

From all segmentation methods used in the comparison, the proposed J_new + LS algorithm is the only one that has the capacity to automatically detect the number of segmentation partitions (classes). In order to conduct a fair comparison, all algorithms have been tested using the same number


Table 12: Multimodal test functions.

Test function                                                                     S                  f_opt
f8(x) = Σ_{i=1}^{n} −x_i · sin(√|x_i|)                                            [−500, 500]^n      −418.98 · n
f9(x) = Σ_{i=1}^{n} [x_i^2 − 10·cos(2πx_i) + 10]                                  [−5.12, 5.12]^n    0
f10(x) = −20·exp(−0.2·√((1/n)·Σ_{i=1}^{n} x_i^2))
         − exp((1/n)·Σ_{i=1}^{n} cos(2πx_i)) + 20                                 [−32, 32]^n        0
f11(x) = (1/4000)·Σ_{i=1}^{n} x_i^2 − Π_{i=1}^{n} cos(x_i / √i) + 1               [−600, 600]^n      0
f12(x) = (π/n)·{10·sin^2(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)^2·[1 + 10·sin^2(πy_{i+1})]
         + (y_n − 1)^2} + Σ_{i=1}^{n} u(x_i, 10, 100, 4),                         [−50, 50]^n        0
  where y_i = 1 + (x_i + 1)/4 and
  u(x_i, a, k, m) = k·(x_i − a)^m if x_i > a; 0 if −a < x_i < a; k·(−x_i − a)^m if x_i < −a
f13(x) = 0.1·{sin^2(3πx_1) + Σ_{i=1}^{n−1} (x_i − 1)^2·[1 + sin^2(3πx_{i+1})]
         + (x_n − 1)^2·[1 + sin^2(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4)        [−50, 50]^n        0

of partitions. Therefore, in the experiments, the J_new + LS segmentation algorithm is first applied to detect the best possible number of partitions N_R. Once the number of partitions N_R is obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.

Figure 21 presents the segmentation results obtained by each algorithm considering the experimental set from Figure 20. On the other hand, Table 10 shows the evaluation of the segmentation results in terms of the ROS index. Such values represent the averaged measurements after 30 executions. From them, it can be seen that the proposed J_new + LS method obtains the best ROS indexes. Such values indicate that the proposed algorithm maintains the best balance between the homogeneity within segmented regions and the heterogeneity among the different regions. From Figure 21, it can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by an incorrect pixel classification.

8. Conclusions

Despite the fact that several evolutionary methods have been successfully applied to image segmentation with interesting results, most of them have exhibited two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance.

In this paper, a new swarm algorithm for automatic image segmentation, called Locust Search (LS), has been presented. The proposed method eliminates the typical flaws of previous evolutionary approaches by combining a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments were divided into two parts. First, the proposed LS method was compared to other well-known evolutionary techniques on a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm was compared to other segmentation methods based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

Several research directions will be considered for future work, such as the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, n is the dimension of the function, f_opt is the minimum value of the function, and S is a subset of R^n. The optimum location (x_opt) for functions in Table 11 is at [0]^n, except for f5, with x_opt at [1]^n.

The optimum locations (x_opt) for functions in Table 12 are at [0]^n, except for f8, at [420.96]^n, and f12–f13, at [1]^n.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

24 Mathematical Problems in Engineering

Acknowledgment

The present research has been supported under Grant CONACYT CB 181053.

References

[1] H. Zhang, J. E. Fritts, and S. A. Goldman, "Image segmentation evaluation: a survey of unsupervised methods," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 260–280, 2008.

[2] T. Uemura, G. Koutaki, and K. Uchimura, "Image segmentation based on edge detection using boundary code," International Journal of Innovative Computing, vol. 7, pp. 6073–6083, 2011.

[3] L. Wang, H. Wu, and C. Pan, "Region-based image segmentation with local signed difference energy," Pattern Recognition Letters, vol. 34, no. 6, pp. 637–645, 2013.

[4] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.

[5] R. Peng and P. K. Varshney, "On performance limits of image segmentation algorithms," Computer Vision and Image Understanding, vol. 132, pp. 24–38, 2015.

[6] M. A. Balafar, "Gaussian mixture model based segmentation methods for brain MRI images," Artificial Intelligence Review, vol. 41, no. 3, pp. 429–439, 2014.

[7] G. J. McLachlan and S. Rathnayake, "On the number of components in a Gaussian mixture model," Data Mining and Knowledge Discovery, vol. 4, no. 5, pp. 341–355, 2014.

[8] D. Oliva, V. Osuna-Enciso, E. Cuevas, G. Pajares, M. Pérez-Cisneros, and D. Zaldívar, "Improving segmentation velocity using an evolutionary method," Expert Systems with Applications, vol. 42, no. 14, pp. 5874–5886, 2015.

[9] Z.-W. Ye, M.-W. Wang, W. Liu, and S.-B. Chen, "Fuzzy entropy based optimal thresholding using bat algorithm," Applied Soft Computing, vol. 31, pp. 381–395, 2015.

[10] S. Sarkar, S. Das, and S. Sinha Chaudhuri, "A multilevel color image thresholding scheme based on minimum cross entropy and differential evolution," Pattern Recognition Letters, vol. 54, pp. 27–35, 2015.

[11] A. K. Bhandari, A. Kumar, and G. K. Singh, "Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur's, Otsu and Tsallis functions," Expert Systems with Applications, vol. 42, no. 3, pp. 1573–1601, 2015.

[12] H. Permuter, J. Francos, and I. Jermyn, "A study of Gaussian mixture models of color and texture features for image classification and segmentation," Pattern Recognition, vol. 39, no. 4, pp. 695–706, 2006.

[13] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1–38, 1977.

[14] Z. Zhang, C. Chen, J. Sun, and K. L. Chan, "EM algorithms for Gaussian mixtures with split-and-merge operation," Pattern Recognition, vol. 36, no. 9, pp. 1973–1983, 2003.

[15] H. Park, S.-I. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755–764, 2000.

[16] H. Park and T. Ozeki, "Singularity and slow convergence of the EM algorithm for Gaussian mixtures," Neural Processing Letters, vol. 29, no. 1, pp. 45–59, 2009.

[17] L. Gupta and T. Sortrakul, "A Gaussian-mixture-based image segmentation algorithm," Pattern Recognition, vol. 31, no. 3, pp. 315–325, 1998.

[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213–1219, 2013.

[19] E. Cuevas, F. Sencion, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321–336, 2012.

[20] E. Cuevas, V. Osuna-Enciso, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.

[21] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.

[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357–381, 2014.

[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and M. Pérez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.

[24] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805–818, 2011.

[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701–713, 2009.

[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661–673, 2009.

[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.

[28] R. Storn and K. Price, "Differential Evolution – a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.

[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515–4538, 2011.

[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing Journal, vol. 9, no. 3, pp. 1149–1155, 2009.

[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119–135, 2013.

[32] W. Gong, A. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364–5386, 2011.

[33] D. M. Gordon, "The organization of work in social insect colonies," Complexity, vol. 8, no. 1, pp. 43–46, 2002.

[34] S. Kizaki and M. Katori, "Stochastic lattice model for locust outbreak," Physica A: Statistical Mechanics and its Applications, vol. 266, no. 1–4, pp. 339–342, 1999.

[35] S. M. Rogers, D. A. Cullen, M. L. Anstey et al., "Rapid behavioural gregarization in the desert locust, Schistocerca gregaria, entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9–26, 2014.

[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal Special Topics, vol. 157, no. 1, pp. 93–109, 2008.

[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.

[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.

[39] B. Hölldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.

[40] B. Hölldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.

[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101–112, 2013.

[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196–205, 2012.

[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723–738, 2014.

[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.

[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskål)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.

[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.

[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256–272, 2014.

[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635–672, 2005.

[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.

[50] F. Herrera, M. Lozano, and A. M. Sánchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309–338, 2003.

[51] E. Cuevas, D. Zaldívar, M. Pérez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93–107, 2011.

[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.

[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140–157, 2015.

[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.

[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.

[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.

[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.

[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.

[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335–1346, 1996.

[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.


(1) Input: F, L, g, N, gen, h, and β
(2) Initialize L^0 (k = 0)
(3) until (k = gen)
(4)   F ← SolitaryOperation(L^k)  {solitary operator (Section 3.1)}
(5)   L^(k+1) ← SocialOperation(L^k, F)  {social operator (Section 3.2)}
(6)   k = k + 1
(7) end until

Algorithm 1: Locust Search (LS) algorithm.
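As a rough illustration of the iteration structure of Algorithm 1, the following Python sketch runs one solitary-then-social cycle per generation. The operator bodies are simplified stand-ins (a repulsion from the swarm centroid and a local refinement around the best individual), not the published formulations of Sections 3.1 and 3.2; all function and parameter names here are illustrative.

```python
import random

def locust_search(objective, n_dim, bounds, n=40, gen=200, f=0.6, beta=0.6, seed=0):
    """Schematic sketch of the LS loop; operators are simplified stand-ins."""
    rng = random.Random(seed)
    lo, hi = bounds

    def clip(x):
        return [min(max(v, lo), hi) for v in x]

    swarm = [[rng.uniform(lo, hi) for _ in range(n_dim)] for _ in range(n)]
    for _ in range(gen):
        fitness = [objective(x) for x in swarm]
        best = swarm[min(range(n), key=fitness.__getitem__)]
        # Solitary operation (stand-in): push each individual away from the
        # swarm centroid so the population does not collapse onto one point.
        centroid = [sum(x[d] for x in swarm) / n for d in range(n_dim)]
        moved = [clip([x[d] + f * (x[d] - centroid[d]) + rng.gauss(0, 0.1)
                       for d in range(n_dim)]) for x in swarm]
        # Social operation (stand-in): resample a fraction beta of the worst
        # candidates near the current best individual (local exploitation).
        order = sorted(range(n), key=lambda i: objective(moved[i]))
        for i in order[int(n * (1 - beta)):]:
            moved[i] = clip([best[d] + rng.gauss(0, 0.05) for d in range(n_dim)])
        # Greedy selection: keep whichever of the old/new positions is better.
        swarm = [new if objective(new) < objective(old) else old
                 for old, new in zip(swarm, moved)]
    return min(swarm, key=objective)

# Minimize the 2-D sphere function, whose minimum lies at the origin.
best = locust_search(lambda x: sum(v * v for v in x), n_dim=2, bounds=(-10, 10))
```

Note how the solitary step deliberately moves individuals apart before the social step exploits the best-known region, mirroring the anticoncentration behavior discussed below.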

3.4. Discussion about the LS Algorithm

Evolutionary algorithms (EA) have been widely employed for solving complex optimization problems. These methods are found to be more powerful than conventional methods based on formal logics or mathematical programming [46]. In an EA, search agents have to decide whether to explore unknown search positions or to exploit already tested positions in order to improve their solution quality. Pure exploration degrades the precision of the evolutionary process but increases its capacity to find new potential solutions. On the other hand, pure exploitation allows refining existent solutions but adversely drives the process to local optimal solutions. Therefore, the ability of an EA to find a global optimal solution depends on its capacity to find a good balance between the exploitation of found-so-far elements and the exploration of the search space [47].

Most swarm algorithms and other evolutionary algorithms tend to exclusively concentrate the individuals in the current best positions. Under such circumstances, these algorithms seriously limit their exploration-exploitation capacities.

Unlike most existent evolutionary algorithms, the behavior modeled in the proposed approach explicitly avoids the concentration of individuals in the current best positions. This allows not only emulating the cooperative behavior of the locust colony in a realistic way but also incorporating a computational mechanism to avoid critical flaws commonly present in popular evolutionary algorithms, such as premature convergence and an incorrect exploration-exploitation balance.

It is important to emphasize that the proposed approach conducts two operators (solitary and social) within a single iteration. Such operators are similar to those used by other evolutionary methods, such as ABC (employed bees, onlooker bees, and scout bees), AIS (clonal proliferation operator, affinity maturation operator, and clonal selection operator), and DE (mutation, crossover, and selection), which are all executed in a single iteration.

4. Numerical Experiments over Benchmark Functions

A comprehensive set of 13 functions, collected from [48–50], has been used to test the performance of the LS approach as an optimization method. Tables 11 and 12 present the benchmark functions used in our experimental study. Such functions are classified into two different categories: unimodal test functions (Table 11) and multimodal test functions (Table 12). In these tables, n is the function dimension and f_opt is the minimum value of the function, with S being a subset of R^n. The optimum locations (x_opt) for functions in Tables 11 and 12 are at [0]^n, except for f5, f12, and f13, with x_opt at [1]^n, and f8, at [420.96]^n. A detailed description of the optimum locations is given in Tables 11 and 12.
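For concreteness, the multimodal benchmark f11 (Griewank) and the penalty term u(·) shared by f12 and f13 (see the Appendix) can be transcribed directly into code. This is a straightforward transcription of the formulas, not part of the original experimental software:

```python
import math

def f11(x):
    """Griewank function f11: global minimum f_opt = 0 at x = [0]^n."""
    s = sum(v * v for v in x) / 4000.0
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1.0

def u(x, a, k, m):
    """Penalty term u(x_i, a, k, m) used by f12 and f13."""
    if x > a:
        return k * (x - a) ** m
    if x < -a:
        return k * (-x - a) ** m
    return 0.0

print(f11([0.0] * 30))  # -> 0.0
```

Evaluating f11 at the reported optimum x_opt = [0]^n recovers f_opt = 0, and u vanishes inside [−a, a], so the penalty only activates outside the search bounds.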

We have applied the LS algorithm to the 13 functions, and its results have been compared to those produced by the Particle Swarm Optimization (PSO) method [27] and the Differential Evolution (DE) algorithm [28], both considered among the most popular algorithms for many optimization applications. In all comparisons, the population has been set to 40 (N = 40) individuals. The maximum iteration number for all functions has been set to 1000. Such a stop criterion has been selected to maintain compatibility with similar works reported in the literature [48, 49].

The parameter setting for each algorithm in the comparison is described as follows:

(1) PSO: c1 = c2 = 2, while the inertia factor (ω) is decreased linearly from 0.9 to 0.2.

(2) DE: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [28, 51]. The crossover probability is CR = 0.9 and the weighting factor is F = 0.8.

(3) LS: F and L are set to 0.6 and L, respectively. Besides, g is fixed to 20 (N/2), h = 2, and β = 0.6, whereas gen and N are configured to 1000 and 40, respectively. Once such parameters have been experimentally determined, they are kept for all experiments in this section.

In the comparison, three indexes are considered: the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). The first two indexes assess the accuracy of the solution, whereas the last one measures the computational cost. The average best-so-far solution (ABS) expresses the average value of the best function evaluations obtained from 30 independent executions. The standard deviation (SD) indicates the dispersion of the ABS values. Evolutionary methods are in general complex pieces of software with several operators and stochastic branches. Therefore, it is difficult to conduct a complexity analysis from a deterministic perspective. Under such circumstances, it is more appropriate


Table 1: Minimization results from the benchmark functions test in Table 11 with n = 30. Maximum number of iterations = 1000.

            PSO           DE            LS
f1   ABS    1.66 × 10^−1  6.27 × 10^−3  4.55 × 10^−4
     SD     3.79 × 10^−1  1.68 × 10^−1  6.98 × 10^−4
     NFE    28,610        20,534        16,780
f2   ABS    4.83 × 10^−1  2.02 × 10^−1  5.41 × 10^−3
     SD     1.59 × 10^−1  0.66          1.45 × 10^−2
     NFE    28,745        21,112        16,324
f3   ABS    2.75          5.72 × 10^−1  1.61 × 10^−3
     SD     1.01          0.15          1.32 × 10^−3
     NFE    38,320        36,894        20,462
f4   ABS    1.84          0.11          1.05 × 10^−2
     SD     0.87          0.05          6.63 × 10^−3
     NFE    37,028        36,450        21,158
f5   ABS    3.07          2.39          4.11 × 10^−2
     SD     0.42          0.36          2.74 × 10^−3
     NFE    39,432        37,264        21,678
f6   ABS    6.36          6.51          5.88 × 10^−2
     SD     0.74          0.87          1.67 × 10^−2
     NFE    38,490        36,564        22,238
f7   ABS    6.14          0.12          2.71 × 10^−2
     SD     0.73          0.02          1.18 × 10^−2
     NFE    37,274        35,486        21,842

to use the number of function evaluations (NFE), just as it is used in the literature [52, 53], to evaluate and assess the computational effort (time) and the complexity among optimizers. It represents how many times an algorithm evaluates the objective (fitness) function until the best solution of a determined execution has been found. Since the experiments require 30 different executions, the NFE index corresponds to the averaged value obtained from these executions. A small NFE value indicates that less time is needed to reach the global optimum.
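In practice, the NFE index can be measured by wrapping the objective function in a counter and recording its value whenever the best-so-far solution improves; a minimal sketch (the class name is illustrative):

```python
class CountingObjective:
    """Wraps an objective function to count evaluations (the NFE index)."""

    def __init__(self, fn):
        self.fn = fn
        self.nfe = 0  # number of function evaluations so far

    def __call__(self, x):
        self.nfe += 1
        return self.fn(x)

# Every call through the wrapper increments the counter.
counted = CountingObjective(lambda x: sum(v * v for v in x))
counted([1.0, 2.0])
counted([0.0, 0.0])
```

An optimizer receives `counted` in place of the raw objective; the NFE reported for a run is the counter's value at the moment the final best solution was found.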

4.1. Unimodal Test Functions

Functions f1 to f7 are unimodal functions. The results for the unimodal functions over 30 runs are reported in Table 1, considering the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). According to this table, LS provides better results than PSO and DE for all functions in terms of ABS and SD. In particular, the test yields the largest performance differences for functions f4–f7. Such functions maintain a narrow, curving valley that is hard to optimize when the search space cannot be explored properly and direction changes cannot be kept up with [54]. For this reason, the performance differences are directly related to the better trade-off between exploration and exploitation produced by the LS operators. In practice, a main goal of an optimization algorithm is to find a solution as good as possible within a small time window. The computational cost of an optimizer is represented by its NFE values. According to Table 1, the NFE values obtained by the proposed method are smaller than its

Table 2: p values produced by Wilcoxon's test comparing LS versus PSO and LS versus DE over the "average best-so-far" values from Table 1.

LS versus   PSO           DE
f1          1.83 × 10^−4  1.73 × 10^−2
f2          3.85 × 10^−3  1.83 × 10^−4
f3          1.73 × 10^−4  6.23 × 10^−3
f4          2.57 × 10^−4  5.21 × 10^−3
f5          4.73 × 10^−4  1.83 × 10^−3
f6          6.39 × 10^−5  2.15 × 10^−3
f7          1.83 × 10^−4  2.21 × 10^−3

counterparts. Lower NFE values are more desirable, since they correspond to less computational overload and, therefore, faster results. From the results perspective, it is clear that PSO and DE need more than 1000 generations in order to produce better solutions. However, this number of generations is considered in the experiments aiming to produce a visible contrast among the approaches. If the number of generations were set to an exaggerated value, then all methods would converge to the best solution without significant trouble.

A nonparametric statistical significance proof known as Wilcoxon's rank sum test for independent samples [55, 56] has been conducted at the 5% significance level over the "average best-so-far" data of Table 1. Table 2 reports the p values produced by Wilcoxon's test for the pairwise comparison of the "average best-so-far" values of two groups. Such groups are formed by LS versus PSO and LS versus DE. As a null hypothesis, it is assumed that there is no significant difference between the mean values of the two algorithms. The alternative hypothesis considers a significant difference between the "average best-so-far" values of both approaches. All p values reported in the table are less than 0.05 (5% significance level), which is strong evidence against the null hypothesis, indicating that the LS results are statistically significant and have not occurred by coincidence (i.e., due to the normal noise contained in the process).
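The rank-sum comparison can be reproduced with standard statistical packages; as a self-contained sketch, the following pure-Python version applies the large-sample normal approximation to the rank sum (tied values get average ranks, but the tie correction to the variance used by full implementations is omitted):

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p value via the normal approximation."""
    n1, n2 = len(a), len(b)
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        for k in range(i, j):             # tied values share their average rank
            ranks[k] = (i + j + 1) / 2.0  # ranks are 1-based
        i = j
    w = sum(r for r, (_, g) in zip(ranks, pooled) if g == 0)  # rank sum of sample a
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    if sd == 0.0:
        return 1.0
    z = (w - mean) / sd
    return math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p value

# Clearly separated samples yield a small p; identical samples yield p = 1.
p = rank_sum_p([1, 2, 3, 4, 5], [10, 11, 12, 13, 14])
```

In the experiments above, the two samples would be the 30 "average best-so-far" values of each pair of algorithms, and p < 0.05 rejects the null hypothesis of equal performance.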

4.2. Multimodal Test Functions

Multimodal functions possess many local minima, which makes optimization a difficult task. For multimodal functions, the results reflect the algorithm's ability to escape from local optima. We have applied the algorithms to functions f8 to f13, where the number of local minima increases exponentially as the dimension of the function increases. The dimension of such functions is set to 30. The results are averaged over 30 runs, with the performance indexes reported in Table 3 as follows: the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). Likewise, p values of the Wilcoxon signed-rank test over 30 independent runs are listed in Table 4. From the results, it is clear that LS yields better solutions than the other algorithms for functions f9, f10, f11, and f12 in terms of the ABS and SD indexes. However, for functions f8 and f13, LS produces results similar to DE. The Wilcoxon rank test results presented in Table 4 confirm that


Table 3: Minimization results from the benchmark functions test in Table 12 with n = 30. Maximum number of iterations = 1000.

            PSO            DE             LS
f8   ABS    −6.7 × 10^3    −1.26 × 10^4   −1.26 × 10^4
     SD     6.3 × 10^2     3.7 × 10^2     1.1 × 10^2
     NFE    38,452         35,240         21,846
f9   ABS    1.48           4.01 × 10^−1   2.49 × 10^−3
     SD     1.39           5.1 × 10^−2    4.8 × 10^−4
     NFE    37,672         34,576         20,784
f10  ABS    1.47           4.66 × 10^−2   2.15 × 10^−3
     SD     1.44           1.27 × 10^−2   3.18 × 10^−4
     NFE    39,475         37,080         21,235
f11  ABS    12.01          1.15           1.47 × 10^−4
     SD     3.12           0.06           1.48 × 10^−5
     NFE    38,542         34,875         22,126
f12  ABS    6.87 × 10^−1   3.74 × 10^−1   5.58 × 10^−3
     SD     7.07 × 10^−1   1.55 × 10^−1   4.18 × 10^−4
     NFE    35,248         30,540         16,984
f13  ABS    1.87 × 10^−1   1.81 × 10^−2   1.78 × 10^−2
     SD     5.74 × 10^−1   1.66 × 10^−2   1.64 × 10^−3
     NFE    36,022         31,968         18,802

Table 4: p values produced by Wilcoxon's test comparing LS versus PSO and LS versus DE over the "average best-so-far" values from Table 3.

LS versus   PSO           DE
f8          1.83 × 10^−4  0.061
f9          1.17 × 10^−4  2.41 × 10^−4
f10         1.43 × 10^−4  3.12 × 10^−3
f11         6.25 × 10^−4  1.14 × 10^−3
f12         2.34 × 10^−5  7.15 × 10^−4
f13         4.73 × 10^−4  0.071

LS performed better than PSO and DE on four problems, f9–f12, whereas from a statistical viewpoint there is no difference between the results of LS and DE for f8 and f13. According to Table 3, the NFE values obtained by the proposed method are smaller than those produced by the other optimizers. The reason for this remarkable performance is associated with its two operators: (i) the solitary operator allows a better particle distribution in the search space, increasing the algorithm's ability to find the global optima, and (ii) the social operation provides a simple exploitation operator that intensifies the capacity of finding better solutions during the evolution process.

5. Gaussian Mixture Modelling

In this section, the modeling of image histograms through Gaussian mixture models is presented. Consider an image holding L gray levels [0, ..., L − 1] whose distribution is defined by a histogram h(g) represented by the following formulation:

\[ h(g) = \frac{n_g}{N_p}, \quad h(g) > 0, \qquad N_p = \sum_{g=0}^{L-1} n_g, \qquad \sum_{g=0}^{L-1} h(g) = 1, \tag{15} \]

where n_g denotes the number of pixels with gray level g and N_p the total number of pixels in the image. Under such circumstances, h(g) can be modeled by using a mixture p(x) of Gaussian functions of the form:

\[ p(x) = \sum_{i=1}^{K} \frac{P_i}{\sqrt{2\pi}\,\sigma_i} \exp\!\left[ -\frac{(x - \mu_i)^2}{2\sigma_i^2} \right], \tag{16} \]

where K symbolizes the number of Gaussian functions in the model, whereas P_i is the a priori probability of function i; μ_i and σ_i represent the mean and standard deviation of the ith Gaussian function, respectively. Furthermore, the constraint \( \sum_{i=1}^{K} P_i = 1 \) must be satisfied by the model. In order to evaluate the similarity between the image histogram and a candidate mixture model, the mean square error can be used as follows:

\[ J = \frac{1}{n} \sum_{j=1}^{L} \left[ p(x_j) - h(x_j) \right]^2 + \omega \cdot \left| \left( \sum_{i=1}^{K} P_i \right) - 1 \right|, \tag{17} \]

where ω represents the penalty associated with the constraint \( \sum_{i=1}^{K} P_i = 1 \). Therefore, J is considered as the objective function, which must be minimized in the estimation problem. In order to illustrate the histogram modeling through a Gaussian mixture, Figure 6 presents an example assuming three classes, that is, K = 3. Considering Figure 6(a) as the image histogram h(x), the Gaussian mixture p(x) shown in Figure 6(c) is produced by adding the Gaussian functions p1(x), p2(x), and p3(x) in the configuration presented in Figure 6(b). Once the model parameters that best fit the image histogram have been determined, the final step is to define the threshold values T_i (i ∈ [1, K]), which can be calculated by simple standard methods, as presented in [19–21].
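Equations (15)-(17) can be prototyped directly. The following sketch builds the normalized histogram h(g), evaluates a candidate mixture p(x), and computes J; the penalty weight ω = 1 and the synthetic pixel data are illustrative assumptions, not values from the experiments.

```python
import math

def histogram(pixels, levels=256):
    """Normalized histogram h(g) of (15): h(g) = n_g / N_p."""
    counts = [0] * levels
    for p in pixels:
        counts[p] += 1
    total = float(len(pixels))
    return [c / total for c in counts]

def mixture(x, params):
    """Gaussian mixture p(x) of (16); params lists (P_i, mu_i, sigma_i)."""
    return sum(P / (math.sqrt(2.0 * math.pi) * s)
               * math.exp(-(x - mu) ** 2 / (2.0 * s * s))
               for P, mu, s in params)

def objective_j(hist, params, omega=1.0):
    """Objective J of (17): mean square error plus penalty on sum(P_i) != 1."""
    L = len(hist)
    mse = sum((mixture(g, params) - hist[g]) ** 2 for g in range(L)) / L
    return mse + omega * abs(sum(P for P, _, _ in params) - 1.0)

# Synthetic two-class image: a mixture centered on the histogram's two modes
# should score a lower J than one centered far away from them.
pixels = [60] * 500 + [61] * 250 + [180] * 500 + [181] * 250
h = histogram(pixels)
good = [(0.5, 60.5, 1.0), (0.5, 180.5, 1.0)]
bad = [(0.5, 100.0, 1.0), (0.5, 120.0, 1.0)]
j_good = objective_j(h, good)
j_bad = objective_j(h, bad)
```

Minimizing J over the 3K parameters (P_i, μ_i, σ_i) is exactly the task handed to the search strategy in the next section.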

6. Segmentation Algorithm Based on LS

In the proposed method, the segmentation process is approached as an optimization problem. Computational optimization generally involves two distinct elements: (1) a search strategy to produce candidate solutions (individuals, particles, insects, locusts, etc.) and (2) an objective function that evaluates the quality of each candidate solution. Several computational algorithms are available to perform the first element. The second element, where the objective function is designed, is unquestionably the most critical. The objective function is expected to embody the full



Figure 6: Histogram approximation through a Gaussian mixture. (a) Original histogram; (b) configuration of the Gaussian components p1(x), p2(x), and p3(x); (c) final Gaussian mixture p(x).

complexity of the performance, biases, and restrictions of the problem to be solved. In the segmentation problem, candidate solutions represent Gaussian mixtures. The objective function J (17) is used as a fitness value to evaluate the similarity between the Gaussian mixture and the image histogram. Therefore, guided by the fitness values (J values), a set of encoded candidate solutions is evolved using the evolutionary operators until the best possible resemblance can be found.

Over the last decade, several algorithms based on evolutionary and swarm principles [19–22] have been proposed to solve the problem of segmentation through a Gaussian mixture model. Although these algorithms report good performance indexes, they present two important limitations: (1) they frequently obtain suboptimal approximations as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) they are based on the assumption that the number of Gaussians (classes) in the mixture is known in advance and fixed; otherwise, they cannot work.

In order to eliminate such flaws, the proposed approach includes (A) a new search strategy and (B) the definition of a new objective function. For the search strategy, the LS method (Section 4) is adopted. Under LS, the concentration of individuals around the current best positions is explicitly avoided. This fact reduces critical problems such as premature convergence to suboptimal solutions and an incorrect exploration-exploitation balance.

6.1. New Objective Function J_new. Previous segmentation algorithms based on evolutionary and swarm methods use (17) as the objective function. Under these circumstances, each candidate solution (Gaussian mixture) is evaluated only in terms of its approximation to the image histogram.

Since the proposed approach aims to automatically select the number of Gaussian functions K in the final mixture p(x), the objective function must be modified. The new objective function J_new is defined as follows:

J_new = J + λ · Q,   (18)

where λ is a scaling constant. The new objective function is divided into two parts. The first part, J, evaluates the quality of each candidate solution in terms of its similarity with regard to the image histogram (17). The second part, Q, penalizes the overlapped area among Gaussian functions (classes), with Q being defined as follows:

Q = ∑_{i=1}^{K} ∑_{j=1, j≠i}^{K} ∑_{l=1}^{L} min(P_i · p_i(l), P_j · p_j(l)),   (19)

where K and L represent the number of classes and the number of gray levels, respectively. The terms p_i(l) and p_j(l) symbolize the Gaussian functions i and j, respectively, evaluated at the point l (gray level), whereas the elements P_i and P_j represent the a priori probabilities of the Gaussian functions i and j, respectively. Under such circumstances, mixtures with Gaussian functions that do not "positively" participate in the histogram approximation are severely penalized.
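The penalty of (19) can be computed directly; the sketch below assumes L = 256 gray levels and normalized Gaussian densities for illustration:

```python
import numpy as np

def overlap_penalty(P, mu, sigma, L=256):
    """Q from (19): for every ordered pair of distinct classes (i, j) and every
    gray level l, accumulate the pointwise overlap min(P_i*p_i(l), P_j*p_j(l))."""
    l = np.arange(L)
    K = len(P)
    g = np.array([P[i] * np.exp(-((l - mu[i]) ** 2) / (2.0 * sigma[i] ** 2))
                  / (np.sqrt(2.0 * np.pi) * sigma[i]) for i in range(K)])
    return sum(np.minimum(g[i], g[j]).sum()
               for i in range(K) for j in range(K) if j != i)
```

Well-separated classes yield Q ≈ 0, while heavily overlapped classes produce a large Q, so minimizing J_new drives redundant Gaussians toward negligible amplitude.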

Figure 7 illustrates the effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). From the image histogram (Figure 7(a)), it is evident that two Gaussian functions are enough to accurately



Figure 7: Effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). (a) Original histogram, (b) Gaussian mixture considering four classes, (c) penalization areas, and (d) Gaussian mixture of better quality.

approximate the original histogram. However, if the Gaussian mixture is modeled by using a greater number of functions (e.g., four, as shown in Figure 7(b)), the original objective function J is unable to obtain a reasonable approximation. Under the new objective function J_new, the overlapped area among Gaussian functions (classes) is penalized. Such areas, in Figure 7(c), correspond to Q12, Q23, and Q34, where Q12 represents the penalization value produced between the Gaussian functions p1(x) and p2(x). Therefore, due to the penalization, the Gaussian mixture shown in Figures 7(b) and 7(c) provides a solution of low quality. On the other hand, the Gaussian mixture presented in Figure 7(d) maintains a low penalty; thus, it represents a solution of high quality. From Figure 7(d), it is easy to see that the functions p1(x) and p4(x) can be removed from the final mixture. This elimination can be performed by a simple comparison with a threshold value θ, since p1(x) and p4(x) have a reduced amplitude (p1(x) ≈ p4(x) ≈ 0). Therefore, under J_new, it is possible to find the reduced Gaussian mixture model starting from a considerable number of functions.
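The reduction step can be sketched as follows, reading "amplitude" as the peak value P_i · p_i(μ_i) = P_i / (√(2π) · σ_i) of each weighted Gaussian; this reading of p_i(x) ≤ θ is an assumption, not the authors' code:

```python
import math

def prune_mixture(P, mu, sigma, theta=1e-4):
    """Keep only the Gaussian functions whose weighted peak amplitude exceeds
    theta; the survivors form the reduced mixture."""
    kept = [(p, m, s) for p, m, s in zip(P, mu, sigma)
            if p / (math.sqrt(2.0 * math.pi) * s) > theta]
    if not kept:
        return [], [], []
    P_r, mu_r, sigma_r = map(list, zip(*kept))
    return P_r, mu_r, sigma_r
```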

Since the proposed segmentation method is conceived as an optimization problem, the overall operation can be reduced to solving the formulation of (20) by using the LS algorithm:

minimize J_new(x) = J(x) + λ · Q(x),
x = (P1, μ1, σ1, …, P_K, μ_K, σ_K) ∈ ℝ^{3·K},
subject to 0 ≤ P_d ≤ 1, d ∈ (1, …, K),
0 ≤ μ_d ≤ 255,
0 ≤ σ_d ≤ 25,   (20)

where P_d, μ_d, and σ_d represent the probability, mean, and standard deviation of the class d, respectively. It is important to remark that the new objective function J_new allows the evaluation of a candidate solution in terms of the necessary number of Gaussian functions and its approximation quality. Under such circumstances, it can be used in combination with any other evolutionary method and not only with the proposed LS algorithm.
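For reference, evaluating a flat candidate x = (P1, μ1, σ1, …, P_K, μ_K, σ_K) under (18)–(20) can be sketched as below. The mean-absolute-difference form of J and the value ω = 1 are assumptions made here for illustration (the exact form of (17) appears earlier in the paper):

```python
import numpy as np

SQRT_2PI = np.sqrt(2.0 * np.pi)

def decode(x):
    """Split x = (P1, mu1, sigma1, ..., PK, muK, sigmaK) into three arrays."""
    x = np.asarray(x, dtype=float).reshape(-1, 3)
    return x[:, 0], x[:, 1], x[:, 2]

def mixture(P, mu, sigma, L=256):
    """Gaussian mixture p(l) evaluated on the L gray levels."""
    l = np.arange(L)[:, None]
    return (P * np.exp(-((l - mu) ** 2) / (2.0 * sigma ** 2)) / (SQRT_2PI * sigma)).sum(axis=1)

def J(x, hist, omega=1.0):
    """Histogram mismatch plus the omega-weighted penalty for sum(P) != 1."""
    P, mu, sigma = decode(x)
    return np.abs(mixture(P, mu, sigma, len(hist)) - hist).mean() + omega * abs(P.sum() - 1.0)

def J_new(x, hist, lam=0.01, omega=1.0):
    """Objective (18): J plus the lambda-weighted overlap penalty Q of (19)."""
    P, mu, sigma = decode(x)
    l = np.arange(len(hist))[:, None]
    g = (P * np.exp(-((l - mu) ** 2) / (2.0 * sigma ** 2)) / (SQRT_2PI * sigma)).T
    K = len(P)
    Q = sum(np.minimum(g[i], g[j]).sum() for i in range(K) for j in range(K) if j != i)
    return J(x, hist, omega) + lam * Q
```

Because J_new only consumes a candidate vector and a histogram, it can indeed be paired with any population-based optimizer, as the text remarks.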

6.2. Complete Segmentation Algorithm. Once the new search strategy (LS) and the objective function (J_new) have been defined, the proposed segmentation algorithm can be summarized by Algorithm 2. The new algorithm combines operators defined by LS and operations for calculating the threshold values.

(Line 1) The algorithm sets the operative parameters F, L, g, gen, N, K, λ, and θ; they rule the behavior of the segmentation algorithm. (Line 2) Afterwards, the population L^0 is initialized considering N different random Gaussian mixtures of K functions. The idea is to generate N random Gaussian mixtures subject to the constraints formulated in (20). The parameter K must hold a high value in order to correctly segment all images (recall that the algorithm is able to reduce the Gaussian mixture to its minimum expression). (Lines 3–7) Then, the Gaussian mixtures are evolved by using the LS operators and the new objective function J_new; this process is repeated during gen cycles. (Line 8) After this procedure, the best Gaussian mixture l^gen_best is selected according to its objective function J_new. (Line 9) Then, the Gaussian mixture l^gen_best is reduced by eliminating those functions whose amplitudes are lower than θ (p_i(x) ≤ θ). (Line 10) Then, the threshold values T_i from the reduced model are calculated. (Line 11) Finally, the calculated T_i values are employed to segment the image. Figure 8 shows a flowchart of the complete segmentation procedure.


(1) Input: F, L, g, gen, N, K, λ, and θ
(2) Initialize L^0 (k = 0)
(3) until (k = gen)
(4)   F ← SolitaryOperation(L^k)   {Solitary operator (Section 3.1)}
(5)   L^{k+1} ← SocialOperation(L^k, F)   {Social operator (Section 3.2)}
(6)   k = k + 1
(7) end until
(8) Obtain l^gen_best
(9) Reduce l^gen_best
(10) Calculate the threshold values T_i from the reduced model
(11) Use T_i to segment the image

Algorithm 2: Segmentation LS algorithm.

The proposed segmentation algorithm permits automatic detection of the number of segmentation partitions (classes). Furthermore, due to its remarkable search capacities, the LS method maintains a better accuracy than previous algorithms based on evolutionary principles. However, the proposed method presents two disadvantages. The first is related to its implementation, which in general is more complex than that of most other segmentators based on evolutionary basics. The second refers to the segmentation procedure of the proposed approach, which does not consider any spatial pixel characteristics. As a consequence, pixels that may belong to a determined region due to their position are labeled as part of another region due to their gray level intensity. Such a fact adversely affects the segmentation performance of the method.

7. Segmentation Results

This section analyzes the performance of the proposed segmentation algorithm. The discussion is divided into three parts: the first shows the performance of the proposed LS segmentation algorithm, while the second presents a comparison between the proposed approach and other segmentation algorithms that are based on evolutionary and swarm methods. The comparison mainly considers the capacities of each algorithm to accurately and robustly approximate the image histogram. Finally, the third part presents an objective evaluation of the segmentation results produced by all algorithms employed in the comparisons.

7.1. Performance of LS Algorithm in Image Segmentation. This section presents two experiments that analyze the performance of the proposed approach. Table 5 presents the algorithm's parameters, which have been experimentally determined and kept for all the test images through all experiments.

First Image. The first test considers the histogram shown by Figure 9(b), while Figure 9(a) presents the original image. After applying the proposed algorithm, configured according to the parameters in Table 5, a minimum model of four classes emerges after the start from Gaussian

Table 5: Parameter setup for the LS segmentation algorithm.

F | L | g | gen | N | K | λ | θ
0.6 | 0.2 | 20 | 1000 | 40 | 10 | 0.01 | 0.0001

Table 6: Results of the reduced Gaussian mixture for the first and the second image.

Parameter | First image | Second image
P1 | 0.004 | 0.032
μ1 | 18.1 | 12.1
σ1 | 8.2 | 2.9
P2 | 0.0035 | 0.0015
μ2 | 69.9 | 45.1
σ2 | 18.4 | 24.9
P3 | 0.01 | 0.006
μ3 | 94.9 | 130.1
σ3 | 8.9 | 17.9
P4 | 0.007 | 0.02
μ4 | 163.1 | 167.2
σ4 | 29.9 | 10.1

mixtures of 10 classes. Considering 30 independent executions, the averaged parameters of the resultant Gaussian mixture are presented in Table 6. One final Gaussian mixture (ten classes), as obtained by LS, is presented in Figure 10. Furthermore, the approximation of the reduced Gaussian mixture is also visually compared with the original histogram in Figure 10. On the other hand, Figure 11 presents the segmented image after calculating the threshold points.

Second Image. For the second experiment, the image in Figure 12 is tested. The method aims to segment the image by using a reduced Gaussian mixture model obtained by the LS approach. After executing the algorithm according to the parameters from Table 5, the resulting averaged parameters of the resultant Gaussian mixture are presented in Table 6. In order to assure consistency, the results are averaged over 30 independent executions. Figure 13 shows the approximation quality that is obtained by the reduced Gaussian mixture model in (a) and the segmented image in (b).



Figure 8: Flowchart of the complete segmentation procedure.


Figure 9: (a) Original first image used in the first experiment and (b) its histogram.



Figure 10: Gaussian mixture obtained by LS for the first image.

Figure 11: Image segmented with the reduced Gaussian mixture.


Figure 12: (a) Original second image used in the first experiment and (b) its histogram.

7.2. Histogram Approximation Comparisons. This section discusses the comparison between the proposed segmentation algorithm and other evolutionary segmentation methods that have been proposed in the literature. The set of methods used in the experiments includes J + ABC [19], J + AIS [20], and J + DE [21]. These algorithms consider the combination of the original objective function J (17) with an evolutionary technique: the Artificial Bee Colony (ABC), the Artificial Immune Systems (AIS), and Differential Evolution (DE) [21], respectively. Since the proposed segmentation approach considers the combination of the new objective function J_new (18) and the LS algorithm, it is referred to as J_new + LS.

The comparison focuses mainly on the capacities of each algorithm to accurately and robustly approximate the image histogram.

In the experiments, the population has been set to 40 (N = 40) individuals. The maximum iteration number for all functions has been set to 1000. Such a stop criterion has been selected to maintain compatibility with similar experiments



Figure 13: (a) Approximation result obtained by LS for the second image and (b) the segmented image.


Figure 14: (a) Synthetic image used in the comparison and (b) its histogram.

that are reported in the literature [18]. The parameter setting for each of the segmentation algorithms in the comparison is described as follows:

(1) J + ABC [19]: its parameters are configured as follows: the abandonment limit = 100, α = 0.05, and limit = 30.

(2) J + AIS [20]: it presents the following parameters: h = 120, N_c = 80, ρ = 10, P_r = 20, L = 22, and T_e = 0.01.

(3) J + DE [21]: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [21]. The crossover probability is CR = 0.9 and the weighting factor is F = 0.8.

(4) J_new + LS: the method is set according to the values described in Table 5.

In order to conduct the experiments, a synthetic image is designed to be used as a reference in the comparisons. The main idea is to know in advance the exact number of classes (and their parameters) contained in the image, so that the histogram can be considered as a ground truth. The

Table 7: Employed parameters for the design of the reference image.

Section | P_i | μ_i | σ_i
(1) | 0.05 | 40 | 8
(2) | 0.04 | 100 | 10
(3) | 0.05 | 160 | 8
(4) | 0.027 | 220 | 15

synthetic image is divided into four sections. Each section corresponds to a different class, which is produced by setting each gray level pixel Pv_i to a value that is determined by the following model:

Pv_i = e^{−((x − μ_i)² / 2σ_i²)},   (21)

where i represents the section, whereas μ_i and σ_i are the mean and the dispersion of the gray level pixels, respectively. The comparative study employs the 512 × 512 image that is shown in Figure 14(a) and the algorithm parameters that have been presented in Table 7. Figure 14(b) illustrates the resultant histogram.



Figure 15: Convergence results. (a) Convergence of the methods J + ABC, J + AIS, and J + DE considering Gaussian mixtures of 8 classes; (b) convergence of the proposed method (reduced Gaussian mixture).


Figure 16: Segmentation results obtained by (a) several methods, including J + ABC, J + AIS, and J + DE, considering Gaussian mixtures of 8 classes and (b) the proposed method (reduced Gaussian mixture).

In the comparison, the discussion focuses on the following issues: first of all, accuracy; second, convergence; and third, computational cost.

Convergence. This section analyzes the approximation convergence when the number of classes used by the algorithm during the evolution process is different from the actual number of classes in the image. Recall that the proposed approach automatically finds the reduced Gaussian mixture which better adapts to the image histogram.

In the experiment, the methods J + ABC, J + AIS, and J + DE are executed considering Gaussian mixtures composed of 8 functions. Under such circumstances, the number of classes to be detected is higher than the actual number of classes in the image. On the other hand, the proposed algorithm maintains the same configuration of Table 5; therefore, it can detect and calculate up to ten classes (K = 10).

As a result, the techniques J + ABC, J + AIS, and J + DE tend to overestimate the image histogram. This effect can be seen in Figure 15(a), where the resulting Gaussian functions are concentrated within the actual classes. Such a behavior is a consequence of the evaluation considered by the original objective function J, which privileges only the approximation between the Gaussian mixture and the image histogram. This effect can be graphically illustrated by Figure 16(a), which shows the pixel misclassification produced by the wrong segmentation of Figure 14(a). On the other hand, the proposed approach obtains a reduced Gaussian mixture model which allows the detection of each class from



Figure 17: Approximation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

the actual histogram (see Figure 15(b)). As a consequence, the segmentation is significantly improved by eliminating the pixel misclassification, as shown in Figure 16(b).

It is evident from Figure 15 that the techniques J + ABC, J + AIS, and J + DE all need a priori knowledge of the number of classes contained in the actual histogram in order to obtain a satisfactory result. On the other hand, the proposed algorithm is able to find a reduced Gaussian mixture whose classes coincide with the actual number of classes contained in the image histogram.

Accuracy. In this section, the comparison among the algorithms in terms of accuracy is reported. Most of the reported comparisons [19–26] are concerned with comparing the parameters of the resultant Gaussian mixtures by using real images. Under such circumstances, it is difficult to consider a clear reference in order to define a meaningful error. Therefore, the image defined in Figure 14 has been used in the experiments, because its construction parameters are clearly defined in Table 7.

Since the parameter values from Table 7 act as ground truth, a simple averaged difference between them and the values computed by each algorithm could be used as the comparison error. However, as each parameter maintains different active intervals, it is necessary to express the differences in terms of percentage. Therefore, if Δβ is the parametric difference and β the ground truth parameter, the percentage error Δβ% can be defined as follows:

Δβ% = (Δβ / β) · 100.   (22)

In the segmentation problem, each Gaussian mixture represents a K-dimensional model, where each dimension corresponds to a Gaussian function of the optimization problem to be solved. Since each Gaussian function possesses three parameters, P_i, μ_i, and σ_i, the complete number of parameters is 3·K dimensions. Therefore, the final error E produced by the final Gaussian mixture is

E = (1 / (K · 3)) ∑_{v=1}^{K·3} Δβ_v%,   (23)

where β_v ∈ (P_i, μ_i, σ_i).

In order to compare accuracy, the algorithms J + ABC, J + AIS, J + DE, and the proposed approach are all executed over the image shown in Figure 14(a). The experiment aims



Figure 18: Segmentation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

to estimate the Gaussian mixture that better approximates the actual image histogram. The methods J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, although the algorithm finds a reduced Gaussian mixture of four functions, it is initially set with ten functions (K = 10). Table 8 presents the final Gaussian mixture parameters and the final error E. The final Gaussian mixture parameters have been averaged over 30 independent executions in order to assure consistency. A close inspection of Table 8 reveals that the proposed method is able to achieve the smallest error (E) in comparison to the other algorithms. Figure 17 presents the histogram approximations produced by each algorithm, whereas Figure 18 shows the correspondent segmented images. Both illustrations present the median case obtained throughout 30 runs. Figure 18 exhibits that J + ABC, J + AIS, and J + DE present different levels of misclassification, which are nearly absent in the proposed approach.
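The error measure of (22) and (23) reduces to a mean relative deviation over the 3·K parameters; a minimal sketch (the triplet layout is an assumption) is:

```python
def averaged_percentage_error(estimated, ground_truth):
    """E from (22)-(23): mean over all 3*K parameters of |Delta beta| / beta * 100."""
    errors = [abs(e - g) / g * 100.0
              for est, gt in zip(estimated, ground_truth)   # per Gaussian function
              for e, g in zip(est, gt)]                     # per parameter (P, mu, sigma)
    return sum(errors) / len(errors)

# Ground-truth (P_i, mu_i, sigma_i) triplets from Table 7
truth = [(0.05, 40, 8), (0.04, 100, 10), (0.05, 160, 8), (0.027, 220, 15)]
```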

Computational Cost. The experiment aims to measure the complexity and the computing time spent by the J + ABC,

Table 8: Results of the reduced Gaussian mixture in terms of accuracy.

Algorithm | Gaussian function | P_i | μ_i | σ_i | E (%)
J + ABC | (1) | 0.052 | 44.5 | 6.4 | 11.79
J + ABC | (2) | 0.084 | 98.12 | 12.87 |
J + ABC | (3) | 0.058 | 163.50 | 8.94 |
J + ABC | (4) | 0.025 | 218.84 | 17.5 |
J + AIS | (1) | 0.072 | 31.01 | 6.14 | 22.01
J + AIS | (2) | 0.054 | 88.52 | 12.21 |
J + AIS | (3) | 0.039 | 149.21 | 9.14 |
J + AIS | (4) | 0.034 | 248.41 | 13.84 |
J + DE | (1) | 0.041 | 35.74 | 7.86 | 13.57
J + DE | (2) | 0.036 | 90.57 | 11.97 |
J + DE | (3) | 0.059 | 148.47 | 9.01 |
J + DE | (4) | 0.020 | 201.34 | 13.02 |
J_new + LS | (1) | 0.049 | 40.12 | 7.5 | 3.98
J_new + LS | (2) | 0.041 | 102.04 | 10.4 |
J_new + LS | (3) | 0.052 | 168.66 | 8.3 |
J_new + LS | (4) | 0.025 | 210.92 | 15.2 |



Figure 19: Images employed in the computational cost analysis.

the J + AIS, the J + DE, and the J_new + LS algorithms while calculating the parameters of the Gaussian mixture in benchmark images (see Figures 19(a)–19(d)). J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, the algorithm finds a reduced Gaussian mixture of four functions despite being initialized with ten functions (K = 10). Table 9 shows the averaged measurements after 30 experiments. It is evident that J + ABC and J + DE are the slowest to converge (iterations), and J + AIS shows the highest computational cost (time elapsed) because it requires operators which demand long times. On the other hand, J_new + LS shows an acceptable trade-off between its convergence time and its computational cost. Therefore, although the implementation of J_new + LS in general requires more code than most other evolution-based segmentators, such a fact is not reflected in the execution time. Finally, Figure 19 shows the segmented images as they are



Figure 20: Experimental set used in the evaluation of the segmentation results.

Table 9: Iterations and time requirements of the J + ABC, J + AIS, J + DE, and J_new + LS algorithms as they are applied to segment the benchmark images (see Figure 19).

Algorithm | (a) | (b) | (c) | (d)
J + ABC | 855 it., 2.72 s | 833 it., 2.70 s | 870 it., 2.73 s | 997 it., 3.1 s
J + AIS | 725 it., 1.78 s | 704 it., 1.61 s | 754 it., 1.41 s | 812 it., 2.01 s
J + DE | 657 it., 1.25 s | 627 it., 1.12 s | 694 it., 1.45 s | 742 it., 1.88 s
J_new + LS | 314 it., 0.98 s | 298 it., 0.84 s | 307 it., 0.72 s | 402 it., 1.02 s

Table 10: Evaluation of the segmentation results in terms of the ROS index.

Algorithm | Image (a), N_R = 4 | Image (b), N_R = 3 | Image (c), N_R = 4 | Image (d), N_R = 4
J + ABC | 0.534 | 0.328 | 0.411 | 0.457
J + AIS | 0.522 | 0.321 | 0.427 | 0.437
J + DE | 0.512 | 0.312 | 0.408 | 0.418
J_new + LS | 0.674 | 0.401 | 0.514 | 0.527

generated by each algorithm. It can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by an incorrect pixel classification.

7.3. Performance Evaluation of the Segmentation Results. This section presents an objective evaluation of the segmentation results produced by all algorithms in the comparisons. The ill-defined nature of the segmentation problem makes the evaluation of a candidate algorithm difficult [57]. Traditionally, the evaluation has been conducted by using supervised criteria [58], which are based on the computation of a dissimilarity measure between a segmentation result and a ground truth image. Recently, the use of unsupervised measures has substituted supervised indexes for the objective evaluation of segmentation results [59]. They

Table 11: Unimodal test functions.

f1(x) = ∑_{i=1}^{n} x_i²; S = [−100, 100]^n; f_opt = 0
f2(x) = ∑_{i=1}^{n} |x_i| + ∏_{i=1}^{n} |x_i|; S = [−10, 10]^n; f_opt = 0
f3(x) = ∑_{i=1}^{n} (∑_{j=1}^{i} x_j)²; S = [−100, 100]^n; f_opt = 0
f4(x) = max_i {|x_i|, 1 ≤ i ≤ n}; S = [−100, 100]^n; f_opt = 0
f5(x) = ∑_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²]; S = [−30, 30]^n; f_opt = 0
f6(x) = ∑_{i=1}^{n} (x_i + 0.5)²; S = [−100, 100]^n; f_opt = 0
f7(x) = ∑_{i=1}^{n} i·x_i⁴ + rand(0, 1); S = [−1.28, 1.28]^n; f_opt = 0

enable the quantification of the quality of a segmentation result without a priori knowledge (ground truth image).

Evaluation Criteria. In this paper, the unsupervised index ROS proposed by Chabrier et al. [60] has been used to objectively evaluate the performance of each candidate algorithm. This index evaluates the segmentation quality in terms of the homogeneity within segmented regions and the heterogeneity among the different regions. ROS can be computed as follows:

ROS = (D̂ − D̄) / 2,   (24)

where D̄ quantifies the homogeneity within the segmented regions, while D̂ measures the disparity among the regions. A segmentation result S1 is considered better than S2 if ROS_S1 > ROS_S2. The intraregion homogeneity characterized by D̄ is calculated considering the following formulation:

D̄ = (1 / N_R) ∑_{c=1}^{N_R} (R_c / I) · (σ_c / ∑_{l=1}^{N_R} σ_l),   (25)

where N_R represents the number of partitions in which the image has been segmented, R_c symbolizes the number of



Figure 21: Segmentation results used in the evaluation.

pixels contained in the partition c, whereas I considers the number of pixels that integrate the complete image. Similarly, σ_c represents the standard deviation of the partition c. On the other hand, the disparity among the regions, D̂, is computed as follows:

D̂ = (1 / N_R) ∑_{c=1}^{N_R} (R_c / I) · [(1 / (N_R − 1)) ∑_{l=1}^{N_R} |μ_c − μ_l| / 255],   (26)

where μ_c is the average gray level in the partition c.
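Putting (24)–(26) together, the index can be sketched as follows; the sign convention (disparity term minus homogeneity term) is chosen here as an assumption, so that better-separated, more homogeneous partitions score higher, consistent with the values reported in Table 10:

```python
import numpy as np

def ros_index(image, labels):
    """ROS from (24)-(26) for a gray-level image and an integer label map."""
    image = np.asarray(image, dtype=float)
    labels = np.asarray(labels)
    regions = np.unique(labels)
    NR, I = len(regions), image.size
    sizes = np.array([(labels == c).sum() for c in regions], dtype=float)
    stds = np.array([image[labels == c].std() for c in regions])
    means = np.array([image[labels == c].mean() for c in regions])
    # D_bar (25): size-weighted relative standard deviation of each partition
    D_bar = ((sizes / I) * (stds / stds.sum())).sum() / NR if stds.sum() > 0 else 0.0
    # D_hat (26): size-weighted mean gray-level disparity between partitions
    disp = np.array([np.abs(means[c] - means).sum() / (255.0 * (NR - 1))
                     for c in range(NR)])
    D_hat = ((sizes / I) * disp).sum() / NR
    return (D_hat - D_bar) / 2.0
```

A perfect two-region segmentation of a black-and-white image (zero intraregion variance, maximal interregion contrast) attains the maximum value of 0.25 for two classes.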

Experimental Protocol. In the comparison of segmentation results, a set of four classical images has been chosen to integrate the experimental set (Figure 20). The segmentation methods used in the comparison are J + ABC [19], J + AIS [20], and J + DE [21].

Among all segmentation methods used in the comparison, the proposed J_new + LS algorithm is the only one with the capacity to automatically detect the number of segmentation partitions (classes). In order to conduct a fair comparison, all algorithms have been tested by using the same number


Table 12: Multimodal test functions.

f8(x) = ∑_{i=1}^{n} −x_i · sin(√|x_i|); S = [−500, 500]^n; f_opt = −418.98 · n
f9(x) = ∑_{i=1}^{n} [x_i² − 10 cos(2πx_i) + 10]; S = [−5.12, 5.12]^n; f_opt = 0
f10(x) = −20 exp(−0.2 √((1/n) ∑_{i=1}^{n} x_i²)) − exp((1/n) ∑_{i=1}^{n} cos(2πx_i)) + 20; S = [−32, 32]^n; f_opt = 0
f11(x) = (1/4000) ∑_{i=1}^{n} x_i² − ∏_{i=1}^{n} cos(x_i / √i) + 1; S = [−600, 600]^n; f_opt = 0
f12(x) = (π/n) {10 sin²(πy_1) + ∑_{i=1}^{n−1} (y_i − 1)² [1 + 10 sin²(πy_{i+1})] + (y_n − 1)²} + ∑_{i=1}^{n} u(x_i, 10, 100, 4); S = [−50, 50]^n; f_opt = 0,
where y_i = 1 + (x_i + 1)/4 and
u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a < x_i < a; k(−x_i − a)^m if x_i < −a
f13(x) = 0.1 {sin²(3πx_1) + ∑_{i=1}^{n} (x_i − 1)² [1 + sin²(3πx_i + 1)] + (x_n − 1)² [1 + sin²(2πx_n)]} + ∑_{i=1}^{n} u(x_i, 5, 100, 4); S = [−50, 50]^n; f_opt = 0

of partitions. Therefore, in the experiments, the J_new + LS segmentation algorithm is firstly applied to detect the best possible number of partitions N_R. Once the number of partitions N_R has been obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.

Figure 21 presents the segmentation results obtainedby each algorithm considering the experimental set fromFigure 20 On the other hand Table 10 shows the evaluationof the segmentation results in terms of the ROS indexSuch values represent the averaged measurements after 30executions From them it can be seen that the proposed119869new

+ LS method obtains the best ROS indexes Such valuesindicate that the proposed algorithm maintains the bestbalance between the homogeneity within segmented regionsand the heterogeneity among the different regions FromFigure 21 it can be seen that the proposed approach generatesmore homogeneous regions whereas 119869 + ABC 119869 + AIS and119869 + DE present several artifacts that are produced by anincorrect pixel classification

8. Conclusions

Despite the fact that several evolutionary methods have been successfully applied to image segmentation with interesting results, most of them have exhibited two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance.

In this paper, a new swarm algorithm for automatic image segmentation, called Locust Search (LS), has been presented. The proposed method eliminates the typical flaws presented by previous evolutionary approaches by combining a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments have been divided into two parts. First, the proposed LS method has been compared to other well-known evolutionary techniques on a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm has been compared to other segmentation methods based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

Several research directions will be considered for future work, such as the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, n is the dimension of the function, f_opt is the minimum value of the function, and S is a subset of R^n. The optimum location (x_opt) for the functions in Table 11 is in [0]^n, except for f_5, with x_opt in [1]^n.

The optimum locations (x_opt) for the functions in Table 12 are in [0]^n, except for f_8, in [420.96]^n, and f_12-f_13, in [1]^n.
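As a quick numerical sanity check on the definitions above, the stated optima of three of the multimodal functions from Table 12 can be verified directly. The sketch below is ours (not part of the paper's implementation) and uses NumPy; the function names are the common ones (f_9 is Rastrigin, f_10 is Ackley, f_11 is Griewank).

```python
import numpy as np

def rastrigin(x):
    # f_9 in Table 12: global minimum 0 at x = [0]^n
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def ackley(x):
    # f_10 in Table 12: global minimum 0 at x = [0]^n
    x = np.asarray(x, dtype=float)
    n = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

def griewank(x):
    # f_11 in Table 12: global minimum 0 at x = [0]^n
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return float(np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)

x0 = np.zeros(30)  # n = 30, as in the experiments
print(rastrigin(x0), abs(ackley(x0)) < 1e-12, griewank(x0))
```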

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgment

The present research has been supported under Grant CONACYT CB 181053.

References

[1] H. Zhang, J. E. Fritts, and S. A. Goldman, "Image segmentation evaluation: a survey of unsupervised methods," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 260–280, 2008.
[2] T. Uemura, G. Koutaki, and K. Uchimura, "Image segmentation based on edge detection using boundary code," International Journal of Innovative Computing, vol. 7, pp. 6073–6083, 2011.
[3] L. Wang, H. Wu, and C. Pan, "Region-based image segmentation with local signed difference energy," Pattern Recognition Letters, vol. 34, no. 6, pp. 637–645, 2013.
[4] N. Otsu, "A thresholding selection method from gray-level histogram," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
[5] R. Peng and P. K. Varshney, "On performance limits of image segmentation algorithms," Computer Vision and Image Understanding, vol. 132, pp. 24–38, 2015.
[6] M. A. Balafar, "Gaussian mixture model based segmentation methods for brain MRI images," Artificial Intelligence Review, vol. 41, no. 3, pp. 429–439, 2014.
[7] G. J. McLachlan and S. Rathnayake, "On the number of components in a Gaussian mixture model," Data Mining and Knowledge Discovery, vol. 4, no. 5, pp. 341–355, 2014.
[8] D. Oliva, V. Osuna-Enciso, E. Cuevas, G. Pajares, M. Perez-Cisneros, and D. Zaldívar, "Improving segmentation velocity using an evolutionary method," Expert Systems with Applications, vol. 42, no. 14, pp. 5874–5886, 2015.
[9] Z.-W. Ye, M.-W. Wang, W. Liu, and S.-B. Chen, "Fuzzy entropy based optimal thresholding using bat algorithm," Applied Soft Computing, vol. 31, pp. 381–395, 2015.
[10] S. Sarkar, S. Das, and S. Sinha Chaudhuri, "A multilevel color image thresholding scheme based on minimum cross entropy and differential evolution," Pattern Recognition Letters, vol. 54, pp. 27–35, 2015.
[11] A. K. Bhandari, A. Kumar, and G. K. Singh, "Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur's, Otsu and Tsallis functions," Expert Systems with Applications, vol. 42, no. 3, pp. 1573–1601, 2015.
[12] H. Permuter, J. Francos, and I. Jermyn, "A study of Gaussian mixture models of color and texture features for image classification and segmentation," Pattern Recognition, vol. 39, no. 4, pp. 695–706, 2006.
[13] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1–38, 1977.
[14] Z. Zhang, C. Chen, J. Sun, and K. L. Chan, "EM algorithms for Gaussian mixtures with split-and-merge operation," Pattern Recognition, vol. 36, no. 9, pp. 1973–1983, 2003.
[15] H. Park, S.-I. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755–764, 2000.
[16] H. Park and T. Ozeki, "Singularity and slow convergence of the EM algorithm for Gaussian mixtures," Neural Processing Letters, vol. 29, no. 1, pp. 45–59, 2009.
[17] L. Gupta and T. Sortrakul, "A Gaussian-mixture-based image segmentation algorithm," Pattern Recognition, vol. 31, no. 3, pp. 315–325, 1998.
[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213–1219, 2013.
[19] E. Cuevas, F. Sencion, D. Zaldivar, M. Perez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321–336, 2012.
[20] E. Cuevas, V. Osuna-Enciso, D. Zaldivar, M. Perez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.
[21] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.
[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldivar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357–381, 2014.
[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldivar, and M. Perez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.
[24] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805–818, 2011.
[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701–713, 2009.
[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661–673, 2009.
[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.
[28] R. Storn and K. Price, "Differential Evolution: a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.
[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515–4538, 2011.
[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing Journal, vol. 9, no. 3, pp. 1149–1155, 2009.
[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119–135, 2013.
[32] W. Gong, A. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364–5386, 2011.
[33] D. M. Gordon, "The organization of work in social insect colonies," Complexity, vol. 8, no. 1, pp. 43–46, 2002.
[34] S. Kizaki and M. Katori, "Stochastic lattice model for locust outbreak," Physica A: Statistical Mechanics and its Applications, vol. 266, no. 1-4, pp. 339–342, 1999.
[35] S. M. Rogers, D. A. Cullen, M. L. Anstey et al., "Rapid behavioural gregarization in the desert locust Schistocerca gregaria entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9–26, 2014.
[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal: Special Topics, vol. 157, no. 1, pp. 93–109, 2008.
[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.
[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.
[39] B. Hölldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.
[40] B. Hölldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.
[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101–112, 2013.
[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196–205, 2012.
[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723–738, 2014.
[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.
[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskål)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.
[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.
[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256–272, 2014.
[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635–672, 2005.
[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[50] F. Herrera, M. Lozano, and A. M. Sanchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309–338, 2003.
[51] E. Cuevas, D. Zaldivar, M. Perez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93–107, 2011.
[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140–157, 2015.
[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.
[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.
[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.
[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.
[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.
[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335–1346, 1996.
[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.



Table 1: Minimization results from the benchmark functions test in Table 11 with n = 30. Maximum number of iterations = 1000.

            PSO            DE             LS
f_1   ABS   1.66 × 10^-1   6.27 × 10^-3   4.55 × 10^-4
      SD    3.79 × 10^-1   1.68 × 10^-1   6.98 × 10^-4
      NFE   28,610         20,534         16,780
f_2   ABS   4.83 × 10^-1   2.02 × 10^-1   5.41 × 10^-3
      SD    1.59 × 10^-1   0.66           1.45 × 10^-2
      NFE   28,745         21,112         16,324
f_3   ABS   2.75           5.72 × 10^-1   1.61 × 10^-3
      SD    1.01           0.15           1.32 × 10^-3
      NFE   38,320         36,894         20,462
f_4   ABS   1.84           0.11           1.05 × 10^-2
      SD    0.87           0.05           6.63 × 10^-3
      NFE   37,028         36,450         21,158
f_5   ABS   3.07           2.39           4.11 × 10^-2
      SD    0.42           0.36           2.74 × 10^-3
      NFE   39,432         37,264         21,678
f_6   ABS   6.36           6.51           5.88 × 10^-2
      SD    0.74           0.87           1.67 × 10^-2
      NFE   38,490         36,564         22,238
f_7   ABS   6.14           0.12           2.71 × 10^-2
      SD    0.73           0.02           1.18 × 10^-2
      NFE   37,274         35,486         21,842

to use the number of function evaluations (NFE), just as it is used in the literature [52, 53], to evaluate and assess the computational effort (time) and the complexity among optimizers. It represents how many times an algorithm evaluates the objective (fitness) function until the best solution of a given execution has been found. Since the experiments require 30 different executions, the NFE index corresponds to the value averaged over these executions. A small NFE value indicates that less time is needed to reach the global optimum.

4.1. Unimodal Test Functions. Functions f_1 to f_7 are unimodal. The results for the unimodal functions over 30 runs are reported in Table 1, considering the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). According to this table, LS provides better results than PSO and DE for all functions in terms of ABS and SD. In particular, the test yields the largest performance differences on functions f_4-f_7. Such functions maintain a narrow, curving valley that is hard to optimize when the search space cannot be explored properly and the direction changes cannot be kept up with [54]. For this reason, the performance differences are directly related to the better trade-off between exploration and exploitation produced by the LS operators. In practice, a main goal of an optimization algorithm is to find a solution as good as possible within a small time window. The computational cost of the optimizer is represented by its NFE values. According to Table 1, the NFE values obtained by the proposed method are smaller than its

Table 2: p values produced by Wilcoxon's test that compares LS versus PSO and DE over the "average best-so-far" values from Table 1.

LS versus   PSO            DE
f_1         1.83 × 10^-4   1.73 × 10^-2
f_2         3.85 × 10^-3   1.83 × 10^-4
f_3         1.73 × 10^-4   6.23 × 10^-3
f_4         2.57 × 10^-4   5.21 × 10^-3
f_5         4.73 × 10^-4   1.83 × 10^-3
f_6         6.39 × 10^-5   2.15 × 10^-3
f_7         1.83 × 10^-4   2.21 × 10^-3

counterparts. Lower NFE values are more desirable, since they correspond to less computational overhead and, therefore, faster results. From the results' perspective, it is clear that PSO and DE need more than 1000 generations in order to produce better solutions. However, this number of generations is considered in the experiments aiming to produce a visible contrast among the approaches. If the number of generations were set to an exaggerated value, then all of the methods would converge to the best solution with no significant trouble.

A nonparametric statistical significance test known as Wilcoxon's rank sum test for independent samples [55, 56] has been conducted with a 5% significance level over the "average best-so-far" data of Table 1. Table 2 reports the p values produced by Wilcoxon's test for the pairwise comparison of the "average best-so-far" values of two groups. Such groups are formed by LS versus PSO and LS versus DE. As a null hypothesis, it is assumed that there is no significant difference between the mean values of the two algorithms. The alternative hypothesis considers a significant difference between the "average best-so-far" values of both approaches. All p values reported in the table are less than 0.05 (5% significance level), which is strong evidence against the null hypothesis, indicating that the LS results are statistically significant and have not occurred by coincidence (i.e., due to the normal noise contained in the process).
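For reference, this kind of pairwise check can be reproduced with a few lines of standard-library Python using the usual normal approximation to the rank-sum statistic. The sketch below is ours: the two samples are synthetic stand-ins for 30 "average best-so-far" values per algorithm, not the paper's run data.

```python
import math

def ranksum_p(a, b):
    # Wilcoxon rank-sum test for two independent samples, two-sided p value
    # via the normal approximation (assumes no ties, as with continuous data).
    n1, n2 = len(a), len(b)
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    r1 = sum(rank + 1 for rank, (_, grp) in enumerate(pooled) if grp == 0)
    mu = n1 * (n1 + n2 + 1) / 2.0                 # mean rank sum under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (r1 - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))     # two-sided tail probability

# Synthetic "best-so-far" samples: LS values are clearly smaller than PSO's.
ls_runs = [4.55e-4 * (1.0 + 0.001 * k) for k in range(30)]
pso_runs = [1.66e-1 * (1.0 + 0.001 * k) for k in range(30)]
p = ranksum_p(ls_runs, pso_runs)
print(p < 0.05)  # True: reject the null hypothesis of equal performance
```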

4.2. Multimodal Test Functions. Multimodal functions possess many local minima, which makes optimization a difficult task to accomplish. For multimodal functions, the results reflect the algorithm's ability to escape from local optima. We have applied the algorithms to functions f_8 to f_13, where the number of local minima increases exponentially as the dimension of the function increases. The dimension of such functions is set to 30. The results are averaged over 30 runs, with the performance indexes reported in Table 3 as follows: the average best-so-far solution (ABS), the standard deviation (SD), and the number of function evaluations (NFE). Likewise, p values of the Wilcoxon signed-rank test of 30 independent runs are listed in Table 4. From the results, it is clear that LS yields better solutions than the other algorithms for functions f_9, f_10, f_11, and f_12 in terms of the ABS and SD indexes. However, for functions f_8 and f_13, LS produces results similar to those of DE. The Wilcoxon rank test results presented in Table 4 confirm that


Table 3: Minimization results from the benchmark functions test in Table 12 with n = 30. Maximum number of iterations = 1000.

             PSO            DE             LS
f_8    ABS   -6.7 × 10^3    -1.26 × 10^4   -1.26 × 10^4
       SD    6.3 × 10^2     3.7 × 10^2     1.1 × 10^2
       NFE   38,452         35,240         21,846
f_9    ABS   1.48           4.01 × 10^-1   2.49 × 10^-3
       SD    1.39           5.1 × 10^-2    4.8 × 10^-4
       NFE   37,672         34,576         20,784
f_10   ABS   1.47           4.66 × 10^-2   2.15 × 10^-3
       SD    1.44           1.27 × 10^-2   3.18 × 10^-4
       NFE   39,475         37,080         21,235
f_11   ABS   12.01          1.15           1.47 × 10^-4
       SD    3.12           0.06           1.48 × 10^-5
       NFE   38,542         34,875         22,126
f_12   ABS   6.87 × 10^-1   3.74 × 10^-1   5.58 × 10^-3
       SD    7.07 × 10^-1   1.55 × 10^-1   4.18 × 10^-4
       NFE   35,248         30,540         16,984
f_13   ABS   1.87 × 10^-1   1.81 × 10^-2   1.78 × 10^-2
       SD    5.74 × 10^-1   1.66 × 10^-2   1.64 × 10^-3
       NFE   36,022         31,968         18,802

Table 4: p values produced by Wilcoxon's test comparing LS versus PSO and DE over the "average best-so-far" values from Table 3.

LS versus   PSO            DE
f_8         1.83 × 10^-4   0.061
f_9         1.17 × 10^-4   2.41 × 10^-4
f_10        1.43 × 10^-4   3.12 × 10^-3
f_11        6.25 × 10^-4   1.14 × 10^-3
f_12        2.34 × 10^-5   7.15 × 10^-4
f_13        4.73 × 10^-4   0.071

LS performed better than PSO and DE on four problems, f_9-f_12, whereas, from a statistical viewpoint, there is no difference between the results from LS and DE for f_8 and f_13. According to Table 3, the NFE values obtained by the proposed method are smaller than those produced by the other optimizers. The reason for this remarkable performance is associated with its two operators: (i) the solitary operator allows a better particle distribution in the search space, increasing the algorithm's ability to find the global optima, and (ii) the use of the social operation provides a simple exploitation operator that intensifies the capacity of finding better solutions during the evolution process.
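The NFE index used throughout these experiments is simple to instrument: wrap the objective so every call is counted, then average the counter over the 30 executions. The wrapper class and toy sphere objective below are illustrative, not part of the paper's implementation.

```python
# Count objective-function calls, mirroring the NFE index of Tables 1 and 3.
class CountedObjective:
    def __init__(self, fn):
        self.fn = fn
        self.nfe = 0          # number of function evaluations so far

    def __call__(self, x):
        self.nfe += 1
        return self.fn(x)

def sphere(x):
    # Toy objective: f(x) = sum(x_i^2), minimum 0 at the origin.
    return sum(v * v for v in x)

f = CountedObjective(sphere)
candidates = ([0.1 * k, 0.1 * k] for k in range(50))
best = min(candidates, key=f)  # evaluates f exactly once per candidate
print(best, f.nfe)             # [0.0, 0.0] 50
```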

5. Gaussian Mixture Modelling

In this section, the modeling of image histograms through Gaussian mixture models is presented. Let one consider an image holding L gray levels [0, ..., L - 1] whose distribution is defined by a histogram h(g) represented by the following formulation:

    h(g) = \frac{n_g}{Np},  h(g) > 0,
    Np = \sum_{g=0}^{L-1} n_g,  \sum_{g=0}^{L-1} h(g) = 1,    (15)

where n_g denotes the number of pixels with gray level g and Np the total number of pixels in the image. Under such circumstances, h(g) can be modeled by using a mixture p(x) of Gaussian functions of the form

    p(x) = \sum_{i=1}^{K} \frac{P_i}{\sqrt{2\pi}\,\sigma_i} \exp\left[\frac{-(x - \mu_i)^2}{2\sigma_i^2}\right],    (16)

where K symbolizes the number of Gaussian functions of the model, whereas P_i is the a priori probability of function i, and \mu_i and \sigma_i represent the mean and standard deviation of the i-th Gaussian function, respectively. Furthermore, the constraint \sum_{i=1}^{K} P_i = 1 must be satisfied by the model. In order to evaluate the similarity between the image histogram and a candidate mixture model, the mean square error can be used as follows:

    J = \frac{1}{n} \sum_{j=1}^{L} \left[p(x_j) - h(x_j)\right]^2 + \omega \cdot \left|\left(\sum_{i=1}^{K} P_i\right) - 1\right|,    (17)

where \omega represents the penalty associated with the constraint \sum_{i=1}^{K} P_i = 1. Therefore, J is considered the objective function which must be minimized in the estimation problem. In order to illustrate histogram modeling through a Gaussian mixture, Figure 6 presents an example assuming three classes, that is, K = 3. Considering Figure 6(a) as the image histogram h(x), the Gaussian mixture p(x) shown in Figure 6(c) is produced by adding the Gaussian functions p_1(x), p_2(x), and p_3(x) in the configuration presented in Figure 6(b). Once the model parameters that better model the image histogram have been determined, the final step is to define the threshold values T_i (i \in [1, ..., K]), which can be calculated by simple standard methods, just as presented in [19-21].
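Equations (15)-(17) can be sketched in a few lines of NumPy. The bimodal "image" below is synthetic and the penalty weight omega is an illustrative choice; neither comes from the paper.

```python
import numpy as np

L = 256
# Synthetic bimodal image: two pixel populations (parameters are ours).
rng = np.random.default_rng(7)
pixels = np.concatenate([rng.normal(80, 10, 5000), rng.normal(170, 15, 5000)])
pixels = np.clip(pixels, 0, L - 1).astype(int)
h = np.bincount(pixels, minlength=L) / pixels.size   # (15): normalized histogram

def mixture(x, P, mu, sigma):
    # (16): weighted sum of K Gaussian functions evaluated at gray levels x
    x = np.asarray(x, dtype=float)[:, None]
    P, mu, sigma = (np.asarray(v, dtype=float) for v in (P, mu, sigma))
    return np.sum(P / (np.sqrt(2.0 * np.pi) * sigma)
                  * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)), axis=1)

def J(P, mu, sigma, h, omega=10.0):
    # (17): mean square error plus the penalty when sum(P) deviates from 1
    p = mixture(np.arange(L), P, mu, sigma)
    return float(np.mean((p - h) ** 2) + omega * abs(np.sum(P) - 1.0))

good = J([0.5, 0.5], [80, 170], [10, 15], h)   # matches the histogram
bad = J([0.5, 0.5], [30, 220], [5, 5], h)      # misplaced classes
print(good < bad)  # True: the closer mixture scores a lower J
```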

6. Segmentation Algorithm Based on LS

In the proposed method, the segmentation process is approached as an optimization problem. Computational optimization generally involves two distinct elements: (1) a search strategy to produce candidate solutions (individuals, particles, insects, locusts, etc.) and (2) an objective function that evaluates the quality of each selected candidate solution. Several computational algorithms are available to perform the first element. The second element, where the objective function is designed, is unquestionably the most critical. The objective function is expected to embody the full

Figure 6: Histogram approximation through a Gaussian mixture. (a) Original histogram, (b) configuration of the Gaussian components p_1(x), p_2(x), and p_3(x), and (c) final Gaussian mixture p(x).

complexity of the performance, biases, and restrictions of the problem to be solved. In the segmentation problem, candidate solutions represent Gaussian mixtures. The objective function J (17) is used as a fitness value to evaluate the similarity between the Gaussian mixture and the image histogram. Therefore, guided by the fitness values (J values), a set of encoded candidate solutions are evolved using the evolutionary operators until the best possible resemblance can be found.

Over the last decade, several algorithms based on evolutionary and swarm principles [19-22] have been proposed to solve the problem of segmentation through a Gaussian mixture model. Although these algorithms own certain good performance indexes, they present two important limitations: (1) they frequently obtain suboptimal approximations as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) they are based on the assumption that the number of Gaussians (classes) in the mixture is preknown and fixed; otherwise, they cannot work.

In order to eliminate such flaws, the proposed approach includes (A) a new search strategy and (B) the definition of a new objective function. For the search strategy, the LS method (Section 4) is adopted. Under LS, the concentration of individuals in the current best positions is explicitly avoided. Such a fact allows reducing critical problems, such as premature convergence to suboptimal solutions and an incorrect exploration-exploitation balance.

6.1. New Objective Function J_new. Previous segmentation algorithms based on evolutionary and swarm methods use (17) as the objective function. Under these circumstances, each candidate solution (Gaussian mixture) is only evaluated in terms of its approximation to the image histogram.

Since the proposed approach aims to automatically select the number of Gaussian functions K in the final mixture p(x), the objective function must be modified. The new objective function J_new is defined as follows:

    J_new = J + \lambda \cdot Q,    (18)

where \lambda is a scaling constant. The new objective function is divided into two parts. The first part, J, evaluates the quality of each candidate solution in terms of its similarity with regard to the image histogram (17). The second part, Q, penalizes the overlapped area among Gaussian functions (classes), with Q being defined as follows:

    Q = \sum_{i=1}^{K} \sum_{\substack{j=1 \\ j \neq i}}^{K} \sum_{l=1}^{L} \min\left(P_i \cdot p_i(l), P_j \cdot p_j(l)\right),    (19)

where K and L represent the number of classes and the gray levels, respectively. The parameters p_i(l) and p_j(l) symbolize the Gaussian functions i and j, respectively, evaluated at the point l (gray level), whereas the elements P_i and P_j represent the a priori probabilities of the Gaussian functions i and j, respectively. Under such circumstances, mixtures with Gaussian functions that do not "positively" participate in the histogram approximation are severely penalized.
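The overlap penalty Q of (19) can be sketched as follows; the class parameters used in the comparison are illustrative values chosen by us.

```python
import numpy as np

def gaussian(l, mu, sigma):
    return np.exp(-(l - mu) ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)

def overlap_Q(P, mu, sigma, L=256):
    # (19): for every pair of distinct classes, accumulate the pointwise
    # minimum of the two weighted Gaussians over all gray levels.
    l = np.arange(L, dtype=float)
    comps = [P[i] * gaussian(l, mu[i], sigma[i]) for i in range(len(P))]
    Q = 0.0
    for i in range(len(P)):
        for j in range(len(P)):
            if j != i:
                Q += float(np.sum(np.minimum(comps[i], comps[j])))
    return Q

separated = overlap_Q([0.5, 0.5], [60.0, 200.0], [8.0, 8.0])
overlapped = overlap_Q([0.5, 0.5], [120.0, 130.0], [15.0, 15.0])
print(separated < overlapped)  # True: heavy overlap is penalized more
```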

Figure 7 illustrates the effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). From the image histogram (Figure 7(a)), it is evident that two Gaussian functions are enough to accurately


Figure 7: Effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). (a) Original histogram, (b) Gaussian mixture considering four classes, (c) penalization areas, and (d) Gaussian mixture of a better quality solution.

approximate the original histogram. However, if the Gaussian mixture is modeled by using a greater number of functions (e.g., four, as shown in Figure 7(b)), the original objective function J is unable to obtain a reasonable approximation. Under the new objective function J_new, the overlapped area among Gaussian functions (classes) is penalized. Such areas, in Figure 7(c), correspond to Q_{1,2}, Q_{2,3}, and Q_{3,4}, where Q_{1,2} represents the penalization value produced between the Gaussian functions p_1(x) and p_2(x). Therefore, due to the penalization, the Gaussian mixture shown in Figures 7(b) and 7(c) provides a solution of low quality. On the other hand, the Gaussian mixture presented in Figure 7(d) maintains a low penalty; thus, it represents a solution of high quality. From Figure 7(d), it is easy to see that the functions p_1(x) and p_4(x) can be removed from the final mixture. This elimination could be performed by a simple comparison with a threshold value \theta, since p_1(x) and p_4(x) have a reduced amplitude (p_1(x) \approx p_4(x) \approx 0). Therefore, under J_new, it is possible to find the reduced Gaussian mixture model starting from a considerable number of functions.
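The elimination step described above can be sketched as a threshold test on each component's peak amplitude; the threshold value, the tuple layout, and the example mixture below are assumptions made for illustration.

```python
import math

def prune(components, theta=1e-4):
    # Keep only components whose peak amplitude P / (sqrt(2*pi) * sigma)
    # reaches the threshold theta; the rest contribute p(x) ~ 0 everywhere.
    return [(P, mu, sigma) for (P, mu, sigma) in components
            if P / (math.sqrt(2.0 * math.pi) * sigma) >= theta]

mix = [(0.48, 80.0, 10.0),   # significant class
       (1e-6, 20.0, 5.0),    # negligible amplitude -> removed
       (0.52, 170.0, 15.0),  # significant class
       (1e-6, 240.0, 5.0)]   # negligible amplitude -> removed
print(len(prune(mix)))  # 2
```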

Since the proposed segmentation method is conceived as an optimization problem, the overall operation can be reduced to solving the formulation of (20) by using the LS algorithm:

\[
\begin{aligned}
\text{minimize} \quad & J_{\mathrm{new}}(\mathbf{x}) = J(\mathbf{x}) + \lambda \cdot Q(\mathbf{x}),\\
& \mathbf{x} = (P_1, \mu_1, \sigma_1, \ldots, P_K, \mu_K, \sigma_K) \in \mathbb{R}^{3\cdot K},\\
\text{subject to} \quad & 0 \le P_d \le 1,\; d \in (1, \ldots, K),\\
& 0 \le \mu_d \le 255,\\
& 0 \le \sigma_d \le 25,
\end{aligned}
\tag{20}
\]

where P_d, μ_d, and σ_d represent the probability, mean, and standard deviation of the class d. It is important to remark that the new objective function J_new allows the evaluation of a candidate solution in terms of the necessary number of Gaussian functions and its approximation quality. Under such circumstances, it can be used in combination with any other evolutionary method and not only with the proposed LS algorithm.

6.2. Complete Segmentation Algorithm. Once the new search strategy (LS) and objective function (J_new) have been defined, the proposed segmentation algorithm can be summarized by Algorithm 2. The new algorithm combines operators defined by LS and operations for calculating the threshold values.

(Line 1) The algorithm sets the operative parameters F, L, g, gen, N, K, λ, and θ. They rule the behavior of the segmentation algorithm. (Line 2) Afterwards, the population L^0 is initialized considering N different random Gaussian mixtures of K functions. The idea is to generate N random Gaussian mixtures subjected to the constraints formulated in (20). The parameter K must hold a high value in order to correctly segment all images (recall that the algorithm is able to reduce the Gaussian mixture to its minimum expression). (Lines 3–7) Then, the Gaussian mixtures are evolved by using the LS operators and the new objective function J_new; this process is repeated during gen cycles. (Line 8) After this procedure, the best Gaussian mixture l_best^gen is selected according to its objective function J_new. (Line 9) Then, the Gaussian mixture l_best^gen is reduced by eliminating those functions whose amplitudes are lower than θ (p_i(x) ≤ θ). (Line 10) Then, the threshold values T_i from the reduced model are calculated. (Line 11) Finally, the calculated T_i values are employed to segment the image. Figure 8 shows a flowchart of the complete segmentation procedure.

Mathematical Problems in Engineering 13

(1) Input: F, L, g, gen, N, K, λ, and θ
(2) Initialize L^0 (k = 0)
(3) until (k = gen)
(4)   F ← SolitaryOperation(L^k)  {Solitary operator (Section 3.1)}
(5)   L^{k+1} ← SocialOperation(L^k, F)  {Social operator (Section 3.2)}
(6)   k = k + 1
(7) end until
(8) Obtain the best Gaussian mixture l_best^gen
(9) Reduce l_best^gen
(10) Calculate the threshold values T_i from the reduced model
(11) Use T_i to segment the image

Algorithm 2: Segmentation LS algorithm.
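The post-evolution steps of Algorithm 2 (lines 9–11) can be sketched as follows. This is an illustrative reading, not the authors' code: the helper names are ours, and the threshold T_i between consecutive classes is taken here as the gray level where the two weighted Gaussians intersect, found by a simple numerical search between the two means:

```python
import numpy as np

def reduce_mixture(params, theta=1e-4):
    """Drop Gaussian components whose amplitude P_i falls below theta
    (the elimination step applied to l_best^gen), sorted by mean."""
    comps = [tuple(params[i:i + 3]) for i in range(0, len(params), 3)]
    kept = [c for c in comps if c[0] > theta]
    return sorted(kept, key=lambda c: c[1])

def thresholds(comps):
    """Threshold T_i between consecutive classes: the gray level between
    the two means where the weighted Gaussians intersect (numerical search)."""
    x = np.arange(256, dtype=float)
    ts = []
    for (Pa, ma, sa), (Pb, mb, sb) in zip(comps, comps[1:]):
        seg = x[int(ma):int(mb) + 1]
        ga = Pa * np.exp(-((seg - ma) ** 2) / (2 * sa ** 2))
        gb = Pb * np.exp(-((seg - mb) ** 2) / (2 * sb ** 2))
        ts.append(seg[np.argmin(np.abs(ga - gb))])
    return ts

def segment(image, ts):
    """Label each pixel with its class index given the thresholds T_i."""
    return np.digitize(image, ts)
```

With a reduced mixture in hand, `segment` assigns every pixel the index of the class whose gray-level band contains it.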

The proposed segmentation algorithm permits automatically detecting the number of segmentation partitions (classes). Furthermore, due to its remarkable search capacities, the LS method maintains a better accuracy than previous algorithms based on evolutionary principles. However, the proposed method presents two disadvantages. The first is related to its implementation, which in general is more complex than most of the other segmentation methods based on evolutionary principles. The second refers to the segmentation procedure of the proposed approach, which does not consider any spatial pixel characteristics. As a consequence, pixels that may belong to a determined region due to their position are labeled as part of another region due to their gray level intensity. Such a fact adversely affects the segmentation performance of the method.

7. Segmentation Results

This section analyzes the performance of the proposed segmentation algorithm. The discussion is divided into three parts: the first one shows the performance of the proposed LS segmentation algorithm, while the second presents a comparison between the proposed approach and other segmentation algorithms that are based on evolutionary and swarm methods. The comparison mainly considers the capacities of each algorithm to accurately and robustly approximate the image histogram. Finally, the third part presents an objective evaluation of the segmentation results produced by all algorithms that have been employed in the comparisons.

7.1. Performance of LS Algorithm in Image Segmentation. This section presents two experiments that analyze the performance of the proposed approach. Table 5 presents the algorithm's parameters, which have been experimentally determined and kept for all the test images through all experiments.

First Image. The first test considers the histogram shown in Figure 9(b), while Figure 9(a) presents the original image. After applying the proposed algorithm, just as it has been configured according to the parameters in Table 5, a minimum model of four classes emerges after the start from Gaussian

Table 5: Parameter setup for the LS segmentation algorithm.

F     L     g    gen    N    K    λ      θ
0.6   0.2   20   1000   40   10   0.01   0.0001

Table 6: Results of the reduced Gaussian mixture for the first and the second image.

Parameters   First image   Second image
P1           0.004         0.032
μ1           18.1          12.1
σ1           8.2           2.9
P2           0.0035        0.0015
μ2           69.9          45.1
σ2           18.4          24.9
P3           0.01          0.006
μ3           94.9          130.1
σ3           8.9           17.9
P4           0.007         0.02
μ4           163.1         167.2
σ4           29.9          10.1

mixtures of 10 classes. Considering 30 independent executions, the averaged parameters of the resultant Gaussian mixture are presented in Table 6. One final Gaussian mixture (of ten classes), which has been obtained by LS, is presented in Figure 10. Furthermore, the approximation of the reduced Gaussian mixture is also visually compared with the original histogram in Figure 10. On the other hand, Figure 11 presents the segmented image after calculating the threshold points.

Second Image. For the second experiment, the image in Figure 12 is tested. The method aims to segment the image by using a reduced Gaussian mixture model obtained by the LS approach. After executing the algorithm according to the parameters from Table 5, the resulting averaged parameters of the resultant Gaussian mixture are presented in Table 6. In order to assure consistency, the results are averaged considering 30 independent executions. Figure 13 shows the approximation quality that is obtained by the reduced Gaussian mixture model in (a) and the segmented image in (b).



Figure 8: Flowchart of the complete segmentation procedure.


Figure 9: (a) Original first image used in the first experiment and (b) its histogram.



Figure 10: Gaussian mixture obtained by LS for the first image.

Figure 11: Image segmented with the reduced Gaussian mixture.


Figure 12: (a) Original second image used in the first experiment and (b) its histogram.

7.2. Histogram Approximation Comparisons. This section discusses the comparison between the proposed segmentation algorithm and other evolutionary segmentation methods that have been proposed in the literature. The set of methods used in the experiments includes J + ABC [19], J + AIS [20], and J + DE [21]. These algorithms consider the combination of the original objective function J (17) and an evolutionary technique, namely, Artificial Bee Colony (ABC), Artificial Immune Systems (AIS), and Differential Evolution (DE) [21], respectively.

Since the proposed segmentation approach considers the combination of the new objective function J_new (18) and the LS algorithm, it will be referred to as J_new + LS.

The comparison focuses mainly on the capacities of each algorithm to accurately and robustly approximate the image histogram.

In the experiments, the populations have been set to 40 (N = 40) individuals. The maximum iteration number for all functions has been set to 1000. Such a stop criterion has been considered to maintain compatibility with similar experiments



Figure 13: (a) Approximation result obtained by LS for the second image and (b) the segmented image.


Figure 14: (a) Synthetic image used in the comparison and (b) its histogram.

that are reported in the literature [18]. The parameter setting for each of the segmentation algorithms in the comparison is described as follows:

(1) J + ABC [19]: in the algorithm, its parameters are configured as follows: the abandonment limit = 100, α = 0.05, and limit = 30.

(2) J + AIS [20]: it presents the following parameters: h = 120, N_c = 80, ρ = 10, P_r = 20, L = 22, and T_e = 0.01.

(3) J + DE [21]: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [21]. The crossover probability is CR = 0.9, and the weighting factor is F = 0.8.

(4) In J_new + LS, the method is set according to the values described in Table 5.

In order to conduct the experiments, a synthetic image is designed to be used as a reference in the comparisons. The main idea is to know in advance the exact number of classes (and their parameters) that are contained in the image, so that the histogram can be considered as a ground truth. The

Table 7: Employed parameters for the design of the reference image.

       P_i     μ_i    σ_i
(1)    0.05    40     8
(2)    0.04    100    10
(3)    0.05    160    8
(4)    0.027   220    15

synthetic image is divided into four sections. Each section corresponds to a different class, which is produced by setting each gray level pixel Pv_i to a value that is determined by the following model:

\[
Pv_i = e^{-\frac{(x - \mu_i)^2}{2\sigma_i^2}}
\tag{21}
\]

where i represents the section, whereas μ_i and σ_i are the mean and the dispersion of the gray level pixels, respectively. The comparative study employs the 512 × 512 image that is shown in Figure 14(a) and the algorithm's parameters that have been presented in Table 7. Figure 14(b) illustrates the resultant histogram.
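A reference image of this kind can be produced in a few lines. The sketch below is an interpretation, not the paper's generator: it assumes four equal horizontal bands and draws the gray levels of each section from a normal distribution with the (μ_i, σ_i) of Table 7, clipped to [0, 255], so that the resulting histogram is a known four-class Gaussian mixture:

```python
import numpy as np

# Class parameters from Table 7: (P_i, mu_i, sigma_i)
CLASSES = [(0.05, 40, 8), (0.04, 100, 10), (0.05, 160, 8), (0.027, 220, 15)]

def synthetic_image(size=512, params=CLASSES, seed=0):
    """Reference image with a known histogram: each of the four sections
    receives gray levels normally distributed around (mu_i, sigma_i).
    The horizontal-band layout is an assumption made for illustration."""
    rng = np.random.default_rng(seed)
    h = size // len(params)
    bands = [rng.normal(mu, sigma, size=(h, size)) for _, mu, sigma in params]
    return np.clip(np.vstack(bands), 0, 255).astype(np.uint8)
```

Because the generating parameters are known exactly, the histogram of such an image serves as the ground truth against which each algorithm's estimated mixture can be scored.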



Figure 15: Convergence results. (a) Convergence of the following methods: J + ABC, J + AIS, and J + DE, considering Gaussian mixtures of 8 classes. (b) Convergence of the proposed method (reduced Gaussian mixture).


Figure 16: Segmentation results obtained by (a) several methods, including J + ABC, J + AIS, and J + DE, considering Gaussian mixtures of 8 classes, and (b) the proposed method (reduced Gaussian mixture).

In the comparison, the discussion focuses on the following issues: first of all, accuracy; second, convergence; and third, computational cost.

Convergence. This section analyzes the approximation convergence when the number of classes used by the algorithm during the evolution process is different from the actual number of classes in the image. Recall that the proposed approach automatically finds the reduced Gaussian mixture which better adapts to the image histogram.

In the experiment, the methods J + ABC, J + AIS, and J + DE are executed considering Gaussian mixtures composed of 8 functions. Under such circumstances, the number of classes to be detected is higher than the actual number of classes in the image. On the other hand, the proposed algorithm maintains the same configuration as in Table 5. Therefore, it can detect and calculate up to ten classes (K = 10).

As a result, the techniques J + ABC, J + AIS, and J + DE tend to overestimate the image histogram. This effect can be seen in Figure 15(a), where the resulting Gaussian functions are concentrated within the actual classes. Such a behavior is a consequence of the evaluation considered by the original objective function J, which privileges only the approximation between the Gaussian mixture and the image histogram. This effect can be graphically illustrated by Figure 16(a), which shows the pixel misclassification produced by the wrong segmentation of Figure 14(a). On the other hand, the proposed approach obtains a reduced Gaussian mixture model which allows the detection of each class from



Figure 17: Approximation results in terms of accuracy: (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

the actual histogram (see Figure 15(b)). As a consequence, the segmentation is significantly improved by eliminating the pixel misclassification, as shown by Figure 16(b).

It is evident from Figure 15 that the techniques J + ABC, J + AIS, and J + DE all need a priori knowledge of the number of classes contained in the actual histogram in order to obtain a satisfactory result. On the other hand, the proposed algorithm is able to find a reduced Gaussian mixture whose classes coincide with the actual number of classes contained in the image histogram.

Accuracy. In this section, the comparison among the algorithms in terms of accuracy is reported. Most of the reported comparisons [19–26] are concerned with comparing the parameters of the resultant Gaussian mixtures by using real images. Under such circumstances, it is difficult to consider a clear reference in order to define a meaningful error. Therefore, the image defined in Figure 14 has been used in the experiments, because its construction parameters are clearly defined in Table 7.

Since the parameter values from Table 7 act as ground truth, a simple averaged difference between them and the values computed by each algorithm could be used as a comparison error. However, as each parameter maintains different active intervals, it is necessary to express the differences in terms of percentage. Therefore, if Δβ is the parametric difference and β the ground truth parameter, the percentage error Δβ% can be defined as follows:

\[
\Delta\beta\% = \frac{\Delta\beta}{\beta} \cdot 100
\tag{22}
\]

In the segmentation problem, each Gaussian mixture represents a K-dimensional model where each dimension corresponds to a Gaussian function of the optimization problem to be solved. Since each Gaussian function possesses three parameters, P_i, μ_i, and σ_i, the complete number of parameters is 3·K dimensions. Therefore, the final error E produced by the final Gaussian mixture is

\[
E = \frac{1}{K \cdot 3} \sum_{v=1}^{K \cdot 3} \Delta\beta_v
\tag{23}
\]

where β_v ∈ (P_i, μ_i, σ_i).

In order to compare accuracy, the algorithms J + ABC, J + AIS, J + DE, and the proposed approach are all executed over the image shown in Figure 14(a). The experiment aims



Figure 18: Segmentation results in terms of accuracy: (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

to estimate the Gaussian mixture that better approximates the actual image histogram. The methods J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, although the algorithm finds a reduced Gaussian mixture of four functions, it is initially set with ten functions (K = 10). Table 8 presents the final Gaussian mixture parameters and the final error E. The final Gaussian mixture parameters have been averaged over 30 independent executions in order to assure consistency. A close inspection of Table 8 reveals that the proposed method is able to achieve the smallest error (E) in comparison to the other algorithms. Figure 17 presents the histogram approximations produced by each algorithm, whereas Figure 18 shows their corresponding segmented images. Both illustrations present the median case obtained throughout 30 runs. Figure 18 exhibits that J + ABC, J + AIS, and J + DE present different levels of misclassification, which are nearly absent in the proposed approach.
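Equations (22) and (23) reduce to a short routine. A minimal sketch (the function name is ours; it assumes the ground truth and the estimate are given as flat (P_1, μ_1, σ_1, …) parameter vectors in matching order):

```python
import numpy as np

def mixture_error(ground, estimate):
    """Final error E of (23): the average of the percentage differences
    (22) over the 3*K parameters (P_i, mu_i, sigma_i) of the mixture."""
    ground = np.asarray(ground, dtype=float).ravel()
    estimate = np.asarray(estimate, dtype=float).ravel()
    pct = np.abs(estimate - ground) / np.abs(ground) * 100.0  # eq. (22)
    return pct.mean()                                          # eq. (23)
```

Expressing each difference as a percentage of its ground truth value is what makes probabilities (order 0.01), means (order 100), and standard deviations (order 10) commensurable in a single score.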

Computational Cost. The experiment aims to measure the complexity and the computing time spent by the J + ABC,

Table 8: Results of the reduced Gaussian mixture in terms of accuracy.

Algorithm    Gaussian function   P_i     μ_i      σ_i     E
J + ABC      (1)                 0.052   44.5     6.4     11.79
             (2)                 0.084   98.12    12.87
             (3)                 0.058   163.50   8.94
             (4)                 0.025   218.84   1.75
J + AIS      (1)                 0.072   31.01    6.14    22.01
             (2)                 0.054   88.52    12.21
             (3)                 0.039   149.21   9.14
             (4)                 0.034   248.41   13.84
J + DE       (1)                 0.041   35.74    7.86    13.57
             (2)                 0.036   90.57    11.97
             (3)                 0.059   148.47   9.01
             (4)                 0.020   201.34   13.02
J_new + LS   (1)                 0.049   40.12    7.5     3.98
             (2)                 0.041   102.04   10.4
             (3)                 0.052   168.66   8.3
             (4)                 0.025   110.92   15.2



Figure 19: Images employed in the computational cost analysis.

the J + AIS, the J + DE, and the J_new + LS algorithms while calculating the parameters of the Gaussian mixture in benchmark images (see Figures 19(a)–19(d)). J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, the algorithm finds a reduced Gaussian mixture of four functions despite being initialized with ten functions (K = 10). Table 9 shows the averaged measurements after 30 experiments. It is evident that J + ABC and J + DE are the slowest to converge (iterations), and J + AIS shows the highest computational cost (time elapsed) because it requires operators which demand long execution times. On the other hand, J_new + LS shows an acceptable trade-off between its convergence time and its computational cost. Therefore, although the implementation of J_new + LS in general requires more code than most other evolution-based segmentation methods, such a fact is not reflected in the execution time. Finally, Figure 19 shows the segmented images as they are



Figure 20: Experimental set used in the evaluation of the segmentation results.

Table 9: Iterations and time requirements of the J + ABC, the J + AIS, the J + DE, and the J_new + LS algorithms as they are applied to segment the benchmark images (see Figure 19).

Algorithm    Measure      (a)      (b)      (c)      (d)
J + ABC      Iterations   855      833      870      997
             Time         2.72 s   2.70 s   2.73 s   3.1 s
J + AIS      Iterations   725      704      754      812
             Time         1.78 s   1.61 s   1.41 s   2.01 s
J + DE       Iterations   657      627      694      742
             Time         1.25 s   1.12 s   1.45 s   1.88 s
J_new + LS   Iterations   314      298      307      402
             Time         0.98 s   0.84 s   0.72 s   1.02 s

Table 10: Evaluation of the segmentation results in terms of the ROS index.

Image (number of classes)   (a) N_R = 4   (b) N_R = 3   (c) N_R = 4   (d) N_R = 4
J + ABC                     0.534         0.328         0.411         0.457
J + AIS                     0.522         0.321         0.427         0.437
J + DE                      0.512         0.312         0.408         0.418
J_new + LS                  0.674         0.401         0.514         0.527

generated by each algorithm. It can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts that are produced by an incorrect pixel classification.

7.3. Performance Evaluation of the Segmentation Results. This section presents an objective evaluation of the segmentation results produced by all algorithms in the comparisons. The ill-defined nature of the segmentation problem makes the evaluation of a candidate algorithm difficult [57]. Traditionally, the evaluation has been conducted by using some supervised criteria [58], which are based on the computation of a dissimilarity measure between a segmentation result and a ground truth image. Recently, the use of unsupervised measures has substituted supervised indexes for the objective evaluation of segmentation results [59]. They

Table 11: Unimodal test functions.

Test function | S | f_opt

\(f_1(\mathbf{x}) = \sum_{i=1}^{n} x_i^2\) | \([-100, 100]^n\) | 0
\(f_2(\mathbf{x}) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|\) | \([-10, 10]^n\) | 0
\(f_3(\mathbf{x}) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2\) | \([-100, 100]^n\) | 0
\(f_4(\mathbf{x}) = \max_i \{ |x_i|,\; 1 \le i \le n \}\) | \([-100, 100]^n\) | 0
\(f_5(\mathbf{x}) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]\) | \([-30, 30]^n\) | 0
\(f_6(\mathbf{x}) = \sum_{i=1}^{n} (x_i + 0.5)^2\) | \([-100, 100]^n\) | 0
\(f_7(\mathbf{x}) = \sum_{i=1}^{n} i x_i^4 + \mathrm{rand}(0, 1)\) | \([-1.28, 1.28]^n\) | 0

enable the quantification of the quality of a segmentation result without a priori knowledge (ground truth image).

Evaluation Criteria. In this paper, the unsupervised index ROS proposed by Chabrier et al. [60] has been used to objectively evaluate the performance of each candidate algorithm. This index evaluates the segmentation quality in terms of the homogeneity within segmented regions and the heterogeneity among the different regions. ROS can be computed as follows:

\[
\mathrm{ROS} = \frac{\bar{D} - D}{2}
\tag{24}
\]

where \(\bar{D}\) quantifies the homogeneity within segmented regions. Similarly, D measures the disparity among the regions. A segmentation result S_1 is considered better than S_2 if ROS_{S_1} > ROS_{S_2}. The within-region homogeneity, characterized by \(\bar{D}\), is calculated considering the following formulation:

\[
\bar{D} = \frac{1}{N_R} \sum_{c=1}^{N_R} \frac{R_c}{I} \cdot \frac{\sigma_c}{\left( \sum_{l=1}^{N_R} \sigma_l \right)}
\tag{25}
\]

where N_R represents the number of partitions into which the image has been segmented, and R_c symbolizes the number of



Figure 21: Segmentation results used in the evaluation.

pixels contained in the partition c, whereas I considers the number of pixels that integrate the complete image. Similarly, σ_c represents the standard deviation of the partition c. On the other hand, the disparity among the regions, D, is computed as follows:

\[
D = \frac{1}{N_R} \sum_{c=1}^{N_R} \frac{R_c}{I} \cdot \left[ \frac{1}{(N_R - 1)} \sum_{l=1}^{N_R} \frac{|\mu_c - \mu_l|}{255} \right]
\tag{26}
\]

where μ_c is the average gray level in the partition c.
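The ROS computation of (24)–(26) can be sketched directly from the definitions. The function name is ours, and the roles of the two terms follow the text above: a size-weighted, normalized within-region standard deviation for D̄, and a size-weighted average gray-level disparity for D:

```python
import numpy as np

def ros_index(image, labels):
    """Unsupervised ROS evaluation, eqs. (24)-(26).

    `labels` assigns each pixel a partition index; the grayscale image
    supplies the per-region statistics sigma_c and mu_c.
    """
    image = np.asarray(image, dtype=float)
    labels = np.asarray(labels)
    ids = np.unique(labels)
    n_r, n_pix = len(ids), image.size
    sizes = np.array([np.sum(labels == c) for c in ids], dtype=float)  # R_c
    stds = np.array([image[labels == c].std() for c in ids])           # sigma_c
    means = np.array([image[labels == c].mean() for c in ids])         # mu_c
    # eq. (25): within-region homogeneity term D_bar
    d_bar = np.sum((sizes / n_pix) * (stds / stds.sum())) / n_r
    # eq. (26): between-region disparity term D
    per_region = np.array(
        [np.sum(np.abs(m - means)) / (255.0 * (n_r - 1)) for m in means]
    )
    d = np.sum((sizes / n_pix) * per_region) / n_r
    return (d_bar - d) / 2.0                                           # eq. (24)
```

Note that the sum in (26) includes the l = c term, which contributes zero, so it needs no special handling.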

Experimental Protocol. In the comparison of segmentation results, a set of four classical images has been chosen to integrate the experimental set (Figure 20). The segmentation methods used in the comparison are J + ABC [19], J + AIS [20], and J + DE [21].

Of all the segmentation methods used in the comparison, the proposed J_new + LS algorithm is the only one that has the capacity to automatically detect the number of segmentation partitions (classes). In order to conduct a fair comparison, all algorithms have been tested by using the same number


Table 12: Multimodal test functions.

Test function | S | f_opt

\(f_8(\mathbf{x}) = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{|x_i|}\right)\) | \([-500, 500]^n\) | \(-418.98 \cdot n\)
\(f_9(\mathbf{x}) = \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right]\) | \([-5.12, 5.12]^n\) | 0
\(f_{10}(\mathbf{x}) = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right) + 20 + e\) | \([-32, 32]^n\) | 0
\(f_{11}(\mathbf{x}) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1\) | \([-600, 600]^n\) | 0
\(f_{12}(\mathbf{x}) = \frac{\pi}{n}\left\{ 10\sin^2(\pi y_1) + \sum_{i=1}^{n-1}(y_i - 1)^2\left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)\) | \([-50, 50]^n\) | 0

where \(y_i = 1 + \frac{x_i + 1}{4}\) and

\(u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a \\ 0, & -a < x_i < a \\ k(-x_i - a)^m, & x_i < -a \end{cases}\)

\(f_{13}(\mathbf{x}) = 0.1\left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n-1}(x_i - 1)^2\left[1 + \sin^2(3\pi x_{i+1})\right] + (x_n - 1)^2\left[1 + \sin^2(2\pi x_n)\right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)\) | \([-50, 50]^n\) | 0

of partitions. Therefore, in the experiments, the J_new + LS segmentation algorithm is first applied to detect the best possible number of partitions N_R. Once the number of partitions N_R is obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.

Figure 21 presents the segmentation results obtained by each algorithm considering the experimental set from Figure 20. On the other hand, Table 10 shows the evaluation of the segmentation results in terms of the ROS index. Such values represent the averaged measurements after 30 executions. From them, it can be seen that the proposed J_new + LS method obtains the best ROS indexes. Such values indicate that the proposed algorithm maintains the best balance between the homogeneity within segmented regions and the heterogeneity among the different regions. From Figure 21, it can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts that are produced by an incorrect pixel classification.

8. Conclusions

Despite the fact that several evolutionary methods have been successfully applied to image segmentation with interesting results, most of them have exhibited two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance.

In this paper, a new swarm algorithm for automatic image segmentation, called Locust Search (LS), has been presented. The proposed method eliminates the typical flaws presented by previous evolutionary approaches by combining a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments have been divided into two parts. First, the proposed LS method has been compared to other well-known evolutionary techniques on a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm has been compared to other segmentation methods based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

Several research directions will be considered for future work, such as the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, n is the dimension of the function, f_opt is the minimum value of the function, and S is a subset of R^n. The optimum location (x_opt) for the functions in Table 11 is at [0]^n, except for f_5, with x_opt at [1]^n.

The optimum locations (x_opt) for the functions in Table 12 are at [0]^n, except for f_8, at [420.96]^n, and f_12–f_13, at [1]^n.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgment

The present research has been supported under Grant CONACYT CB 181053.

References

[1] H. Zhang, J. E. Fritts, and S. A. Goldman, "Image segmentation evaluation: a survey of unsupervised methods," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 260–280, 2008.

[2] T. Uemura, G. Koutaki, and K. Uchimura, "Image segmentation based on edge detection using boundary code," International Journal of Innovative Computing, vol. 7, pp. 6073–6083, 2011.

[3] L. Wang, H. Wu, and C. Pan, "Region-based image segmentation with local signed difference energy," Pattern Recognition Letters, vol. 34, no. 6, pp. 637–645, 2013.

[4] N. Otsu, "A thresholding selection method from gray-level histogram," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.

[5] R. Peng and P. K. Varshney, "On performance limits of image segmentation algorithms," Computer Vision and Image Understanding, vol. 132, pp. 24–38, 2015.

[6] M. A. Balafar, "Gaussian mixture model based segmentation methods for brain MRI images," Artificial Intelligence Review, vol. 41, no. 3, pp. 429–439, 2014.

[7] G. J. McLachlan and S. Rathnayake, "On the number of components in a Gaussian mixture model," Data Mining and Knowledge Discovery, vol. 4, no. 5, pp. 341–355, 2014.

[8] D. Oliva, V. Osuna-Enciso, E. Cuevas, G. Pajares, M. Pérez-Cisneros, and D. Zaldívar, "Improving segmentation velocity using an evolutionary method," Expert Systems with Applications, vol. 42, no. 14, pp. 5874–5886, 2015.

[9] Z.-W. Ye, M.-W. Wang, W. Liu, and S.-B. Chen, "Fuzzy entropy based optimal thresholding using bat algorithm," Applied Soft Computing, vol. 31, pp. 381–395, 2015.

[10] S. Sarkar, S. Das, and S. Sinha Chaudhuri, "A multilevel color image thresholding scheme based on minimum cross entropy and differential evolution," Pattern Recognition Letters, vol. 54, pp. 27–35, 2015.

[11] A. K. Bhandari, A. Kumar, and G. K. Singh, "Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur's, Otsu and Tsallis functions," Expert Systems with Applications, vol. 42, no. 3, pp. 1573–1601, 2015.

[12] H. Permuter, J. Francos, and I. Jermyn, "A study of Gaussian mixture models of color and texture features for image classification and segmentation," Pattern Recognition, vol. 39, no. 4, pp. 695–706, 2006.

[13] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1–38, 1977.

[14] Z. Zhang, C. Chen, J. Sun, and K. L. Chan, "EM algorithms for Gaussian mixtures with split-and-merge operation," Pattern Recognition, vol. 36, no. 9, pp. 1973–1983, 2003.

[15] H. Park, S.-I. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755–764, 2000.

[16] H. Park and T. Ozeki, "Singularity and slow convergence of the EM algorithm for Gaussian mixtures," Neural Processing Letters, vol. 29, no. 1, pp. 45–59, 2009.

[17] L. Gupta and T. Sortrakul, "A Gaussian-mixture-based image segmentation algorithm," Pattern Recognition, vol. 31, no. 3, pp. 315–325, 1998.

[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213–1219, 2013.

[19] E. Cuevas, F. Sención, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321–336, 2012.

[20] E. Cuevas, V. Osuna-Enciso, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.

[21] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.

[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357–381, 2014.

[23] D Oliva E Cuevas G Pajares D Zaldivar and M Perez-Cisneros ldquoMultilevel thresholding segmentation based on har-mony search optimizationrdquo Journal of AppliedMathematics vol2013 Article ID 575414 24 pages 2013

[24] E Cuevas D Zaldivar and M Perez-Cisneros ldquoSeeking multi-thresholds for image segmentation with Learning AutomatardquoMachine Vision and Applications vol 22 no 5 pp 805ndash8182011

[25] K C Tan S C Chiam A A Mamun and C K Goh ldquoBal-ancing exploration and exploitation with adaptive variation forevolutionarymulti-objective optimizationrdquo European Journal ofOperational Research vol 197 no 2 pp 701ndash713 2009

[26] G Chen C P Low and Z Yang ldquoPreserving and exploitinggenetic diversity in evolutionary programming algorithmsrdquoIEEE Transactions on Evolutionary Computation vol 13 no 3pp 661ndash673 2009

[27] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 December 1995

[28] R Storn andK Price ldquoDifferential Evolutionmdasha simple and effi-cient adaptive scheme for global optimisation over continuousspacesrdquo Tech Rep TR-95ndash012 ICSI Berkeley Calif USA 1995

[29] Y Wang B Li T Weise J Wang B Yuan and Q TianldquoSelf-adaptive learning based particle swarm optimizationrdquoInformation Sciences vol 181 no 20 pp 4515ndash4538 2011

[30] J Tvrdık ldquoAdaptation in differential evolution a numericalcomparisonrdquo Applied Soft Computing Journal vol 9 no 3 pp1149ndash1155 2009

[31] HWangH Sun C Li S Rahnamayan and J-s Pan ldquoDiversityenhanced particle swarm optimization with neighborhoodsearchrdquo Information Sciences vol 223 pp 119ndash135 2013

[32] W Gong A Fialho Z Cai and H Li ldquoAdaptive strategyselection in differential evolution for numerical optimizationan empirical studyrdquo Information Sciences vol 181 no 24 pp5364ndash5386 2011

[33] D M Gordon ldquoThe organization of work in social insectcoloniesrdquo Complexity vol 8 no 1 pp 43ndash46 2002

[34] S Kizaki and M Katori ldquoStochastic lattice model for locustoutbreakrdquo Physica A Statistical Mechanics and its Applicationsvol 266 no 1-4 pp 339ndash342 1999

Mathematical Problems in Engineering 25

[35] S. M. Rogers, D. A. Cullen, M. L. Anstey et al., "Rapid behavioural gregarization in the desert locust Schistocerca gregaria entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9–26, 2014.
[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal: Special Topics, vol. 157, no. 1, pp. 93–109, 2008.
[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.
[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.
[39] B. Holldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.
[40] B. Holldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.
[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101–112, 2013.
[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196–205, 2012.
[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723–738, 2014.
[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.
[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskal)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.
[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.
[47] E. Cuevas, A. Echavarria, and M. A. Ramirez-Ortegon, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256–272, 2014.
[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635–672, 2005.
[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[50] F. Herrera, M. Lozano, and A. M. Sanchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309–338, 2003.
[51] E. Cuevas, D. Zaldivar, M. Perez-Cisneros, and M. Ramirez-Ortegon, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93–107, 2011.
[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140–157, 2015.
[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.
[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.
[56] S. Garcia, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.
[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.
[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.
[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335–1346, 1996.
[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.



Table 3: Minimization results from the benchmark functions test in Table 2 with n = 30. Maximum number of iterations = 1000.

               PSO            DE             LS
f_8   ABS     −6.7 × 10^3    −1.26 × 10^4   −1.26 × 10^4
      SD       6.3 × 10^2     3.7 × 10^2     1.1 × 10^2
      NFE     38,452         35,240         21,846
f_9   ABS      1.48           4.01 × 10^−1   2.49 × 10^−3
      SD       1.39           5.1 × 10^−2    4.8 × 10^−4
      NFE     37,672         34,576         20,784
f_10  ABS      1.47           4.66 × 10^−2   2.15 × 10^−3
      SD       1.44           1.27 × 10^−2   3.18 × 10^−4
      NFE     39,475         37,080         21,235
f_11  ABS     12.01           1.15           1.47 × 10^−4
      SD       3.12           0.06           1.48 × 10^−5
      NFE     38,542         34,875         22,126
f_12  ABS      6.87 × 10^−1   3.74 × 10^−1   5.58 × 10^−3
      SD       7.07 × 10^−1   1.55 × 10^−1   4.18 × 10^−4
      NFE     35,248         30,540         16,984
f_13  ABS      1.87 × 10^−1   1.81 × 10^−2   1.78 × 10^−2
      SD       5.74 × 10^−1   1.66 × 10^−2   1.64 × 10^−3
      NFE     36,022         31,968         18,802

Table 4: p values produced by Wilcoxon's test, comparing LS versus PSO and DE over the "average best-so-far" values from Table 3.

LS versus    PSO             DE
f_8          1.83 × 10^−4    0.061
f_9          1.17 × 10^−4    2.41 × 10^−4
f_10         1.43 × 10^−4    3.12 × 10^−3
f_11         6.25 × 10^−4    1.14 × 10^−3
f_12         2.34 × 10^−5    7.15 × 10^−4
f_13         4.73 × 10^−4    0.071

LS performed better than PSO and DE in four problems, f_9–f_12, whereas from a statistical viewpoint there is no difference between the results of LS and DE for f_8 and f_13. According to Table 3, the NFE values obtained by the proposed method are smaller than those produced by the other optimizers. The reason for this remarkable performance is associated with its two operators: (i) the solitary operator allows a better particle distribution in the search space, increasing the algorithm's ability to find the global optima, and (ii) the social operation provides a simple exploitation operator that intensifies the capacity of finding better solutions during the evolution process.
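The statistical comparison in Table 4 uses Wilcoxon's test [55] over the paired "average best-so-far" values of 30 independent runs. The following sketch is our own minimal illustration of that procedure (it is not the authors' code, the sample data are invented, and it uses the normal approximation to the signed-rank statistic rather than exact tables):

```python
# Illustrative Wilcoxon signed-rank test (normal approximation).
import math

def wilcoxon_signed_rank(a, b):
    """Return (T, p) for paired samples a, b; two-sided normal approximation."""
    diffs = [x - y for x, y in zip(a, b) if x != y]   # drop zero differences
    n = len(diffs)
    if n == 0:
        return 0, 1.0
    # rank the absolute differences, averaging ranks over ties
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2.0 + 1.0             # average of 1-based ranks i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_pos = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_neg = sum(r for r, d in zip(ranks, diffs) if d < 0)
    t = min(w_pos, w_neg)
    mean = n * (n + 1) / 4.0
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (t - mean) / sd                        # z <= 0 since T = min(W+, W-)
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    return t, min(1.0, 2.0 * phi)

# 30 made-up paired runs: optimizer A is consistently worse than optimizer B,
# so the null hypothesis of equal performance should be rejected (small p).
runs_a = [1.0 + 0.01 * i for i in range(30)]
runs_b = [0.5 + 0.01 * i for i in range(30)]
t, p = wilcoxon_signed_rank(runs_a, runs_b)
```

A p value below 0.05, as in most cells of Table 4, indicates a statistically significant difference between the two optimizers.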

5 Gaussian Mixture Modelling

In this section, the modeling of image histograms through Gaussian mixture models is presented. Consider an image holding L gray levels [0, L − 1] whose distribution is defined by a histogram h(g), represented by the following formulation:

$$ h(g) = \frac{n_g}{N_p},\quad h(g) > 0, \qquad N_p = \sum_{g=0}^{L-1} n_g, \qquad \sum_{g=0}^{L-1} h(g) = 1, \quad (15) $$

where n_g denotes the number of pixels with gray level g and N_p the total number of pixels in the image. Under such circumstances, h(g) can be modeled by using a mixture p(x) of Gaussian functions of the form

$$ p(x) = \sum_{i=1}^{K} \frac{P_i}{\sqrt{2\pi}\,\sigma_i} \exp\!\left[ -\frac{(x-\mu_i)^2}{2\sigma_i^2} \right], \quad (16) $$

where K symbolizes the number of Gaussian functions of the model, whereas P_i is the a priori probability of function i; μ_i and σ_i represent the mean and standard deviation of the ith Gaussian function, respectively. Furthermore, the constraint Σ_{i=1}^{K} P_i = 1 must be satisfied by the model. In order to evaluate the similarity between the image histogram and a candidate mixture model, the mean square error can be used as follows:

$$ J = \frac{1}{n} \sum_{j=1}^{L} \left[ p(x_j) - h(x_j) \right]^2 + \omega \cdot \left| \left( \sum_{i=1}^{K} P_i \right) - 1 \right|, \quad (17) $$

where ω represents the penalty associated with the constraint Σ_{i=1}^{K} P_i = 1. Therefore, J is considered as the objective function which must be minimized in the estimation problem. In order to illustrate the histogram modeling through a Gaussian mixture, Figure 6 presents an example assuming three classes, that is, K = 3. Considering Figure 6(a) as the image histogram h(x), the Gaussian mixture p(x) shown in Figure 6(c) is produced by adding the Gaussian functions p_1(x), p_2(x), and p_3(x) in the configuration presented in Figure 6(b). Once the model parameters that best describe the image histogram have been determined, the final step is to define the threshold values T_i (i ∈ [1, K]), which can be calculated by simple standard methods, just as presented in [19–21].
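Equations (15)–(17) can be sketched in a few lines. The following Python is our own illustration, not the authors' implementation: it builds the normalized histogram h(g), evaluates the mixture p(x), and computes J, taking n = L in (17) and using an invented two-valued toy image:

```python
# Sketch of equations (15)-(17): histogram, Gaussian mixture, objective J.
import math

L = 256        # number of gray levels
OMEGA = 1.0    # penalty weight for the constraint sum(P_i) = 1 (assumed value)

def histogram(pixels):
    """Equation (15): h(g) = n_g / N_p, so that the h(g) values sum to 1."""
    counts = [0] * L
    for g in pixels:
        counts[g] += 1
    return [c / len(pixels) for c in counts]

def mixture(x, params):
    """Equation (16); params is a list of (P_i, mu_i, sigma_i) triplets."""
    return sum(p / (math.sqrt(2 * math.pi) * s)
               * math.exp(-(x - m) ** 2 / (2 * s ** 2))
               for p, m, s in params)

def objective_j(hist, params):
    """Equation (17): MSE between p(.) and h(.) plus the constraint penalty."""
    mse = sum((mixture(g, params) - hist[g]) ** 2 for g in range(L)) / L
    penalty = abs(sum(p for p, _, _ in params) - 1.0)
    return mse + OMEGA * penalty

# Toy image with two flat regions -> two histogram peaks (illustrative only).
pixels = [60] * 500 + [180] * 500
h = histogram(pixels)
good = [(0.5, 60.0, 3.0), (0.5, 180.0, 3.0)]   # components centered on the peaks
bad = [(0.5, 20.0, 3.0), (0.5, 230.0, 3.0)]    # components with misplaced means
j_good = objective_j(h, good)
j_bad = objective_j(h, bad)
```

As expected, a mixture aligned with the histogram peaks obtains a lower (better) J value than a misplaced one.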

6 Segmentation Algorithm Based on LS

In the proposed method, the segmentation process is approached as an optimization problem. Computational optimization generally involves two distinct elements: (1) a search strategy to produce candidate solutions (individuals, particles, insects, locusts, etc.) and (2) an objective function that evaluates the quality of each candidate solution. Several computational algorithms are available to perform the first element. The second element, where the objective function is designed, is unquestionably the most critical. The objective function is expected to embody the full complexity of the performance, biases, and restrictions of the problem to be solved. In the segmentation problem, candidate solutions represent Gaussian mixtures. The objective function J (17) is used as a fitness value to evaluate the similarity between a Gaussian mixture and the image histogram. Therefore, guided by the fitness values (J values), a set of encoded candidate solutions are evolved using the evolutionary operators until the best possible resemblance is found.

Figure 6: Histogram approximation through a Gaussian mixture. (a) Original histogram; (b) configuration of the Gaussian components p_1(x), p_2(x), and p_3(x); (c) final Gaussian mixture p(x).

Over the last decade, several algorithms based on evolutionary and swarm principles [19–22] have been proposed to solve the problem of segmentation through a Gaussian mixture model. Although these algorithms achieve acceptable performance indexes, they present two important limitations: (1) they frequently obtain suboptimal approximations as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) they are based on the assumption that the number of Gaussians (classes) in the mixture is known and fixed in advance; otherwise, they cannot work.

In order to eliminate such flaws, the proposed approach includes (A) a new search strategy and (B) the definition of a new objective function. For the search strategy, the LS method (Section 4) is adopted. Under LS, the concentration of individuals around the current best positions is explicitly avoided. This reduces critical problems such as premature convergence to suboptimal solutions and an incorrect exploration-exploitation balance.

6.1. New Objective Function J_new. Previous segmentation algorithms based on evolutionary and swarm methods use (17) as the objective function. Under these circumstances, each candidate solution (Gaussian mixture) is evaluated only in terms of its approximation to the image histogram.

Since the proposed approach aims to automatically select the number of Gaussian functions K in the final mixture p(x), the objective function must be modified. The new objective function J_new is defined as follows:

$$ J_{\mathrm{new}} = J + \lambda \cdot Q, \quad (18) $$

where λ is a scaling constant. The new objective function is divided into two parts. The first part, J, evaluates the quality of each candidate solution in terms of its similarity with regard to the image histogram (17). The second part, Q, penalizes the overlapped area among Gaussian functions (classes), with Q being defined as follows:

$$ Q = \sum_{i=1}^{K} \sum_{\substack{j=1 \\ j \neq i}}^{K} \sum_{l=1}^{L} \min\left( P_i \cdot p_i(l),\; P_j \cdot p_j(l) \right), \quad (19) $$

where K and L represent the number of classes and the number of gray levels, respectively. The terms p_i(l) and p_j(l) symbolize the Gaussian functions i and j, respectively, evaluated at the point l (gray level), whereas the elements P_i and P_j represent the a priori probabilities of the Gaussian functions i and j, respectively. Under such circumstances, mixtures with Gaussian functions that do not "positively" participate in the histogram approximation are severely penalized.
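The overlap penalty of (19) can be sketched directly; the code below is our own illustration (not the paper's implementation), reusing the (P_i, μ_i, σ_i) triplet representation and summing min(P_i·p_i(l), P_j·p_j(l)) over both orderings of each pair, exactly as (19) is written:

```python
# Sketch of equation (19): pairwise overlap penalty Q between mixture classes.
import math

L = 256

def gaussian(x, mu, sigma):
    """Normalized Gaussian density evaluated at x."""
    return (math.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
            / (math.sqrt(2 * math.pi) * sigma))

def overlap_q(params):
    """Equation (19): sum over i != j and gray levels l of the overlap."""
    q = 0.0
    for i, (pi, mi, si) in enumerate(params):
        for j, (pj, mj, sj) in enumerate(params):
            if i == j:
                continue
            for l in range(L):
                q += min(pi * gaussian(l, mi, si), pj * gaussian(l, mj, sj))
    return q

# Two well-separated classes should overlap far less than two nearly
# coincident ones; the parameter values are invented for the example.
separated = [(0.5, 60.0, 5.0), (0.5, 180.0, 5.0)]
stacked = [(0.5, 100.0, 5.0), (0.5, 105.0, 5.0)]
q_sep = overlap_q(separated)
q_stack = overlap_q(stacked)
```

Per (18), J_new = J + λ·Q, so the stacked configuration receives the larger penalty and is thus rated as a worse candidate solution.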

Figure 7 illustrates the effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). From the image histogram (Figure 7(a)), it is evident that two Gaussian functions are enough to accurately approximate the original histogram. However, if the Gaussian mixture is modeled by using a greater number of functions (e.g., four, as shown in Figure 7(b)), the original objective function J is unable to obtain a reasonable approximation. Under the new objective function J_new, the overlapped area among Gaussian functions (classes) is penalized. In Figure 7(c), such areas correspond to Q_{1,2}, Q_{2,3}, and Q_{3,4}, where Q_{1,2} represents the penalization value produced between the Gaussian functions p_1(x) and p_2(x). Therefore, due to the penalization, the Gaussian mixture shown in Figures 7(b) and 7(c) provides a solution of low quality. On the other hand, the Gaussian mixture presented in Figure 7(d) maintains a low penalty; thus, it represents a solution of high quality. From Figure 7(d), it is easy to see that the functions p_1(x) and p_4(x) can be removed from the final mixture. This elimination could be performed by a simple comparison with a threshold value θ, since p_1(x) and p_4(x) have a reduced amplitude (p_1(x) ≈ p_4(x) ≈ 0). Therefore, under J_new, it is possible to find the reduced Gaussian mixture model starting from a considerable number of functions.

Figure 7: Effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). (a) Original histogram; (b) Gaussian mixture considering four classes; (c) penalization areas; (d) Gaussian mixture of better quality solution.

Since the proposed segmentation method is conceived as an optimization problem, the overall operation can be reduced to solving the formulation of (20) by using the LS algorithm:

$$
\begin{aligned}
\text{minimize} \quad & J_{\mathrm{new}}(\mathbf{x}) = J(\mathbf{x}) + \lambda \cdot Q(\mathbf{x}), \\
& \mathbf{x} = (P_1, \mu_1, \sigma_1, \ldots, P_K, \mu_K, \sigma_K) \in \mathbb{R}^{3K}, \\
\text{subject to} \quad & 0 \le P_d \le 1, \quad d \in (1, \ldots, K), \\
& 0 \le \mu_d \le 255, \\
& 0 \le \sigma_d \le 25,
\end{aligned} \quad (20)
$$

where P_d, μ_d, and σ_d represent the probability, mean, and standard deviation of the class d. It is important to remark that the new objective function J_new allows the evaluation of a candidate solution in terms of the necessary number of Gaussian functions and its approximation quality. Under such circumstances, it can be used in combination with any other evolutionary method and not only with the proposed LS algorithm.
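The encoding in (20) packs a candidate solution as a flat vector of 3K components. As a hedged sketch (function names and the clipping strategy for out-of-range components are ours, not necessarily the authors' constraint-handling choice):

```python
# Sketch of the solution encoding in (20): x in R^{3K} holds K triplets
# (P_d, mu_d, sigma_d), clipped back into the stated bounds.

K = 4   # illustrative number of classes

def clip(v, lo, hi):
    return max(lo, min(hi, v))

def decode(x):
    """Split a flat vector into K (P, mu, sigma) triplets, enforcing
    0 <= P <= 1, 0 <= mu <= 255, 0 <= sigma <= 25 per (20)."""
    assert len(x) == 3 * K
    params = []
    for d in range(K):
        p, mu, sigma = x[3 * d: 3 * d + 3]
        params.append((clip(p, 0.0, 1.0),
                       clip(mu, 0.0, 255.0),
                       clip(sigma, 0.0, 25.0)))
    return params

# Example vector with two deliberately out-of-range components.
x = [0.3, 40.0, 8.0,
     0.2, 100.0, 10.0,
     0.6, 300.0, 30.0,    # mu and sigma exceed their bounds
     -0.1, 220.0, 15.0]   # P below its bound
params = decode(x)
```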

6.2. Complete Segmentation Algorithm. Once the new search strategy (LS) and objective function (J_new) have been defined, the proposed segmentation algorithm can be summarized by Algorithm 2. The new algorithm combines operators defined by LS and operations for calculating the threshold values.

(Line 1) The algorithm sets the operative parameters F, L, g, gen, N, K, λ, and θ; they rule the behavior of the segmentation algorithm. (Line 2) Afterwards, the population L^0 is initialized considering N different random Gaussian mixtures of K functions. The idea is to generate N random Gaussian mixtures subjected to the constraints formulated in (20). The parameter K must hold a high value in order to correctly segment all images (recall that the algorithm is able to reduce the Gaussian mixture to its minimum expression). (Line 3) Then the Gaussian mixtures are evolved by using the LS operators and the new objective function J_new; this process is repeated during gen cycles. (Line 8) After this procedure, the best Gaussian mixture l^gen_best is selected according to its objective function J_new. (Line 9) Then the Gaussian mixture l^gen_best is reduced by eliminating those functions whose amplitudes are lower than θ (p_i(x) ≤ θ). (Line 10) Then the threshold values T_i are calculated from the reduced model. (Line 11) Finally, the calculated T_i values are employed to segment the image. Figure 8 shows a flowchart of the complete segmentation procedure.

(1) Input: F, L, g, gen, N, K, λ, and θ
(2) Initialize L^0 (k = 0)
(3) until (k = gen)
(4)     F ← SolitaryOperation(L^k)            ▷ Solitary operator (Section 3.1)
(5)     L^(k+1) ← SocialOperation(L^k, F)     ▷ Social operator (Section 3.2)
(6)     k = k + 1
(7) end until
(8) Obtain l^gen_best
(9) Reduce l^gen_best
(10) Calculate the threshold values T_i from the reduced model
(11) Use T_i to segment the image

Algorithm 2: Segmentation LS algorithm.
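The post-processing steps of Algorithm 2 (the reduction of line 9 and the threshold calculation of line 10) can be sketched as follows. This is our own illustration: the amplitude of class i is taken as the peak value P_i/(√(2π)·σ_i), and, since the paper defers the threshold computation to "standard methods" [19–21], the midpoint rule below is a simplification of ours rather than the authors' exact choice:

```python
# Hedged sketch of lines (9)-(10) of Algorithm 2: mixture reduction and
# threshold placement between surviving class means.
import math

THETA = 0.0001   # amplitude threshold theta (value taken from Table 5)

def reduce_mixture(params, theta=THETA):
    """Keep only the classes whose peak amplitude exceeds theta."""
    return [(p, mu, s) for p, mu, s in params
            if p / (math.sqrt(2 * math.pi) * s) > theta]

def thresholds(params):
    """Midpoints between consecutive class means (simplified rule)."""
    mus = sorted(mu for _, mu, _ in params)
    return [(a + b) / 2.0 for a, b in zip(mus, mus[1:])]

# Invented 4-class mixture; the second class has a negligible amplitude
# and should therefore be eliminated by the reduction step.
mix = [(0.30, 40.0, 8.0), (0.00001, 90.0, 10.0),
       (0.40, 160.0, 8.0), (0.29, 220.0, 15.0)]
reduced = reduce_mixture(mix)
ts = thresholds(reduced)
```

With three surviving classes, two thresholds are produced, which is exactly the number needed to label each pixel with one of the three classes (line 11).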

The proposed segmentation algorithm permits the automatic detection of the number of segmentation partitions (classes). Furthermore, due to its remarkable search capacities, the LS method maintains a better accuracy than previous algorithms based on evolutionary principles. However, the proposed method presents two disadvantages. The first is related to its implementation, which in general is more complex than that of most other segmentation methods based on evolutionary principles. The second refers to the segmentation procedure, which does not consider any spatial pixel characteristics. As a consequence, pixels that should belong to a determined region because of their position may be labeled as part of another region because of their gray level intensity. Such a fact adversely affects the segmentation performance of the method.

7 Segmentation Results

This section analyzes the performance of the proposed segmentation algorithm. The discussion is divided into three parts: the first shows the performance of the proposed LS segmentation algorithm, while the second presents a comparison between the proposed approach and other segmentation algorithms based on evolutionary and swarm methods. The comparison mainly considers the capacities of each algorithm to accurately and robustly approximate the image histogram. Finally, the third part presents an objective evaluation of the segmentation results produced by all algorithms employed in the comparisons.

7.1. Performance of LS Algorithm in Image Segmentation. This section presents two experiments that analyze the performance of the proposed approach. Table 5 presents the algorithm's parameters, which have been experimentally determined and kept for all the test images through all experiments.

First Image. The first test considers the histogram shown by Figure 9(b), while Figure 9(a) presents the original image. After applying the proposed algorithm, as configured according to the parameters in Table 5, a minimum model of four classes emerges after the start from Gaussian mixtures of 10 classes. Considering 30 independent executions, the averaged parameters of the resultant Gaussian mixture are presented in Table 6. One final Gaussian mixture (ten classes), which has been obtained by LS, is presented in Figure 10. Furthermore, the approximation of the reduced Gaussian mixture is also visually compared with the original histogram in Figure 10. On the other hand, Figure 11 presents the segmented image after calculating the threshold points.

Table 5: Parameter setup for the LS segmentation algorithm.

F     L     g    gen    N    K    λ      θ
0.6   0.2   20   1000   40   10   0.01   0.0001

Table 6: Results of the reduced Gaussian mixture for the first and the second image.

Parameters   First image   Second image
P_1          0.004         0.032
μ_1          18.1          12.1
σ_1          8.2           2.9
P_2          0.0035        0.0015
μ_2          69.9          45.1
σ_2          18.4          24.9
P_3          0.01          0.006
μ_3          94.9          130.1
σ_3          8.9           17.9
P_4          0.007         0.02
μ_4          163.1         167.2
σ_4          29.9          10.1

Second Image. For the second experiment, the image in Figure 12 is tested. The method aims to segment the image by using a reduced Gaussian mixture model obtained by the LS approach. After executing the algorithm according to the parameters from Table 5, the resulting averaged parameters of the Gaussian mixture are presented in Table 6. In order to assure consistency, the results are averaged considering 30 independent executions. Figure 13 shows the approximation quality that is obtained by the reduced Gaussian mixture model in (a) and the segmented image in (b).

Figure 8: Flowchart of the complete segmentation procedure.

Figure 9: (a) Original first image used in the first experiment and (b) its histogram.

Figure 10: Gaussian mixture obtained by LS for the first image, comparing the histogram, classes 1–4, the reduced Gaussian mixture, and the eliminated classes.

Figure 11 Image segmented with the reduced Gaussian mixture

Figure 12: (a) Original second image used in the first experiment and (b) its histogram.

7.2. Histogram Approximation Comparisons. This section discusses the comparison between the proposed segmentation algorithm and other evolutionary segmentation methods that have been proposed in the literature. The set of methods used in the experiments includes J + ABC [19], J + AIS [20], and J + DE [21]. These algorithms consider the combination between the original objective function J (17) and an evolutionary technique, namely, the Artificial Bee Colony (ABC), the Artificial Immune Systems (AIS), and Differential Evolution (DE) [21], respectively. Since the proposed segmentation approach considers the combination of the new objective function J_new (18) and the LS algorithm, it will be referred to as J_new + LS. The comparison focuses mainly on the capacities of each algorithm to accurately and robustly approximate the image histogram.

In the experiments, the populations have been set to 40 (N = 40) individuals. The maximum iteration number for all functions has been set to 1000. Such a stop criterion has been considered to maintain compatibility with similar experiments

Figure 13: (a) Segmentation result obtained by LS for the second image and (b) the segmented image.

Figure 14: (a) Synthetic image used in the comparison and (b) its histogram.

that are reported in the literature [18]. The parameter setting for each of the segmentation algorithms in the comparison is described as follows:

(1) J + ABC [19]: the parameters are configured as follows: the abandonment limit = 100, α = 0.05, and limit = 30.

(2) J + AIS [20]: it presents the following parameters: h = 120, N_c = 80, ρ = 10, P_r = 20, L = 22, and T_e = 0.01.

(3) J + DE [21]: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [21]. The crossover probability is CR = 0.9 and the weighting factor is F = 0.8.

(4) In J_new + LS, the method is set according to the values described in Table 5.

In order to conduct the experiments, a synthetic image is designed to be used as a reference in the comparisons. The main idea is to know in advance the exact number of classes (and their parameters) that are contained in the image, so that the histogram can be considered as a ground truth.

Table 7: Employed parameters for the design of the reference image.

    i     P_i     μ_i    σ_i
    (1)   0.05    40     8
    (2)   0.04    100    10
    (3)   0.05    160    8
    (4)   0.027   220    15

The synthetic image is divided into four sections. Each section corresponds to a different class, which is produced by setting each gray level pixel P_vi to a value determined by the following model:

    P_vi = e^(−(x − μ_i)² / (2σ_i²)),    (21)

where i represents the section, whereas μ_i and σ_i are the mean and the dispersion of the gray level pixels, respectively. The comparative study employs the 512 × 512 image shown in Figure 14(a) and the algorithm parameters presented in Table 7. Figure 14(b) illustrates the resultant histogram.
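The construction of the reference image can be sketched as follows. This is an illustrative reconstruction, not the authors' code; it assumes that each section's gray levels are drawn around (μ_i, σ_i) so that the resulting histogram reproduces the Gaussian classes of Table 7:

```python
import numpy as np

# Ground-truth parameters from Table 7: (P_i, mu_i, sigma_i)
params = [(0.05, 40, 8), (0.04, 100, 10), (0.05, 160, 8), (0.027, 220, 15)]

def synthetic_image(size=512, params=params, seed=0):
    """Build a 512x512 test image divided into four horizontal sections,
    each with gray levels concentrated around (mu_i, sigma_i)."""
    rng = np.random.default_rng(seed)
    rows = size // len(params)
    sections = []
    for _, mu, sigma in params:
        block = rng.normal(mu, sigma, size=(rows, size))
        sections.append(np.clip(block, 0, 255))
    return np.vstack(sections).astype(np.uint8)

img = synthetic_image()
# Normalized histogram (bin width 1, so the densities sum to 1)
hist, _ = np.histogram(img, bins=256, range=(0, 256), density=True)
```

With this construction the exact class parameters are known beforehand, so any estimated mixture can be checked against them.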



Figure 15: Convergence results. (a) Convergence of the following methods: J + ABC, J + AIS, and J + DE, considering Gaussian mixtures of 8 classes. (b) Convergence of the proposed method (reduced Gaussian mixture).


Figure 16: Segmentation results obtained by (a) several methods including J + ABC, J + AIS, and J + DE, considering Gaussian mixtures of 8 classes, and (b) the proposed method (reduced Gaussian mixture).

In the comparison, the discussion focuses on the following issues: first of all, accuracy; second, convergence; and third, computational cost.

Convergence. This section analyzes the approximation convergence when the number of classes used by the algorithm during the evolution process is different from the actual number of classes in the image. Recall that the proposed approach automatically finds the reduced Gaussian mixture that better adapts to the image histogram.

In the experiment, the methods J + ABC, J + AIS, and J + DE are executed considering Gaussian mixtures composed of 8 functions. Under such circumstances, the number of classes to be detected is higher than the actual number of classes in the image. On the other hand, the proposed algorithm maintains the same configuration of Table 5. Therefore, it can detect and calculate up to ten classes (K = 10).

As a result, the techniques J + ABC, J + AIS, and J + DE tend to overestimate the image histogram. This effect can be seen in Figure 15(a), where the resulting Gaussian functions are concentrated within the actual classes. Such behavior is a consequence of the evaluation considered by the original objective function J, which privileges only the approximation between the Gaussian mixture and the image histogram. This effect is graphically illustrated by Figure 16(a), which shows the pixel misclassification produced by the wrong segmentation of Figure 14(a). On the other hand, the proposed approach obtains a reduced Gaussian mixture model, which allows the detection of each class from



Figure 17: Approximation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

the actual histogram (see Figure 15(b)). As a consequence, the segmentation is significantly improved by eliminating the pixel misclassification, as shown in Figure 16(b).

It is evident from Figure 15 that the techniques J + ABC, J + AIS, and J + DE all need a priori knowledge of the number of classes contained in the actual histogram in order to obtain a satisfactory result. On the other hand, the proposed algorithm is able to find a reduced Gaussian mixture whose classes coincide with the actual number of classes contained in the image histogram.
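The class-elimination idea behind the reduced mixture can be illustrated with a small sketch. This is not the authors' LS operator; it simply assumes that components whose prior weight P_i falls below a small threshold (an arbitrary value here) are discarded and the surviving weights renormalized:

```python
import numpy as np

def prune_mixture(P, mu, sigma, threshold=0.01):
    """Discard Gaussian components with negligible prior weight P_i and
    renormalize -- a simplified stand-in for the class-elimination step
    that yields a reduced Gaussian mixture."""
    P, mu, sigma = map(np.asarray, (P, mu, sigma))
    keep = P >= threshold
    P = P[keep] / P[keep].sum()
    return P, mu[keep], sigma[keep]

# A 6-component candidate in which two components carry almost no weight
P, mu, sigma = prune_mixture(
    [0.30, 0.002, 0.25, 0.30, 0.005, 0.143],
    [40, 60, 100, 160, 200, 220],
    [8, 4, 10, 8, 5, 15])
```

After pruning, only the four components that actually explain the histogram remain, mirroring the "eliminated classes" of Figure 13.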

Accuracy. In this section, the comparison among the algorithms in terms of accuracy is reported. Most of the reported comparisons [19–26] are concerned with comparing the parameters of the resultant Gaussian mixtures by using real images. Under such circumstances, it is difficult to consider a clear reference in order to define a meaningful error. Therefore, the image defined in Figure 14 has been used in the experiments, because its construction parameters are clearly defined in Table 7.

Since the parameter values from Table 7 act as ground truth, a simple averaged difference between them and the values computed by each algorithm could be used as a comparison error. However, as each parameter maintains different active intervals, it is necessary to express the differences in terms of percentage. Therefore, if Δβ is the parametric difference and β the ground truth parameter, the percentage error Δβ% can be defined as follows:

    Δβ% = (Δβ / β) · 100.    (22)

In the segmentation problem, each Gaussian mixture represents a K-dimensional model, where each dimension corresponds to a Gaussian function of the optimization problem to be solved. Since each Gaussian function possesses three parameters P_i, μ_i, and σ_i, the complete number of parameters is 3 · K. Therefore, the final error E produced by the final Gaussian mixture is

    E = (1 / (K · 3)) Σ_{v=1}^{K·3} Δβ_v,    (23)

where β_v ∈ (P_i, μ_i, σ_i).

In order to compare accuracy, the algorithms J + ABC, J + AIS, J + DE, and the proposed approach are all executed over the image shown in Figure 14(a). The experiment aims



Figure 18: Segmentation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

to estimate the Gaussian mixture that better approximates the actual image histogram. The methods J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, although the algorithm finds a reduced Gaussian mixture of four functions, it is initially set with ten functions (K = 10). Table 8 presents the final Gaussian mixture parameters and the final error E. The final Gaussian mixture parameters have been averaged over 30 independent executions in order to assure consistency. A close inspection of Table 8 reveals that the proposed method is able to achieve the smallest error (E) in comparison to the other algorithms. Figure 17 presents the histogram approximations produced by each algorithm, whereas Figure 18 shows the corresponding segmented images. Both illustrations present the median case obtained throughout 30 runs. Figure 18 exhibits that J + ABC, J + AIS, and J + DE present different levels of misclassification, which are nearly absent in the case of the proposed approach.
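The error metric of (22) and (23) can be sketched as follows. The pairing of estimated components with ground-truth components by index is an assumption, and the estimate values below are purely hypothetical:

```python
import numpy as np

def mixture_error(estimated, ground_truth):
    """Final error E of (23): the percentage errors of (22), averaged over
    the 3*K parameters (P_i, mu_i, sigma_i) of the mixture."""
    est = np.asarray(estimated, dtype=float)   # shape (K, 3)
    gt = np.asarray(ground_truth, dtype=float)
    pct = np.abs(est - gt) / gt * 100.0        # Delta-beta in percent, per parameter
    return float(np.mean(pct))

# Ground truth from Table 7 against a purely hypothetical estimate
gt = [(0.05, 40, 8), (0.04, 100, 10), (0.05, 160, 8), (0.027, 220, 15)]
est = [(0.049, 40.1, 7.5), (0.041, 102.0, 10.4),
       (0.052, 168.7, 8.3), (0.025, 219.2, 15.2)]
E = mixture_error(est, gt)
```

Expressing every difference as a percentage makes the three parameter families comparable despite their different active intervals.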

Computational Cost. The experiment aims to measure the complexity and the computing time spent by the J + ABC,

Table 8: Results of the reduced Gaussian mixture in terms of accuracy.

    Algorithm     Gaussian function   P_i     μ_i      σ_i     E
    J + ABC       (1)                 0.052   44.5     6.4     11.79
                  (2)                 0.084   98.12    12.87
                  (3)                 0.058   163.50   8.94
                  (4)                 0.025   218.84   17.5
    J + AIS       (1)                 0.072   31.01    6.14    22.01
                  (2)                 0.054   88.52    12.21
                  (3)                 0.039   149.21   9.14
                  (4)                 0.034   248.41   13.84
    J + DE        (1)                 0.041   35.74    7.86    13.57
                  (2)                 0.036   90.57    11.97
                  (3)                 0.059   148.47   9.01
                  (4)                 0.020   201.34   13.02
    J_new + LS    (1)                 0.049   40.12    7.5     3.98
                  (2)                 0.041   102.04   10.4
                  (3)                 0.052   168.66   8.3
                  (4)                 0.025   110.92   15.2



Figure 19: Images employed in the computational cost analysis.

the J + AIS, the J + DE, and the J_new + LS algorithms while calculating the parameters of the Gaussian mixture in benchmark images (see Figures 19(a)–19(d)). J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). The J_new + LS method finds a reduced Gaussian mixture of four functions despite being initialized with ten functions (K = 10). Table 9 shows the averaged measurements after 30 experiments. It is evident that J + ABC and J + DE are the slowest to converge (iterations), whereas J + AIS shows the highest computational cost (time elapsed) because it requires operators that demand long execution times. On the other hand, J_new + LS shows an acceptable trade-off between its convergence time and its computational cost. Therefore, although the implementation of J_new + LS in general requires more code than most other evolution-based segmentators, such a fact is not reflected in the execution time. Finally, Figure 19 below shows the segmented images as they are



Figure 20: Experimental set used in the evaluation of the segmentation results.

Table 9: Iterations and time requirements of the J + ABC, the J + AIS, the J + DE, and the J_new + LS algorithms as they are applied to segment the benchmark images (see Figure 19).

    Algorithm     Iterations (a, b, c, d)    Time elapsed (a, b, c, d)
    J + ABC       855, 833, 870, 997         2.72 s, 2.70 s, 2.73 s, 3.1 s
    J + AIS       725, 704, 754, 812         1.78 s, 1.61 s, 1.41 s, 2.01 s
    J + DE        657, 627, 694, 742         1.25 s, 1.12 s, 1.45 s, 1.88 s
    J_new + LS    314, 298, 307, 402         0.98 s, 0.84 s, 0.72 s, 1.02 s

Table 10: Evaluation of the segmentation results in terms of the ROS index.

    Image (number of classes)    (a) N_R = 4    (b) N_R = 3    (c) N_R = 4    (d) N_R = 4
    J + ABC                      0.534          0.328          0.411          0.457
    J + AIS                      0.522          0.321          0.427          0.437
    J + DE                       0.512          0.312          0.408          0.418
    J_new + LS                   0.674          0.401          0.514          0.527

generated by each algorithm. It can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by incorrect pixel classification.

7.3. Performance Evaluation of the Segmentation Results. This section presents an objective evaluation of the segmentation results produced by all algorithms in the comparisons. The ill-defined nature of the segmentation problem makes the evaluation of a candidate algorithm difficult [57]. Traditionally, the evaluation has been conducted by using supervised criteria [58], which are based on the computation of a dissimilarity measure between a segmentation result and a ground truth image. Recently, unsupervised measures have substituted supervised indexes for the objective evaluation of segmentation results [59].

Table 11: Unimodal test functions.

    Test function                                                   S                  f_opt
    f1(x) = Σ_{i=1}^{n} x_i²                                        [−100, 100]^n      0
    f2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|                   [−10, 10]^n        0
    f3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)²                          [−100, 100]^n      0
    f4(x) = max_i {|x_i|, 1 ≤ i ≤ n}                                [−100, 100]^n      0
    f5(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²]       [−30, 30]^n        0
    f6(x) = Σ_{i=1}^{n} (x_i + 0.5)²                                [−100, 100]^n      0
    f7(x) = Σ_{i=1}^{n} i·x_i⁴ + rand(0, 1)                         [−1.28, 1.28]^n    0
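A few of the unimodal benchmarks of Table 11 can be written down directly; a minimal NumPy sketch (function names follow the table):

```python
import numpy as np

def f1(x):
    """Sphere function: sum of squared coordinates."""
    return float(np.sum(x**2))

def f4(x):
    """Maximum absolute coordinate."""
    return float(np.max(np.abs(x)))

def f5(x):
    """Rosenbrock function over consecutive coordinate pairs."""
    return float(np.sum(100*(x[1:] - x[:-1]**2)**2 + (x[:-1] - 1)**2))
```

As stated in the Appendix, these functions attain their minimum 0 at the origin, except f5, whose minimum lies at [1]^n.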

These measures enable the quantification of the quality of a segmentation result without a priori knowledge (a ground truth image).

Evaluation Criteria. In this paper, the unsupervised index ROS proposed by Chabrier et al. [60] has been used to objectively evaluate the performance of each candidate algorithm. This index evaluates the segmentation quality in terms of the homogeneity within segmented regions and the heterogeneity among the different regions. ROS can be computed as follows:

    ROS = (D̄ − D) / 2,    (24)

where D quantifies the homogeneity within segmented regions and D̄ measures the disparity among the regions. A segmentation result S1 is considered better than S2 if ROS_S1 > ROS_S2. The intraregion homogeneity characterized by D is calculated considering the following formulation:

    D = (1 / N_R) Σ_{c=1}^{N_R} (R_c / I) · (σ_c / Σ_{l=1}^{N_R} σ_l),    (25)

where N_R represents the number of partitions in which the image has been segmented, R_c symbolizes the number of



Figure 21: Segmentation results used in the evaluation.

pixels contained in the partition c, whereas I considers the number of pixels that integrate the complete image. Similarly, σ_c represents the standard deviation of the partition c. On the other hand, the disparity among the regions, D̄, is computed as follows:

    D̄ = (1 / N_R) Σ_{c=1}^{N_R} (R_c / I) · [ (1 / (N_R − 1)) Σ_{l=1}^{N_R} |μ_c − μ_l| / 255 ],    (26)

where μ_c is the average gray level in the partition c.
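The ROS computation of (24)–(26) can be sketched from a gray image and its label map; a minimal NumPy rendering (the function name and interface are illustrative):

```python
import numpy as np

def ros_index(image, labels):
    """ROS of (24): intraregion homogeneity D (25) against
    interregion disparity D-bar (26), from an image and a label map."""
    image = np.asarray(image, dtype=float)
    classes = np.unique(labels)
    NR, I = len(classes), image.size
    sizes = np.array([np.sum(labels == c) for c in classes])   # R_c
    stds = np.array([image[labels == c].std() for c in classes])
    means = np.array([image[labels == c].mean() for c in classes])
    # D (25): size-weighted, normalized within-region standard deviations
    D = np.sum(sizes / I * stds / np.sum(stds)) / NR
    # D-bar (26): size-weighted average gray-level disparity between regions
    disp = np.array([np.sum(np.abs(m - means)) / (NR - 1) / 255 for m in means])
    Dbar = np.sum(sizes / I * disp) / NR
    return (Dbar - D) / 2
```

Larger ROS values correspond to regions that are internally uniform yet well separated in gray level, which is exactly what Table 10 reports.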

Experimental Protocol. For the comparison of segmentation results, a set of four classical images has been chosen to integrate the experimental set (Figure 20). The segmentation methods used in the comparison are J + ABC [19], J + AIS [20], and J + DE [21].

Among all segmentation methods used in the comparison, the proposed J_new + LS algorithm is the only one that has the capacity to automatically detect the number of segmentation partitions (classes). In order to conduct a fair comparison, all algorithms have been tested using the same number


Table 12: Multimodal test functions.

    Test function                                                                                  S                  f_opt
    f8(x) = Σ_{i=1}^{n} −x_i sin(√|x_i|)                                                           [−500, 500]^n      −418.98 · n
    f9(x) = Σ_{i=1}^{n} [x_i² − 10 cos(2πx_i) + 10]                                                [−5.12, 5.12]^n    0
    f10(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^{n} x_i²)) − exp((1/n) Σ_{i=1}^{n} cos(2πx_i)) + 20      [−32, 32]^n        0
    f11(x) = (1/4000) Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos(x_i / √i) + 1                             [−600, 600]^n      0
    f12(x) = (π/n) {10 sin²(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)² [1 + 10 sin²(πy_{i+1})]
             + (y_n − 1)²} + Σ_{i=1}^{n} u(x_i, 10, 100, 4),                                       [−50, 50]^n        0
        where y_i = 1 + (x_i + 1)/4 and
        u(x_i, a, k, m) = { k(x_i − a)^m,  x_i > a;   0,  −a < x_i < a;   k(−x_i − a)^m,  x_i < −a }
    f13(x) = 0.1 {sin²(3πx_1) + Σ_{i=1}^{n} (x_i − 1)² [1 + sin²(3πx_i + 1)]
             + (x_n − 1)² [1 + sin²(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4)                       [−50, 50]^n        0
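The multimodal benchmarks of Table 12 can be sketched the same way; shown here are f9, the penalty term u shared by f12 and f13, and f12 itself:

```python
import numpy as np

def u(x, a, k, m):
    """Penalty term shared by f12 and f13 (element-wise over x)."""
    return np.where(x > a, k*(x - a)**m,
                    np.where(x < -a, k*(-x - a)**m, 0.0))

def f9(x):
    """Rastrigin function."""
    return float(np.sum(x**2 - 10*np.cos(2*np.pi*x) + 10))

def f12(x):
    """Penalized function, with y_i = 1 + (x_i + 1)/4."""
    y = 1 + (x + 1)/4
    return float((np.pi/len(x)) * (10*np.sin(np.pi*y[0])**2
                 + np.sum((y[:-1] - 1)**2 * (1 + 10*np.sin(np.pi*y[1:])**2))
                 + (y[-1] - 1)**2) + np.sum(u(x, 10, 100, 4)))
```

The penalty u is zero inside the box |x_i| < a and grows as a quartic outside it, which keeps the search bounded without a hard constraint.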

of partitions. Therefore, in the experiments, the J_new + LS segmentation algorithm is firstly applied to detect the best possible number of partitions N_R. Once the number of partitions N_R is obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.

Figure 21 presents the segmentation results obtained by each algorithm considering the experimental set from Figure 20. On the other hand, Table 10 shows the evaluation of the segmentation results in terms of the ROS index. Such values represent the averaged measurements after 30 executions. From them, it can be seen that the proposed J_new + LS method obtains the best ROS indexes. Such values indicate that the proposed algorithm maintains the best balance between the homogeneity within segmented regions and the heterogeneity among the different regions. From Figure 21, it can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by incorrect pixel classification.

8. Conclusions

Despite the fact that several evolutionary methods have been successfully applied to image segmentation with interesting results, most of them have exhibited two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance.

In this paper, a new swarm algorithm for automatic image segmentation, called Locust Search (LS), has been presented. The proposed method eliminates the typical flaws presented by previous evolutionary approaches by combining a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments have been divided into two parts. First, the proposed LS method has been compared to other well-known evolutionary techniques on a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm has been compared to other segmentation methods based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

Several research directions will be considered for future work, such as the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, n is the dimension of the function, f_opt is the minimum value of the function, and S is a subset of R^n. The optimum location (x_opt) for the functions in Table 11 is at [0]^n, except for f5, whose x_opt is at [1]^n.

The optimum locations (x_opt) for the functions in Table 12 are at [0]^n, except for f8 at [420.96]^n and f12-f13 at [1]^n.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgment

The present research has been supported under Grant CONACYT CB 181053.

References

[1] H. Zhang, J. E. Fritts, and S. A. Goldman, "Image segmentation evaluation: a survey of unsupervised methods," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 260–280, 2008.

[2] T. Uemura, G. Koutaki, and K. Uchimura, "Image segmentation based on edge detection using boundary code," International Journal of Innovative Computing, vol. 7, pp. 6073–6083, 2011.

[3] L. Wang, H. Wu, and C. Pan, "Region-based image segmentation with local signed difference energy," Pattern Recognition Letters, vol. 34, no. 6, pp. 637–645, 2013.

[4] N. Otsu, "A thresholding selection method from gray-level histogram," IEEE Transactions on Systems, Man and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.

[5] R. Peng and P. K. Varshney, "On performance limits of image segmentation algorithms," Computer Vision and Image Understanding, vol. 132, pp. 24–38, 2015.

[6] M. A. Balafar, "Gaussian mixture model based segmentation methods for brain MRI images," Artificial Intelligence Review, vol. 41, no. 3, pp. 429–439, 2014.

[7] G. J. McLachlan and S. Rathnayake, "On the number of components in a Gaussian mixture model," Data Mining and Knowledge Discovery, vol. 4, no. 5, pp. 341–355, 2014.

[8] D. Oliva, V. Osuna-Enciso, E. Cuevas, G. Pajares, M. Perez-Cisneros, and D. Zaldívar, "Improving segmentation velocity using an evolutionary method," Expert Systems with Applications, vol. 42, no. 14, pp. 5874–5886, 2015.

[9] Z.-W. Ye, M.-W. Wang, W. Liu, and S.-B. Chen, "Fuzzy entropy based optimal thresholding using bat algorithm," Applied Soft Computing, vol. 31, pp. 381–395, 2015.

[10] S. Sarkar, S. Das, and S. Sinha Chaudhuri, "A multilevel color image thresholding scheme based on minimum cross entropy and differential evolution," Pattern Recognition Letters, vol. 54, pp. 27–35, 2015.

[11] A. K. Bhandari, A. Kumar, and G. K. Singh, "Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur's, Otsu and Tsallis functions," Expert Systems with Applications, vol. 42, no. 3, pp. 1573–1601, 2015.

[12] H. Permuter, J. Francos, and I. Jermyn, "A study of Gaussian mixture models of color and texture features for image classification and segmentation," Pattern Recognition, vol. 39, no. 4, pp. 695–706, 2006.

[13] A. P. Dempster, A. P. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1–38, 1977.

[14] Z. Zhang, C. Chen, J. Sun, and K. L. Chan, "EM algorithms for Gaussian mixtures with split-and-merge operation," Pattern Recognition, vol. 36, no. 9, pp. 1973–1983, 2003.

[15] H. Park, S.-I. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755–764, 2000.

[16] H. Park and T. Ozeki, "Singularity and slow convergence of the EM algorithm for Gaussian mixtures," Neural Processing Letters, vol. 29, no. 1, pp. 45–59, 2009.

[17] L. Gupta and T. Sortrakul, "A Gaussian-mixture-based image segmentation algorithm," Pattern Recognition, vol. 31, no. 3, pp. 315–325, 1998.

[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213–1219, 2013.

[19] E. Cuevas, F. Sencion, D. Zaldivar, M. Perez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321–336, 2012.

[20] E. Cuevas, V. Osuna-Enciso, D. Zaldivar, M. Perez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.

[21] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.

[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldivar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357–381, 2014.

[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldivar, and M. Perez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.

[24] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805–818, 2011.

[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701–713, 2009.

[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661–673, 2009.

[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.

[28] R. Storn and K. Price, "Differential Evolution – a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.

[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515–4538, 2011.

[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing Journal, vol. 9, no. 3, pp. 1149–1155, 2009.

[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119–135, 2013.

[32] W. Gong, A. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364–5386, 2011.

[33] D. M. Gordon, "The organization of work in social insect colonies," Complexity, vol. 8, no. 1, pp. 43–46, 2002.

[34] S. Kizaki and M. Katori, "Stochastic lattice model for locust outbreak," Physica A: Statistical Mechanics and its Applications, vol. 266, no. 1–4, pp. 339–342, 1999.

[35] S. M. Rogers, D. A. Cullen, M. L. Anstey et al., "Rapid behavioural gregarization in the desert locust Schistocerca gregaria entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9–26, 2014.

[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal Special Topics, vol. 157, no. 1, pp. 93–109, 2008.

[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.

[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.

[39] B. Hölldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.

[40] B. Hölldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.

[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101–112, 2013.

[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196–205, 2012.

[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723–738, 2014.

[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.

[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskål)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.

[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.

[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256–272, 2014.

[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635–672, 2005.

[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.

[50] F. Herrera, M. Lozano, and A. M. Sanchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309–338, 2003.

[51] E. Cuevas, D. Zaldivar, M. Perez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93–107, 2011.

[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.

[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140–157, 2015.

[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.

[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.

[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.

[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.

[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.

[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335–1346, 1996.

[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.


Mathematical Problems in Engineering 11

Figure 6: Histogram approximation through a Gaussian mixture. (a) Original histogram, (b) configuration of the Gaussian components p_1(x), p_2(x), and p_3(x), and (c) final Gaussian mixture p(x).

complexity of the performance, biases, and restrictions of the problem to be solved. In the segmentation problem, candidate solutions represent Gaussian mixtures. The objective function J (17) is used as a fitness value to evaluate the similarity between the Gaussian mixture and the image histogram. Therefore, guided by the fitness values (J values), the set of encoded candidate solutions is evolved using the evolutionary operators until the best possible resemblance is found.

Over the last decade, several algorithms based on evolutionary and swarm principles [19–22] have been proposed to solve the problem of segmentation through a Gaussian mixture model. Although these algorithms achieve acceptable performance indexes, they present two important limitations: (1) they frequently obtain suboptimal approximations as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) they are based on the assumption that the number of Gaussians (classes) in the mixture is known in advance and fixed; otherwise, they cannot work.

In order to eliminate such flaws, the proposed approach includes (A) a new search strategy and (B) the definition of a new objective function. For the search strategy, the LS method (Section 4) is adopted. Under LS, the concentration of individuals around the current best positions is explicitly avoided. This reduces critical problems such as premature convergence to suboptimal solutions and an incorrect exploration-exploitation balance.

6.1. New Objective Function J_new. Previous segmentation algorithms based on evolutionary and swarm methods use (17) as the objective function. Under these circumstances, each candidate solution (Gaussian mixture) is evaluated only in terms of its approximation to the image histogram.

Since the proposed approach aims to automatically select the number of Gaussian functions K in the final mixture p(x), the objective function must be modified. The new objective function J_new is defined as follows:

J_new = J + λ · Q,  (18)

where λ is a scaling constant. The new objective function is divided into two parts. The first part, J, evaluates the quality of each candidate solution in terms of its similarity to the image histogram (17). The second part, Q, penalizes the overlapping area among the Gaussian functions (classes), with Q being defined as follows:

Q = Σ_{i=1}^{K} Σ_{j=1, j≠i}^{K} Σ_{l=1}^{L} min(P_i · p_i(l), P_j · p_j(l)),  (19)

where K and L represent the number of classes and the number of gray levels, respectively. The terms p_i(l) and p_j(l) symbolize the Gaussian functions i and j, respectively, evaluated at the point l (gray level), whereas the elements P_i and P_j represent the a priori probabilities of the Gaussian functions i and j, respectively. Under such circumstances, mixtures with Gaussian functions that do not "positively" participate in the histogram approximation are severely penalized.
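As a concrete illustration, the penalty Q of (19) reduces to a direct triple loop over class pairs and gray levels. The sketch below is not taken from the paper: the function names and the discretization over L = 256 gray levels are our own assumptions.

```python
import math

def gaussian(l, mu, sigma):
    # Gaussian density p(l) evaluated at gray level l
    return math.exp(-((l - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def overlap_penalty(P, mu, sigma, L=256):
    # Q of Eq. (19): accumulated min-overlap between every ordered pair
    # (i, j), j != i, of weighted class densities over the L gray levels.
    K = len(P)
    Q = 0.0
    for i in range(K):
        for j in range(K):
            if j == i:
                continue
            for l in range(L):
                Q += min(P[i] * gaussian(l, mu[i], sigma[i]),
                         P[j] * gaussian(l, mu[j], sigma[j]))
    return Q

# Well-separated classes are barely penalized; coincident classes are
# penalized by their full shared (weighted) mass.
q_far = overlap_penalty([0.5, 0.5], [40.0, 200.0], [8.0, 8.0])
q_same = overlap_penalty([0.5, 0.5], [100.0, 100.0], [8.0, 8.0])
```

With both classes centered on the same mean the penalty approaches the total shared weight (here about 1), while distant classes contribute almost nothing; this is exactly the pressure that drives redundant Gaussians out of the mixture.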

Figure 7 illustrates the effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). From the image histogram (Figure 7(a)), it is evident that two Gaussian functions are enough to accurately


Figure 7: Effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). (a) Original histogram, (b) Gaussian mixture considering four classes, (c) penalization areas Q_{1,2}, Q_{2,3}, and Q_{3,4}, and (d) Gaussian mixture of better quality.

approximate the original histogram. However, if the Gaussian mixture is modeled by using a greater number of functions (e.g., four, as shown in Figure 7(b)), the original objective function J is unable to obtain a reasonable approximation. Under the new objective function J_new, the overlapping area among Gaussian functions (classes) is penalized. Such areas, in Figure 7(c), correspond to Q_{1,2}, Q_{2,3}, and Q_{3,4}, where Q_{1,2} represents the penalization value produced between the Gaussian functions p_1(x) and p_2(x). Therefore, due to the penalization, the Gaussian mixture shown in Figures 7(b) and 7(c) provides a solution of low quality. On the other hand, the Gaussian mixture presented in Figure 7(d) maintains a low penalty; thus, it represents a solution of high quality. From Figure 7(d), it is easy to see that the functions p_1(x) and p_4(x) can be removed from the final mixture. This elimination can be performed by a simple comparison with a threshold value θ, since p_1(x) and p_4(x) have a reduced amplitude (p_1(x) ≈ p_4(x) ≈ 0). Therefore, under J_new, it is possible to find the reduced Gaussian mixture model starting from a considerable number of functions.
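The elimination step can be sketched as a simple amplitude test. Note that the reading of "amplitude" below (the weighted peak value P_i · p_i(μ_i), since a Gaussian peaks at its mean) is our interpretation of the rule p_i(x) ≤ θ, and the helper name is hypothetical:

```python
import math

def reduce_mixture(P, mu, sigma, theta=0.0001):
    # Keep only the classes whose weighted peak amplitude
    # P_i * p_i(mu_i) = P_i / (sigma_i * sqrt(2*pi)) exceeds theta.
    kept = [(Pi, mi, si) for Pi, mi, si in zip(P, mu, sigma)
            if Pi / (si * math.sqrt(2.0 * math.pi)) > theta]
    return ([c[0] for c in kept], [c[1] for c in kept], [c[2] for c in kept])

# A 4-class mixture in which two classes carry negligible weight:
# only the two significant classes survive the reduction.
P, mu, sigma = [1e-6, 0.6, 0.4, 1e-6], [20.0, 90.0, 170.0, 240.0], [10.0, 12.0, 9.0, 10.0]
P_r, mu_r, sigma_r = reduce_mixture(P, mu, sigma)
```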

Since the proposed segmentation method is conceived as an optimization problem, the overall operation can be reduced to solving the formulation in (20) by using the LS algorithm:

minimize   J_new(x) = J(x) + λ · Q(x),
           x = (P_1, μ_1, σ_1, ..., P_K, μ_K, σ_K) ∈ R^{3·K},
subject to 0 ≤ P_d ≤ 1, d ∈ (1, ..., K),
           0 ≤ μ_d ≤ 255,
           0 ≤ σ_d ≤ 25,  (20)

where P_d, μ_d, and σ_d represent the probability, mean, and standard deviation of the class d. It is important to remark that the new objective function J_new allows the evaluation of a candidate solution in terms of both the necessary number of Gaussian functions and its approximation quality. Under such circumstances, it can be used in combination with any other evolutionary method, not only with the proposed LS algorithm.

6.2. Complete Segmentation Algorithm. Once the new search strategy (LS) and the objective function (J_new) have been defined, the proposed segmentation algorithm can be summarized by Algorithm 2. The new algorithm combines operators defined by LS and operations for calculating the threshold values.

The algorithm sets the operative parameters F, L, g, gen, N, K, λ, and θ, which rule the behavior of the segmentation algorithm (Line 1). Afterwards, the population L^0 is initialized considering N different random Gaussian mixtures of K functions (Line 2). The idea is to generate N random Gaussian mixtures subject to the constraints formulated in (20). The parameter K must hold a high value in order to correctly segment all images (recall that the algorithm is able to reduce the Gaussian mixture to its minimum expression). Then, the Gaussian mixtures are evolved by using the LS operators and the new objective function J_new; this process is repeated during gen cycles (Lines 3–7). After this procedure, the best Gaussian mixture l_best^{gen} is selected according to its objective function value J_new (Line 8). Then, the Gaussian mixture l_best^{gen} is reduced by eliminating those functions whose amplitudes are lower than θ (p_i(x) ≤ θ) (Line 9). Then, the threshold values T_i are calculated from the reduced model (Line 10). Finally, the calculated T_i values are employed to segment the image (Line 11). Figure 8 shows a flowchart of the complete segmentation procedure.


(1) Input: F, L, g, gen, N, K, λ, and θ
(2) Initialize L^0 (k = 0)
(3) until (k = gen)
(4)   F ← SolitaryOperation(L^k)          {Solitary operator (Section 3.1)}
(5)   L^{k+1} ← SocialOperation(L^k, F)   {Social operator (Section 3.2)}
(6)   k = k + 1
(7) end until
(8) Obtain l_best^{gen}
(9) Reduce l_best^{gen}
(10) Calculate the threshold values T_i from the reduced model
(11) Use T_i to segment the image

Algorithm 2: Segmentation LS algorithm.
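The thresholds T_i of Line 10 are computed from the reduced model, but the rule is not spelled out in this section. A common choice for Gaussian-mixture thresholding, sketched below with hypothetical names, is to place each T_i at the gray level between two adjacent means where the weighted densities intersect:

```python
import math

def thresholds(P, mu, sigma):
    # For each pair of classes adjacent in mean, return the gray level in
    # [mu_a, mu_b] where the weighted densities P_a*p_a and P_b*p_b cross.
    g = lambda l, m, s: math.exp(-((l - m) ** 2) / (2.0 * s * s)) / (s * math.sqrt(2.0 * math.pi))
    classes = sorted(zip(P, mu, sigma), key=lambda c: c[1])
    T = []
    for (Pa, ma, sa), (Pb, mb, sb) in zip(classes, classes[1:]):
        t = min(range(int(ma), int(mb) + 1),
                key=lambda l: abs(Pa * g(l, ma, sa) - Pb * g(l, mb, sb)))
        T.append(t)
    return T

# Equal-weight, equal-spread classes cross exactly halfway between the means.
T = thresholds([0.5, 0.5], [50.0, 150.0], [10.0, 10.0])   # -> [100]
```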

The proposed segmentation algorithm automatically detects the number of segmentation partitions (classes). Furthermore, due to its remarkable search capacities, the LS method maintains better accuracy than previous algorithms based on evolutionary principles. However, the proposed method presents two disadvantages. The first is related to its implementation, which in general is more complex than that of most other segmentators based on evolutionary principles. The second refers to the segmentation procedure of the proposed approach, which does not consider any spatial pixel characteristics. As a consequence, pixels that may belong to a determined region due to their position are labeled as part of another region due to their gray level intensity. Such a fact adversely affects the segmentation performance of the method.

7. Segmentation Results

This section analyses the performance of the proposed segmentation algorithm. The discussion is divided into three parts: the first shows the performance of the proposed LS segmentation algorithm, while the second presents a comparison between the proposed approach and other segmentation algorithms based on evolutionary and swarm methods. The comparison mainly considers the capacities of each algorithm to accurately and robustly approximate the image histogram. Finally, the third part presents an objective evaluation of the segmentation results produced by all the algorithms employed in the comparisons.

7.1. Performance of the LS Algorithm in Image Segmentation. This section presents two experiments that analyze the performance of the proposed approach. Table 5 presents the algorithm parameters, which have been experimentally determined and kept for all the test images through all experiments.

First Image. The first test considers the histogram shown in Figure 9(b), while Figure 9(a) presents the original image. After applying the proposed algorithm, configured according to the parameters in Table 5, a minimum model of four classes emerges after starting from Gaussian

Table 5: Parameter setup for the LS segmentation algorithm.

F     L     g    gen    N    K    λ      θ
0.6   0.2   20   1000   40   10   0.01   0.0001

Table 6: Results of the reduced Gaussian mixture for the first and the second image.

Parameter   First image   Second image
P_1         0.004         0.032
μ_1         18.1          12.1
σ_1         8.2           2.9
P_2         0.0035        0.0015
μ_2         69.9          45.1
σ_2         18.4          24.9
P_3         0.01          0.006
μ_3         94.9          130.1
σ_3         8.9           17.9
P_4         0.007         0.02
μ_4         163.1         167.2
σ_4         29.9          10.1

mixtures of 10 classes. Considering 30 independent executions, the averaged parameters of the resultant Gaussian mixture are presented in Table 6. One final Gaussian mixture (ten classes), which has been obtained by LS, is presented in Figure 10. Furthermore, the approximation of the reduced Gaussian mixture is also visually compared with the original histogram in Figure 10. On the other hand, Figure 11 presents the segmented image after calculating the threshold points.

Second Image. For the second experiment, the image in Figure 12 is tested. The method aims to segment the image by using a reduced Gaussian mixture model obtained by the LS approach. After executing the algorithm according to the parameters from Table 5, the resulting averaged parameters of the Gaussian mixture are presented in Table 6. In order to assure consistency, the results are averaged over 30 independent executions. Figure 13 shows the approximation quality obtained by the reduced Gaussian mixture model in (a) and the segmented image in (b).


Figure 8: Flowchart of the complete segmentation procedure.

Figure 9: (a) Original first image used in the first experiment and (b) its histogram.


Figure 10: Gaussian mixture obtained by LS for the first image (original histogram, classes 1–4, the reduced Gaussian mixture, and the eliminated classes).

Figure 11: Image segmented with the reduced Gaussian mixture.

Figure 12: (a) Original second image used in the first experiment and (b) its histogram.

7.2. Histogram Approximation Comparisons. This section discusses the comparison between the proposed segmentation algorithm and other evolutionary segmentation methods that have been proposed in the literature. The set of methods used in the experiments includes J + ABC [19], J + AIS [20], and J + DE [21]. These algorithms consider the combination of the original objective function J (17) with an evolutionary technique, namely the Artificial Bee Colony (ABC), the Artificial Immune Systems (AIS), and the Differential Evolution (DE) [21], respectively. Since the proposed segmentation approach considers the combination of the new objective function J_new (18) and the LS algorithm, it will be referred to as J_new + LS.

The comparison focuses mainly on the capacities of each algorithm to accurately and robustly approximate the image histogram.

In the experiments, the populations have been set to 40 individuals (N = 40). The maximum iteration number for all methods has been set to 1000. Such a stop criterion has been considered to maintain compatibility with similar experiments


Figure 13: (a) Approximation of the reduced Gaussian mixture obtained by LS for the second image and (b) the segmented image.

Figure 14: (a) Synthetic image used in the comparison and (b) its histogram.

that are reported in the literature [18]. The parameter setting for each of the segmentation algorithms in the comparison is described as follows:

(1) J + ABC [19]: the parameters are configured as follows: the abandonment limit = 100, α = 0.05, and limit = 30.

(2) J + AIS [20]: it presents the following parameters: h = 120, N_c = 80, ρ = 10, P_r = 20, L = 22, and T_e = 0.01.

(3) J + DE [21]: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [21]. The crossover probability is CR = 0.9 and the weighting factor is F = 0.8.

(4) In J_new + LS, the method is set according to the values described in Table 5.

In order to conduct the experiments, a synthetic image is designed to be used as a reference in the comparisons. The main idea is to know in advance the exact number of classes (and their parameters) that are contained in the image, so that the histogram can be considered as a ground truth. The

Table 7: Employed parameters for the design of the reference image.

Section   P_i     μ_i   σ_i
(1)       0.05    40    8
(2)       0.04    100   10
(3)       0.05    160   8
(4)       0.027   220   15

synthetic image is divided into four sections. Each section corresponds to a different class, which is produced by setting each gray level pixel p_{v_i} to a value that is determined by the following model:

p_{v_i} = e^{−((x − μ_i)^2 / 2σ_i^2)},  (21)

where i represents the section, whereas μ_i and σ_i are the mean and the dispersion of the gray level pixels, respectively. The comparative study employs the 512 × 512 image shown in Figure 14(a) and the algorithm parameters presented in Table 7. Figure 14(b) illustrates the resultant histogram.


Figure 15: Convergence results. (a) Convergence of the methods J + ABC, J + AIS, and J + DE considering Gaussian mixtures of 8 classes; (b) convergence of the proposed method (reduced Gaussian mixture).

Figure 16: Segmentation results obtained by (a) several methods, including J + ABC, J + AIS, and J + DE, considering Gaussian mixtures of 8 classes, and (b) the proposed method (reduced Gaussian mixture).

In the comparison, the discussion focuses on the following issues: first, accuracy; second, convergence; and third, computational cost.

Convergence. This section analyzes the approximation convergence when the number of classes used by the algorithm during the evolution process is different from the actual number of classes in the image. Recall that the proposed approach automatically finds the reduced Gaussian mixture that best adapts to the image histogram.

In the experiment, the methods J + ABC, J + AIS, and J + DE are executed considering Gaussian mixtures composed of 8 functions. Under such circumstances, the number of classes to be detected is higher than the actual number of classes in the image. On the other hand, the proposed algorithm maintains the same configuration of Table 5. Therefore, it can detect and calculate up to ten classes (K = 10).

As a result, the techniques J + ABC, J + AIS, and J + DE tend to overestimate the image histogram. This effect can be seen in Figure 15(a), where the resulting Gaussian functions are concentrated within the actual classes. Such behavior is a consequence of the evaluation considered by the original objective function J, which privileges only the approximation between the Gaussian mixture and the image histogram. This effect can be graphically illustrated by Figure 16(a), which shows the pixel misclassification produced by the wrong segmentation of Figure 14(a). On the other hand, the proposed approach obtains a reduced Gaussian mixture model which allows the detection of each class from


Figure 17: Approximation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

the actual histogram (see Figure 15(b)). As a consequence, the segmentation is significantly improved by eliminating the pixel misclassification, as shown in Figure 16(b).

It is evident from Figure 15 that the techniques J + ABC, J + AIS, and J + DE all need a priori knowledge of the number of classes contained in the actual histogram in order to obtain a satisfactory result. On the other hand, the proposed algorithm is able to find a reduced Gaussian mixture whose classes coincide with the actual number of classes contained in the image histogram.

Accuracy. In this section, the comparison among the algorithms in terms of accuracy is reported. Most of the reported comparisons [19–26] are concerned with comparing the parameters of the resultant Gaussian mixtures by using real images. Under such circumstances, it is difficult to consider a clear reference in order to define a meaningful error. Therefore, the image defined in Figure 14 has been used in the experiments, because its construction parameters are clearly defined in Table 7.

Since the parameter values from Table 7 act as ground truth, a simple averaged difference between them and the values computed by each algorithm could be used as a comparison error. However, as each parameter maintains a different active interval, it is necessary to express the differences in terms of percentage. Therefore, if Δβ is the parametric difference and β the ground truth parameter, the percentage error Δβ% can be defined as follows:

Δβ% = (Δβ / β) · 100.  (22)

In the segmentation problem, each Gaussian mixture represents a K-dimensional model, where each dimension corresponds to a Gaussian function of the optimization problem to be solved. Since each Gaussian function possesses three parameters, P_i, μ_i, and σ_i, the complete number of parameters is 3·K dimensions. Therefore, the final error E produced by the final Gaussian mixture is

E = (1 / (K · 3)) Σ_{v=1}^{K·3} Δβ%_v,  (23)

where β_v ∈ (P_i, μ_i, σ_i).

In order to compare accuracy, the algorithms J + ABC, J + AIS, J + DE, and the proposed approach are all executed over the image shown in Figure 14(a). The experiment aims


Figure 18: Segmentation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

to estimate the Gaussian mixture that best approximates the actual image histogram. The methods J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, although the algorithm finds a reduced Gaussian mixture of four functions, it is initially set with ten functions (K = 10). Table 8 presents the final Gaussian mixture parameters and the final error E. The final Gaussian mixture parameters have been averaged over 30 independent executions in order to assure consistency. A close inspection of Table 8 reveals that the proposed method is able to achieve the smallest error (E) in comparison to the other algorithms. Figure 17 presents the histogram approximations produced by each algorithm, whereas Figure 18 shows the corresponding segmented images. Both illustrations present the median case obtained throughout 30 runs. Figure 18 exhibits that J + ABC, J + AIS, and J + DE present different levels of misclassification, which are nearly absent in the case of the proposed approach.
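Equations (22) and (23) translate directly into code. In the sketch below, the ground truth triples (P_i, μ_i, σ_i) come from Table 7, while the estimated mixture is a hypothetical example of ours, not a result reported by the paper:

```python
def mixture_error(ground_truth, estimated):
    # E of Eqs. (22)-(23): mean over all 3*K parameters of the
    # percentage deviation |delta_beta| / beta * 100.
    gt = [v for triple in ground_truth for v in triple]
    est = [v for triple in estimated for v in triple]
    return sum(abs(e - g) / g * 100.0 for g, e in zip(gt, est)) / len(gt)

# Ground truth of Table 7 versus a hypothetical estimated mixture.
truth = [(0.05, 40, 8), (0.04, 100, 10), (0.05, 160, 8), (0.027, 220, 15)]
guess = [(0.048, 41.0, 8.3), (0.042, 98.5, 10.4), (0.051, 162.0, 7.8), (0.026, 223.0, 15.6)]
E = mixture_error(truth, guess)
E_perfect = mixture_error(truth, truth)   # -> 0.0
```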

Computational Cost. The experiment aims to measure the complexity and the computing time spent by the J + ABC,

Table 8: Results of the reduced Gaussian mixture in terms of accuracy.

Algorithm    Gaussian function    P_i      μ_i       σ_i      E
J + ABC      (1)                  0.052    44.5      6.4      11.79
             (2)                  0.084    98.12     12.87
             (3)                  0.058    163.50    8.94
             (4)                  0.025    218.84    17.5
J + AIS      (1)                  0.072    31.01     6.14     22.01
             (2)                  0.054    88.52     12.21
             (3)                  0.039    149.21    9.14
             (4)                  0.034    248.41    13.84
J + DE       (1)                  0.041    35.74     7.86     13.57
             (2)                  0.036    90.57     11.97
             (3)                  0.059    148.47    9.01
             (4)                  0.020    201.34    13.02
J_new + LS   (1)                  0.049    40.12     7.5      3.98
             (2)                  0.041    102.04    10.4
             (3)                  0.052    168.66    8.3
             (4)                  0.025    210.92    15.2


Figure 19: Images employed in the computational cost analysis (rows: segmentation results of J + ABC, J + AIS, J + DE, and J_new + LS; columns: benchmark images (a)–(d)).

the J + AIS, the J + DE, and the J_new + LS algorithms while calculating the parameters of the Gaussian mixture for the benchmark images (see Figures 19(a)–19(d)). J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, the algorithm finds a reduced Gaussian mixture of four functions despite being initialized with ten functions (K = 10). Table 9 shows the averaged measurements after 30 experiments. It is evident that J + ABC and J + DE are the slowest to converge (iterations), and J + AIS shows the highest computational cost (time elapsed) because it requires operators that demand long execution times. On the other hand, J_new + LS shows an acceptable trade-off between its convergence time and its computational cost. Therefore, although the implementation of J_new + LS in general requires more code than most other evolution-based segmentators, such a fact is not reflected in the execution time. Finally, Figure 19 shows the segmented images as they are


Figure 20: Experimental set used in the evaluation of the segmentation results.

Table 9: Iterations and time requirements of the J + ABC, J + AIS, J + DE, and J_new + LS algorithms as they are applied to segment the benchmark images (see Figure 19). Each cell reports iterations / time elapsed.

Algorithm    (a)            (b)            (c)            (d)
J + ABC      855 / 2.72 s   833 / 2.70 s   870 / 2.73 s   997 / 3.1 s
J + AIS      725 / 1.78 s   704 / 1.61 s   754 / 1.41 s   812 / 2.01 s
J + DE       657 / 1.25 s   627 / 1.12 s   694 / 1.45 s   742 / 1.88 s
J_new + LS   314 / 0.98 s   298 / 0.84 s   307 / 0.72 s   402 / 1.02 s

Table 10: Evaluation of the segmentation results in terms of the ROS index.

Image (number of classes)   (a) N_R = 4   (b) N_R = 3   (c) N_R = 4   (d) N_R = 4
J + ABC                     0.534         0.328         0.411         0.457
J + AIS                     0.522         0.321         0.427         0.437
J + DE                      0.512         0.312         0.408         0.418
J_new + LS                  0.674         0.401         0.514         0.527

generated by each algorithm. It can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by an incorrect pixel classification.

7.3. Performance Evaluation of the Segmentation Results. This section presents an objective evaluation of the segmentation results produced by all the algorithms in the comparisons. The ill-defined nature of the segmentation problem makes the evaluation of a candidate algorithm difficult [57]. Traditionally, the evaluation has been conducted by using supervised criteria [58], which are based on the computation of a dissimilarity measure between a segmentation result and a ground truth image. Recently, the use of unsupervised measures has substituted supervised indexes for the objective evaluation of segmentation results [59]. They

Table 11: Unimodal test functions.

Test function                                                       S                  f_opt
f_1(x) = Σ_{i=1}^{n} x_i^2                                          [−100, 100]^n      0
f_2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|                      [−10, 10]^n        0
f_3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)^2                            [−100, 100]^n      0
f_4(x) = max_i {|x_i|, 1 ≤ i ≤ n}                                   [−100, 100]^n      0
f_5(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2]       [−30, 30]^n        0
f_6(x) = Σ_{i=1}^{n} (⌊x_i + 0.5⌋)^2                                [−100, 100]^n      0
f_7(x) = Σ_{i=1}^{n} i·x_i^4 + rand(0, 1)                           [−1.28, 1.28]^n    0

enable the quantification of the quality of a segmentation result without a priori knowledge (ground truth image).

Evaluation Criteria. In this paper, the unsupervised index ROS proposed by Chabrier et al. [60] has been used to objectively evaluate the performance of each candidate algorithm. This index evaluates the segmentation quality in terms of the homogeneity within segmented regions and the heterogeneity among the different regions. ROS can be computed as follows:

ROS = (D̄ − D) / 2,  (24)

where D quantifies the homogeneity within segmented regions. Similarly, D̄ measures the disparity among the regions. A segmentation result S_1 is considered better than S_2 if ROS_{S_1} > ROS_{S_2}. The intraregion homogeneity, characterized by D, is calculated considering the following formulation:

D = (1/N_R) Σ_{c=1}^{N_R} (R_c / I) · (σ_c / Σ_{l=1}^{N_R} σ_l),  (25)

where N_R represents the number of partitions into which the image has been segmented and R_c symbolizes the number of


Figure 21: Segmentation results used in the evaluation (rows: J + ABC, J + AIS, J + DE, and J_new + LS; columns: images (a)–(d)).

pixels contained in the partition c, whereas I considers the number of pixels that integrate the complete image. Similarly, σ_c represents the standard deviation of the partition c. On the other hand, the disparity among the regions, D̄, is computed as follows:

D̄ = (1/N_R) Σ_{c=1}^{N_R} (R_c / I) · [(1/(N_R − 1)) Σ_{l=1}^{N_R} (|μ_c − μ_l| / 255)],  (26)

where μ_c is the average gray level in the partition c.
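For reference, (24)–(26) can be evaluated on a label map as follows. The flat-list representation and the function name are our own assumptions; image and labels are taken to be equal-length sequences of gray values and region ids:

```python
def ros_index(image, labels):
    # ROS of Eqs. (24)-(26): (Dbar - D) / 2, where D averages the weighted,
    # normalized within-region standard deviations and Dbar averages the
    # weighted, normalized disparities between region mean gray levels.
    regions = sorted(set(labels))
    NR, I = len(regions), len(image)
    stats = {}
    for r in regions:
        vals = [g for g, lab in zip(image, labels) if lab == r]
        mean = sum(vals) / len(vals)
        std = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
        stats[r] = (len(vals), mean, std)
    sigma_sum = sum(s for _, _, s in stats.values())
    D = sum((n / I) * (s / sigma_sum) for n, _, s in stats.values()) / NR
    Dbar = sum((n / I) * sum(abs(m - stats[o][1]) for o in regions) / (255.0 * (NR - 1))
               for n, m, _ in stats.values()) / NR
    return (Dbar - D) / 2.0

# A tiny 1D example: the correct split scores higher than a shuffled one.
image = [10, 12, 11, 200, 202, 201]
good = ros_index(image, [0, 0, 0, 1, 1, 1])
bad = ros_index(image, [0, 1, 0, 1, 0, 1])
```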

Experimental Protocol. In the comparison of the segmentation results, a set of four classical images has been chosen to integrate the experimental set (Figure 20). The segmentation methods used in the comparison are J + ABC [19], J + AIS [20], and J + DE [21].

Among all the segmentation methods used in the comparison, the proposed J_new + LS algorithm is the only one that has the capacity to automatically detect the number of segmentation partitions (classes). In order to conduct a fair comparison, all algorithms have been tested using the same number


Table 12: Multimodal test functions.

Test function; S; $f_{opt}$:

$f_8(\mathbf{x}) = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{|x_i|}\right)$; $[-500, 500]^n$; $-418.98 \cdot n$

$f_9(\mathbf{x}) = \sum_{i=1}^{n} \left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$; $[-5.12, 5.12]^n$; $0$

$f_{10}(\mathbf{x}) = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right) + 20 + e$; $[-32, 32]^n$; $0$

$f_{11}(\mathbf{x}) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$; $[-600, 600]^n$; $0$

$f_{12}(\mathbf{x}) = \frac{\pi}{n}\left\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1}(y_i - 1)^2\left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_n - 1)^2\right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$; $[-50, 50]^n$; $0$,
where $y_i = 1 + \frac{x_i + 1}{4}$ and
$u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k(-x_i - a)^m, & x_i < -a \end{cases}$

$f_{13}(\mathbf{x}) = 0.1\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{n}(x_i - 1)^2\left[1 + \sin^2(3\pi x_i + 1)\right] + (x_n - 1)^2\left[1 + \sin^2(2\pi x_n)\right]\right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$; $[-50, 50]^n$; $0$

of partitions. Therefore, in the experiments, the J_new + LS segmentation algorithm is firstly applied to detect the best possible number of partitions N_R. Once the number of partitions N_R has been obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.

Figure 21 presents the segmentation results obtained by each algorithm considering the experimental set from Figure 20. On the other hand, Table 10 shows the evaluation of the segmentation results in terms of the ROS index. Such values represent the averaged measurements after 30 executions. From them, it can be seen that the proposed J_new + LS method obtains the best ROS indexes. Such values indicate that the proposed algorithm maintains the best balance between the homogeneity within segmented regions and the heterogeneity among the different regions. From Figure 21, it can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts that are produced by an incorrect pixel classification.

8. Conclusions

Despite the fact that several evolutionary methods have been successfully applied to image segmentation with interesting results, most of them have exhibited two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance.

In this paper, a new swarm algorithm for automatic image segmentation, called Locust Search (LS), has been presented. The proposed method eliminates the typical flaws presented by previous evolutionary approaches by combining a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments have been divided into two parts. First, the proposed LS method has been compared to other well-known evolutionary techniques on a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm has been compared to other segmentation methods based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

Several research directions will be considered for future work: the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, n is the dimension of the function, f_opt is the minimum value of the function, and S is a subset of R^n. The optimum location (x_opt) for the functions in Table 11 is in [0]^n, except for f_5, whose x_opt is in [1]^n.

The optimum locations (x_opt) for the functions in Table 12 are in [0]^n, except for f_8, in [420.96]^n, and f_12-f_13, in [1]^n.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgment

The present research has been supported under Grant CONACYT CB 181053.

References

[1] H. Zhang, J. E. Fritts, and S. A. Goldman, "Image segmentation evaluation: a survey of unsupervised methods," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 260–280, 2008.
[2] T. Uemura, G. Koutaki, and K. Uchimura, "Image segmentation based on edge detection using boundary code," International Journal of Innovative Computing, vol. 7, pp. 6073–6083, 2011.
[3] L. Wang, H. Wu, and C. Pan, "Region-based image segmentation with local signed difference energy," Pattern Recognition Letters, vol. 34, no. 6, pp. 637–645, 2013.
[4] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
[5] R. Peng and P. K. Varshney, "On performance limits of image segmentation algorithms," Computer Vision and Image Understanding, vol. 132, pp. 24–38, 2015.
[6] M. A. Balafar, "Gaussian mixture model based segmentation methods for brain MRI images," Artificial Intelligence Review, vol. 41, no. 3, pp. 429–439, 2014.
[7] G. J. McLachlan and S. Rathnayake, "On the number of components in a Gaussian mixture model," Data Mining and Knowledge Discovery, vol. 4, no. 5, pp. 341–355, 2014.
[8] D. Oliva, V. Osuna-Enciso, E. Cuevas, G. Pajares, M. Pérez-Cisneros, and D. Zaldívar, "Improving segmentation velocity using an evolutionary method," Expert Systems with Applications, vol. 42, no. 14, pp. 5874–5886, 2015.
[9] Z.-W. Ye, M.-W. Wang, W. Liu, and S.-B. Chen, "Fuzzy entropy based optimal thresholding using bat algorithm," Applied Soft Computing, vol. 31, pp. 381–395, 2015.
[10] S. Sarkar, S. Das, and S. Sinha Chaudhuri, "A multilevel color image thresholding scheme based on minimum cross entropy and differential evolution," Pattern Recognition Letters, vol. 54, pp. 27–35, 2015.
[11] A. K. Bhandari, A. Kumar, and G. K. Singh, "Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur's, Otsu and Tsallis functions," Expert Systems with Applications, vol. 42, no. 3, pp. 1573–1601, 2015.
[12] H. Permuter, J. Francos, and I. Jermyn, "A study of Gaussian mixture models of color and texture features for image classification and segmentation," Pattern Recognition, vol. 39, no. 4, pp. 695–706, 2006.
[13] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1–38, 1977.
[14] Z. Zhang, C. Chen, J. Sun, and K. L. Chan, "EM algorithms for Gaussian mixtures with split-and-merge operation," Pattern Recognition, vol. 36, no. 9, pp. 1973–1983, 2003.
[15] H. Park, S.-I. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755–764, 2000.
[16] H. Park and T. Ozeki, "Singularity and slow convergence of the EM algorithm for Gaussian mixtures," Neural Processing Letters, vol. 29, no. 1, pp. 45–59, 2009.
[17] L. Gupta and T. Sortrakul, "A Gaussian-mixture-based image segmentation algorithm," Pattern Recognition, vol. 31, no. 3, pp. 315–325, 1998.
[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213–1219, 2013.
[19] E. Cuevas, F. Sención, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321–336, 2012.
[20] E. Cuevas, V. Osuna-Enciso, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.
[21] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.
[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357–381, 2014.
[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and M. Pérez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.
[24] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805–818, 2011.
[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701–713, 2009.
[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661–673, 2009.
[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.
[28] R. Storn and K. Price, "Differential Evolution: a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.
[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515–4538, 2011.
[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing Journal, vol. 9, no. 3, pp. 1149–1155, 2009.
[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119–135, 2013.
[32] W. Gong, A. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364–5386, 2011.
[33] D. M. Gordon, "The organization of work in social insect colonies," Complexity, vol. 8, no. 1, pp. 43–46, 2002.
[34] S. Kizaki and M. Katori, "Stochastic lattice model for locust outbreak," Physica A: Statistical Mechanics and its Applications, vol. 266, no. 1–4, pp. 339–342, 1999.
[35] S. M. Rogers, D. A. Cullen, M. L. Anstey et al., "Rapid behavioural gregarization in the desert locust Schistocerca gregaria entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9–26, 2014.
[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal Special Topics, vol. 157, no. 1, pp. 93–109, 2008.
[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.
[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.
[39] B. Hölldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.
[40] B. Hölldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.
[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101–112, 2013.
[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196–205, 2012.
[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723–738, 2014.
[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.
[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskål)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.
[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.
[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256–272, 2014.
[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635–672, 2005.
[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[50] F. Herrera, M. Lozano, and A. M. Sánchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309–338, 2003.
[51] E. Cuevas, D. Zaldívar, M. Pérez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93–107, 2011.
[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140–157, 2015.
[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.
[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.
[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.
[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.
[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.
[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335–1346, 1996.
[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.



Figure 7: Effect of the new objective function J_new in the evaluation of Gaussian mixtures (candidate solutions). (a) Original histogram, (b) Gaussian mixture considering four classes, (c) penalization areas, and (d) Gaussian mixture of better quality solution.

approximate the original histogram. However, if the Gaussian mixture is modeled by using a greater number of functions (e.g., four, as shown in Figure 7(b)), the original objective function J is unable to obtain a reasonable approximation. Under the new objective function J_new, the overlapped area among Gaussian functions (classes) is penalized. Such areas, in Figure 7(c), correspond to Q_12, Q_23, and Q_34, where Q_12 represents the penalization value produced between the Gaussian functions p_1(x) and p_2(x). Therefore, due to the penalization, the Gaussian mixture shown in Figures 7(b) and 7(c) provides a solution of low quality. On the other hand, the Gaussian mixture presented in Figure 7(d) maintains a low penalty; thus, it represents a solution of high quality. From Figure 7(d), it is easy to see that the functions p_1(x) and p_4(x) can be removed from the final mixture. This elimination could be performed by a simple comparison with a threshold value θ, since p_1(x) and p_4(x) have a reduced amplitude (p_1(x) ≈ p_4(x) ≈ 0). Therefore, under J_new, it is possible to find the reduced Gaussian mixture model starting from a considerable number of functions.

Since the proposed segmentation method is conceived as an optimization problem, the overall operation can be reduced to solving the formulation of (20) by using the LS algorithm:

$$\begin{aligned} &\text{minimize } J_{new}(\mathbf{x}) = J(\mathbf{x}) + \lambda \cdot Q(\mathbf{x}), \quad \mathbf{x} = (P_1, \mu_1, \sigma_1, \ldots, P_K, \mu_K, \sigma_K) \in \mathbb{R}^{3 \cdot K}, \\ &\text{subject to } 0 \le P_d \le 1,\; d \in (1, \ldots, K), \quad 0 \le \mu_d \le 255, \quad 0 \le \sigma_d \le 25, \end{aligned} \quad (20)$$

where P_d, μ_d, and σ_d represent the probability, mean, and standard deviation of the class d. It is important to remark that the new objective function J_new allows the evaluation of a candidate solution in terms of the necessary number of Gaussian functions and its approximation quality. Under such circumstances, it can be used in combination with any other evolutionary method and not only with the proposed LS algorithm.
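To make the structure of (20) concrete, the sketch below evaluates a candidate x = (P_1, μ_1, σ_1, ..., P_K, μ_K, σ_K) against a 256-bin gray-level histogram. Here J is taken as the mean absolute approximation error and Q as the total pairwise overlapped area between classes; both are structural stand-ins for the paper's exact definitions, which are given in earlier equations.

```python
import numpy as np

def gaussians(x, P, mu, sigma):
    """Evaluate every weighted Gaussian p_d(x) of the mixture on the
    gray levels x; returns an array of shape (len(x), K)."""
    x = np.asarray(x, float)[:, None]
    P, mu, sigma = (np.asarray(v, float) for v in (P, mu, sigma))
    return (P * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))
            / (sigma * np.sqrt(2.0 * np.pi)))

def j_new(hist, P, mu, sigma, lam=0.01):
    """Structural sketch of J_new = J + lambda * Q from (20): J is the
    mean absolute histogram-approximation error, Q the accumulated
    overlap between every pair of classes (assumed forms)."""
    x = np.arange(256, dtype=float)
    p = gaussians(x, P, mu, sigma)                  # one column per class
    J = np.mean(np.abs(p.sum(axis=1) - hist))       # approximation error
    Q = 0.0
    for a in range(p.shape[1]):
        for b in range(a + 1, p.shape[1]):
            Q += np.minimum(p[:, a], p[:, b]).sum() # area shared by a, b
    return J + lam * Q
```

With this structure, a mixture whose classes overlap heavily scores worse than one that matches the histogram with well-separated classes, which is exactly the behavior illustrated in Figure 7.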

6.2. Complete Segmentation Algorithm. Once the new search strategy (LS) and the objective function (J_new) have been defined, the proposed segmentation algorithm can be summarized by Algorithm 2. The new algorithm combines operators defined by LS and operations for calculating the threshold values.

(Line 1) The algorithm sets the operative parameters F, L, g, gen, N, K, λ, and θ; they rule the behavior of the segmentation algorithm. (Line 2) Afterwards, the population L^0 is initialized considering N different random Gaussian mixtures of K functions. The idea is to generate N random Gaussian mixtures subjected to the constraints formulated in (20). The parameter K must hold a high value in order to correctly segment all images (recall that the algorithm is able to reduce the Gaussian mixture to its minimum expression). (Line 3) Then, the Gaussian mixtures are evolved by using the LS operators and the new objective function J_new; this process is repeated during gen cycles. (Line 8) After this procedure, the best Gaussian mixture l^gen_best is selected according to its objective function J_new. (Line 9) Then, the Gaussian mixture l^gen_best is reduced by eliminating those functions whose amplitudes are lower than θ (p_i(x) ≤ θ). (Line 10) Then, the threshold values T_i from the reduced model are calculated. (Line 11) Finally, the calculated T_i values are employed to segment the image. Figure 8 shows a flowchart of the complete segmentation procedure.


(1) Input: F, L, g, gen, N, K, λ, and θ
(2) Initialize L^0 (k = 0)
(3) until (k = gen)
(4)   F ← SolitaryOperation(L^k)  {Solitary operator (Section 3.1)}
(5)   L^{k+1} ← SocialOperation(L^k, F)  {Social operator (Section 3.2)}
(6)   k = k + 1
(7) end until
(8) Obtain l^gen_best
(9) Reduce l^gen_best
(10) Calculate the threshold values T_i from the reduced model
(11) Use T_i to segment the image

Algorithm 2: Segmentation LS algorithm.
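The reduction and thresholding steps (lines (9)-(11)) can be sketched as follows. Two simplifying assumptions are made here: the amplitude compared against θ is taken as the Gaussian peak value P_d/(σ_d·√(2π)), and each threshold T_i is taken as the midpoint between adjacent surviving means; both stand in for the paper's exact reduction and threshold computations.

```python
import numpy as np

def segment_with_reduced_mixture(image, P, mu, sigma, theta=1e-4):
    """Sketch of lines (9)-(11) of Algorithm 2: drop negligible classes,
    derive thresholds T_i from the reduced model, and label the pixels.
    Peak-amplitude test and midpoint thresholds are assumptions."""
    P, mu, sigma = (np.asarray(v, float) for v in (P, mu, sigma))
    peak = P / (sigma * np.sqrt(2.0 * np.pi))     # amplitude of each p_d(x)
    mu_kept = np.sort(mu[peak > theta])           # eliminate p_d(x) <= theta
    thresholds = (mu_kept[:-1] + mu_kept[1:]) / 2.0   # T_i between classes
    return np.digitize(image, thresholds), thresholds # label image, T_i
```

For instance, a ten-class candidate in which only a few components carry appreciable amplitude collapses to those few classes, and the pixels are then binned between the resulting thresholds.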

The proposed segmentation algorithm permits the automatic detection of the number of segmentation partitions (classes). Furthermore, due to its remarkable search capacities, the LS method maintains a better accuracy than previous algorithms based on evolutionary principles. However, the proposed method presents two disadvantages. The first is related to its implementation, which in general is more complex than that of most other segmentation methods based on evolutionary principles. The second refers to the segmentation procedure of the proposed approach, which does not consider any spatial pixel characteristics. As a consequence, pixels that should belong to a given region because of their position may be labeled as part of another region because of their gray level intensity. Such a fact adversely affects the segmentation performance of the method.

7. Segmentation Results

This section analyzes the performance of the proposed segmentation algorithm. The discussion is divided into three parts: the first shows the performance of the proposed LS segmentation algorithm, while the second presents a comparison between the proposed approach and other segmentation algorithms that are based on evolutionary and swarm methods. The comparison mainly considers the capacities of each algorithm to accurately and robustly approximate the image histogram. Finally, the third part presents an objective evaluation of the segmentation results produced by all algorithms that have been employed in the comparisons.

7.1. Performance of the LS Algorithm in Image Segmentation. This section presents two experiments that analyze the performance of the proposed approach. Table 5 presents the algorithm's parameters, which have been experimentally determined and kept for all the test images through all experiments.

First Image. The first test considers the histogram shown in Figure 9(b), while Figure 9(a) presents the original image. After applying the proposed algorithm, configured according to the parameters in Table 5, a minimum model of four classes emerges after the start from Gaussian

Table 5: Parameter setup for the LS segmentation algorithm.

F = 0.6, L = 0.2, g = 20, gen = 1000, N = 40, K = 10, λ = 0.01, θ = 0.0001

Table 6: Results of the reduced Gaussian mixture for the first and the second image.

Parameters | First image | Second image
P_1 | 0.004 | 0.032
μ_1 | 18.1 | 12.1
σ_1 | 8.2 | 2.9
P_2 | 0.0035 | 0.0015
μ_2 | 69.9 | 45.1
σ_2 | 18.4 | 24.9
P_3 | 0.01 | 0.006
μ_3 | 94.9 | 130.1
σ_3 | 8.9 | 17.9
P_4 | 0.007 | 0.02
μ_4 | 163.1 | 167.2
σ_4 | 29.9 | 10.1

mixtures of 10 classes. Considering 30 independent executions, the averaged parameters of the resultant Gaussian mixture are presented in Table 6. One final Gaussian mixture (ten classes), which has been obtained by LS, is presented in Figure 10. Furthermore, the approximation of the reduced Gaussian mixture is also visually compared with the original histogram in Figure 10. On the other hand, Figure 11 presents the segmented image after calculating the threshold points.

Second Image. For the second experiment, the image in Figure 12 is tested. The method aims to segment the image by using a reduced Gaussian mixture model obtained by the LS approach. After executing the algorithm according to the parameters from Table 5, the averaged parameters of the resultant Gaussian mixture are presented in Table 6. In order to assure consistency, the results are averaged considering 30 independent executions. Figure 13 shows the approximation quality that is obtained by the reduced Gaussian mixture model in (a) and the segmented image in (b).



Figure 8: Flowchart of the complete segmentation procedure.


Figure 9: (a) Original first image used in the first experiment and (b) its histogram.



Figure 10: Gaussian mixture obtained by LS for the first image.

Figure 11: Image segmented with the reduced Gaussian mixture.


Figure 12: (a) Original second image used in the first experiment and (b) its histogram.

7.2. Histogram Approximation Comparisons. This section discusses the comparison between the proposed segmentation algorithm and other evolutionary segmentation methods that have been proposed in the literature. The set of methods used in the experiments includes J + ABC [19], J + AIS [20], and J + DE [21]. These algorithms consider the combination of the original objective function J (17) with an evolutionary technique: the Artificial Bee Colony (ABC), the Artificial Immune Systems (AIS), and the Differential Evolution (DE) [21], respectively. Since the proposed segmentation approach considers the combination of the new objective function J_new (18) and the LS algorithm, it will be referred to as J_new + LS.

The comparison focuses mainly on the capacities of each algorithm to accurately and robustly approximate the image histogram.

In the experiments, the populations have been set to 40 (N = 40) individuals. The maximum iteration number for all functions has been set to 1000. Such a stop criterion has been selected to maintain compatibility with similar experiments



Figure 13: (a) Segmentation result obtained by LS for the second image and (b) the segmented image.


Figure 14: (a) Synthetic image used in the comparison and (b) its histogram.

that are reported in the literature [18]. The parameter setting for each of the segmentation algorithms in the comparison is described as follows:

(1) J + ABC [19]: the algorithm's parameters are configured as follows: the abandonment limit = 100, α = 0.05, and limit = 30.

(2) J + AIS [20]: it presents the following parameters: h = 120, N_c = 80, ρ = 10, P_r = 20, L = 22, and T_e = 0.01.

(3) J + DE [21]: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [21]. The crossover probability is CR = 0.9 and the weighting factor is F = 0.8.

(4) J_new + LS: the method is set according to the values described in Table 5.

In order to conduct the experiments, a synthetic image is designed to be used as a reference in the comparisons. The main idea is to know in advance the exact number of classes (and their parameters) that are contained in the image, so that the histogram can be considered as a ground truth. The

Table 7: Employed parameters for the design of the reference image.

P_i | μ_i | σ_i
(1) 0.05 | 40 | 8
(2) 0.04 | 100 | 10
(3) 0.05 | 160 | 8
(4) 0.027 | 220 | 15

synthetic image is divided into four sections. Each section corresponds to a different class, which is produced by setting each gray level pixel P_{v_i} to a value that is determined by the following model:

$$P_{v_i} = e^{-\left((x - \mu_i)^2 / 2\sigma_i^2\right)}, \quad (21)$$

where i represents the section, whereas μ_i and σ_i are the mean and the dispersion of the gray level pixels, respectively. The comparative study employs the 512 × 512 image shown in Figure 14(a) and the algorithm parameters presented in Table 7. Figure 14(b) illustrates the resultant histogram.



Figure 15: Convergence results. (a) Convergence of the methods J + ABC, J + AIS, and J + DE, considering Gaussian mixtures of 8 classes. (b) Convergence of the proposed method (reduced Gaussian mixture).


Figure 16: Segmentation results obtained by (a) several methods, including J + ABC, J + AIS, and J + DE, considering Gaussian mixtures of 8 classes, and (b) the proposed method (reduced Gaussian mixture).

In the comparison the discussion focuses on the follow-ing issues first of all accuracy second convergence andthird computational cost

Convergence This section analyzes the approximation con-vergence when the number of classes that are used bythe algorithm during the evolution process is different tothe actual number of classes in the image Recall that theproposed approach automatically finds the reduced Gaussianmixture which better adapts to the image histogram

In the experiment, the methods J + ABC, J + AIS, and J + DE are executed considering Gaussian mixtures composed of 8 functions. Under such circumstances, the number of classes to be detected is higher than the actual number of classes in the image. On the other hand, the proposed algorithm maintains the same configuration of Table 5; therefore, it can detect and calculate up to ten classes (K = 10).

As a result, the techniques J + ABC, J + AIS, and J + DE tend to overestimate the image histogram. This effect can be seen in Figure 15(a), where the resulting Gaussian functions are concentrated within the actual classes. Such behavior is a consequence of the evaluation considered by the original objective function J, which privileges only the approximation between the Gaussian mixture and the image histogram. This effect is graphically illustrated by Figure 16(a), which shows the pixel misclassification produced by the wrong segmentation of Figure 14(a). On the other hand, the proposed approach obtains a reduced Gaussian mixture model which allows the detection of each class from



Figure 17: Approximation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

the actual histogram (see Figure 15(b)). As a consequence, the segmentation is significantly improved by eliminating the pixel misclassification, as shown in Figure 16(b).

It is evident from Figure 15 that the techniques J + ABC, J + AIS, and J + DE all need a priori knowledge of the number of classes contained in the actual histogram in order to obtain a satisfactory result. On the other hand, the proposed algorithm is able to find a reduced Gaussian mixture whose classes coincide with the actual number of classes contained in the image histogram.

Accuracy. In this section, the comparison among the algorithms in terms of accuracy is reported. Most of the reported comparisons [19–26] are concerned with comparing the parameters of the resultant Gaussian mixtures by using real images. Under such circumstances, it is difficult to establish a clear reference for defining a meaningful error. Therefore, the image defined in Figure 14 has been used in the experiments, because its construction parameters are clearly defined in Table 7.

Since the parameter values from Table 7 act as ground truth, a simple averaged difference between them and the values computed by each algorithm could be used as a comparison error. However, as each parameter maintains a different active interval, it is necessary to express the differences in terms of percentage. Therefore, if Δβ is the parametric difference and β the ground truth parameter, the percentage error Δβ% can be defined as follows:

Δβ% = (Δβ / β) · 100.   (22)

In the segmentation problem, each Gaussian mixture represents a K-dimensional model, where each dimension corresponds to a Gaussian function of the optimization problem to be solved. Since each Gaussian function possesses three parameters, P_i, μ_i, and σ_i, the complete number of parameters is 3 · K dimensions. Therefore, the final error E produced by the final Gaussian mixture is

E = (1 / (3 · K)) Σ_{v=1}^{3·K} Δβ_v%,   (23)

where β_v ∈ {P_i, μ_i, σ_i}.

In order to compare accuracy, the algorithms J + ABC, J + AIS, J + DE, and the proposed approach are all executed over the image shown in Figure 14(a). The experiment aims



Figure 18: Segmentation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

to estimate the Gaussian mixture that best approximates the actual image histogram. The methods J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, although the algorithm finds a reduced Gaussian mixture of four functions, it is initially set with ten functions (K = 10). Table 8 presents the final Gaussian mixture parameters and the final error E. The final Gaussian mixture parameters have been averaged over 30 independent executions in order to assure consistency. A close inspection of Table 8 reveals that the proposed method achieves the smallest error (E) in comparison to the other algorithms. Figure 17 presents the histogram approximations produced by each algorithm, whereas Figure 18 shows their corresponding segmented images. Both illustrations present the median case obtained throughout 30 runs. Figure 18 exhibits that J + ABC, J + AIS, and J + DE present different levels of misclassification, which is nearly absent in the case of the proposed approach.
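The error measure of (22) and (23) used in Table 8 amounts to averaging relative percentage deviations over all 3 · K parameters. A minimal sketch, assuming the parameters of each class are grouped as (P_i, μ_i, σ_i) tuples:

```python
def mixture_error(ground_truth, estimated):
    """Final error E of (23): mean percentage deviation over all 3*K parameters.

    Both arguments are lists of (P_i, mu_i, sigma_i) tuples, one per class.
    """
    diffs = []
    for gt_class, est_class in zip(ground_truth, estimated):
        for beta, beta_hat in zip(gt_class, est_class):
            # Percentage error of one parameter, per (22).
            diffs.append(abs(beta_hat - beta) / beta * 100.0)
    return sum(diffs) / len(diffs)

# Table 7 acts as ground truth for the reference image.
GROUND_TRUTH = [(0.05, 40, 8), (0.04, 100, 10), (0.05, 160, 8), (0.027, 220, 15)]
```

Feeding each algorithm's averaged mixture parameters into `mixture_error` together with `GROUND_TRUTH` reproduces the kind of E values listed in Table 8.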

Computational Cost. The experiment aims to measure the complexity and the computing time spent by the J + ABC,

Table 8: Results of the reduced Gaussian mixture in terms of accuracy.

Algorithm     Gaussian function   P_i     μ_i      σ_i      E
J + ABC       (1)                 0.052    44.50    6.40    11.79
              (2)                 0.084    98.12   12.87
              (3)                 0.058   163.50    8.94
              (4)                 0.025   218.84    1.75
J + AIS       (1)                 0.072    31.01    6.14    22.01
              (2)                 0.054    88.52   12.21
              (3)                 0.039   149.21    9.14
              (4)                 0.034   248.41   13.84
J + DE        (1)                 0.041    35.74    7.86    13.57
              (2)                 0.036    90.57   11.97
              (3)                 0.059   148.47    9.01
              (4)                 0.020   201.34   13.02
J_new + LS    (1)                 0.049    40.12    7.5      3.98
              (2)                 0.041   102.04   10.4
              (3)                 0.052   168.66    8.3
              (4)                 0.025   110.92   15.2



Figure 19: Images employed in the computational cost analysis.

the J + AIS, the J + DE, and the J_new + LS algorithms while calculating the parameters of the Gaussian mixture in benchmark images (see Figures 19(a)–19(d)). J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, the algorithm finds a reduced Gaussian mixture of four functions despite being initialized with ten functions (K = 10). Table 9 shows the averaged measurements after 30 experiments. It is evident that J + ABC and J + DE are the slowest to converge (iterations), and that J + AIS shows the highest computational cost (time elapsed) because it requires operators which demand long times. On the other hand, J_new + LS shows an acceptable trade-off between its convergence time and its computational cost. Therefore, although the implementation of J_new + LS in general requires more code than most of the other evolution-based segmentators, such a fact is not reflected in the execution time. Finally, Figure 19 below shows the segmented images as they are



Figure 20: Experimental set used in the evaluation of the segmentation results.

Table 9: Iterations and time requirements of the J + ABC, J + AIS, J + DE, and J_new + LS algorithms as they are applied to segment benchmark images (see Figure 19).

Algorithm     Measure        (a)      (b)      (c)      (d)
J + ABC       Iterations     855      833      870      997
              Time elapsed   2.72 s   2.70 s   2.73 s   3.1 s
J + AIS       Iterations     725      704      754      812
              Time elapsed   1.78 s   1.61 s   1.41 s   2.01 s
J + DE        Iterations     657      627      694      742
              Time elapsed   1.25 s   1.12 s   1.45 s   1.88 s
J_new + LS    Iterations     314      298      307      402
              Time elapsed   0.98 s   0.84 s   0.72 s   1.02 s

Table 10: Evaluation of the segmentation results in terms of the ROS index.

Image               (a)        (b)        (c)        (d)
Number of classes   N_R = 4    N_R = 3    N_R = 4    N_R = 4
J + ABC             0.534      0.328      0.411      0.457
J + AIS             0.522      0.321      0.427      0.437
J + DE              0.512      0.312      0.408      0.418
J_new + LS          0.674      0.401      0.514      0.527

generated by each algorithm. It can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by an incorrect pixel classification.

7.3. Performance Evaluation of the Segmentation Results. This section presents an objective evaluation of the segmentation results produced by all algorithms in the comparisons. The ill-defined nature of the segmentation problem makes the evaluation of a candidate algorithm difficult [57]. Traditionally, the evaluation has been conducted by using supervised criteria [58], which are based on the computation of a dissimilarity measure between a segmentation result and a ground truth image. Recently, the use of unsupervised measures has substituted supervised indexes for the objective evaluation of segmentation results [59]. They

Table 11: Unimodal test functions.

f_1(x) = Σ_{i=1}^{n} x_i²,   S = [−100, 100]^n,   f_opt = 0
f_2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|,   S = [−10, 10]^n,   f_opt = 0
f_3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)²,   S = [−100, 100]^n,   f_opt = 0
f_4(x) = max_i {|x_i|, 1 ≤ i ≤ n},   S = [−100, 100]^n,   f_opt = 0
f_5(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²],   S = [−30, 30]^n,   f_opt = 0
f_6(x) = Σ_{i=1}^{n} (⌊x_i + 0.5⌋)²,   S = [−100, 100]^n,   f_opt = 0
f_7(x) = Σ_{i=1}^{n} i·x_i⁴ + rand(0, 1),   S = [−1.28, 1.28]^n,   f_opt = 0

enable the quantification of the quality of a segmentation result without a priori knowledge (ground truth image).

Evaluation Criteria. In this paper, the unsupervised index ROS proposed by Chabrier et al. [60] has been used to objectively evaluate the performance of each candidate algorithm. This index evaluates the segmentation quality in terms of the homogeneity within the segmented regions and the heterogeneity among the different regions. ROS can be computed as follows:

ROS = (D̄ − D̲) / 2,   (24)

where D̄ quantifies the homogeneity within the segmented regions. Similarly, D̲ measures the disparity among the regions. A segmentation result S_1 is considered better than S_2 if ROS_{S_1} > ROS_{S_2}. The intraregion homogeneity characterized by D̄ is calculated considering the following formulation:

D̄ = (1/N_R) Σ_{c=1}^{N_R} (R_c / I) · (σ_c / Σ_{l=1}^{N_R} σ_l),   (25)

where N_R represents the number of partitions into which the image has been segmented, R_c symbolizes the number of



Figure 21: Segmentation results used in the evaluation.

pixels contained in the partition c, whereas I considers the number of pixels that integrate the complete image. Similarly, σ_c represents the standard deviation of the partition c. On the other hand, the disparity among the regions, D̲, is computed as follows:

D̲ = (1/N_R) Σ_{c=1}^{N_R} (R_c / I) · [(1/(N_R − 1)) Σ_{l=1}^{N_R} |μ_c − μ_l| / 255],   (26)

where μ_c is the average gray level in the partition c.
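Under the definitions of (24)–(26), the ROS index can be computed from a flat list of gray levels and a parallel list of region labels. This is a sketch: it assumes the population standard deviation, at least two regions, and the combination of the two terms exactly as printed in (24):

```python
import math

def region_stats(pixels):
    """Mean and population standard deviation of a list of gray levels."""
    mu = sum(pixels) / len(pixels)
    var = sum((p - mu) ** 2 for p in pixels) / len(pixels)
    return mu, math.sqrt(var)

def ros_index(gray, labels):
    """Return (D_homog, D_disp, ROS) for flat gray-level and label lists.

    D_homog follows (25), D_disp follows (26), and ROS combines them per (24).
    Assumes at least two regions with nonzero total standard deviation.
    """
    regions = sorted(set(labels))
    n_r, total = len(regions), len(gray)
    stats = {c: region_stats([g for g, l in zip(gray, labels) if l == c])
             for c in regions}
    counts = {c: labels.count(c) for c in regions}
    sigma_sum = sum(s for _, s in stats.values())
    d_homog = sum(counts[c] / total * stats[c][1] / sigma_sum
                  for c in regions) / n_r
    d_disp = sum(counts[c] / total
                 * sum(abs(stats[c][0] - stats[l][0]) / 255 for l in regions)
                 / (n_r - 1)
                 for c in regions) / n_r
    return d_homog, d_disp, (d_homog - d_disp) / 2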

Experimental Protocol. In the comparison of segmentation results, a set of four classical images has been chosen to integrate the experimental set (Figure 20). The segmentation methods used in the comparison are J + ABC [19], J + AIS [20], and J + DE [21].

From all the segmentation methods used in the comparison, the proposed J_new + LS algorithm is the only one that has the capacity to automatically detect the number of segmentation partitions (classes). In order to conduct a fair comparison, all algorithms have been tested using the same number


Table 12: Multimodal test functions.

f_8(x) = Σ_{i=1}^{n} −x_i sin(√|x_i|),   S = [−500, 500]^n,   f_opt = −418.98 · n
f_9(x) = Σ_{i=1}^{n} [x_i² − 10 cos(2πx_i) + 10],   S = [−5.12, 5.12]^n,   f_opt = 0
f_10(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^{n} x_i²)) − exp((1/n) Σ_{i=1}^{n} cos(2πx_i)) + 20 + e,   S = [−32, 32]^n,   f_opt = 0
f_11(x) = (1/4000) Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos(x_i / √i) + 1,   S = [−600, 600]^n,   f_opt = 0
f_12(x) = (π/n){10 sin²(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)²[1 + 10 sin²(πy_{i+1})] + (y_n − 1)²} + Σ_{i=1}^{n} u(x_i, 10, 100, 4),   S = [−50, 50]^n,   f_opt = 0,
where y_i = 1 + (x_i + 1)/4 and
u(x_i, a, k, m) = { k(x_i − a)^m if x_i > a;  0 if −a ≤ x_i ≤ a;  k(−x_i − a)^m if x_i < −a }
f_13(x) = 0.1{sin²(3πx_1) + Σ_{i=1}^{n−1} (x_i − 1)²[1 + sin²(3πx_{i+1})] + (x_n − 1)²[1 + sin²(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4),   S = [−50, 50]^n,   f_opt = 0

of partitions. Therefore, in the experiments, the J_new + LS segmentation algorithm is first applied to detect the best possible number of partitions N_R. Once the number of partitions N_R is obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.

Figure 21 presents the segmentation results obtained by each algorithm considering the experimental set from Figure 20. On the other hand, Table 10 shows the evaluation of the segmentation results in terms of the ROS index. Such values represent the averaged measurements after 30 executions. From them, it can be seen that the proposed J_new + LS method obtains the best ROS indexes. Such values indicate that the proposed algorithm maintains the best balance between the homogeneity within the segmented regions and the heterogeneity among the different regions. From Figure 21, it can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by an incorrect pixel classification.

8. Conclusions

Despite the fact that several evolutionary methods have been successfully applied to image segmentation with interesting results, most of them have exhibited two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance.

In this paper, a new swarm algorithm for automatic image segmentation, called Locust Search (LS), has been presented. The proposed method eliminates the typical flaws presented by previous evolutionary approaches by combining a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments have been divided into two parts. First, the proposed LS method has been compared to other well-known evolutionary techniques on a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm has been compared to other segmentation methods based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

Several research directions will be considered for future work, such as the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, n is the dimension of the function, f_opt is the minimum value of the function, and S is a subset of R^n. The optimum location (x_opt) for the functions in Table 11 is [0]^n, except for f_5, whose x_opt is [1]^n.

The optimum locations (x_opt) for the functions in Table 12 are [0]^n, except for f_8, with x_opt in [420.96]^n, and f_12-f_13, with x_opt in [1]^n.
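For reference, several of the benchmark functions in Tables 11 and 12 translate directly into code; the sphere (f_1), Rosenbrock (f_5), Rastrigin (f_9), and Griewank (f_11) functions are shown below as a sketch (the paper's evaluation harness is not specified):

```python
import math

def f1(x):
    """f_1: sphere function, minimum 0 at the origin."""
    return sum(v * v for v in x)

def f5(x):
    """f_5: Rosenbrock function, minimum 0 at (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def f9(x):
    """f_9: Rastrigin function, minimum 0 at the origin."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def f11(x):
    """f_11: Griewank function, minimum 0 at the origin."""
    s = sum(v * v for v in x) / 4000.0
    p = 1.0
    for i, v in enumerate(x, start=1):
        p *= math.cos(v / math.sqrt(i))
    return s - p + 1.0
```

Evaluating these functions at their listed optimum locations returns the f_opt values of the tables, which is a quick sanity check on any implementation.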

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgment

The present research has been supported under the Grant CONACYT CB 181053.

References

[1] H. Zhang, J. E. Fritts, and S. A. Goldman, "Image segmentation evaluation: a survey of unsupervised methods," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 260–280, 2008.
[2] T. Uemura, G. Koutaki, and K. Uchimura, "Image segmentation based on edge detection using boundary code," International Journal of Innovative Computing, vol. 7, pp. 6073–6083, 2011.
[3] L. Wang, H. Wu, and C. Pan, "Region-based image segmentation with local signed difference energy," Pattern Recognition Letters, vol. 34, no. 6, pp. 637–645, 2013.
[4] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
[5] R. Peng and P. K. Varshney, "On performance limits of image segmentation algorithms," Computer Vision and Image Understanding, vol. 132, pp. 24–38, 2015.
[6] M. A. Balafar, "Gaussian mixture model based segmentation methods for brain MRI images," Artificial Intelligence Review, vol. 41, no. 3, pp. 429–439, 2014.
[7] G. J. McLachlan and S. Rathnayake, "On the number of components in a Gaussian mixture model," Data Mining and Knowledge Discovery, vol. 4, no. 5, pp. 341–355, 2014.
[8] D. Oliva, V. Osuna-Enciso, E. Cuevas, G. Pajares, M. Pérez-Cisneros, and D. Zaldívar, "Improving segmentation velocity using an evolutionary method," Expert Systems with Applications, vol. 42, no. 14, pp. 5874–5886, 2015.
[9] Z.-W. Ye, M.-W. Wang, W. Liu, and S.-B. Chen, "Fuzzy entropy based optimal thresholding using bat algorithm," Applied Soft Computing, vol. 31, pp. 381–395, 2015.
[10] S. Sarkar, S. Das, and S. Sinha Chaudhuri, "A multilevel color image thresholding scheme based on minimum cross entropy and differential evolution," Pattern Recognition Letters, vol. 54, pp. 27–35, 2015.
[11] A. K. Bhandari, A. Kumar, and G. K. Singh, "Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur's, Otsu and Tsallis functions," Expert Systems with Applications, vol. 42, no. 3, pp. 1573–1601, 2015.
[12] H. Permuter, J. Francos, and I. Jermyn, "A study of Gaussian mixture models of color and texture features for image classification and segmentation," Pattern Recognition, vol. 39, no. 4, pp. 695–706, 2006.
[13] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1–38, 1977.
[14] Z. Zhang, C. Chen, J. Sun, and K. L. Chan, "EM algorithms for Gaussian mixtures with split-and-merge operation," Pattern Recognition, vol. 36, no. 9, pp. 1973–1983, 2003.
[15] H. Park, S.-I. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755–764, 2000.
[16] H. Park and T. Ozeki, "Singularity and slow convergence of the EM algorithm for Gaussian mixtures," Neural Processing Letters, vol. 29, no. 1, pp. 45–59, 2009.
[17] L. Gupta and T. Sortrakul, "A Gaussian-mixture-based image segmentation algorithm," Pattern Recognition, vol. 31, no. 3, pp. 315–325, 1998.
[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213–1219, 2013.
[19] E. Cuevas, F. Sención, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321–336, 2012.
[20] E. Cuevas, V. Osuna-Enciso, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.
[21] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.
[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357–381, 2014.
[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and M. Pérez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.
[24] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805–818, 2011.
[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701–713, 2009.
[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661–673, 2009.
[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.
[28] R. Storn and K. Price, "Differential Evolution: a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.
[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515–4538, 2011.
[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing Journal, vol. 9, no. 3, pp. 1149–1155, 2009.
[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119–135, 2013.
[32] W. Gong, Á. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364–5386, 2011.
[33] D. M. Gordon, "The organization of work in social insect colonies," Complexity, vol. 8, no. 1, pp. 43–46, 2002.
[34] S. Kizaki and M. Katori, "Stochastic lattice model for locust outbreak," Physica A: Statistical Mechanics and its Applications, vol. 266, no. 1–4, pp. 339–342, 1999.


[35] S. M. Rogers, D. A. Cullen, M. L. Anstey et al., "Rapid behavioural gregarization in the desert locust Schistocerca gregaria entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9–26, 2014.
[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal Special Topics, vol. 157, no. 1, pp. 93–109, 2008.
[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.
[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.
[39] B. Hölldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.
[40] B. Hölldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.
[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101–112, 2013.
[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196–205, 2012.
[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723–738, 2014.
[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.
[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskål)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.
[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.
[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256–272, 2014.
[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635–672, 2005.
[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[50] F. Herrera, M. Lozano, and A. M. Sánchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309–338, 2003.
[51] E. Cuevas, D. Zaldívar, M. Pérez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93–107, 2011.
[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140–157, 2015.
[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.
[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.
[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.
[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.
[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.
[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335–1346, 1996.
[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.



(1) Input: F, L, g, gen, N, K, λ, and θ
(2) Initialize L^0 (k = 0)
(3) until (k = gen)
(4)     F ← SolitaryOperation(L^k)  // Solitary operator (Section 3.1)
(5)     L^(k+1) ← SocialOperation(L^k, F)  // Social operator (Section 3.2)
(6)     k = k + 1
(7) end until
(8) Obtain I_best^gen
(9) Reduce I_best^gen
(10) Calculate the threshold values T_i from the reduced model
(11) Use T_i to segment the image

Algorithm 2: Segmentation LS algorithm.
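The main loop of Algorithm 2 can be sketched as an iteration over two operator callbacks. The operators themselves (Sections 3.1 and 3.2) and the model-reduction step are specific to the method; the function names and signatures below are illustrative, not the paper's API:

```python
def locust_search(init_population, solitary_op, social_op, gen, fitness):
    """Run the main loop of Algorithm 2 and return the best individual.

    init_population: candidate solutions L^0.
    solitary_op:     maps L^k to the intermediate set F (Section 3.1).
    social_op:       maps (L^k, F) to L^(k+1) (Section 3.2).
    fitness:         score to maximize when extracting I_best^gen.
    """
    population = list(init_population)
    for _ in range(gen):                          # until (k = gen)
        food = solitary_op(population)            # F <- SolitaryOperation(L^k)
        population = social_op(population, food)  # L^(k+1) <- SocialOperation(L^k, F)
    return max(population, key=fitness)           # obtain I_best^gen
```

The reduction of I_best^gen and the threshold extraction (steps 9-11 of Algorithm 2) would follow this call.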

The proposed segmentation algorithm makes it possible to automatically detect the number of segmentation partitions (classes). Furthermore, due to its remarkable search capacities, the LS method maintains better accuracy than previous algorithms based on evolutionary principles. However, the proposed method presents two disadvantages. The first is related to its implementation, which in general is more complex than that of most other segmentators based on evolutionary basics. The second refers to the segmentation procedure of the proposed approach, which does not consider any spatial pixel characteristics. As a consequence, pixels that may belong to a determined region due to their position are labeled as part of another region due to their gray level intensity. Such a fact adversely affects the segmentation performance of the method.

7. Segmentation Results

This section analyses the performance of the proposed segmentation algorithm. The discussion is divided into three parts: the first one shows the performance of the proposed LS segmentation algorithm, while the second presents a comparison between the proposed approach and other segmentation algorithms that are based on evolutionary and swarm methods. The comparison mainly considers the capacities of each algorithm to accurately and robustly approximate the image histogram. Finally, the third part presents an objective evaluation of the segmentation results produced by all algorithms that have been employed in the comparisons.

7.1. Performance of LS Algorithm in Image Segmentation. This section presents two experiments that analyze the performance of the proposed approach. Table 5 presents the algorithm's parameters, which have been experimentally determined and kept for all the test images through all experiments.

First Image. The first test considers the histogram shown by Figure 9(b), while Figure 9(a) presents the original image. After applying the proposed algorithm, configured according to the parameters in Table 5, a minimum model of four classes emerges after the start from Gaussian

Table 5: Parameter setup for the LS segmentation algorithm.

F     L     g    gen    N    K    λ      θ
0.6   0.2   20   1000   40   10   0.01   0.0001

Table 6: Results of the reduced Gaussian mixture for the first and the second image.

Parameters   First image   Second image
P_1          0.004         0.032
μ_1          18.1          12.1
σ_1          8.2           2.9
P_2          0.0035        0.0015
μ_2          69.9          45.1
σ_2          18.4          24.9
P_3          0.01          0.006
μ_3          94.9          130.1
σ_3          8.9           17.9
P_4          0.007         0.02
μ_4          163.1         167.2
σ_4          29.9          10.1

mixtures of 10 classes. Considering 30 independent executions, the averaged parameters of the resultant Gaussian mixture are presented in Table 6. One final Gaussian mixture (ten classes), as obtained by LS, is presented in Figure 10. Furthermore, the approximation of the reduced Gaussian mixture is also visually compared with the original histogram in Figure 10. On the other hand, Figure 11 presents the segmented image after calculating the threshold points.

Second Image. For the second experiment, the image in Figure 12 is tested. The method aims to segment the image by using a reduced Gaussian mixture model obtained by the LS approach. After executing the algorithm according to the parameters from Table 5, the resulting averaged parameters of the resultant Gaussian mixture are presented in Table 6. In order to assure consistency, the results are averaged over 30 independent executions. Figure 13 shows the approximation quality obtained by the reduced Gaussian mixture model in (a) and the segmented image in (b).


Figure 8: Flowchart of the complete segmentation procedure (input of the parameters F, L, g, gen, N, K, λ, and θ; initialization of L^0; the solitary and social operators applied until k > gen; reduction of I^gen_best; calculation of the threshold values T_i from the reduced model; segmentation of the image with T_i).

Figure 9: (a) Original first image used in the first experiment and (b) its histogram.


Figure 10: Gaussian mixture obtained by LS for the first image (histogram; classes 1-4; reduced Gaussian mixture; eliminated classes).

Figure 11: Image segmented with the reduced Gaussian mixture.

Figure 12: (a) Original second image used in the first experiment and (b) its histogram.

7.2. Histogram Approximation Comparisons. This section discusses the comparison between the proposed segmentation algorithm and other evolutionary segmentation methods that have been proposed in the literature. The set of methods used in the experiments includes the J + ABC [19], J + AIS [20], and J + DE [21]. These algorithms consider the combination of the original objective function J (17) and an evolutionary technique, namely, the Artificial Bee Colony (ABC), the Artificial Immune Systems (AIS), and the Differential Evolution (DE) [21], respectively. Since the proposed segmentation approach considers the combination of the new objective function Jnew (18) and the LS algorithm, it will be referred to as Jnew + LS.

The comparison focuses mainly on the capacities of each algorithm to accurately and robustly approximate the image histogram.

In the experiments, the populations have been set to 40 (N = 40) individuals. The maximum iteration number for all functions has been set to 1000. Such a stop criterion has been considered to maintain compatibility with similar experiments


Figure 13: (a) Approximation result obtained by LS for the second image and (b) the segmented image.

Figure 14: (a) Synthetic image used in the comparison (sections (1)-(4)) and (b) its histogram.

that are reported in the literature [18]. The parameter setting for each of the segmentation algorithms in the comparison is described as follows:

(1) J + ABC [19]: the parameters are configured as the abandonment limit = 100, α = 0.05, and limit = 30.

(2) J + AIS [20]: it presents the following parameters: h = 120, N_c = 80, ρ = 10, P_r = 20, L = 22, and T_e = 0.01.

(3) J + DE [21]: the DE/rand/1 scheme is employed. The parameter settings follow the instructions in [21]. The crossover probability is CR = 0.9 and the weighting factor is F = 0.8.

(4) In Jnew + LS, the method is set according to the values described in Table 5.

In order to conduct the experiments, a synthetic image is designed to be used as a reference in the comparisons. The main idea is to know in advance the exact number of classes (and their parameters) contained in the image, so that the histogram can be considered as a ground truth. The

Table 7: Employed parameters for the design of the reference image.

      P_i     μ_i   σ_i
(1)   0.05    40    8
(2)   0.04    100   10
(3)   0.05    160   8
(4)   0.027   220   15

synthetic image is divided into four sections. Each section corresponds to a different class, which is produced by setting each gray level pixel p_{v_i} to a value determined by the following model:

p_{v_i} = e^{-((x - \mu_i)^2 / 2\sigma_i^2)},  (21)

where i represents the section, whereas \mu_i and \sigma_i are the mean and the dispersion of the gray level pixels, respectively. The comparative study employs the 512 × 512 image shown in Figure 14(a) and the algorithm's parameters presented in Table 7. Figure 14(b) illustrates the resultant histogram.
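Evaluated directly, the model of (21) gives a value of 1 at the section mean and decays away from it. A minimal sketch follows; the mapping of positions x onto the 512 × 512 pixel grid is omitted, and the function name is our own.

```python
import math

def pixel_value(x, mu, sigma):
    """Eq. (21): value assigned to gray level position x in a section
    with mean mu and dispersion sigma of the synthetic image."""
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))
```

For instance, with the first section of Table 7 (μ = 40, σ = 8), the value at the mean is exactly 1, and one dispersion away from the mean it equals e^{-0.5}.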


Figure 15: Convergence results. (a) Convergence of the J + ABC, J + AIS, and J + DE methods considering Gaussian mixtures of 8 classes. (b) Convergence of the proposed method (reduced Gaussian mixture).

Figure 16: Segmentation results obtained by (a) several methods including J + ABC, J + AIS, and J + DE considering Gaussian mixtures of 8 classes and (b) the proposed method (reduced Gaussian mixture).

In the comparison, the discussion focuses on the following issues: first, accuracy; second, convergence; and third, computational cost.

Convergence. This section analyzes the approximation convergence when the number of classes used by the algorithm during the evolution process is different from the actual number of classes in the image. Recall that the proposed approach automatically finds the reduced Gaussian mixture that better adapts to the image histogram.

In the experiment, the methods J + ABC, J + AIS, and J + DE are executed considering Gaussian mixtures composed of 8 functions. Under such circumstances, the number of classes to be detected is higher than the actual number of classes in the image. On the other hand, the proposed algorithm maintains the same configuration of Table 5. Therefore, it can detect and calculate up to ten classes (K = 10).

As a result, the techniques J + ABC, J + AIS, and J + DE tend to overestimate the image histogram. This effect can be seen in Figure 15(a), where the resulting Gaussian functions are concentrated within the actual classes. Such a behavior is a consequence of the evaluation considered by the original objective function J, which privileges only the approximation between the Gaussian mixture and the image histogram. This effect can be graphically illustrated by Figure 16(a), which shows the pixel misclassification produced by the wrong segmentation of Figure 14(a). On the other hand, the proposed approach obtains a reduced Gaussian mixture model which allows the detection of each class from


Figure 17: Approximation results in terms of accuracy: (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed Jnew + LS approach.

the actual histogram (see Figure 15(b)). As a consequence, the segmentation is significantly improved by eliminating the pixel misclassification, as is shown in Figure 16(b).

It is evident from Figure 15 that the techniques J + ABC, J + AIS, and J + DE all need a priori knowledge of the number of classes contained in the actual histogram in order to obtain a satisfactory result. On the other hand, the proposed algorithm is able to find a reduced Gaussian mixture whose classes coincide with the actual number of classes contained in the image histogram.

Accuracy. In this section, the comparison among the algorithms in terms of accuracy is reported. Most of the reported comparisons [19-26] are concerned with comparing the parameters of the resultant Gaussian mixtures by using real images. Under such circumstances, it is difficult to consider a clear reference in order to define a meaningful error. Therefore, the image defined in Figure 14 has been used in the experiments, because its construction parameters are clearly defined in Table 7.

Since the parameter values from Table 7 act as ground truth, a simple averaged difference between them and the values computed by each algorithm could be used as

comparison error. However, as each parameter maintains different active intervals, it is necessary to express the differences in terms of percentage. Therefore, if \Delta\beta is the parametric difference and \beta the ground truth parameter, the percentage error \Delta\beta\% can be defined as follows:

\Delta\beta\% = (\Delta\beta / \beta) \cdot 100.  (22)

In the segmentation problem, each Gaussian mixture represents a K-dimensional model, where each dimension corresponds to a Gaussian function of the optimization problem to be solved. Since each Gaussian function possesses three parameters, P_i, \mu_i, and \sigma_i, the complete number of parameters is 3 \cdot K dimensions. Therefore, the final error E produced by the final Gaussian mixture is

E = (1 / (K \cdot 3)) \sum_{v=1}^{K \cdot 3} \Delta\beta_v\%,  (23)

where \beta_v \in \{P_i, \mu_i, \sigma_i\}. In order to compare accuracy, the algorithms J + ABC, J + AIS, J + DE, and the proposed approach are all executed over the image shown in Figure 14(a). The experiment aims


Figure 18: Segmentation results in terms of accuracy: (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed Jnew + LS approach.

to estimate the Gaussian mixture that better approximates the actual image histogram. The methods J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the Jnew + LS method, although the algorithm finds a reduced Gaussian mixture of four functions, it is initially set with ten functions (K = 10). Table 8 presents the final Gaussian mixture parameters and the final error E. The final Gaussian mixture parameters have been averaged over 30 independent executions in order to assure consistency. A close inspection of Table 8 reveals that the proposed method is able to achieve the smallest error (E) in comparison to the other algorithms. Figure 17 presents the histogram approximations produced by each algorithm, whereas Figure 18 shows their corresponding segmented images. Both illustrations present the median case obtained throughout 30 runs. Figure 18 exhibits that J + ABC, J + AIS, and J + DE present different levels of misclassification, which are nearly absent in the proposed approach.
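The error measure of (22)-(23) can be sketched directly; the function names below are our own, and the class parameters are passed as (P, μ, σ) triples.

```python
def percentage_error(est, gt):
    """Eq. (22): parametric difference expressed in percent of the ground truth."""
    return abs(est - gt) / gt * 100.0

def mixture_error(est_classes, gt_classes):
    """Eq. (23): mean percentage error over the 3*K parameters.
    Each class is a (P, mu, sigma) triple; classes are assumed matched by index."""
    diffs = [percentage_error(e, g)
             for ec, gc in zip(est_classes, gt_classes)
             for e, g in zip(ec, gc)]
    return sum(diffs) / len(diffs)
```

For a single class whose mean is off by 10% while the weight and dispersion are exact, the three parameter errors are 0%, 10%, and 0%, so E averages to 10/3 percent.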

Computational Cost. This experiment aims to measure the complexity and the computing time spent by the J + ABC,

Table 8: Results of the reduced Gaussian mixture in terms of accuracy.

Algorithm   Gaussian function   P_i     μ_i      σ_i     E
J + ABC     (1)                 0.052   44.5     6.4     11.79
            (2)                 0.084   98.12    12.87
            (3)                 0.058   163.50   8.94
            (4)                 0.025   218.84   1.75
J + AIS     (1)                 0.072   31.01    6.14    22.01
            (2)                 0.054   88.52    12.21
            (3)                 0.039   149.21   9.14
            (4)                 0.034   248.41   13.84
J + DE      (1)                 0.041   35.74    7.86    13.57
            (2)                 0.036   90.57    11.97
            (3)                 0.059   148.47   9.01
            (4)                 0.020   201.34   13.02
Jnew + LS   (1)                 0.049   40.12    7.5     3.98
            (2)                 0.041   102.04   10.4
            (3)                 0.052   168.66   8.3
            (4)                 0.025   110.92   15.2


Figure 19: Images employed in the computational cost analysis (rows: J + ABC, J + AIS, J + DE, and Jnew + LS; columns: benchmark images (a)-(d)).

the J + AIS, the J + DE, and the Jnew + LS algorithms while calculating the parameters of the Gaussian mixture in benchmark images (see Figures 19(a)-19(d)). J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the Jnew + LS method, the algorithm finds a reduced Gaussian mixture of four functions despite being initialized with ten functions (K = 10). Table 9 shows the averaged measurements after 30 experiments. It is evident that the J + ABC and J + DE are the slowest to converge (iterations) and that the J + AIS shows the highest computational cost (time elapsed) because it requires operators that demand long execution times. On the other hand, the Jnew + LS shows an acceptable trade-off between its convergence time and its computational cost. Therefore, although the implementation of Jnew + LS in general requires more code than most other evolution-based segmentators, such a fact is not reflected in the execution time. Finally, Figure 19 shows the segmented images as they are


Figure 20: Experimental set used in the evaluation of the segmentation results: images (a)-(d).

Table 9: Iterations and time requirements of the J + ABC, the J + AIS, the J + DE, and the Jnew + LS algorithms as they are applied to segment the benchmark images (see Figure 19).

                   (a)       (b)       (c)       (d)
J + ABC
  Iterations       855       833       870       997
  Time elapsed     2.72 s    2.70 s    2.73 s    3.1 s
J + AIS
  Iterations       725       704       754       812
  Time elapsed     1.78 s    1.61 s    1.41 s    2.01 s
J + DE
  Iterations       657       627       694       742
  Time elapsed     1.25 s    1.12 s    1.45 s    1.88 s
Jnew + LS
  Iterations       314       298       307       402
  Time elapsed     0.98 s    0.84 s    0.72 s    1.02 s

Table 10: Evaluation of the segmentation results in terms of the ROS index.

Image               (a)        (b)        (c)        (d)
Number of classes   N_R = 4    N_R = 3    N_R = 4    N_R = 4
J + ABC             0.534      0.328      0.411      0.457
J + AIS             0.522      0.321      0.427      0.437
J + DE              0.512      0.312      0.408      0.418
Jnew + LS           0.674      0.401      0.514      0.527

generated by each algorithm. It can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts that are produced by an incorrect pixel classification.

7.3. Performance Evaluation of the Segmentation Results. This section presents an objective evaluation of the segmentation results produced by all algorithms in the comparisons. The ill-defined nature of the segmentation problem makes the evaluation of a candidate algorithm difficult [57]. Traditionally, the evaluation has been conducted by using supervised criteria [58], which are based on the computation of a dissimilarity measure between a segmentation result and a ground truth image. Recently, the use of unsupervised measures has substituted supervised indexes for the objective evaluation of segmentation results [59]. They

Table 11: Unimodal test functions.

Test function                                                           S                  f_opt
f_1(x) = \sum_{i=1}^{n} x_i^2                                           [-100, 100]^n      0
f_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|                   [-10, 10]^n        0
f_3(x) = \sum_{i=1}^{n} ( \sum_{j=1}^{i} x_j )^2                        [-100, 100]^n      0
f_4(x) = \max_i { |x_i|, 1 \le i \le n }                                [-100, 100]^n      0
f_5(x) = \sum_{i=1}^{n-1} [ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 ]     [-30, 30]^n        0
f_6(x) = \sum_{i=1}^{n} ( \lfloor x_i + 0.5 \rfloor )^2                 [-100, 100]^n      0
f_7(x) = \sum_{i=1}^{n} i x_i^4 + rand(0, 1)                            [-1.28, 1.28]^n    0

enable the quantification of the quality of a segmentation result without a priori knowledge (ground truth image).

Evaluation Criteria. In this paper, the unsupervised index ROS proposed by Chabrier et al. [60] has been used to objectively evaluate the performance of each candidate algorithm. This index evaluates the segmentation quality in terms of the homogeneity within segmented regions and the heterogeneity among the different regions. ROS can be computed as follows:

ROS = (\bar{D} - D) / 2,  (24)

where D quantifies the homogeneity within segmented regions, whereas \bar{D} measures the disparity among the regions. A segmentation result S_1 is considered better than S_2 if ROS_{S_1} > ROS_{S_2}. The intraregion homogeneity characterized by D is calculated considering the following formulation:

D = (1 / N_R) \sum_{c=1}^{N_R} (R_c / I) \cdot ( \sigma_c / \sum_{l=1}^{N_R} \sigma_l ),  (25)

where N_R represents the number of partitions into which the image has been segmented, R_c symbolizes the number of


Figure 21: Segmentation results used in the evaluation (rows: J + ABC, J + AIS, J + DE, and Jnew + LS; columns: images (a)-(d)).

pixels contained in the partition c, whereas I considers the number of pixels that integrate the complete image. Similarly, \sigma_c represents the standard deviation of the partition c. On the other hand, the disparity among the regions \bar{D} is computed as follows:

\bar{D} = (1 / N_R) \sum_{c=1}^{N_R} (R_c / I) \cdot [ (1 / (N_R - 1)) \sum_{l=1}^{N_R} |\mu_c - \mu_l| / 255 ],  (26)

where \mu_c is the average gray level in the partition c.
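Equations (24)-(26) can be sketched as follows. The sign convention of (24) is read here as (D̄ − D)/2, which rewards high disparity among regions and low dispersion within them; this reading, and the guard for all-constant regions, are assumptions of the sketch.

```python
from statistics import mean, pstdev

def ros_index(regions):
    """ROS index of eqs. (24)-(26).
    regions: list of gray-level lists, one list per partition c."""
    nr = len(regions)
    total = sum(len(r) for r in regions)          # I: pixels in the whole image
    mus = [mean(r) for r in regions]              # average gray level per region
    sigmas = [pstdev(r) for r in regions]         # dispersion per region
    sigma_sum = sum(sigmas) or 1.0                # guard: all-constant regions
    # eq. (25): weighted, normalized within-region dispersion D
    d = sum((len(r) / total) * (s / sigma_sum)
            for r, s in zip(regions, sigmas)) / nr
    # eq. (26): weighted disparity among region means, D bar
    d_bar = sum((len(r) / total) *
                (sum(abs(mus[c] - m) for m in mus) / ((nr - 1) * 255.0))
                for c, r in enumerate(regions)) / nr
    # eq. (24), assumed sign convention (D bar - D) / 2
    return (d_bar - d) / 2.0
```

For two equally sized constant regions at gray levels 10 and 210, the within-region term vanishes and the index reduces to half the normalized mean disparity.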

Experimental Protocol. In the comparison of segmentation results, a set of four classical images has been chosen to integrate the experimental set (Figure 20). The segmentation methods used in the comparison are J + ABC [19], J + AIS [20], and J + DE [21].

Of all the segmentation methods used in the comparison, the proposed Jnew + LS algorithm is the only one that has the capacity to automatically detect the number of segmentation partitions (classes). In order to conduct a fair comparison, all algorithms have been run using the same number


Table 12: Multimodal test functions.

Test function                                                                                            S                  f_opt
f_8(x) = \sum_{i=1}^{n} -x_i \sin( \sqrt{|x_i|} )                                                        [-500, 500]^n      -418.98 * n
f_9(x) = \sum_{i=1}^{n} [ x_i^2 - 10 \cos(2 \pi x_i) + 10 ]                                              [-5.12, 5.12]^n    0
f_10(x) = -20 \exp( -0.2 \sqrt{ (1/n) \sum_{i=1}^{n} x_i^2 } ) - \exp( (1/n) \sum_{i=1}^{n} \cos(2 \pi x_i) ) + 20 + e    [-32, 32]^n    0
f_11(x) = (1/4000) \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos( x_i / \sqrt{i} ) + 1                     [-600, 600]^n      0
f_12(x) = (\pi / n) { 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 [ 1 + 10 \sin^2(\pi y_{i+1}) ] + (y_n - 1)^2 } + \sum_{i=1}^{n} u(x_i, 10, 100, 4)    [-50, 50]^n    0
    where y_i = 1 + (x_i + 1)/4 and
    u(x_i, a, k, m) = { k (x_i - a)^m   if x_i > a;   0   if -a \le x_i \le a;   k (-x_i - a)^m   if x_i < -a }
f_13(x) = 0.1 { \sin^2(3 \pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 [ 1 + \sin^2(3 \pi x_{i+1}) ] + (x_n - 1)^2 [ 1 + \sin^2(2 \pi x_n) ] } + \sum_{i=1}^{n} u(x_i, 5, 100, 4)    [-50, 50]^n    0

of partitions. Therefore, in the experiments, the Jnew + LS segmentation algorithm is firstly applied to detect the best possible number of partitions N_R. Once the number of partitions N_R is obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.

Figure 21 presents the segmentation results obtained by each algorithm considering the experimental set from Figure 20. On the other hand, Table 10 shows the evaluation of the segmentation results in terms of the ROS index. Such values represent the averaged measurements after 30 executions. From them, it can be seen that the proposed Jnew + LS method obtains the best ROS indexes. Such values indicate that the proposed algorithm maintains the best balance between the homogeneity within segmented regions and the heterogeneity among the different regions. From Figure 21, it can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts that are produced by an incorrect pixel classification.

8. Conclusions

Despite the fact that several evolutionary methods have been successfully applied to image segmentation with interesting results, most of them have exhibited two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance.

In this paper, a new swarm algorithm for automatic image segmentation, called Locust Search (LS), has been presented. The proposed method eliminates the typical flaws of previous evolutionary approaches by combining a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments have been divided into two parts. First, the proposed LS method has been compared to other well-known evolutionary techniques on a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm has been compared to other segmentation methods based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

Several research directions will be considered for future work: the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, n is the dimension of the function, f_opt is the minimum value of the function, and S is a subset of R^n. The optimum location (x_opt) for the functions in Table 11 is at [0]^n, except for f_5, whose x_opt is at [1]^n.

The optimum locations (x_opt) for the functions in Table 12 are at [0]^n, except for f_8 at [420.96]^n, f_12 at [-1]^n, and f_13 at [1]^n.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgment

The present research has been supported under the Grant CONACYT CB 181053.

References

[1] H Zhang J E Fritts and S A Goldman ldquoImage segmentationevaluation a survey of unsupervised methodsrdquo ComputerVision and Image Understanding vol 110 no 2 pp 260ndash2802008

[2] T Uemura G Koutaki and K Uchimura ldquoImage segmentationbased on edge detection using boundary coderdquo InternationalJournal of Innovative Computing vol 7 pp 6073ndash6083 2011

[3] L Wang H Wu and C Pan ldquoRegion-based image segmen-tation with local signed difference energyrdquo Pattern RecognitionLetters vol 34 no 6 pp 637ndash645 2013

[4] N Otsu ldquoA thresholding selection method from gray-levelhistogramrdquo IEEETransactions on SystemsMan andCyberneticsvol 9 no 1 pp 62ndash66 1979

[5] R Peng and P K Varshney ldquoOn performance limits of imagesegmentation algorithmsrdquo Computer Vision and Image Under-standing vol 132 pp 24ndash38 2015

[6] M A Balafar ldquoGaussian mixture model based segmentationmethods for brain MRI imagesrdquo Artificial Intelligence Reviewvol 41 no 3 pp 429ndash439 2014

[7] G J McLachlan and S Rathnayake ldquoOn the number ofcomponents in a Gaussian mixture modelrdquo Data Mining andKnowledge Discovery vol 4 no 5 pp 341ndash355 2014

[8] D Oliva V Osuna-Enciso E Cuevas G Pajares M Perez-Cisneros and D Zaldıvar ldquoImproving segmentation velocityusing an evolutionary methodrdquo Expert Systems with Applica-tions vol 42 no 14 pp 5874ndash5886 2015

[9] Z-W Ye M-W Wang W Liu and S-B Chen ldquoFuzzy entropybased optimal thresholding using bat algorithmrdquo Applied SoftComputing vol 31 pp 381ndash395 2015

[10] S Sarkar S Das and S Sinha Chaudhuri ldquoA multilevel colorimage thresholding scheme based on minimum cross entropyand differential evolutionrdquo Pattern Recognition Letters vol 54pp 27ndash35 2015

[11] A K Bhandari A Kumar and G K Singh ldquoModified artificialbee colony based computationally efficient multilevel thresh-olding for satellite image segmentation using Kapurrsquos Otsu andTsallis functionsrdquo Expert Systems with Applications vol 42 no3 pp 1573ndash1601 2015

[12] H Permuter J Francos and I Jermyn ldquoA study of Gaussianmixture models of color and texture features for image classi-fication and segmentationrdquo Pattern Recognition vol 39 no 4pp 695ndash706 2006

[13] A P Dempster A P Laird and D B Rubin ldquoMaximumlikelihood from incomplete data via the EM algorithmrdquo Journalof the Royal Statistical Society Series B vol 39 no 1 pp 1ndash381977

[14] Z Zhang C Chen J Sun and K L Chan ldquoEM algorithmsfor Gaussian mixtures with split-and-merge operationrdquo PatternRecognition vol 36 no 9 pp 1973ndash1983 2003

[15] H Park S-I Amari and K Fukumizu ldquoAdaptive naturalgradient learning algorithms for various stochastic modelsrdquoNeural Networks vol 13 no 7 pp 755ndash764 2000

[16] H Park and T Ozeki ldquoSingularity and slow convergence of theem algorithm for gaussian mixturesrdquo Neural Processing Lettersvol 29 no 1 pp 45ndash59 2009

[17] L Gupta and T Sortrakul ldquoA Gaussian-mixture-based imagesegmentation algorithmrdquo Pattern Recognition vol 31 no 3 pp315ndash325 1998

[18] V Osuna-Enciso E Cuevas and H Sossa ldquoA comparison ofnature inspired algorithms for multi-threshold image segmen-tationrdquo Expert Systems with Applications vol 40 no 4 pp 1213ndash1219 2013

[19] E Cuevas F Sencion D Zaldivar M Perez-Cisneros andH Sossa ldquoA multi-threshold segmentation approach based onartificial bee colony optimizationrdquo Applied Intelligence vol 37no 3 pp 321ndash336 2012

[20] E Cuevas V Osuna-Enciso D Zaldivar M Perez-Cisnerosand H Sossa ldquoMultithreshold segmentation based on artificialimmune systemsrdquo Mathematical Problems in Engineering vol2012 Article ID 874761 20 pages 2012

[21] E Cuevas D Zaldivar and M Perez-Cisneros ldquoA novelmulti-threshold segmentation approach based on differentialevolution optimizationrdquo Expert Systems with Applications vol37 no 7 pp 5265ndash5271 2010

[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldivar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357–381, 2014.

[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldivar, and M. Perez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.

[24] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805–818, 2011.

[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701–713, 2009.

[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661–673, 2009.

[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.

[28] R. Storn and K. Price, "Differential Evolution - a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.

[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515–4538, 2011.

[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing Journal, vol. 9, no. 3, pp. 1149–1155, 2009.

[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119–135, 2013.

[32] W. Gong, A. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364–5386, 2011.

[33] D. M. Gordon, "The organization of work in social insect colonies," Complexity, vol. 8, no. 1, pp. 43–46, 2002.

[34] S. Kizaki and M. Katori, "Stochastic lattice model for locust outbreak," Physica A: Statistical Mechanics and its Applications, vol. 266, no. 1–4, pp. 339–342, 1999.

Mathematical Problems in Engineering 25

[35] S. M. Rogers, D. A. Cullen, M. L. Anstey et al., "Rapid behavioural gregarization in the desert locust Schistocerca gregaria entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9–26, 2014.

[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal: Special Topics, vol. 157, no. 1, pp. 93–109, 2008.

[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.

[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.

[39] B. Hölldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.

[40] B. Hölldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.

[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101–112, 2013.

[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196–205, 2012.

[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723–738, 2014.

[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.

[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskål)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.

[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.

[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256–272, 2014.

[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635–672, 2005.

[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.

[50] F. Herrera, M. Lozano, and A. M. Sánchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309–338, 2003.

[51] E. Cuevas, D. Zaldivar, M. Pérez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93–107, 2011.

[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.

[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140–157, 2015.

[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.

[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.

[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.

[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.

[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.

[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335–1346, 1996.

[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.


[Flowchart of Figure 8: read the input parameters F, L, g, gen, N, K, λ, and θ, and the image I(x, y); initialize L⁰ (k = 0); iterate F ← SolitaryOperation(Lᵏ), Lᵏ⁺¹ ← SocialOperation(Lᵏ, F), k = k + 1 until k > gen; reduce the best individual l_best; calculate the threshold values Tᵢ from the reduced model; use Tᵢ to segment the image.]

Figure 8: Flowchart of the complete segmentation procedure.
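The control flow of Figure 8 can be sketched as follows. Note that this is only a structural outline: the actual SolitaryOperation and SocialOperation operators and the objective J_new are defined in earlier sections of the paper and are replaced here by illustrative stand-ins (a random perturbation and a threshold-spacing heuristic).

```python
import random

def ls_segmentation_outline(histogram, n_classes, gen=50, pop_size=10, seed=0):
    """Control-flow sketch of Figure 8. The real SolitaryOperation/SocialOperation
    operators and the J_new objective are replaced by illustrative stand-ins."""
    rng = random.Random(seed)
    levels = len(histogram)

    def fitness(cand):
        # Stand-in objective: prefer evenly spaced thresholds (the actual
        # method evaluates the reduced Gaussian mixture via J_new, Eq. (18)).
        gaps = [cand[0]] + [cand[i + 1] - cand[i] for i in range(len(cand) - 1)]
        return -sum((g - levels / n_classes) ** 2 for g in gaps)

    # Initialize L0 (k = 0): each individual is a sorted set of K - 1 thresholds.
    population = [sorted(rng.sample(range(1, levels - 1), n_classes - 1))
                  for _ in range(pop_size)]
    for _ in range(gen):  # k = k + 1 until k > gen
        best = max(population, key=fitness)  # stand-in for SolitaryOperation(Lk)
        population = [sorted(min(levels - 2, max(1, t + rng.randint(-2, 2)))
                             for t in best)
                      for _ in range(pop_size)]  # stand-in for SocialOperation(Lk, F)
    return max(population, key=fitness)  # reduce the best individual -> thresholds Ti

def apply_thresholds(pixels, thresholds):
    """Use Ti to segment: map every gray level to its class index."""
    return [sum(p > t for t in thresholds) for p in pixels]
```

The final step mirrors the last box of the flowchart: once the thresholds Tᵢ are known, segmentation is a per-pixel lookup.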


Figure 9: (a) Original first image used in the first experiment and (b) its histogram.



Figure 10: Gaussian mixture obtained by LS for the first image.

Figure 11: Image segmented with the reduced Gaussian mixture.


Figure 12: (a) Original second image used in the first experiment and (b) its histogram.

7.2. Histogram Approximation Comparisons. This section discusses the comparison between the proposed segmentation algorithm and other evolutionary segmentation methods that have been proposed in the literature. The set of methods used in the experiments includes J + ABC [19], J + AIS [20], and J + DE [21]. These algorithms combine the original objective function J (17) with an evolutionary technique: the Artificial Bee Colony (ABC), the Artificial Immune Systems (AIS), and the Differential Evolution (DE) [21], respectively.

Since the proposed segmentation approach considers the combination of the new objective function Jnew (18) and the LS algorithm, it will be referred to as Jnew + LS.

The comparison focuses mainly on the capacity of each algorithm to accurately and robustly approximate the image histogram.
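The objective functions J (17) and Jnew (18) are defined earlier in the paper and are not reproduced in this section. As a rough sketch of the kind of quantity each algorithm minimizes, the following assumes a plain mean squared error between a candidate Gaussian mixture and the gray-level histogram; the exact weighting and penalty terms of (17)-(18) differ.

```python
import math

def mixture_values(params, levels=256):
    """Evaluate a candidate Gaussian mixture, given as (P_i, mu_i, sigma_i)
    triples, over all gray levels."""
    return [sum(p * math.exp(-((x - mu) ** 2) / (2.0 * s ** 2))
                for p, mu, s in params)
            for x in range(levels)]

def approximation_error(histogram, params):
    """Assumed stand-in objective: mean squared error between the normalized
    histogram and the mixture (the paper's J adds further terms)."""
    mix = mixture_values(params, len(histogram))
    return sum((h - m) ** 2 for h, m in zip(histogram, mix)) / len(histogram)
```

A perfect mixture yields an error of zero; shifting any mean away from the data increases it, which is the behavior the comparison below probes.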

In the experiments, the populations have been set to 40 (N = 40) individuals. The maximum iteration number for all functions has been set to 1000. Such a stop criterion has been selected to maintain compatibility with similar experiments



Figure 13: (a) Segmentation result obtained by LS for the first image and (b) the segmented image.


Figure 14: (a) Synthetic image used in the comparison and (b) its histogram.

that are reported in the literature [18]. The parameter setting for each of the segmentation algorithms in the comparison is described as follows:

(1) J + ABC [19]: the algorithm's parameters are configured as follows: the abandonment limit = 100, α = 0.05, and limit = 30.

(2) J + AIS [20]: it uses the following parameters: h = 120, N_c = 80, ρ = 10, P_r = 20, L = 22, and T_e = 0.01.

(3) J + DE [21]: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [21]. The crossover probability is CR = 0.9, and the weighting factor is F = 0.8.

(4) Jnew + LS: the method is set according to the values described in Table 5.

In order to conduct the experiments, a synthetic image is designed to be used as a reference in the comparisons. The main idea is to know in advance the exact number of classes (and their parameters) contained in the image, so that the histogram can be considered as a ground truth. The

Table 7: Employed parameters for the design of the reference image.

      P_i     μ_i    σ_i
(1)   0.05    40     8
(2)   0.04    100    10
(3)   0.05    160    8
(4)   0.027   220    15

synthetic image is divided into four sections. Each section corresponds to a different class, which is produced by setting each gray level pixel P_vi to a value determined by the following model:

P_vi = e^(−(x − μ_i)² / (2σ_i²)),  (21)

where i represents the section, whereas μ_i and σ_i are the mean and the dispersion of the gray level pixels, respectively. The comparative study employs the 512 × 512 image shown in Figure 14(a) and the algorithm parameters presented in Table 7. Figure 14(b) illustrates the resultant histogram.



Figure 15: Convergence results. (a) Convergence of J + ABC, J + AIS, and J + DE considering Gaussian mixtures of 8 classes. (b) Convergence of the proposed method (reduced Gaussian mixture).


Figure 16: Segmentation results obtained by (a) several methods, including J + ABC, J + AIS, and J + DE, considering Gaussian mixtures of 8 classes, and (b) the proposed method (reduced Gaussian mixture).

In the comparison, the discussion focuses on the following issues: first, convergence; second, accuracy; and third, computational cost.

Convergence. This section analyzes the approximation convergence when the number of classes used by the algorithm during the evolution process differs from the actual number of classes in the image. Recall that the proposed approach automatically finds the reduced Gaussian mixture that best adapts to the image histogram.

In the experiment, the methods J + ABC, J + AIS, and J + DE are executed considering Gaussian mixtures composed of 8 functions. Under such circumstances, the number of classes to be detected is higher than the actual number of classes in the image. On the other hand, the proposed algorithm maintains the same configuration of Table 5. Therefore, it can detect and calculate up to ten classes (K = 10).

As a result, the techniques J + ABC, J + AIS, and J + DE tend to overestimate the image histogram. This effect can be seen in Figure 15(a), where the resulting Gaussian functions are concentrated within the actual classes. Such behavior is a consequence of the evaluation considered by the original objective function J, which privileges only the approximation between the Gaussian mixture and the image histogram. The effect is graphically illustrated by Figure 16(a), which shows the pixel misclassification produced by the wrong segmentation of Figure 14(a). On the other hand, the proposed approach obtains a reduced Gaussian mixture model which allows the detection of each class from


Figure 17: Approximation results in terms of accuracy: (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed Jnew + LS approach.

the actual histogram (see Figure 15(b)). As a consequence, the segmentation is significantly improved by eliminating the pixel misclassification, as shown in Figure 16(b).

It is evident from Figure 15 that the techniques J + ABC, J + AIS, and J + DE all need a priori knowledge of the number of classes contained in the actual histogram in order to obtain a satisfactory result. On the other hand, the proposed algorithm is able to find a reduced Gaussian mixture whose classes coincide with the actual number of classes contained in the image histogram.

Accuracy. In this section, the comparison among the algorithms in terms of accuracy is reported. Most of the reported comparisons [19–26] are concerned with comparing the parameters of the resultant Gaussian mixtures by using real images. Under such circumstances, it is difficult to consider a clear reference in order to define a meaningful error. Therefore, the image defined in Figure 14 has been used in the experiments, because its construction parameters are clearly defined in Table 7.

Since the parameter values from Table 7 act as ground truth, a simple averaged difference between them and the values computed by each algorithm could be used as a comparison error. However, since each parameter maintains a different active interval, it is necessary to express the differences in terms of percentage. Therefore, if Δβ is the parametric difference and β the ground truth parameter, the percentage error Δβ% can be defined as follows:

Δβ% = (Δβ / β) · 100.  (22)

In the segmentation problem, each Gaussian mixture represents a K-dimensional model, where each dimension corresponds to a Gaussian function of the optimization problem to be solved. Since each Gaussian function possesses three parameters, P_i, μ_i, and σ_i, the complete number of parameters is 3·K dimensions. Therefore, the final error E produced by the final Gaussian mixture is

E = (1 / (3·K)) Σ_{v=1}^{3·K} Δβ_v%,  (23)

where β_v ∈ {P_i, μ_i, σ_i}. In order to compare accuracy, the algorithms J + ABC, J + AIS, J + DE, and the proposed approach are all executed over the image shown in Figure 14(a). The experiment aims


Figure 18: Segmentation results in terms of accuracy: (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed Jnew + LS approach.

to estimate the Gaussian mixture that better approximates the actual image histogram. The methods J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the Jnew + LS method, although the algorithm finds a reduced Gaussian mixture of four functions, it is initially set with ten functions (K = 10). Table 8 presents the final Gaussian mixture parameters and the final error E. The final Gaussian mixture parameters have been averaged over 30 independent executions in order to assure consistency. A close inspection of Table 8 reveals that the proposed method is able to achieve the smallest error (E) in comparison to the other algorithms. Figure 17 presents the histogram approximations produced by each algorithm, whereas Figure 18 shows the corresponding segmented images. Both illustrations present the median case obtained throughout 30 runs. Figure 18 exhibits that J + ABC, J + AIS, and J + DE present different levels of misclassification, which are nearly absent in the proposed approach.
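The error measure of Eqs. (22)-(23) can be sketched directly. The listing below assumes the estimated mixtures are given as (P_i, μ_i, σ_i) triples already matched, in order, to the ground-truth classes of Table 7; in practice, estimated classes must first be paired with their closest ground-truth class.

```python
# Ground-truth classes from Table 7: (P_i, mu_i, sigma_i).
GROUND_TRUTH = [(0.05, 40, 8), (0.04, 100, 10), (0.05, 160, 8), (0.027, 220, 15)]

def percentage_errors(estimated, ground_truth=GROUND_TRUTH):
    """Eq. (22): percentage error |delta beta| / beta * 100 for every parameter."""
    errors = []
    for est_class, ref_class in zip(estimated, ground_truth):
        for est, ref in zip(est_class, ref_class):
            errors.append(abs(est - ref) / ref * 100.0)
    return errors

def final_error(estimated, ground_truth=GROUND_TRUTH):
    """Eq. (23): E = (1 / (3 K)) * sum of the 3 K percentage errors."""
    errs = percentage_errors(estimated, ground_truth)
    return sum(errs) / len(errs)
```

For instance, a mixture that is exact except for one mean off by 10% yields E = 10/12 ≈ 0.83, since the single deviation is averaged over all 3·K = 12 parameters.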

Computational Cost. The experiment aims to measure the complexity and the computing time spent by the J + ABC,

Table 8: Results of the reduced Gaussian mixture in terms of accuracy.

Algorithm    Gaussian function   P_i     μ_i       σ_i      E
J + ABC      (1)                 0.052   44.5      6.4      11.79
             (2)                 0.084   98.12     12.87
             (3)                 0.058   163.50    8.94
             (4)                 0.025   218.84    1.75
J + AIS      (1)                 0.072   31.01     6.14     22.01
             (2)                 0.054   88.52     12.21
             (3)                 0.039   149.21    9.14
             (4)                 0.034   248.41    13.84
J + DE       (1)                 0.041   35.74     7.86     13.57
             (2)                 0.036   90.57     11.97
             (3)                 0.059   148.47    9.01
             (4)                 0.020   201.34    13.02
Jnew + LS    (1)                 0.049   40.12     7.5      3.98
             (2)                 0.041   102.04    10.4
             (3)                 0.052   168.66    8.3
             (4)                 0.025   110.92    15.2


Figure 19: Images employed in the computational cost analysis.

the J + AIS, the J + DE, and the Jnew + LS algorithms while calculating the parameters of the Gaussian mixture for the benchmark images (see Figures 19(a)–19(d)). J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4), whereas the Jnew + LS method finds a reduced Gaussian mixture of four functions despite being initialized with ten functions (K = 10). Table 9 shows the averaged measurements after 30 experiments. It is evident that J + ABC and J + DE are the slowest to converge (iterations), and J + AIS shows the highest computational cost (time elapsed) because it requires operators which demand long times. On the other hand, Jnew + LS shows an acceptable trade-off between its convergence time and its computational cost. Therefore, although the implementation of Jnew + LS in general requires more code than most other evolution-based segmentators, such a fact is not reflected in the execution time. Finally, Figure 19 below shows the segmented images as they are


Figure 20: Experimental set used in the evaluation of the segmentation results.

Table 9: Iterations and time requirements of the J + ABC, J + AIS, J + DE, and Jnew + LS algorithms as they are applied to segment the benchmark images (see Figure 19).

Algorithm    Iterations / time elapsed
             (a)            (b)            (c)            (d)
J + ABC      855 / 2.72 s   833 / 2.70 s   870 / 2.73 s   997 / 3.1 s
J + AIS      725 / 1.78 s   704 / 1.61 s   754 / 1.41 s   812 / 2.01 s
J + DE       657 / 1.25 s   627 / 1.12 s   694 / 1.45 s   742 / 1.88 s
Jnew + LS    314 / 0.98 s   298 / 0.84 s   307 / 0.72 s   402 / 1.02 s

Table 10: Evaluation of the segmentation results in terms of the ROS index.

Image (number of classes)   (a) N_R = 4   (b) N_R = 3   (c) N_R = 4   (d) N_R = 4
J + ABC                     0.534         0.328         0.411         0.457
J + AIS                     0.522         0.321         0.427         0.437
J + DE                      0.512         0.312         0.408         0.418
Jnew + LS                   0.674         0.401         0.514         0.527

generated by each algorithm. It can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by incorrect pixel classification.

7.3. Performance Evaluation of the Segmentation Results. This section presents an objective evaluation of the segmentation results produced by all algorithms in the comparisons. The ill-defined nature of the segmentation problem makes the evaluation of a candidate algorithm difficult [57]. Traditionally, the evaluation has been conducted by using supervised criteria [58], which are based on the computation of a dissimilarity measure between a segmentation result and a ground truth image. Recently, the use of unsupervised measures has substituted supervised indexes for the objective evaluation of segmentation results [59]. They

Table 11: Unimodal test functions.

f1(x) = Σ_{i=1}^{n} x_i²;  S = [−100, 100]^n;  f_opt = 0
f2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|;  S = [−10, 10]^n;  f_opt = 0
f3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)²;  S = [−100, 100]^n;  f_opt = 0
f4(x) = max_i {|x_i|, 1 ≤ i ≤ n};  S = [−100, 100]^n;  f_opt = 0
f5(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²];  S = [−30, 30]^n;  f_opt = 0
f6(x) = Σ_{i=1}^{n} (⌊x_i + 0.5⌋)²;  S = [−100, 100]^n;  f_opt = 0
f7(x) = Σ_{i=1}^{n} i·x_i⁴ + rand(0, 1);  S = [−1.28, 1.28]^n;  f_opt = 0

enable the quantification of the quality of a segmentation result without a priori knowledge (a ground truth image).

Evaluation Criteria. In this paper, the unsupervised index ROS proposed by Chabrier et al. [60] has been used to objectively evaluate the performance of each candidate algorithm. This index evaluates the segmentation quality in terms of the homogeneity within segmented regions and the heterogeneity among the different regions. ROS can be computed as follows:

ROS = (D̄ − D) / 2,  (24)

where D quantifies the homogeneity within the segmented regions, while D̄ measures the disparity among the regions. A segmentation result S1 is considered better than S2 if ROS_S1 > ROS_S2. The within-region homogeneity, characterized by D, is calculated considering the following formulation:

D = (1/N_R) Σ_{c=1}^{N_R} (R_c / I) · (σ_c / Σ_{l=1}^{N_R} σ_l),  (25)

where N_R represents the number of partitions into which the image has been segmented and R_c symbolizes the number of



Figure 21: Segmentation results used in the evaluation.

pixels contained in the partition c, whereas I is the number of pixels that integrate the complete image. Similarly, σ_c represents the standard deviation of the partition c. On the other hand, the disparity among the regions, D̄, is computed as follows:

D̄ = (1/N_R) Σ_{c=1}^{N_R} (R_c / I) · [1/(N_R − 1)] Σ_{l=1}^{N_R} (|μ_c − μ_l| / 255),  (26)

where μ_c is the average gray level in the partition c.
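A minimal sketch of the ROS computation per Eqs. (24)-(26) follows, assuming flat Python lists for the label map and the gray-level image, and taking D as the σ-based within-region term and D̄ as the inter-region disparity (the assignment of the bar to the disparity term is an assumption of this sketch):

```python
def ros_index(labels, image):
    """ROS = (D_bar - D) / 2 for a labeled gray-level image (levels in [0, 255]).
    `labels` and `image` are flat, equal-length sequences."""
    regions = sorted(set(labels))
    N_R, I = len(regions), len(labels)
    assert N_R > 1, "the disparity term needs at least two regions"
    stats = []  # (R_c, mu_c, sigma_c) per region
    for c in regions:
        vals = [g for g, l in zip(image, labels) if l == c]
        R_c = len(vals)
        mu = sum(vals) / R_c
        sigma = (sum((v - mu) ** 2 for v in vals) / R_c) ** 0.5
        stats.append((R_c, mu, sigma))
    # Eq. (25): within-region term D (`or 1.0` guards the all-uniform case).
    sigma_total = sum(s for _, _, s in stats) or 1.0
    D = sum((R_c / I) * (sigma / sigma_total) for R_c, _, sigma in stats) / N_R
    # Eq. (26): disparity among regions D_bar.
    D_bar = sum((R_c / I) * sum(abs(mu_c - mu_l) for _, mu_l, _ in stats)
                / ((N_R - 1) * 255) for R_c, mu_c, _ in stats) / N_R
    # Eq. (24)
    return (D_bar - D) / 2
```

With this convention, segmentations with internally uniform regions whose means are far apart score highest, matching the interpretation that higher ROS is better.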

Experimental Protocol. For the comparison of segmentation results, a set of four classical images has been chosen to integrate the experimental set (Figure 20). The segmentation methods used in the comparison are J + ABC [19], J + AIS [20], and J + DE [21].

Of all the segmentation methods used in the comparison, the proposed Jnew + LS algorithm is the only one that has the capacity to automatically detect the number of segmentation partitions (classes). In order to conduct a fair comparison, all algorithms have been tested by using the same number


Table 12: Multimodal test functions.

f8(x) = Σ_{i=1}^{n} −x_i sin(√|x_i|);  S = [−500, 500]^n;  f_opt = −418.98·n
f9(x) = Σ_{i=1}^{n} [x_i² − 10 cos(2πx_i) + 10];  S = [−5.12, 5.12]^n;  f_opt = 0
f10(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^{n} x_i²)) − exp((1/n) Σ_{i=1}^{n} cos(2πx_i)) + 20 + e;  S = [−32, 32]^n;  f_opt = 0
f11(x) = (1/4000) Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos(x_i/√i) + 1;  S = [−600, 600]^n;  f_opt = 0
f12(x) = (π/n) {10 sin²(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)² [1 + 10 sin²(πy_{i+1})] + (y_n − 1)²} + Σ_{i=1}^{n} u(x_i, 10, 100, 4);  S = [−50, 50]^n;  f_opt = 0,
  where y_i = 1 + (x_i + 1)/4 and
  u(x_i, a, k, m) = k(x_i − a)^m if x_i > a;  0 if −a ≤ x_i ≤ a;  k(−x_i − a)^m if x_i < −a
f13(x) = 0.1 {sin²(3πx_1) + Σ_{i=1}^{n} (x_i − 1)² [1 + sin²(3πx_i + 1)] + (x_n − 1)² [1 + sin²(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4);  S = [−50, 50]^n;  f_opt = 0

of partitions. Therefore, in the experiments, the Jnew + LS segmentation algorithm is first applied to detect the best possible number of partitions, N_R. Once the number of partitions N_R has been obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.

Figure 21 presents the segmentation results obtained by each algorithm considering the experimental set from Figure 20. Table 10 shows the evaluation of the segmentation results in terms of the ROS index. Such values represent the averaged measurements after 30 executions. From them, it can be seen that the proposed Jnew + LS method obtains the best ROS indexes. Such values indicate that the proposed algorithm maintains the best balance between the homogeneity within segmented regions and the heterogeneity among the different regions. From Figure 21, it can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by incorrect pixel classification.

8. Conclusions

Despite the fact that several evolutionary methods have been successfully applied to image segmentation with interesting results, most of them have exhibited two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance.

In this paper, a new swarm algorithm for automatic image segmentation, called Locust Search (LS), has been presented. The proposed method eliminates the typical flaws exhibited by previous evolutionary approaches by combining a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments have been divided into two parts. First, the proposed LS method has been compared to other well-known evolutionary techniques on a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm has been compared to other segmentation methods based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

Several research directions will be considered for future work, such as the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, n is the dimension of the function, f_opt is the minimum value of the function, and S is a subset of R^n. The optimum location (x_opt) for the functions in Table 11 is at [0]^n, except for f5, whose optimum is at [1]^n. The optimum locations (x_opt) for the functions in Table 12 are at [0]^n, except for f8, at [420.96]^n, and f12-f13, at [1]^n.
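As a runnable illustration of the benchmarks in Tables 11 and 12, the following sketch transcribes four of them (f1, f5, f9, and f10). The trailing + e term in f10 is assumed from the standard Ackley definition, since the listed minimum f_opt = 0 requires it:

```python
import math

def f1_sphere(x):
    """f1(x) = sum_i x_i^2; S = [-100, 100]^n; minimum 0 at [0]^n."""
    return sum(v * v for v in x)

def f5_rosenbrock(x):
    """f5(x) = sum_{i=1}^{n-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2];
    minimum 0 at [1]^n."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def f9_rastrigin(x):
    """f9(x) = sum_i [x_i^2 - 10 cos(2 pi x_i) + 10]; minimum 0 at [0]^n."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def f10_ackley(x):
    """f10 (Ackley); the closing '+ 20 + e' makes the minimum exactly 0 at [0]^n."""
    n = len(x)
    return (-20.0 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / n))
            - math.exp(sum(math.cos(2.0 * math.pi * v) for v in x) / n)
            + 20.0 + math.e)
```

Evaluating each function at its listed optimum location returns the listed f_opt, which is a quick sanity check when reimplementing the benchmark suite.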

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgment

The present research has been supported under the Grant CONACYT CB 181053.

References

[1] H. Zhang, J. E. Fritts, and S. A. Goldman, "Image segmentation evaluation: a survey of unsupervised methods," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 260–280, 2008.
[2] T. Uemura, G. Koutaki, and K. Uchimura, "Image segmentation based on edge detection using boundary code," International Journal of Innovative Computing, vol. 7, pp. 6073–6083, 2011.
[3] L. Wang, H. Wu, and C. Pan, "Region-based image segmentation with local signed difference energy," Pattern Recognition Letters, vol. 34, no. 6, pp. 637–645, 2013.
[4] N. Otsu, "A thresholding selection method from gray-level histogram," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
[5] R. Peng and P. K. Varshney, "On performance limits of image segmentation algorithms," Computer Vision and Image Understanding, vol. 132, pp. 24–38, 2015.
[6] M. A. Balafar, "Gaussian mixture model based segmentation methods for brain MRI images," Artificial Intelligence Review, vol. 41, no. 3, pp. 429–439, 2014.
[7] G. J. McLachlan and S. Rathnayake, "On the number of components in a Gaussian mixture model," Data Mining and Knowledge Discovery, vol. 4, no. 5, pp. 341–355, 2014.
[8] D. Oliva, V. Osuna-Enciso, E. Cuevas, G. Pajares, M. Pérez-Cisneros, and D. Zaldívar, "Improving segmentation velocity using an evolutionary method," Expert Systems with Applications, vol. 42, no. 14, pp. 5874–5886, 2015.
[9] Z.-W. Ye, M.-W. Wang, W. Liu, and S.-B. Chen, "Fuzzy entropy based optimal thresholding using bat algorithm," Applied Soft Computing, vol. 31, pp. 381–395, 2015.
[10] S. Sarkar, S. Das, and S. Sinha Chaudhuri, "A multilevel color image thresholding scheme based on minimum cross entropy and differential evolution," Pattern Recognition Letters, vol. 54, pp. 27–35, 2015.
[11] A. K. Bhandari, A. Kumar, and G. K. Singh, "Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur's, Otsu and Tsallis functions," Expert Systems with Applications, vol. 42, no. 3, pp. 1573–1601, 2015.
[12] H. Permuter, J. Francos, and I. Jermyn, "A study of Gaussian mixture models of color and texture features for image classification and segmentation," Pattern Recognition, vol. 39, no. 4, pp. 695–706, 2006.
[13] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1–38, 1977.
[14] Z. Zhang, C. Chen, J. Sun, and K. L. Chan, "EM algorithms for Gaussian mixtures with split-and-merge operation," Pattern Recognition, vol. 36, no. 9, pp. 1973–1983, 2003.
[15] H. Park, S.-I. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755–764, 2000.
[16] H. Park and T. Ozeki, "Singularity and slow convergence of the EM algorithm for Gaussian mixtures," Neural Processing Letters, vol. 29, no. 1, pp. 45–59, 2009.
[17] L. Gupta and T. Sortrakul, "A Gaussian-mixture-based image segmentation algorithm," Pattern Recognition, vol. 31, no. 3, pp. 315–325, 1998.
[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213–1219, 2013.
[19] E. Cuevas, F. Sencion, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321–336, 2012.
[20] E. Cuevas, V. Osuna-Enciso, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.
[21] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.
[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357–381, 2014.
[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and M. Pérez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.
[24] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805–818, 2011.
[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701–713, 2009.
[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661–673, 2009.
[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.
[28] R. Storn and K. Price, "Differential Evolution: a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.
[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515–4538, 2011.
[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing Journal, vol. 9, no. 3, pp. 1149–1155, 2009.
[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119–135, 2013.
[32] W. Gong, Á. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364–5386, 2011.
[33] D. M. Gordon, "The organization of work in social insect colonies," Complexity, vol. 8, no. 1, pp. 43–46, 2002.
[34] S. Kizaki and M. Katori, "Stochastic lattice model for locust outbreak," Physica A: Statistical Mechanics and its Applications, vol. 266, no. 1–4, pp. 339–342, 1999.
[35] S. M. Rogers, D. A. Cullen, M. L. Anstey, et al., "Rapid behavioural gregarization in the desert locust Schistocerca gregaria entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9–26, 2014.
[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal Special Topics, vol. 157, no. 1, pp. 93–109, 2008.
[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.
[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.
[39] B. Hölldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.
[40] B. Hölldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.
[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101–112, 2013.
[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196–205, 2012.
[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723–738, 2014.
[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.
[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskål)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.
[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.
[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256–272, 2014.
[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635–672, 2005.
[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[50] F. Herrera, M. Lozano, and A. M. Sánchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309–338, 2003.
[51] E. Cuevas, D. Zaldívar, M. Pérez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93–107, 2011.
[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140–157, 2015.
[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.
[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.
[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.
[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.
[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.
[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335–1346, 1996.
[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.


Figure 10: Gaussian mixture obtained by LS for the first image (histogram, classes 1–4, reduced Gaussian mixture, and eliminated classes).

Figure 11: Image segmented with the reduced Gaussian mixture.

Figure 12: (a) Original second image used in the first experiment and (b) its histogram.

7.2. Histogram Approximation Comparisons. This section discusses the comparison between the proposed segmentation algorithm and other evolutionary segmentation methods that have been proposed in the literature. The set of methods used in the experiments includes J + ABC [19], J + AIS [20], and J + DE [21]. These algorithms combine the original objective function J (17) with an evolutionary technique: Artificial Bee Colony (ABC), Artificial Immune Systems (AIS), and Differential Evolution (DE) [21], respectively. Since the proposed segmentation approach combines the new objective function J_new (18) with the LS algorithm, it will be referred to as J_new + LS.

The comparison focuses mainly on the capacity of each algorithm to accurately and robustly approximate the image histogram.

Figure 13: (a) Segmentation result obtained by LS for the first image and (b) the segmented image.

Figure 14: (a) Synthetic image used in the comparison and (b) its histogram.

In the experiments, the populations have been set to 40 (N = 40) individuals. The maximum iteration number for all functions has been set to 1000. Such a stop criterion has been considered to maintain compatibility with similar experiments reported in the literature [18]. The parameter setting for each of the segmentation algorithms in the comparison is described as follows:

(1) J + ABC [19]: in this algorithm, the parameters are configured as follows: the abandonment limit = 100, α = 0.05, and limit = 30.

(2) J + AIS [20]: it presents the following parameters: h = 120, N_c = 80, ρ = 10, P_r = 20, L = 22, and T_e = 0.01.

(3) J + DE [21]: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [21]. The crossover probability is CR = 0.9, and the weighting factor is F = 0.8.

(4) In J_new + LS, the method is set according to the values described in Table 5.

In order to conduct the experiments, a synthetic image is designed to be used as a reference in the comparisons. The main idea is to know in advance the exact number of classes (and their parameters) contained in the image, so that the histogram can be considered as a ground truth.

Table 7: Employed parameters for the design of the reference image.

      P_i     μ_i    σ_i
(1)   0.05    40     8
(2)   0.04    100    10
(3)   0.05    160    8
(4)   0.027   220    15

The synthetic image is divided into four sections. Each section corresponds to a different class, which is produced by setting each gray-level pixel P_vi to a value determined by the following model:

P_vi = e^(−(x − μ_i)² / (2σ_i²)),  (21)

where i represents the section, whereas μ_i and σ_i are the mean and the dispersion of the gray-level pixels, respectively. The comparative study employs the 512 × 512 image shown in Figure 14(a) and the algorithm parameters presented in Table 7. Figure 14(b) illustrates the resultant histogram.
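Under the model of (21) and the parameters of Table 7, a reference image with a known histogram can be generated. The sketch below is one plausible reading, not the authors' code: it splits a 512 × 512 image into four equal quadrants and draws the gray levels of section i from N(μ_i, σ_i), so the ground-truth class parameters are known in advance. The quadrant layout and the use of random sampling are assumptions for illustration.

```python
import numpy as np

def synthetic_image(size=512,
                    classes=((40, 8), (100, 10), (160, 8), (220, 15)),
                    seed=0):
    """Build a synthetic test image whose histogram shows four Gaussian modes.

    Each quadrant's gray levels are drawn from N(mu_i, sigma_i) and clipped
    to the 8-bit range, so the class parameters act as ground truth.
    """
    rng = np.random.default_rng(seed)
    img = np.empty((size, size))
    h = size // 2
    quadrants = [(slice(0, h), slice(0, h)), (slice(0, h), slice(h, size)),
                 (slice(h, size), slice(0, h)), (slice(h, size), slice(h, size))]
    for (mu, sigma), q in zip(classes, quadrants):
        img[q] = rng.normal(mu, sigma, (h, h))
    return np.clip(img, 0, 255).astype(np.uint8)
```

A normalized histogram of the result, for example `np.bincount(img.ravel(), minlength=256) / img.size`, then plays the role of the ground-truth reference of Figure 14(b).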

Figure 15: Convergence results. (a) Convergence of the methods J + ABC, J + AIS, and J + DE considering Gaussian mixtures of 8 classes. (b) Convergence of the proposed method (reduced Gaussian mixture).

Figure 16: Segmentation results obtained by (a) several methods, including J + ABC, J + AIS, and J + DE, considering Gaussian mixtures of 8 classes and (b) the proposed method (reduced Gaussian mixture).

In the comparison, the discussion focuses on the following issues: first, accuracy; second, convergence; and third, computational cost.

Convergence. This section analyzes the approximation convergence when the number of classes used by the algorithm during the evolution process differs from the actual number of classes in the image. Recall that the proposed approach automatically finds the reduced Gaussian mixture that best adapts to the image histogram.

In the experiment, the methods J + ABC, J + AIS, and J + DE are executed considering Gaussian mixtures composed of 8 functions. Under such circumstances, the number of classes to be detected is higher than the actual number of classes in the image. On the other hand, the proposed algorithm maintains the same configuration as in Table 5; therefore, it can detect and calculate up to ten classes (K = 10).

As a result, the techniques J + ABC, J + AIS, and J + DE tend to overestimate the image histogram. This effect can be seen in Figure 15(a), where the resulting Gaussian functions are concentrated within the actual classes. Such a behavior is a consequence of the evaluation considered by the original objective function J, which privileges only the approximation between the Gaussian mixture and the image histogram. This effect is graphically illustrated by Figure 16(a), which shows the pixel misclassification produced by the wrong segmentation of Figure 14(a). On the other hand, the proposed approach obtains a reduced Gaussian mixture model which allows the detection of each class from the actual histogram (see Figure 15(b)). As a consequence, the segmentation is significantly improved by eliminating the pixel misclassification, as shown by Figure 16(b).

Figure 17: Approximation results in terms of accuracy: (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.

It is evident from Figure 15 that the techniques J + ABC, J + AIS, and J + DE all need a priori knowledge of the number of classes contained in the actual histogram in order to obtain a satisfactory result. On the other hand, the proposed algorithm is able to find a reduced Gaussian mixture whose classes coincide with the actual number of classes contained in the image histogram.

Accuracy. In this section, the comparison among the algorithms in terms of accuracy is reported. Most of the reported comparisons [19–26] are concerned with comparing the parameters of the resultant Gaussian mixtures by using real images. Under such circumstances, it is difficult to consider a clear reference in order to define a meaningful error. Therefore, the image defined in Figure 14 has been used in the experiments, because its construction parameters are clearly defined in Table 7.

Since the parameter values from Table 7 act as ground truth, a simple averaged difference between them and the values computed by each algorithm could be used as a comparison error. However, as each parameter has a different active interval, it is necessary to express the differences in terms of percentage. Therefore, if Δβ is the parametric difference and β the ground-truth parameter, the percentage error Δβ% can be defined as follows:

Δβ% = (Δβ / β) · 100.  (22)

In the segmentation problem, each Gaussian mixture represents a K-dimensional model, where each dimension corresponds to a Gaussian function of the optimization problem to be solved. Since each Gaussian function possesses three parameters, P_i, μ_i, and σ_i, the complete number of parameters is 3·K dimensions. Therefore, the final error E produced by the final Gaussian mixture is

E = (1 / (3·K)) · Σ_{v=1}^{3·K} Δβ_v,  (23)

where β_v ∈ {P_i, μ_i, σ_i}.

In order to compare accuracy, the algorithms J + ABC, J + AIS, J + DE, and the proposed approach are all executed over the image shown in Figure 14(a). The experiment aims to estimate the Gaussian mixture that best approximates the actual image histogram. The methods J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, although the algorithm finds a reduced Gaussian mixture of four functions, it is initially set with ten functions (K = 10). Table 8 presents the final Gaussian mixture parameters and the final error E. The final Gaussian mixture parameters have been averaged over 30 independent executions in order to assure consistency. A close inspection of Table 8 reveals that the proposed method achieves the smallest error (E) in comparison to the other algorithms. Figure 17 presents the histogram approximations produced by each algorithm, whereas Figure 18 shows the corresponding segmented images. Both illustrations present the median case obtained throughout 30 runs. Figure 18 exhibits that J + ABC, J + AIS, and J + DE present different levels of misclassification, which are nearly absent in the case of the proposed approach.

Figure 18: Segmentation results in terms of accuracy: (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed J_new + LS approach.
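The error measure of (22) and (23) reduces to an average of per-parameter percentage deviations. A minimal sketch follows; the ground-truth triplets are those of Table 7, while any estimate passed in would come from a segmentation run (the function and variable names are illustrative, not the authors'):

```python
def mixture_error(ground_truth, estimated):
    """Final error E of eq. (23): the mean, over all 3*K Gaussian parameters
    (P_i, mu_i, sigma_i), of the percentage error of eq. (22)."""
    errors = []
    for true_params, est_params in zip(ground_truth, estimated):
        for true, est in zip(true_params, est_params):
            errors.append(abs(est - true) / true * 100.0)  # eq. (22)
    return sum(errors) / len(errors)                       # eq. (23)

# Ground truth of Table 7: one (P_i, mu_i, sigma_i) triplet per class.
truth = [(0.05, 40, 8), (0.04, 100, 10), (0.05, 160, 8), (0.027, 220, 15)]
```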

Table 8: Results of the reduced Gaussian mixture in terms of accuracy.

Algorithm    Gaussian function    P_i      μ_i       σ_i      E
J + ABC      (1)                  0.052    44.5      6.4      11.79
             (2)                  0.084    98.12     12.87
             (3)                  0.058    163.50    8.94
             (4)                  0.025    218.84    17.5
J + AIS      (1)                  0.072    31.01     6.14     22.01
             (2)                  0.054    88.52     12.21
             (3)                  0.039    149.21    9.14
             (4)                  0.034    248.41    13.84
J + DE       (1)                  0.041    35.74     7.86     13.57
             (2)                  0.036    90.57     11.97
             (3)                  0.059    148.47    9.01
             (4)                  0.020    201.34    13.02
J_new + LS   (1)                  0.049    40.12     7.5      3.98
             (2)                  0.041    102.04    10.4
             (3)                  0.052    168.66    8.3
             (4)                  0.025    110.92    15.2

Computational Cost. The experiment aims to measure the complexity and the computing time spent by the J + ABC, J + AIS, J + DE, and J_new + LS algorithms while calculating the parameters of the Gaussian mixture in benchmark images (see Figures 19(a)–19(d)). J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the J_new + LS method, the algorithm finds a reduced Gaussian mixture of four functions despite being initialized with ten functions (K = 10). Table 9 shows the averaged measurements after 30 experiments. It is evident that J + ABC and J + DE are the slowest to converge (iterations), and that J + AIS shows the highest computational cost (time elapsed) because it requires operators which demand long execution times. On the other hand, J_new + LS shows an acceptable trade-off between its convergence time and its computational cost. Therefore, although the implementation of J_new + LS in general requires more code than most other evolution-based segmentators, such a fact is not reflected in the execution time. Finally, Figure 19 shows the segmented images as they are generated by each algorithm. It can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts that are produced by an incorrect pixel classification.

Figure 19: Images employed in the computational cost analysis.

Figure 20: Experimental set used in the evaluation of the segmentation results.

Table 9: Iterations and time requirements of the J + ABC, J + AIS, J + DE, and J_new + LS algorithms as they are applied to segment the benchmark images of Figure 19 (iterations / time elapsed).

             (a)            (b)            (c)            (d)
J + ABC      855 / 2.72 s   833 / 2.70 s   870 / 2.73 s   997 / 3.1 s
J + AIS      725 / 1.78 s   704 / 1.61 s   754 / 1.41 s   812 / 2.01 s
J + DE       657 / 1.25 s   627 / 1.12 s   694 / 1.45 s   742 / 1.88 s
J_new + LS   314 / 0.98 s   298 / 0.84 s   307 / 0.72 s   402 / 1.02 s

Table 10: Evaluation of the segmentation results in terms of the ROS index.

             (a) N_R = 4   (b) N_R = 3   (c) N_R = 4   (d) N_R = 4
J + ABC      0.534         0.328         0.411         0.457
J + AIS      0.522         0.321         0.427         0.437
J + DE       0.512         0.312         0.408         0.418
J_new + LS   0.674         0.401         0.514         0.527

7.3. Performance Evaluation of the Segmentation Results. This section presents an objective evaluation of the segmentation results produced by all algorithms in the comparisons. The ill-defined nature of the segmentation problem makes the evaluation of a candidate algorithm difficult [57]. Traditionally, the evaluation has been conducted by using some supervised criteria [58], which are based on the computation of a dissimilarity measure between a segmentation result and a ground-truth image. Recently, the use of unsupervised measures has substituted supervised indexes for the objective evaluation of segmentation results [59]. They

Table 11: Unimodal test functions.

f1(x) = Σ_{i=1}^{n} x_i²,  S = [−100, 100]^n,  f_opt = 0
f2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|,  S = [−10, 10]^n,  f_opt = 0
f3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)²,  S = [−100, 100]^n,  f_opt = 0
f4(x) = max_i {|x_i|, 1 ≤ i ≤ n},  S = [−100, 100]^n,  f_opt = 0
f5(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²],  S = [−30, 30]^n,  f_opt = 0
f6(x) = Σ_{i=1}^{n} (⌊x_i + 0.5⌋)²,  S = [−100, 100]^n,  f_opt = 0
f7(x) = Σ_{i=1}^{n} i·x_i⁴ + rand(0, 1),  S = [−1.28, 1.28]^n,  f_opt = 0
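For reference, the first and fifth entries of Table 11 (the sphere and Rosenbrock functions) translate directly into code. This is a plain sketch of the test functions themselves, not of the experimental harness used in the paper:

```python
def sphere(x):
    """f1: sum of squared components; f_opt = 0 at x = [0]^n."""
    return sum(v * v for v in x)

def rosenbrock(x):
    """f5: sum of 100*(x_{i+1} - x_i^2)^2 + (x_i - 1)^2; f_opt = 0 at x = [1]^n."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))
```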

enable the quantification of the quality of a segmentation result without a priori knowledge (ground-truth image).

Evaluation Criteria. In this paper, the unsupervised index ROS proposed by Chabrier et al. [60] has been used to objectively evaluate the performance of each candidate algorithm. This index evaluates the segmentation quality in terms of the homogeneity within segmented regions and the heterogeneity among the different regions. ROS can be computed as follows:

ROS = (D̄ − D) / 2,  (24)

where D quantifies the homogeneity within segmented regions; similarly, D̄ measures the disparity among the regions. A segmentation result S1 is considered better than S2 if ROS_S1 > ROS_S2. The intraregion homogeneity, characterized by D, is calculated considering the following formulation:

D = (1 / N_R) Σ_{c=1}^{N_R} (R_c / I) · (σ_c / Σ_{l=1}^{N_R} σ_l),  (25)

where N_R represents the number of partitions into which the image has been segmented, R_c symbolizes the number of pixels contained in the partition c, whereas I considers the number of pixels that integrate the complete image. Similarly, σ_c represents the standard deviation of the partition c. On the other hand, the disparity among the regions, D̄, is computed as follows:

D̄ = (1 / N_R) Σ_{c=1}^{N_R} (R_c / I) · [(1 / (N_R − 1)) Σ_{l=1}^{N_R} |μ_c − μ_l| / 255],  (26)

where μ_c is the average gray level in the partition c.

Figure 21: Segmentation results used in the evaluation: (a)–(d) the results of J + ABC, J + AIS, J + DE, and J_new + LS for each test image.
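Equations (24)–(26) can be sketched as follows. This is a minimal illustration under the assumptions that `labels` is an integer partition map of the same shape as the gray-level image and that N_R ≥ 2 with nonconstant regions; the function and variable names are hypothetical:

```python
import numpy as np

def ros_index(image, labels):
    """ROS of eq. (24): half the gap between the interregion disparity Dbar
    (eq. (26)) and the intraregion deviation D (eq. (25)); larger is better."""
    regions = np.unique(labels)
    n_r = len(regions)
    weights = np.array([(labels == c).mean() for c in regions])   # R_c / I
    sigmas = np.array([image[labels == c].std() for c in regions])
    mus = np.array([image[labels == c].mean() for c in regions])
    # D (eq. (25)): weighted, normalized intraregion standard deviation.
    d = (weights * sigmas / sigmas.sum()).sum() / n_r
    # Dbar (eq. (26)): weighted mean absolute difference between region means.
    cross = np.array([np.abs(mus - m).sum() for m in mus])
    dbar = (weights * cross / ((n_r - 1) * 255.0)).sum() / n_r
    return (dbar - d) / 2.0
```

For a toy two-region image with well-separated means, D̄ dominates D and ROS is positive, matching the "larger is better" reading used in Table 10.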

Experimental Protocol. In the comparison of segmentation results, a set of four classical images has been chosen to integrate the experimental set (Figure 20). The segmentation methods used in the comparison are J + ABC [19], J + AIS [20], and J + DE [21].

From all segmentation methods used in the comparison, the proposed J_new + LS algorithm is the only one that has the capacity to automatically detect the number of segmentation partitions (classes). In order to conduct a fair comparison, all algorithms have been tested using the same number


Table 12: Multimodal test functions.

f8(x) = Σ_{i=1}^{n} −x_i sin(√|x_i|),  S = [−500, 500]^n,  f_opt = −418.98·n
f9(x) = Σ_{i=1}^{n} [x_i² − 10 cos(2πx_i) + 10],  S = [−5.12, 5.12]^n,  f_opt = 0
f10(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^{n} x_i²)) − exp((1/n) Σ_{i=1}^{n} cos(2πx_i)) + 20 + e,  S = [−32, 32]^n,  f_opt = 0
f11(x) = (1/4000) Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos(x_i / √i) + 1,  S = [−600, 600]^n,  f_opt = 0
f12(x) = (π/n) {10 sin²(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)² [1 + 10 sin²(πy_{i+1})] + (y_n − 1)²} + Σ_{i=1}^{n} u(x_i, 10, 100, 4),  S = [−50, 50]^n,  f_opt = 0,
  where y_i = 1 + (x_i + 1)/4 and
  u(x_i, a, k, m) = k(x_i − a)^m if x_i > a;  0 if −a ≤ x_i ≤ a;  k(−x_i − a)^m if x_i < −a.
f13(x) = 0.1 {sin²(3πx_1) + Σ_{i=1}^{n−1} (x_i − 1)² [1 + sin²(3πx_{i+1})] + (x_n − 1)² [1 + sin²(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4),  S = [−50, 50]^n,  f_opt = 0

of partitions. Therefore, in the experiments, the J_new + LS segmentation algorithm is first applied to detect the best possible number of partitions, N_R. Once the number of partitions N_R is obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.

Figure 21 presents the segmentation results obtained by each algorithm considering the experimental set from Figure 20. On the other hand, Table 10 shows the evaluation of the segmentation results in terms of the ROS index. Such values represent the averaged measurements after 30 executions. From them, it can be seen that the proposed J_new + LS method obtains the best ROS indexes. Such values indicate that the proposed algorithm maintains the best balance between the homogeneity within segmented regions and the heterogeneity among the different regions. From Figure 21, it can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts that are produced by an incorrect pixel classification.

8. Conclusions

Despite the fact that several evolutionary methods have been successfully applied to image segmentation with interesting results, most of them have exhibited two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance.

In this paper, a new swarm algorithm for automatic image segmentation, called the Locust Search (LS), has been presented. The proposed method eliminates the typical flaws presented by previous evolutionary approaches by combining a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments have been divided into two parts. First, the proposed LS method has been compared to other well-known evolutionary techniques on a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm has been compared to other segmentation methods based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

Several research directions will be considered for future work, such as the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, n is the dimension of the function, f_opt is the minimum value of the function, and S is a subset of R^n. The optimum location (x_opt) for the functions in Table 11 is in [0]^n, except for f5, whose x_opt is in [1]^n.

The optimum locations (x_opt) for the functions in Table 12 are in [0]^n, except for f8, whose optimum is in [420.96]^n, and f12-f13, whose optima are in [1]^n.
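The definitions in Tables 11 and 12 can be checked numerically against the optima listed above. The following sketch uses the standard forms of the sphere, Rosenbrock, Schwefel, and second penalized functions from the benchmark literature (the helper names f1, f5, f8, u, and f13 mirror the table labels and are illustrative, not the authors' code):

```python
import math

# Evaluate some benchmark functions of Tables 11 and 12 at their
# reported optimum locations (standard forms from the literature).

def f1(x):
    # Sphere function, f_opt = 0 at x = [0]^n
    return sum(xi ** 2 for xi in x)

def f5(x):
    # Rosenbrock function, f_opt = 0 at x = [1]^n
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def f8(x):
    # Schwefel function, f_opt = -418.98 * n at x = [420.96]^n
    return sum(-xi * math.sin(math.sqrt(abs(xi))) for xi in x)

def u(x, a, k, m):
    # Penalty term shared by f12 and f13
    if x > a:
        return k * (x - a) ** m
    if x < -a:
        return k * (-x - a) ** m
    return 0.0

def f13(x):
    # Second penalized function, f_opt = 0 at x = [1]^n
    n = len(x)
    s = math.sin(3 * math.pi * x[0]) ** 2
    s += sum((x[i] - 1) ** 2 * (1 + math.sin(3 * math.pi * x[i + 1]) ** 2)
             for i in range(n - 1))
    s += (x[n - 1] - 1) ** 2 * (1 + math.sin(2 * math.pi * x[n - 1]) ** 2)
    return 0.1 * s + sum(u(xi, 5, 100, 4) for xi in x)
```

Evaluating f1([0]*30), f5([1]*30), and f13([1]*30) returns values numerically indistinguishable from zero, while f8([420.9687]*30)/30 is close to the listed -418.98.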

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

24 Mathematical Problems in Engineering

Acknowledgment

The present research has been supported under Grant CONACYT CB 181053.

References

[1] H. Zhang, J. E. Fritts, and S. A. Goldman, "Image segmentation evaluation: a survey of unsupervised methods," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 260–280, 2008.

[2] T. Uemura, G. Koutaki, and K. Uchimura, "Image segmentation based on edge detection using boundary code," International Journal of Innovative Computing, vol. 7, pp. 6073–6083, 2011.

[3] L. Wang, H. Wu, and C. Pan, "Region-based image segmentation with local signed difference energy," Pattern Recognition Letters, vol. 34, no. 6, pp. 637–645, 2013.

[4] N. Otsu, "A thresholding selection method from gray-level histogram," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.

[5] R. Peng and P. K. Varshney, "On performance limits of image segmentation algorithms," Computer Vision and Image Understanding, vol. 132, pp. 24–38, 2015.

[6] M. A. Balafar, "Gaussian mixture model based segmentation methods for brain MRI images," Artificial Intelligence Review, vol. 41, no. 3, pp. 429–439, 2014.

[7] G. J. McLachlan and S. Rathnayake, "On the number of components in a Gaussian mixture model," Data Mining and Knowledge Discovery, vol. 4, no. 5, pp. 341–355, 2014.

[8] D. Oliva, V. Osuna-Enciso, E. Cuevas, G. Pajares, M. Pérez-Cisneros, and D. Zaldívar, "Improving segmentation velocity using an evolutionary method," Expert Systems with Applications, vol. 42, no. 14, pp. 5874–5886, 2015.

[9] Z.-W. Ye, M.-W. Wang, W. Liu, and S.-B. Chen, "Fuzzy entropy based optimal thresholding using bat algorithm," Applied Soft Computing, vol. 31, pp. 381–395, 2015.

[10] S. Sarkar, S. Das, and S. Sinha Chaudhuri, "A multilevel color image thresholding scheme based on minimum cross entropy and differential evolution," Pattern Recognition Letters, vol. 54, pp. 27–35, 2015.

[11] A. K. Bhandari, A. Kumar, and G. K. Singh, "Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur's, Otsu and Tsallis functions," Expert Systems with Applications, vol. 42, no. 3, pp. 1573–1601, 2015.

[12] H. Permuter, J. Francos, and I. Jermyn, "A study of Gaussian mixture models of color and texture features for image classification and segmentation," Pattern Recognition, vol. 39, no. 4, pp. 695–706, 2006.

[13] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1–38, 1977.

[14] Z. Zhang, C. Chen, J. Sun, and K. L. Chan, "EM algorithms for Gaussian mixtures with split-and-merge operation," Pattern Recognition, vol. 36, no. 9, pp. 1973–1983, 2003.

[15] H. Park, S.-I. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755–764, 2000.

[16] H. Park and T. Ozeki, "Singularity and slow convergence of the EM algorithm for Gaussian mixtures," Neural Processing Letters, vol. 29, no. 1, pp. 45–59, 2009.

[17] L. Gupta and T. Sortrakul, "A Gaussian-mixture-based image segmentation algorithm," Pattern Recognition, vol. 31, no. 3, pp. 315–325, 1998.

[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213–1219, 2013.

[19] E. Cuevas, F. Sención, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321–336, 2012.

[20] E. Cuevas, V. Osuna-Enciso, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.

[21] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.

[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357–381, 2014.

[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and M. Pérez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.

[24] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805–818, 2011.

[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701–713, 2009.

[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661–673, 2009.

[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.

[28] R. Storn and K. Price, "Differential Evolution—a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.

[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515–4538, 2011.

[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing Journal, vol. 9, no. 3, pp. 1149–1155, 2009.

[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119–135, 2013.

[32] W. Gong, A. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364–5386, 2011.

[33] D. M. Gordon, "The organization of work in social insect colonies," Complexity, vol. 8, no. 1, pp. 43–46, 2002.

[34] S. Kizaki and M. Katori, "Stochastic lattice model for locust outbreak," Physica A: Statistical Mechanics and its Applications, vol. 266, no. 1–4, pp. 339–342, 1999.

[35] S. M. Rogers, D. A. Cullen, M. L. Anstey et al., "Rapid behavioural gregarization in the desert locust, Schistocerca gregaria, entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9–26, 2014.

[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal Special Topics, vol. 157, no. 1, pp. 93–109, 2008.

[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.

[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.

[39] B. Hölldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.

[40] B. Hölldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.

[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101–112, 2013.

[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196–205, 2012.

[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723–738, 2014.

[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.

[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskål)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.

[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.

[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256–272, 2014.

[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635–672, 2005.

[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.

[50] F. Herrera, M. Lozano, and A. M. Sánchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309–338, 2003.

[51] E. Cuevas, D. Zaldívar, M. Pérez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93–107, 2011.

[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.

[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140–157, 2015.

[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.

[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.

[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.

[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.

[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.

[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335–1346, 1996.

[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.



Figure 13: (a) Segmentation result obtained by LS for the first image and (b) the segmented image.


Figure 14: (a) Synthetic image used in the comparison and (b) its histogram.

that are reported in the literature [18]. The parameter setting for each of the segmentation algorithms in the comparison is described as follows:

(1) J + ABC [19]: its parameters are configured as follows: the abandonment limit = 100, α = 0.05, and limit = 30.

(2) J + AIS [20]: it presents the following parameters: h = 120, N_c = 80, ρ = 10, P_r = 20, L = 22, and T_e = 0.01.

(3) J + DE [21]: the DE/rand/1 scheme is employed. The parameter settings follow the instructions in [21]. The crossover probability is CR = 0.9 and the weighting factor is F = 0.8.

(4) In Jnew + LS, the method is set according to the values described in Table 5.

In order to conduct the experiments, a synthetic image is designed to be used as a reference in the comparisons. The main idea is to know in advance the exact number of classes (and their parameters) contained in the image, so that the histogram can be considered as a ground truth. The

Table 7: Employed parameters for the design of the reference image.

      P_i     μ_i    σ_i
(1)   0.05    40     8
(2)   0.04    100    10
(3)   0.05    160    8
(4)   0.027   220    15

synthetic image is divided into four sections. Each section corresponds to a different class, which is produced by setting each gray level pixel P_{vi} to a value that is determined by the following model:

P_{vi} = e^{-((x - μ_i)^2 / 2σ_i^2)},  (21)

where i represents the section, whereas μ_i and σ_i are the mean and the dispersion of the gray level pixels, respectively. The comparative study employs the 512 × 512 image shown in Figure 14(a) and the algorithm parameters presented in Table 7. Figure 14(b) illustrates the resultant histogram.
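As a sketch of this construction, the reference image can be generated by drawing each region's gray levels from the corresponding Gaussian of Table 7. The four-equal-horizontal-band layout and the NumPy-based sampling below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Illustrative sketch: build a 512 x 512 synthetic image whose histogram is a
# known mixture of the four Gaussians of Table 7 (mu_i, sigma_i). Splitting
# the image into four equal horizontal bands is an assumption.
rng = np.random.default_rng(0)
params = [(40, 8), (100, 10), (160, 8), (220, 15)]  # (mu_i, sigma_i) per class

bands = [rng.normal(mu, sigma, size=(128, 512)) for mu, sigma in params]
image = np.clip(np.vstack(bands), 0, 255).astype(np.uint8)

# Normalized histogram that serves as the segmentation ground truth
hist, _ = np.histogram(image, bins=256, range=(0, 256), density=True)
```

Each band's sample mean then sits near its μ_i, so the resulting histogram exhibits the four known Gaussian modes.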


Figure 15: Convergence results. (a) Convergence of the methods J + ABC, J + AIS, and J + DE, considering Gaussian mixtures of 8 classes. (b) Convergence of the proposed method (reduced Gaussian mixture).


Figure 16: Segmentation results obtained by (a) several methods including J + ABC, J + AIS, and J + DE, considering Gaussian mixtures of 8 classes, and (b) the proposed method (reduced Gaussian mixture).

In the comparison, the discussion focuses on the following issues: first, accuracy; second, convergence; and third, computational cost.

Convergence. This section analyzes the approximation convergence when the number of classes used by the algorithm during the evolution process differs from the actual number of classes in the image. Recall that the proposed approach automatically finds the reduced Gaussian mixture which best adapts to the image histogram.

In the experiment, the methods J + ABC, J + AIS, and J + DE are executed considering Gaussian mixtures composed of 8 functions. Under such circumstances, the number of classes to be detected is higher than the actual number of classes in the image. On the other hand, the proposed algorithm maintains the same configuration of Table 5. Therefore, it can detect and calculate up to ten classes (K = 10).

As a result, the techniques J + ABC, J + AIS, and J + DE tend to overestimate the image histogram. This effect can be seen in Figure 15(a), where the resulting Gaussian functions are concentrated within the actual classes. Such a behavior is a consequence of the evaluation considered by the original objective function J, which privileges only the approximation between the Gaussian mixture and the image histogram. This effect can be graphically illustrated by Figure 16(a), which shows the pixel misclassification produced by the wrong segmentation of Figure 14(a). On the other hand, the proposed approach obtains a reduced Gaussian mixture model which allows the detection of each class from


Figure 17: Approximation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed Jnew + LS approach.

the actual histogram (see Figure 15(b)). As a consequence, the segmentation is significantly improved by eliminating the pixel misclassification, as shown in Figure 16(b).

It is evident from Figure 15 that the techniques J + ABC, J + AIS, and J + DE all need a priori knowledge of the number of classes contained in the actual histogram in order to obtain a satisfactory result. On the other hand, the proposed algorithm is able to find a reduced Gaussian mixture whose classes coincide with the actual number of classes contained in the image histogram.

Accuracy. In this section, the comparison among the algorithms in terms of accuracy is reported. Most of the reported comparisons [19-26] are concerned with comparing the parameters of the resultant Gaussian mixtures by using real images. Under such circumstances, it is difficult to consider a clear reference in order to define a meaningful error. Therefore, the image defined in Figure 14 has been used in the experiments because its construction parameters are clearly defined in Table 7.

Since the parameter values from Table 7 act as ground truth, a simple averaged difference between them and the values computed by each algorithm could be used as a comparison error. However, as each parameter maintains different active intervals, it is necessary to express the differences in terms of percentage. Therefore, if Δβ is the parametric difference and β the ground truth parameter, the percentage error Δβ% can be defined as follows:

Δβ% = (Δβ / β) · 100.  (22)

In the segmentation problem, each Gaussian mixture represents a K-dimensional model, where each dimension corresponds to a Gaussian function of the optimization problem to be solved. Since each Gaussian function possesses three parameters, P_i, μ_i, and σ_i, the complete number of parameters is 3·K dimensions. Therefore, the final error E produced by the final Gaussian mixture is

E = (1 / (3·K)) Σ_{v=1}^{3·K} Δβ%_v,  (23)

where β_v ∈ {P_i, μ_i, σ_i}.

In order to compare accuracy, the algorithms J + ABC, J + AIS, J + DE, and the proposed approach are all executed over the image shown in Figure 14(a). The experiment aims


Figure 18: Segmentation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed Jnew + LS approach.

to estimate the Gaussian mixture that best approximates the actual image histogram. The methods J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the Jnew + LS method, although the algorithm is initially set with ten functions (K = 10), it finds a reduced Gaussian mixture of four functions. Table 8 presents the final Gaussian mixture parameters and the final error E. The final Gaussian mixture parameters have been averaged over 30 independent executions in order to assure consistency. A close inspection of Table 8 reveals that the proposed method is able to achieve the smallest error (E) in comparison to the other algorithms. Figure 16 presents the histogram approximations produced by each algorithm, whereas Figure 17 shows the corresponding segmented images. Both illustrations present the median case obtained throughout 30 runs. Figure 18 exhibits that J + ABC, J + AIS, and J + DE present different levels of misclassification, which are nearly absent in the proposed approach.
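The error of (22)-(23) reduces to a mean absolute percentage deviation over the 3·K mixture parameters. A minimal sketch (the function name and the tuple layout are assumptions for illustration):

```python
def mixture_error(ground_truth, estimated):
    # Final error E of eq. (23): the percentage deviation of eq. (22),
    # averaged over the 3*K parameters (P_i, mu_i, sigma_i) of the mixture.
    errors = []
    for true_params, est_params in zip(ground_truth, estimated):
        for beta_true, beta_est in zip(true_params, est_params):
            errors.append(abs(beta_est - beta_true) / beta_true * 100)
    return sum(errors) / len(errors)
```

For instance, with a single Gaussian (P, μ, σ) = (0.05, 40, 8) estimated as (0.05, 44, 8), only μ deviates (by 10%), so E = 10/3 ≈ 3.33.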

Computational Cost. The experiment aims to measure the complexity and the computing time spent by the J + ABC,

Table 8: Results of the reduced Gaussian mixture in terms of accuracy.

Algorithm    Gaussian function   P_i     μ_i      σ_i     E
J + ABC      (1)                 0.052   44.5     6.4     11.79
             (2)                 0.084   98.12    12.87
             (3)                 0.058   163.50   8.94
             (4)                 0.025   218.84   1.75
J + AIS      (1)                 0.072   31.01    6.14    22.01
             (2)                 0.054   88.52    12.21
             (3)                 0.039   149.21   9.14
             (4)                 0.034   248.41   13.84
J + DE       (1)                 0.041   35.74    7.86    13.57
             (2)                 0.036   90.57    11.97
             (3)                 0.059   148.47   9.01
             (4)                 0.020   201.34   13.02
Jnew + LS    (1)                 0.049   40.12    7.5     3.98
             (2)                 0.041   102.04   10.4
             (3)                 0.052   168.66   8.3
             (4)                 0.025   110.92   15.2


Figure 19: Images employed in the computational cost analysis.

the J + AIS, the J + DE, and the Jnew + LS algorithms while calculating the parameters of the Gaussian mixture over benchmark images (see Figures 19(a)-19(d)). J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the Jnew + LS method, although the algorithm is initialized with ten functions (K = 10), it finds a reduced Gaussian mixture of four functions. Table 9 shows the averaged measurements after 30 experiments. It is evident that J + ABC and J + DE are the slowest to converge (iterations), whereas J + AIS shows the highest computational cost (time elapsed) because it requires operators which demand long execution times. On the other hand, Jnew + LS shows an acceptable trade-off between its convergence time and its computational cost. Therefore, although the implementation of Jnew + LS in general requires more code than most other evolution-based segmentators, such a fact is not reflected in the execution time. Finally, Figure 19 shows the segmented images as they are


Figure 20: Experimental set used in the evaluation of the segmentation results.

Table 9: Iterations and time requirements of the J + ABC, J + AIS, J + DE, and Jnew + LS algorithms as they are applied to segment the benchmark images (see Figure 19).

Algorithm                  (a)      (b)      (c)      (d)
J + ABC      iterations    855      833      870      997
             time elapsed  2.72 s   2.70 s   2.73 s   3.1 s
J + AIS      iterations    725      704      754      812
             time elapsed  1.78 s   1.61 s   1.41 s   2.01 s
J + DE       iterations    657      627      694      742
             time elapsed  1.25 s   1.12 s   1.45 s   1.88 s
Jnew + LS    iterations    314      298      307      402
             time elapsed  0.98 s   0.84 s   0.72 s   1.02 s

Table 10: Evaluation of the segmentation results in terms of the ROS index.

             Image (number of classes)
Algorithm    (a) N_R = 4   (b) N_R = 3   (c) N_R = 4   (d) N_R = 4
J + ABC      0.534         0.328         0.411         0.457
J + AIS      0.522         0.321         0.427         0.437
J + DE       0.512         0.312         0.408         0.418
Jnew + LS    0.674         0.401         0.514         0.527

generated by each algorithm. It can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts that are produced by an incorrect pixel classification.

7.3. Performance Evaluation of the Segmentation Results. This section presents an objective evaluation of the segmentation results produced by all algorithms in the comparisons. The ill-defined nature of the segmentation problem makes the evaluation of a candidate algorithm difficult [57]. Traditionally, the evaluation has been conducted by using some supervised criteria [58], which are based on the computation of a dissimilarity measure between a segmentation result and a ground truth image. Recently, the use of unsupervised measures has substituted supervised indexes for the objective evaluation of segmentation results [59]. They

Table 11: Unimodal test functions.

Test function                                                   S                 f_opt
f1(x) = Σ_{i=1}^{n} x_i^2                                       [-100, 100]^n     0
f2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|                   [-10, 10]^n       0
f3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)^2                         [-100, 100]^n     0
f4(x) = max_i {|x_i|, 1 ≤ i ≤ n}                                [-100, 100]^n     0
f5(x) = Σ_{i=1}^{n-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]    [-30, 30]^n       0
f6(x) = Σ_{i=1}^{n} (⌊x_i + 0.5⌋)^2                             [-100, 100]^n     0
f7(x) = Σ_{i=1}^{n} i·x_i^4 + rand(0, 1)                        [-1.28, 1.28]^n   0

enable the quantification of the quality of a segmentation result without a priori knowledge (ground truth image).

Evaluation Criteria. In this paper, the unsupervised index ROS proposed by Chabrier et al. [60] has been used to objectively evaluate the performance of each candidate algorithm. This index evaluates the segmentation quality in terms of the homogeneity within segmented regions and the heterogeneity among the different regions. ROS can be computed as follows:

ROS = (D̄ - D) / 2,  (24)

where D quantifies the homogeneity within the segmented regions, whereas D̄ measures the disparity among the regions. A segmentation result S_1 is considered better than S_2 if ROS_{S_1} > ROS_{S_2}. The intraregion homogeneity characterized by D is calculated considering the following formulation:

D = (1 / N_R) Σ_{c=1}^{N_R} (R_c / I) · (σ_c / Σ_{l=1}^{N_R} σ_l),  (25)

where N_R represents the number of partitions in which the image has been segmented, R_c symbolizes the number of


Figure 21: Segmentation results used in the evaluation.

pixels contained in the partition c, whereas I considers the number of pixels that integrate the complete image. Similarly, σ_c represents the standard deviation of the partition c. On the other hand, the disparity among the regions, D̄, is computed as follows:

D̄ = (1 / N_R) Σ_{c=1}^{N_R} (R_c / I) · [1 / (N_R - 1) Σ_{l=1}^{N_R} |μ_c - μ_l| / 255],  (26)

where μ_c is the average gray level in the partition c.
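Equations (24)-(26) can be sketched directly from a gray level image and its label map. The function name and the zero-variance guard below are assumptions added for the degenerate case of perfectly uniform regions:

```python
import numpy as np

def ros_index(image, labels):
    # Sketch of eqs. (24)-(26): D is the intraregion measure of eq. (25),
    # Dbar the interregion disparity of eq. (26), and ROS = (Dbar - D) / 2.
    classes = np.unique(labels)
    NR, I = len(classes), image.size
    weights = np.array([(labels == c).sum() / I for c in classes])  # R_c / I
    sigmas = np.array([image[labels == c].std() for c in classes])  # sigma_c
    mus = np.array([image[labels == c].mean() for c in classes])    # mu_c

    ssum = sigmas.sum()
    D = 0.0 if ssum == 0 else (weights * sigmas / ssum).sum() / NR
    disparity = np.array([np.abs(mus[c] - mus).sum() / ((NR - 1) * 255)
                          for c in range(NR)])
    Dbar = (weights * disparity).sum() / NR
    return (Dbar - D) / 2
```

For a two-region image with perfectly uniform regions of gray levels 0 and 255, D = 0 and D̄ = 0.5, so ROS = 0.25, the best value such a two-class partition can reach.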

Experimental Protocol. In the comparison of segmentation results, a set of four classical images has been chosen to integrate the experimental set (Figure 20). The segmentation methods used in the comparison are J + ABC [19], J + AIS [20], and J + DE [21].

Of all the segmentation methods used in the comparison, the proposed Jnew + LS algorithm is the only one that has the capacity to automatically detect the number of segmentation partitions (classes). In order to conduct a fair comparison, all algorithms have been tested by using the same number


Table 12: Multimodal test functions.

Test function                                                                        S                 f_opt
f8(x) = Σ_{i=1}^{n} -x_i sin(√|x_i|)                                                 [-500, 500]^n     -418.98·n
f9(x) = Σ_{i=1}^{n} [x_i^2 - 10 cos(2πx_i) + 10]                                     [-5.12, 5.12]^n   0
f10(x) = -20 exp(-0.2 √((1/n) Σ_{i=1}^{n} x_i^2))
         - exp((1/n) Σ_{i=1}^{n} cos(2πx_i)) + 20 + e                                [-32, 32]^n       0
f11(x) = (1/4000) Σ_{i=1}^{n} x_i^2 - Π_{i=1}^{n} cos(x_i / √i) + 1                  [-600, 600]^n     0
f12(x) = (π/n) {10 sin^2(πy_1) + Σ_{i=1}^{n-1} (y_i - 1)^2 [1 + 10 sin^2(πy_{i+1})]
         + (y_n - 1)^2} + Σ_{i=1}^{n} u(x_i, 10, 100, 4),                            [-50, 50]^n       0
    where y_i = 1 + (x_i + 1)/4 and
    u(x_i, a, k, m) = { k(x_i - a)^m,   x_i > a
                        0,              -a ≤ x_i ≤ a
                        k(-x_i - a)^m,  x_i < -a
f13(x) = 0.1 {sin^2(3πx_1) + Σ_{i=1}^{n-1} (x_i - 1)^2 [1 + sin^2(3πx_{i+1})]
         + (x_n - 1)^2 [1 + sin^2(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4)           [-50, 50]^n       0

of partitions. Therefore, in the experiments, the Jnew + LS segmentation algorithm is first applied to detect the best possible number of partitions $N_R$. Once the number of partitions $N_R$ has been obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.

Figure 21 presents the segmentation results obtained by each algorithm considering the experimental set from Figure 20. On the other hand, Table 10 shows the evaluation of the segmentation results in terms of the ROS index. Such values represent the averaged measurements after 30 executions. From them, it can be seen that the proposed Jnew + LS method obtains the best ROS indexes. Such values indicate that the proposed algorithm maintains the best balance between the homogeneity within segmented regions and the heterogeneity among the different regions. From Figure 21, it can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts that are produced by an incorrect pixel classification.

8. Conclusions

Despite the fact that several evolutionary methods have been successfully applied to image segmentation with interesting results, most of them have exhibited two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance.

In this paper, a new swarm algorithm for automatic image segmentation, called Locust Search (LS), has been presented. The proposed method eliminates the typical flaws of previous evolutionary approaches by combining a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments were divided into two parts. First, the proposed LS method was compared to other well-known evolutionary techniques on a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm was compared to other segmentation methods based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

Several research directions will be considered for future work: the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, $n$ is the dimension of the function, $f_{opt}$ is the minimum value of the function, and $\mathbf{S}$ is a subset of $R^n$. The optimum location $\mathbf{x}_{opt}$ for the functions in Table 11 is $[0]^n$, except for $f_5$, whose optimum is $[1]^n$.

The optimum locations $\mathbf{x}_{opt}$ for the functions in Table 12 are $[0]^n$, except for $f_8$, with $\mathbf{x}_{opt} = [420.96]^n$, $f_{12}$, with $\mathbf{x}_{opt} = [-1]^n$ (the point where every $y_i = 1$), and $f_{13}$, with $\mathbf{x}_{opt} = [1]^n$.
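A few of the benchmark functions of Tables 11 and 12, together with the optima listed above, can be sanity-checked with a short sketch (numpy assumed; this is an illustration, not part of the original experimental code):

```python
import numpy as np

def f1(x):                                   # sphere (Table 11)
    return np.sum(x ** 2)

def f5(x):                                   # Rosenbrock (Table 11)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def f6(x):                                   # step function (Table 11)
    return np.sum(np.floor(x + 0.5) ** 2)

def f8(x):                                   # Schwefel (Table 12)
    return np.sum(-x * np.sin(np.sqrt(np.abs(x))))

def f9(x):                                   # Rastrigin (Table 12)
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

n = 30
assert f1(np.zeros(n)) == 0.0                # x_opt = [0]^n
assert f5(np.ones(n)) == 0.0                 # x_opt = [1]^n
assert f6(np.zeros(n)) == 0.0
assert f9(np.zeros(n)) == 0.0
# f8 attains roughly -418.98 * n at x_i ~ 420.97, as listed in Table 12.
assert abs(f8(np.full(n, 420.9687)) - (-418.9829 * n)) < 1e-2 * n
```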

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgment

The present research has been supported under Grant CONACYT CB 181053.

References

[1] H. Zhang, J. E. Fritts, and S. A. Goldman, "Image segmentation evaluation: a survey of unsupervised methods," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 260–280, 2008.
[2] T. Uemura, G. Koutaki, and K. Uchimura, "Image segmentation based on edge detection using boundary code," International Journal of Innovative Computing, vol. 7, pp. 6073–6083, 2011.
[3] L. Wang, H. Wu, and C. Pan, "Region-based image segmentation with local signed difference energy," Pattern Recognition Letters, vol. 34, no. 6, pp. 637–645, 2013.
[4] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
[5] R. Peng and P. K. Varshney, "On performance limits of image segmentation algorithms," Computer Vision and Image Understanding, vol. 132, pp. 24–38, 2015.
[6] M. A. Balafar, "Gaussian mixture model based segmentation methods for brain MRI images," Artificial Intelligence Review, vol. 41, no. 3, pp. 429–439, 2014.
[7] G. J. McLachlan and S. Rathnayake, "On the number of components in a Gaussian mixture model," Data Mining and Knowledge Discovery, vol. 4, no. 5, pp. 341–355, 2014.
[8] D. Oliva, V. Osuna-Enciso, E. Cuevas, G. Pajares, M. Pérez-Cisneros, and D. Zaldívar, "Improving segmentation velocity using an evolutionary method," Expert Systems with Applications, vol. 42, no. 14, pp. 5874–5886, 2015.
[9] Z.-W. Ye, M.-W. Wang, W. Liu, and S.-B. Chen, "Fuzzy entropy based optimal thresholding using bat algorithm," Applied Soft Computing, vol. 31, pp. 381–395, 2015.
[10] S. Sarkar, S. Das, and S. Sinha Chaudhuri, "A multilevel color image thresholding scheme based on minimum cross entropy and differential evolution," Pattern Recognition Letters, vol. 54, pp. 27–35, 2015.
[11] A. K. Bhandari, A. Kumar, and G. K. Singh, "Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur's, Otsu and Tsallis functions," Expert Systems with Applications, vol. 42, no. 3, pp. 1573–1601, 2015.
[12] H. Permuter, J. Francos, and I. Jermyn, "A study of Gaussian mixture models of color and texture features for image classification and segmentation," Pattern Recognition, vol. 39, no. 4, pp. 695–706, 2006.
[13] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1–38, 1977.
[14] Z. Zhang, C. Chen, J. Sun, and K. L. Chan, "EM algorithms for Gaussian mixtures with split-and-merge operation," Pattern Recognition, vol. 36, no. 9, pp. 1973–1983, 2003.
[15] H. Park, S.-I. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755–764, 2000.
[16] H. Park and T. Ozeki, "Singularity and slow convergence of the EM algorithm for Gaussian mixtures," Neural Processing Letters, vol. 29, no. 1, pp. 45–59, 2009.
[17] L. Gupta and T. Sortrakul, "A Gaussian-mixture-based image segmentation algorithm," Pattern Recognition, vol. 31, no. 3, pp. 315–325, 1998.
[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213–1219, 2013.
[19] E. Cuevas, F. Sención, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321–336, 2012.
[20] E. Cuevas, V. Osuna-Enciso, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.
[21] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.
[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357–381, 2014.
[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and M. Pérez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.
[24] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805–818, 2011.
[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701–713, 2009.
[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661–673, 2009.
[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.
[28] R. Storn and K. Price, "Differential Evolution - a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.
[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515–4538, 2011.
[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing Journal, vol. 9, no. 3, pp. 1149–1155, 2009.
[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119–135, 2013.
[32] W. Gong, Á. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364–5386, 2011.
[33] D. M. Gordon, "The organization of work in social insect colonies," Complexity, vol. 8, no. 1, pp. 43–46, 2002.
[34] S. Kizaki and M. Katori, "Stochastic lattice model for locust outbreak," Physica A: Statistical Mechanics and Its Applications, vol. 266, no. 1–4, pp. 339–342, 1999.
[35] S. M. Rogers, D. A. Cullen, M. L. Anstey et al., "Rapid behavioural gregarization in the desert locust, Schistocerca gregaria, entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9–26, 2014.
[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal: Special Topics, vol. 157, no. 1, pp. 93–109, 2008.
[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.
[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.
[39] B. Hölldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.
[40] B. Hölldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.
[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101–112, 2013.
[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196–205, 2012.
[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723–738, 2014.
[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.
[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskål)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.
[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.
[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256–272, 2014.
[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635–672, 2005.
[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[50] F. Herrera, M. Lozano, and A. M. Sánchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309–338, 2003.
[51] E. Cuevas, D. Zaldívar, M. Pérez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93–107, 2011.
[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140–157, 2015.
[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.
[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.
[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.
[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.
[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.
[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335–1346, 1996.
[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.


Figure 15: Convergence results. (a) Convergence of J + ABC, J + AIS, and J + DE considering Gaussian mixtures of 8 classes; (b) convergence of the proposed method (reduced Gaussian mixture). [Histogram plots omitted; axes: frequency versus gray level, with class legends for classes 1-8 in (a) and classes 1-4 in (b).]

Figure 16: Segmentation results obtained by (a) the methods J + ABC, J + AIS, and J + DE considering Gaussian mixtures of 8 classes and (b) the proposed method (reduced Gaussian mixture). [Images omitted.]

In the comparison, the discussion focuses on the following issues: first, accuracy; second, convergence; and third, computational cost.

Convergence. This section analyzes the approximation convergence when the number of classes used by the algorithm during the evolution process differs from the actual number of classes in the image. Recall that the proposed approach automatically finds the reduced Gaussian mixture that best adapts to the image histogram.

In the experiment, the methods J + ABC, J + AIS, and J + DE are executed considering Gaussian mixtures composed of 8 functions. Under such circumstances, the number of classes to be detected is higher than the actual number of classes in the image. On the other hand, the proposed algorithm maintains the same configuration of Table 5. Therefore, it can detect and calculate up to ten classes (K = 10).

As a result, the techniques J + ABC, J + AIS, and J + DE tend to overestimate the image histogram. This effect can be seen in Figure 15(a), where the resulting Gaussian functions are concentrated within the actual classes. Such a behavior is a consequence of the evaluation considered by the original objective function J, which privileges only the approximation between the Gaussian mixture and the image histogram. This effect is graphically illustrated by Figure 16(a), which shows the pixel misclassification produced by the wrong segmentation of Figure 14(a). On the other hand, the proposed approach obtains a reduced Gaussian mixture model which allows the detection of each class from


Figure 17: Approximation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed Jnew + LS approach. [Histogram plots omitted; axes: frequency versus gray level, with legends for classes 1-4 in (c) and (d).]

the actual histogram (see Figure 15(b)). As a consequence, the segmentation is significantly improved by eliminating the pixel misclassification, as shown in Figure 16(b).

It is evident from Figure 15 that the techniques J + ABC, J + AIS, and J + DE all need a priori knowledge of the number of classes contained in the actual histogram in order to obtain a satisfactory result. On the other hand, the proposed algorithm is able to find a reduced Gaussian mixture whose classes coincide with the actual number of classes contained in the image histogram.

Accuracy. In this section, the comparison among the algorithms in terms of accuracy is reported. Most of the reported comparisons [19-26] are concerned with comparing the parameters of the resultant Gaussian mixtures by using real images. Under such circumstances, it is difficult to consider a clear reference in order to define a meaningful error. Therefore, the image defined in Figure 14 has been used in the experiments, because its construction parameters are clearly defined in Table 7.

Since the parameter values from Table 7 act as ground truth, a simple averaged difference between them and the values computed by each algorithm could be used as a comparison error. However, as each parameter maintains different active intervals, it is necessary to express the differences in terms of percentage. Therefore, if $\Delta\beta$ is the parametric difference and $\beta$ the ground-truth parameter, the percentage error $\Delta\beta\%$ can be defined as follows:

\[ \Delta\beta\% = \frac{\Delta\beta}{\beta}\cdot 100 \quad (22) \]

In the segmentation problem, each Gaussian mixture represents a $K$-dimensional model, where each dimension corresponds to a Gaussian function of the optimization problem to be solved. Since each Gaussian function possesses three parameters, $P_i$, $\mu_i$, and $\sigma_i$, the complete number of parameters is $3\cdot K$. Therefore, the final error $E$ produced by the final Gaussian mixture is

\[ E = \frac{1}{3\cdot K}\sum_{v=1}^{3\cdot K}\Delta\beta_v \quad (23) \]

where $\beta_v \in \{P_i, \mu_i, \sigma_i\}$.

In order to compare accuracy, the algorithms J + ABC, J + AIS, J + DE, and the proposed approach are all executed over the image shown in Figure 14(a). The experiment aims


Figure 18: Segmentation results in terms of accuracy. (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed Jnew + LS approach. [Images omitted.]

to estimate the Gaussian mixture that best approximates the actual image histogram. The methods J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the Jnew + LS method, although the algorithm finds a reduced Gaussian mixture of four functions, it is initially set with ten functions (K = 10). Table 8 presents the final Gaussian mixture parameters and the final error E. The final Gaussian mixture parameters have been averaged over 30 independent executions in order to assure consistency. A close inspection of Table 8 reveals that the proposed method is able to achieve the smallest error E in comparison to the other algorithms. Figure 17 presents the histogram approximations produced by each algorithm, whereas Figure 18 shows the corresponding segmented images. Both illustrations present the median case obtained throughout 30 runs. Figure 18 exhibits that J + ABC, J + AIS, and J + DE present different levels of misclassification, which are nearly absent in the proposed approach.
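To make (22) and (23) concrete, the accuracy measure can be sketched as follows (numpy assumed; the mixture parameters below are illustrative placeholders, not the values of Tables 7 and 8):

```python
import numpy as np

def mixture_error(ground_truth, estimated):
    """Final error E of (23): mean of the per-parameter percentage
    errors (22) over the 3*K values (P_i, mu_i, sigma_i)."""
    gt = np.asarray(ground_truth, dtype=float)
    est = np.asarray(estimated, dtype=float)
    pct = np.abs(est - gt) / np.abs(gt) * 100.0   # Delta-beta%, per (22)
    return pct.mean()                             # E, per (23)

# Illustrative K = 2 mixtures: each row holds (P_i, mu_i, sigma_i).
gt = [(0.05, 40.0, 8.0), (0.04, 100.0, 10.0)]
est = [(0.055, 42.0, 8.4), (0.036, 95.0, 10.5)]
print(round(mixture_error(gt, est), 2))   # -> 6.67
```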

Computational Cost. The experiment aims to measure the complexity and the computing time spent by the J + ABC,

Table 8: Results of the reduced Gaussian mixture in terms of accuracy.

Algorithm | Gaussian function | $P_i$ | $\mu_i$ | $\sigma_i$ | $E$
J + ABC | (1) | 0.052 | 44.5 | 6.4 | 11.79
J + ABC | (2) | 0.084 | 98.12 | 12.87 |
J + ABC | (3) | 0.058 | 163.50 | 8.94 |
J + ABC | (4) | 0.025 | 218.84 | 17.5 |
J + AIS | (1) | 0.072 | 31.01 | 6.14 | 22.01
J + AIS | (2) | 0.054 | 88.52 | 12.21 |
J + AIS | (3) | 0.039 | 149.21 | 9.14 |
J + AIS | (4) | 0.034 | 248.41 | 13.84 |
J + DE | (1) | 0.041 | 35.74 | 7.86 | 13.57
J + DE | (2) | 0.036 | 90.57 | 11.97 |
J + DE | (3) | 0.059 | 148.47 | 9.01 |
J + DE | (4) | 0.020 | 201.34 | 13.02 |
Jnew + LS | (1) | 0.049 | 40.12 | 7.5 | 3.98
Jnew + LS | (2) | 0.041 | 102.04 | 10.4 |
Jnew + LS | (3) | 0.052 | 168.66 | 8.3 |
Jnew + LS | (4) | 0.025 | 110.92 | 15.2 |


Figure 19: Images (a)-(d) employed in the computational cost analysis, with the segmentation produced by J + ABC, J + AIS, J + DE, and Jnew + LS for each image. [Image grid omitted.]

the J + AIS, the J + DE, and the Jnew + LS algorithms while calculating the parameters of the Gaussian mixture in benchmark images (see Figures 19(a)-19(d)). J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the Jnew + LS method, the algorithm finds a reduced Gaussian mixture of four functions despite being initialized with ten functions (K = 10). Table 9 shows the averaged measurements after 30 experiments. It is evident that J + ABC and J + DE are the slowest to converge (iterations), and J + AIS shows the highest computational cost (time elapsed) because it requires operators which demand long computing times. On the other hand, Jnew + LS shows an acceptable trade-off between its convergence speed and its computational cost. Therefore, although the implementation of Jnew + LS in general requires more code than most other evolution-based segmentators, such a fact is not reflected in the execution time. Finally, Figure 19 below shows the segmented images as they are


Figure 20: Experimental set used in the evaluation of the segmentation results: images (a)-(d). [Images omitted.]

Table 9: Iterations and time requirements of the J + ABC, the J + AIS, the J + DE, and the Jnew + LS algorithms as they are applied to segment the benchmark images (see Figure 19).

Algorithm | (a) | (b) | (c) | (d)
J + ABC | 855 iterations, 2.72 s | 833, 2.70 s | 870, 2.73 s | 997, 3.1 s
J + AIS | 725 iterations, 1.78 s | 704, 1.61 s | 754, 1.41 s | 812, 2.01 s
J + DE | 657 iterations, 1.25 s | 627, 1.12 s | 694, 1.45 s | 742, 1.88 s
Jnew + LS | 314 iterations, 0.98 s | 298, 0.84 s | 307, 0.72 s | 402, 1.02 s

Table 10: Evaluation of the segmentation results in terms of the ROS index.

Image (number of classes) | (a) $N_R = 4$ | (b) $N_R = 3$ | (c) $N_R = 4$ | (d) $N_R = 4$
J + ABC | 0.534 | 0.328 | 0.411 | 0.457
J + AIS | 0.522 | 0.321 | 0.427 | 0.437
J + DE | 0.512 | 0.312 | 0.408 | 0.418
Jnew + LS | 0.674 | 0.401 | 0.514 | 0.527

generated by each algorithm. It can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by an incorrect pixel classification.

7.3. Performance Evaluation of the Segmentation Results. This section presents an objective evaluation of the segmentation results produced by all algorithms in the comparisons. The ill-defined nature of the segmentation problem makes the evaluation of a candidate algorithm difficult [57]. Traditionally, the evaluation has been conducted by using some supervised criteria [58], which are based on the computation of a dissimilarity measure between a segmentation result and a ground truth image. Recently, the use of unsupervised measures has substituted supervised indexes for the objective evaluation of segmentation results [59]. They

Table 11: Unimodal test functions (each entry lists the test function, its search domain $\mathbf{S}$, and its optimum value $f_{opt}$).

$f_1(\mathbf{x}) = \sum_{i=1}^{n} x_i^2$ | $[-100, 100]^n$ | $0$
$f_2(\mathbf{x}) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | $[-10, 10]^n$ | $0$
$f_3(\mathbf{x}) = \sum_{i=1}^{n}\left(\sum_{j=1}^{i} x_j\right)^2$ | $[-100, 100]^n$ | $0$
$f_4(\mathbf{x}) = \max_i\left\{|x_i|,\ 1 \le i \le n\right\}$ | $[-100, 100]^n$ | $0$
$f_5(\mathbf{x}) = \sum_{i=1}^{n-1}\left[100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2\right]$ | $[-30, 30]^n$ | $0$
$f_6(\mathbf{x}) = \sum_{i=1}^{n}\left(\lfloor x_i + 0.5\rfloor\right)^2$ | $[-100, 100]^n$ | $0$
$f_7(\mathbf{x}) = \sum_{i=1}^{n} i\,x_i^4 + \mathrm{rand}(0, 1)$ | $[-1.28, 1.28]^n$ | $0$

enable the quantification of the quality of a segmentation result without a priori knowledge (ground truth image).

Evaluation Criteria. In this paper, the unsupervised index ROS proposed by Chabrier et al. [60] has been used to objectively evaluate the performance of each candidate algorithm. This index evaluates the segmentation quality in terms of the homogeneity within segmented regions and the heterogeneity among the different regions. ROS can be computed as follows:

\[ \mathrm{ROS} = \frac{\bar{D} - D}{2} \quad (24) \]

where $D$ quantifies the homogeneity within segmented regions. Similarly, $\bar{D}$ measures the disparity among the regions. A segmentation result $S_1$ is considered better than $S_2$ if $\mathrm{ROS}_{S_1} > \mathrm{ROS}_{S_2}$. The intraregion homogeneity characterized by $D$ is calculated considering the following formulation:

\[ D = \frac{1}{N_R}\sum_{c=1}^{N_R}\frac{R_c}{I}\cdot\frac{\sigma_c}{\sum_{l=1}^{N_R}\sigma_l} \quad (25) \]

where $N_R$ represents the number of partitions in which the image has been segmented, $R_c$ symbolizes the number of pixels contained in partition $c$, $I$ is the number of pixels in the complete image, and $\sigma_c$ is the standard deviation of partition $c$.


Figure 21: Segmentation results obtained by J + ABC, J + AIS, J + DE, and Jnew + LS over the experimental images (a)-(d) used in the evaluation. [Image grid omitted.]

pixels contained in the partition 119888 whereas 119868 considers thenumber of pixels that integrate the complete image Similarly120590119888represents the standard deviation from the partition 119888 On

the other hand disparity among the regions 119863 is computedas follows

119863 =1119873119877

119873119877

sum

119888=1

119877119888

119868sdot [

1(119873119877minus 1)

119873119877

sum

119897=1

1003816100381610038161003816120583119888 minus 1205831198971003816100381610038161003816

255] (26)

where 120583119888is the average gray level in the partition 119888

Experimental Protocol In the comparison of segmentationresults a set of four classical images has been chosen tointegrate the experimental set (Figure 20) The segmentationmethods used in the comparison are 119869 + ABC [19] 119869 + AIS[20] and 119869 + DE [21]

From all segmentation methods used in the comparisonthe proposed 119869new + LS algorithm is the only one that has thecapacity to automatically detect the number of segmentationpartitions (classes) In order to conduct a fair comparisonall algorithms have been proved by using the same number

Mathematical Problems in Engineering 23

Table 12: Multimodal test functions.

$f_8(\mathbf{x}) = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{|x_i|}\right)$;  $\mathbf{S} = [-500, 500]^n$;  $f_{opt} = -418.98 \cdot n$

$f_9(\mathbf{x}) = \sum_{i=1}^{n} \left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$;  $\mathbf{S} = [-5.12, 5.12]^n$;  $f_{opt} = 0$

$f_{10}(\mathbf{x}) = -20\exp\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right) + 20 + e$;  $\mathbf{S} = [-32, 32]^n$;  $f_{opt} = 0$

$f_{11}(\mathbf{x}) = \tfrac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\tfrac{x_i}{\sqrt{i}}\right) + 1$;  $\mathbf{S} = [-600, 600]^n$;  $f_{opt} = 0$

$f_{12}(\mathbf{x}) = \tfrac{\pi}{n}\left\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1}(y_i - 1)^2\left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_n - 1)^2\right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$;  $\mathbf{S} = [-50, 50]^n$;  $f_{opt} = 0$,
where $y_i = 1 + (x_i + 1)/4$ and
$u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k(-x_i - a)^m, & x_i < -a \end{cases}$

$f_{13}(\mathbf{x}) = 0.1\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{n}(x_i - 1)^2\left[1 + \sin^2(3\pi x_i + 1)\right] + (x_n - 1)^2\left[1 + \sin^2(2\pi x_n)\right]\right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$;  $\mathbf{S} = [-50, 50]^n$;  $f_{opt} = 0$

of partitions. Therefore, in the experiments, the J_new + LS segmentation algorithm is first applied to detect the best possible number of partitions, N_R. Once the number of partitions N_R has been obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.
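The multimodal benchmarks of Table 12 are straightforward to reproduce. The following sketch (function names are ours) implements three of them in NumPy, under the usual convention that the Ackley function f10 includes the additive constant e, consistent with its listed minimum of 0:

```python
import numpy as np

# Sketches of three multimodal benchmark functions from Table 12.
def f9_rastrigin(x):
    x = np.asarray(x, dtype=float)
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def f10_ackley(x):
    # Includes the conventional "+ e" term so that f_opt = 0.
    x = np.asarray(x, dtype=float)
    n = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

def f11_griewank(x):
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return float(np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)
```

All three reach their minimum f_opt = 0 at x = [0]^n, which matches the optimum locations listed in the Appendix.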

Figure 21 presents the segmentation results obtained by each algorithm considering the experimental set from Figure 20. On the other hand, Table 10 shows the evaluation of the segmentation results in terms of the ROS index. Such values represent the averaged measurements after 30 executions. From them, it can be seen that the proposed J_new + LS method obtains the best ROS indexes. Such values indicate that the proposed algorithm maintains the best balance between the homogeneity within segmented regions and the heterogeneity among the different regions. From Figure 21, it can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts that are produced by an incorrect pixel classification.

8 Conclusions

Despite the fact that several evolutionary methods have been successfully applied to image segmentation with interesting results, most of them have exhibited two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance.

In this paper, a new swarm algorithm for automatic image segmentation, called Locust Search (LS), has been presented. The proposed method eliminates the typical flaws presented by previous evolutionary approaches by combining a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments have been divided into two parts. First, the proposed LS method has been compared to other well-known evolutionary techniques on a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm has been compared to other segmentation methods based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

Several research directions will be considered for future work, such as the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, n is the dimension of the function, f_opt is the minimum value of the function, and S is a subset of R^n. The optimum location (x_opt) for the functions in Table 11 is at [0]^n, except for f_5, whose optimum is at [1]^n.

The optimum locations (x_opt) for the functions in Table 12 are at [0]^n, except for f_8, at [420.96]^n, and f_12–f_13, at [1]^n.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgment

The present research has been supported under Grant CONACYT CB 181053.

References

[1] H. Zhang, J. E. Fritts, and S. A. Goldman, "Image segmentation evaluation: a survey of unsupervised methods," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 260–280, 2008.

[2] T. Uemura, G. Koutaki, and K. Uchimura, "Image segmentation based on edge detection using boundary code," International Journal of Innovative Computing, vol. 7, pp. 6073–6083, 2011.

[3] L. Wang, H. Wu, and C. Pan, "Region-based image segmentation with local signed difference energy," Pattern Recognition Letters, vol. 34, no. 6, pp. 637–645, 2013.

[4] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.

[5] R. Peng and P. K. Varshney, "On performance limits of image segmentation algorithms," Computer Vision and Image Understanding, vol. 132, pp. 24–38, 2015.

[6] M. A. Balafar, "Gaussian mixture model based segmentation methods for brain MRI images," Artificial Intelligence Review, vol. 41, no. 3, pp. 429–439, 2014.

[7] G. J. McLachlan and S. Rathnayake, "On the number of components in a Gaussian mixture model," Data Mining and Knowledge Discovery, vol. 4, no. 5, pp. 341–355, 2014.

[8] D. Oliva, V. Osuna-Enciso, E. Cuevas, G. Pajares, M. Pérez-Cisneros, and D. Zaldívar, "Improving segmentation velocity using an evolutionary method," Expert Systems with Applications, vol. 42, no. 14, pp. 5874–5886, 2015.

[9] Z.-W. Ye, M.-W. Wang, W. Liu, and S.-B. Chen, "Fuzzy entropy based optimal thresholding using bat algorithm," Applied Soft Computing, vol. 31, pp. 381–395, 2015.

[10] S. Sarkar, S. Das, and S. Sinha Chaudhuri, "A multilevel color image thresholding scheme based on minimum cross entropy and differential evolution," Pattern Recognition Letters, vol. 54, pp. 27–35, 2015.

[11] A. K. Bhandari, A. Kumar, and G. K. Singh, "Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur's, Otsu and Tsallis functions," Expert Systems with Applications, vol. 42, no. 3, pp. 1573–1601, 2015.

[12] H. Permuter, J. Francos, and I. Jermyn, "A study of Gaussian mixture models of color and texture features for image classification and segmentation," Pattern Recognition, vol. 39, no. 4, pp. 695–706, 2006.

[13] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1–38, 1977.

[14] Z. Zhang, C. Chen, J. Sun, and K. L. Chan, "EM algorithms for Gaussian mixtures with split-and-merge operation," Pattern Recognition, vol. 36, no. 9, pp. 1973–1983, 2003.

[15] H. Park, S.-I. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755–764, 2000.

[16] H. Park and T. Ozeki, "Singularity and slow convergence of the EM algorithm for Gaussian mixtures," Neural Processing Letters, vol. 29, no. 1, pp. 45–59, 2009.

[17] L. Gupta and T. Sortrakul, "A Gaussian-mixture-based image segmentation algorithm," Pattern Recognition, vol. 31, no. 3, pp. 315–325, 1998.

[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213–1219, 2013.

[19] E. Cuevas, F. Sención, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321–336, 2012.

[20] E. Cuevas, V. Osuna-Enciso, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.

[21] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.

[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357–381, 2014.

[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and M. Pérez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.

[24] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805–818, 2011.

[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701–713, 2009.

[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661–673, 2009.

[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.

[28] R. Storn and K. Price, "Differential Evolution: a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.

[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515–4538, 2011.

[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing Journal, vol. 9, no. 3, pp. 1149–1155, 2009.

[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119–135, 2013.

[32] W. Gong, A. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364–5386, 2011.

[33] D. M. Gordon, "The organization of work in social insect colonies," Complexity, vol. 8, no. 1, pp. 43–46, 2002.

[34] S. Kizaki and M. Katori, "Stochastic lattice model for locust outbreak," Physica A: Statistical Mechanics and its Applications, vol. 266, no. 1–4, pp. 339–342, 1999.


[35] S. M. Rogers, D. A. Cullen, M. L. Anstey et al., "Rapid behavioural gregarization in the desert locust Schistocerca gregaria entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9–26, 2014.

[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal Special Topics, vol. 157, no. 1, pp. 93–109, 2008.

[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.

[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.

[39] B. Hölldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.

[40] B. Hölldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.

[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101–112, 2013.

[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196–205, 2012.

[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723–738, 2014.

[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.

[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskål)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.

[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.

[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256–272, 2014.

[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635–672, 2005.

[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.

[50] F. Herrera, M. Lozano, and A. M. Sánchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309–338, 2003.

[51] E. Cuevas, D. Zaldívar, M. Pérez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93–107, 2011.

[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.

[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140–157, 2015.

[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.

[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.

[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.

[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.

[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.

[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335–1346, 1996.

[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.



the other hand disparity among the regions 119863 is computedas follows

119863 =1119873119877

119873119877

sum

119888=1

119877119888

119868sdot [

1(119873119877minus 1)

119873119877

sum

119897=1

1003816100381610038161003816120583119888 minus 1205831198971003816100381610038161003816

255] (26)

where 120583119888is the average gray level in the partition 119888

Experimental Protocol In the comparison of segmentationresults a set of four classical images has been chosen tointegrate the experimental set (Figure 20) The segmentationmethods used in the comparison are 119869 + ABC [19] 119869 + AIS[20] and 119869 + DE [21]

From all segmentation methods used in the comparisonthe proposed 119869new + LS algorithm is the only one that has thecapacity to automatically detect the number of segmentationpartitions (classes) In order to conduct a fair comparisonall algorithms have been proved by using the same number

Mathematical Problems in Engineering 23

Table 12 Multimodal test functions

Test function S 119891opt

1198918(x) =119899

sum

119894=1minus119909119894sin(radic1003816100381610038161003816119909119894

1003816100381610038161003816) [minus500 500]119899 minus41898 lowast 119899

1198919(x) =119899

sum

119894=1[119909

2119894minus 10 cos (2120587119909

119894) + 10] [minus512 512]119899 0

11989110(x) = minus20 exp(minus02radic 1119899

119899

sum

119894=11199092119894) minus exp(1

119899

119899

sum

119894=1cos (2120587119909

119894)) + 20 [minus32 32]119899 0

11989111(x) =1

4000

119899

sum

119894=11199092119894minus

119899

prod

119894=1

cos(119909119894

radic119894) + 1 [minus600 600]119899 0

11989112(x) =120587

11989910 sin (1205871199101) +

119899minus1sum

119894=1(119910119894minus 1)2 [1 + 10 sin2

(120587119910119894+1)] + (119910119899 minus 1)2 +

119899

sum

119894=1119906(119909119894 10 100 4)

[minus50 50]119899 0119910119894= 1 +

119909119894+ 14

119906(119909119894 119886 119896 119898) =

119896 (119909119894minus 119886)119898

119909119894gt 119886

0 minus119886 lt 119909119894lt 119886

119896 (minus119909119894minus 119886)119898

119909119894lt minus119886

11989113(x) = 01sin2(31205871199091) +

119899

sum

119894=1(119909119894minus 1)2 [1 + sin2

(3120587119909119894+ 1)] + (119909

119899minus 1)2 [1 + sin2

(2120587119909119899)] +

119899

sum

119894=1119906(119909119894 5 100 4) [minus50 50]119899 0

of partitions Therefore in the experiments the 119869new + LSsegmentation algorithm is firstly applied to detect the bestpossible number of partitions 119873

119877 Once we obtained the

number of partitions 119873119877 the rest of the algorithms were

configured to approximate the image histogram with thisnumber of classes

Figure 21 presents the segmentation results obtainedby each algorithm considering the experimental set fromFigure 20 On the other hand Table 10 shows the evaluationof the segmentation results in terms of the ROS indexSuch values represent the averaged measurements after 30executions From them it can be seen that the proposed119869new

+ LS method obtains the best ROS indexes Such valuesindicate that the proposed algorithm maintains the bestbalance between the homogeneity within segmented regionsand the heterogeneity among the different regions FromFigure 21 it can be seen that the proposed approach generatesmore homogeneous regions whereas 119869 + ABC 119869 + AIS and119869 + DE present several artifacts that are produced by anincorrect pixel classification

8 Conclusions

Despite the fact that several evolutionary methods have beensuccessfully applied to image segmentation with interest-ing results most of them have exhibited two importantlimitations (1) they frequently obtain suboptimal results(misclassifications) as a consequence of an inappropriatebalance between exploration and exploitation in their searchstrategies (2) the number of classes is fixed and known inadvance

In this paper a new swarm algorithm for the automaticimage segmentation called the Locust Search (LS) hasbeen presented The proposed method eliminates the typicalflaws presented by previous evolutionary approaches bycombining a novel evolutionary method with the definition

of a new objective function that appropriately evaluatesthe segmentation quality with respect to the number ofclasses In order to illustrate the proficiency and robustness ofthe proposed approach several numerical experiments havebeen conducted Such experiments have been divided intotwo parts First the proposed LS method has been comparedto other well-known evolutionary techniques on a set ofbenchmark functions In the second part the performanceof the proposed segmentation algorithm has been comparedto other segmentation methods based on evolutionary prin-ciples The results in both cases validate the efficiency of theproposed technique with regard to accuracy and robustness

Several research directions will be considered for future work, such as the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, $n$ is the dimension of the function, $f_{\text{opt}}$ is the minimum value of the function, and $S$ is a subset of $R^n$. The optimum location ($\mathbf{x}_{\text{opt}}$) for the functions in Table 11 is at $[0]^n$, except for $f_5$, whose $\mathbf{x}_{\text{opt}}$ is at $[1]^n$.

The optimum locations ($\mathbf{x}_{\text{opt}}$) for the functions in Table 12 are at $[0]^n$, except for $f_8$, at $[420.96]^n$, and $f_{12}$-$f_{13}$, at $[1]^n$.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

24 Mathematical Problems in Engineering

Acknowledgment

The present research has been supported under Grant CONACYT CB 181053.

References

[1] H. Zhang, J. E. Fritts, and S. A. Goldman, "Image segmentation evaluation: a survey of unsupervised methods," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 260–280, 2008.

[2] T. Uemura, G. Koutaki, and K. Uchimura, "Image segmentation based on edge detection using boundary code," International Journal of Innovative Computing, vol. 7, pp. 6073–6083, 2011.

[3] L. Wang, H. Wu, and C. Pan, "Region-based image segmentation with local signed difference energy," Pattern Recognition Letters, vol. 34, no. 6, pp. 637–645, 2013.

[4] N. Otsu, "A thresholding selection method from gray-level histogram," IEEE Transactions on Systems, Man and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.

[5] R. Peng and P. K. Varshney, "On performance limits of image segmentation algorithms," Computer Vision and Image Understanding, vol. 132, pp. 24–38, 2015.

[6] M. A. Balafar, "Gaussian mixture model based segmentation methods for brain MRI images," Artificial Intelligence Review, vol. 41, no. 3, pp. 429–439, 2014.

[7] G. J. McLachlan and S. Rathnayake, "On the number of components in a Gaussian mixture model," Data Mining and Knowledge Discovery, vol. 4, no. 5, pp. 341–355, 2014.

[8] D. Oliva, V. Osuna-Enciso, E. Cuevas, G. Pajares, M. Pérez-Cisneros, and D. Zaldívar, "Improving segmentation velocity using an evolutionary method," Expert Systems with Applications, vol. 42, no. 14, pp. 5874–5886, 2015.

[9] Z.-W. Ye, M.-W. Wang, W. Liu, and S.-B. Chen, "Fuzzy entropy based optimal thresholding using bat algorithm," Applied Soft Computing, vol. 31, pp. 381–395, 2015.

[10] S. Sarkar, S. Das, and S. Sinha Chaudhuri, "A multilevel color image thresholding scheme based on minimum cross entropy and differential evolution," Pattern Recognition Letters, vol. 54, pp. 27–35, 2015.

[11] A. K. Bhandari, A. Kumar, and G. K. Singh, "Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur's, Otsu and Tsallis functions," Expert Systems with Applications, vol. 42, no. 3, pp. 1573–1601, 2015.

[12] H. Permuter, J. Francos, and I. Jermyn, "A study of Gaussian mixture models of color and texture features for image classification and segmentation," Pattern Recognition, vol. 39, no. 4, pp. 695–706, 2006.

[13] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1–38, 1977.

[14] Z. Zhang, C. Chen, J. Sun, and K. L. Chan, "EM algorithms for Gaussian mixtures with split-and-merge operation," Pattern Recognition, vol. 36, no. 9, pp. 1973–1983, 2003.

[15] H. Park, S.-I. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755–764, 2000.

[16] H. Park and T. Ozeki, "Singularity and slow convergence of the EM algorithm for Gaussian mixtures," Neural Processing Letters, vol. 29, no. 1, pp. 45–59, 2009.

[17] L. Gupta and T. Sortrakul, "A Gaussian-mixture-based image segmentation algorithm," Pattern Recognition, vol. 31, no. 3, pp. 315–325, 1998.

[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213–1219, 2013.

[19] E. Cuevas, F. Sencion, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321–336, 2012.

[20] E. Cuevas, V. Osuna-Enciso, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.

[21] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.

[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357–381, 2014.

[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and M. Pérez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.

[24] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805–818, 2011.

[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701–713, 2009.

[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661–673, 2009.

[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.

[28] R. Storn and K. Price, "Differential Evolution - a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.

[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515–4538, 2011.

[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing Journal, vol. 9, no. 3, pp. 1149–1155, 2009.

[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119–135, 2013.

[32] W. Gong, A. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364–5386, 2011.

[33] D. M. Gordon, "The organization of work in social insect colonies," Complexity, vol. 8, no. 1, pp. 43–46, 2002.

[34] S. Kizaki and M. Katori, "Stochastic lattice model for locust outbreak," Physica A: Statistical Mechanics and its Applications, vol. 266, no. 1–4, pp. 339–342, 1999.


[35] S. M. Rogers, D. A. Cullen, M. L. Anstey, et al., "Rapid behavioural gregarization in the desert locust Schistocerca gregaria entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9–26, 2014.

[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal Special Topics, vol. 157, no. 1, pp. 93–109, 2008.

[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.

[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.

[39] B. Hölldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.

[40] B. Hölldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.

[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101–112, 2013.

[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196–205, 2012.

[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723–738, 2014.

[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.

[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskal)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.

[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.

[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256–272, 2014.

[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635–672, 2005.

[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.

[50] F. Herrera, M. Lozano, and A. M. Sánchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309–338, 2003.

[51] E. Cuevas, D. Zaldívar, M. Pérez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93–107, 2011.

[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.

[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140–157, 2015.

[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.

[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.

[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.

[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.

[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.

[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335–1346, 1996.

[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.



Figure 18: Segmentation results in terms of accuracy: (a) J + ABC, (b) J + AIS, (c) J + DE, and (d) the proposed Jnew + LS approach.

to estimate the Gaussian mixture that better approximates the actual image histogram. The methods J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the Jnew + LS method, although the algorithm finds a reduced Gaussian mixture of four functions, it is initially set with ten functions (K = 10). Table 8 presents the final Gaussian mixture parameters and the final error E. The final Gaussian mixture parameters have been averaged over 30 independent executions in order to assure consistency. A close inspection of Table 8 reveals that the proposed method is able to achieve the smallest error (E) in comparison to the other algorithms. Figure 16 presents the histogram approximations that are produced by each algorithm, whereas Figure 17 shows the corresponding segmented images. Both illustrations present the median case obtained throughout 30 runs. Figure 18 exhibits that J + ABC, J + AIS, and J + DE present different levels of misclassification, which are nearly absent in the proposed approach.
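The comparison above hinges on the mixture-to-histogram error E. As an illustrative sketch (assuming a sum of absolute differences between the normalized 256-bin gray-level histogram and the candidate mixture; the paper's exact definition of E may differ), the evaluation can be written as:

```python
import numpy as np

def gaussian_mixture(g, P, mu, sigma):
    """Evaluate a Gaussian mixture with weights P, means mu, and standard
    deviations sigma at the gray levels g."""
    mix = np.zeros_like(g, dtype=float)
    for p, m, s in zip(P, mu, sigma):
        mix += p * np.exp(-0.5 * ((g - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return mix

def mixture_error(hist, P, mu, sigma):
    """Illustrative error E: sum of absolute differences between a
    normalized 256-bin histogram and a candidate Gaussian mixture."""
    g = np.arange(256)
    return float(np.abs(np.asarray(hist, dtype=float)
                        - gaussian_mixture(g, P, mu, sigma)).sum())
```

A mixture whose parameters match the histogram's true modes drives this error toward zero, which is the behavior Table 8 summarizes for each optimizer.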

Computational Cost. The experiment aims to measure the complexity and the computing time spent by the J + ABC,

Table 8: Results of the reduced Gaussian mixture in terms of accuracy.

Algorithm   Gaussian function   P_i     mu_i     sigma_i    E
J + ABC     (1)                 0.052    44.50     6.40   11.79
            (2)                 0.084    98.12    12.87
            (3)                 0.058   163.50     8.94
            (4)                 0.025   218.84     1.75
J + AIS     (1)                 0.072    31.01     6.14   22.01
            (2)                 0.054    88.52    12.21
            (3)                 0.039   149.21     9.14
            (4)                 0.034   248.41    13.84
J + DE      (1)                 0.041    35.74     7.86   13.57
            (2)                 0.036    90.57    11.97
            (3)                 0.059   148.47     9.01
            (4)                 0.020   201.34    13.02
Jnew + LS   (1)                 0.049    40.12     7.50    3.98
            (2)                 0.041   102.04    10.40
            (3)                 0.052   168.66     8.30
            (4)                 0.025   110.92     1.52


Figure 19: Images employed in the computational cost analysis. (Rows: J + ABC, J + AIS, J + DE, and Jnew + LS; columns: (a)-(d).)

the J + AIS, the J + DE, and the Jnew + LS algorithms while calculating the parameters of the Gaussian mixture in benchmark images (see Figures 19(a)-19(d)). J + ABC, J + AIS, and J + DE consider Gaussian mixtures that are composed of 4 functions (K = 4). In the case of the Jnew + LS method, the algorithm finds a reduced Gaussian mixture of four functions despite being initialized with ten functions (K = 10). Table 9 shows the averaged measurements after 30 experiments. It is evident that J + ABC and J + DE are the slowest to converge (iterations), and J + AIS shows the highest computational cost (time elapsed) because it requires operators which demand long times. On the other hand, Jnew + LS shows an acceptable trade-off between its convergence time and its computational cost. Therefore, although the implementation of Jnew + LS in general requires more code than most other evolution-based segmentators, such a fact is not reflected in the execution time. Finally, Figure 19 shows the segmented images as they are


Figure 20: Experimental set used in the evaluation of the segmentation results: (a)-(d).

Table 9: Iterations and time requirements of the J + ABC, the J + AIS, the J + DE, and the Jnew + LS algorithm as they are applied to segment benchmark images (see Figure 17). Entries report iterations with the elapsed time in parentheses.

Algorithm   (a)            (b)            (c)            (d)
J + ABC     855 (2.72 s)   833 (2.70 s)   870 (2.73 s)   997 (3.1 s)
J + AIS     725 (1.78 s)   704 (1.61 s)   754 (1.41 s)   812 (2.01 s)
J + DE      657 (1.25 s)   627 (1.12 s)   694 (1.45 s)   742 (1.88 s)
Jnew + LS   314 (0.98 s)   298 (0.84 s)   307 (0.72 s)   402 (1.02 s)

Table 10: Evaluation of the segmentation results in terms of the ROS index.

Image (number of classes)   (a) N_R = 4   (b) N_R = 3   (c) N_R = 4   (d) N_R = 4
J + ABC                     0.534         0.328         0.411         0.457
J + AIS                     0.522         0.321         0.427         0.437
J + DE                      0.512         0.312         0.408         0.418
Jnew + LS                   0.674         0.401         0.514         0.527

generated by each algorithm. It can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts that are produced by an incorrect pixel classification.
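Averaged iteration counts and elapsed times such as those reported in Table 9 can be collected with a small harness; `time_run` and the callable it wraps are illustrative stand-ins, not the authors' code:

```python
import time

def time_run(optimizer, runs=30):
    """Average iteration count and wall-clock time of `optimizer` over
    independent runs. `optimizer` is any callable that returns the number
    of iterations it needed (a stand-in for one segmentation method)."""
    iters, elapsed = [], []
    for _ in range(runs):
        t0 = time.perf_counter()
        iters.append(optimizer())
        elapsed.append(time.perf_counter() - t0)
    return sum(iters) / runs, sum(elapsed) / runs
```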

7.3. Performance Evaluation of the Segmentation Results. This section presents an objective evaluation of the segmentation results produced by all algorithms in the comparisons. The ill-defined nature of the segmentation problem makes the evaluation of a candidate algorithm difficult [57]. Traditionally, the evaluation has been conducted by using some supervised criteria [58], which are based on the computation of a dissimilarity measure between a segmentation result and a ground truth image. Recently, the use of unsupervised measures has substituted supervised indexes for the objective evaluation of segmentation results [59]. They

Table 11: Unimodal test functions.

Test function | $S$ | $f_{\text{opt}}$
$f_1(\mathbf{x}) = \sum_{i=1}^{n} x_i^2$ | $[-100, 100]^n$ | 0
$f_2(\mathbf{x}) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | $[-10, 10]^n$ | 0
$f_3(\mathbf{x}) = \sum_{i=1}^{n} \left(\sum_{j=1}^{i} x_j\right)^2$ | $[-100, 100]^n$ | 0
$f_4(\mathbf{x}) = \max_i \{|x_i|,\ 1 \le i \le n\}$ | $[-100, 100]^n$ | 0
$f_5(\mathbf{x}) = \sum_{i=1}^{n-1} \left[100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2\right]$ | $[-30, 30]^n$ | 0
$f_6(\mathbf{x}) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2$ | $[-100, 100]^n$ | 0
$f_7(\mathbf{x}) = \sum_{i=1}^{n} i x_i^4 + \mathrm{rand}(0, 1)$ | $[-1.28, 1.28]^n$ | 0
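A few of the unimodal benchmarks in Table 11 are easy to express directly; the sketch below implements $f_1$ (sphere), $f_4$, and $f_5$ (Rosenbrock) so their optima can be checked against the table (the remaining entries follow the same pattern):

```python
import numpy as np

def f1(x):
    """Sphere function: minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def f4(x):
    """Maximum absolute coordinate: minimum 0 at the origin."""
    return float(np.max(np.abs(np.asarray(x, dtype=float))))

def f5(x):
    """Rosenbrock function: minimum 0 at [1]^n."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (x[:-1] - 1.0) ** 2))
```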

enable the quantification of the quality of a segmentation result without a priori knowledge (ground truth image).

Evaluation Criteria. In this paper, the unsupervised index ROS proposed by Chabrier et al. [60] has been used to objectively evaluate the performance of each candidate algorithm. This index evaluates the segmentation quality in terms of the homogeneity within segmented regions and the heterogeneity among the different regions. ROS can be computed as follows:

$$\mathrm{ROS} = \frac{\overline{D} - D}{2}, \quad (24)$$

where $D$ quantifies the homogeneity within segmented regions; similarly, $\overline{D}$ measures the disparity among the regions. A segmentation result $S_1$ is considered better than $S_2$ if $\mathrm{ROS}_{S_1} > \mathrm{ROS}_{S_2}$. The intraregion homogeneity characterized by $D$ is calculated considering the following formulation:

$$D = \frac{1}{N_R} \sum_{c=1}^{N_R} \frac{R_c}{I} \cdot \frac{\sigma_c}{\sum_{l=1}^{N_R} \sigma_l}, \quad (25)$$

where $N_R$ represents the number of partitions into which the image has been segmented, and $R_c$ symbolizes the number of


Figure 21: Segmentation results used in the evaluation. (Rows: J + ABC, J + AIS, J + DE, and Jnew + LS; columns: (a)-(d).)

pixels contained in the partition $c$, whereas $I$ considers the number of pixels that integrate the complete image. Similarly, $\sigma_c$ represents the standard deviation of the partition $c$. On the other hand, the disparity among the regions, $\overline{D}$, is computed as follows:

$$\overline{D} = \frac{1}{N_R} \sum_{c=1}^{N_R} \frac{R_c}{I} \cdot \left[\frac{1}{N_R - 1} \sum_{l=1}^{N_R} \frac{|\mu_c - \mu_l|}{255}\right], \quad (26)$$

where $\mu_c$ is the average gray level in the partition $c$.
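Under these definitions, a direct reading of Eqs. (24)-(26) can be sketched as follows; names are illustrative, and the guard for all-flat regions is an implementation choice, not part of the published formula:

```python
import numpy as np

def ros_index(image, labels):
    """Unsupervised ROS criterion: ROS = (Dbar - D) / 2.

    image:  2-D array of gray levels in [0, 255].
    labels: 2-D integer array of the same shape, with region indices.
    """
    image = np.asarray(image, dtype=float)
    labels = np.asarray(labels)
    regions = np.unique(labels)
    n_r = regions.size
    n_pix = image.size
    sizes = np.array([(labels == c).sum() for c in regions])
    mus = np.array([image[labels == c].mean() for c in regions])
    sigmas = np.array([image[labels == c].std() for c in regions])

    # Eq. (25): intra-region disparity D, a size-weighted share of the
    # total per-region standard deviation (0 when every region is flat).
    sig_total = sigmas.sum()
    D = 0.0 if sig_total == 0 else float(
        (sizes / n_pix * sigmas / sig_total).sum() / n_r)

    # Eq. (26): inter-region disparity Dbar, a size-weighted mean absolute
    # difference between region mean gray levels, normalized by 255.
    Dbar = float((sizes / n_pix
                  * np.abs(mus[:, None] - mus[None, :]).sum(axis=1)
                  / ((n_r - 1) * 255)).sum() / n_r)

    return (Dbar - D) / 2.0
```

A labeling that separates flat, well-contrasted regions scores higher than one that mixes gray levels inside each region, which is the ordering Table 10 reports.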

Experimental Protocol. In the comparison of segmentation results, a set of four classical images has been chosen to integrate the experimental set (Figure 20). The segmentation methods used in the comparison are J + ABC [19], J + AIS [20], and J + DE [21].

From all of the segmentation methods used in the comparison, the proposed Jnew + LS algorithm is the only one that has the capacity to automatically detect the number of segmentation partitions (classes). In order to conduct a fair comparison, all algorithms have been tested by using the same number


Table 12: Multimodal test functions.

Test function | $S$ | $f_{\text{opt}}$
$f_8(\mathbf{x}) = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{|x_i|}\right)$ | $[-500, 500]^n$ | $-418.98 \cdot n$
$f_9(\mathbf{x}) = \sum_{i=1}^{n} \left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$ | $[-5.12, 5.12]^n$ | 0
$f_{10}(\mathbf{x}) = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right) + 20 + e$ | $[-32, 32]^n$ | 0
$f_{11}(\mathbf{x}) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | $[-600, 600]^n$ | 0
$f_{12}(\mathbf{x}) = \frac{\pi}{n}\left\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1}(y_i - 1)^2\left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_n - 1)^2\right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$ | $[-50, 50]^n$ | 0
where $y_i = 1 + \frac{x_i + 1}{4}$ and
$u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k(-x_i - a)^m, & x_i < -a \end{cases}$
$f_{13}(\mathbf{x}) = 0.1\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{n}(x_i - 1)^2\left[1 + \sin^2(3\pi x_i + 1)\right] + (x_n - 1)^2\left[1 + \sin^2(2\pi x_n)\right]\right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | $[-50, 50]^n$ | 0
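Several entries of Table 12 are classical benchmarks. A sketch of $f_9$ (Rastrigin), $f_{10}$ (Ackley, with the usual $+e$ constant so its minimum is 0), and $f_{11}$ (Griewank), so their optima can be checked against the table:

```python
import numpy as np

def f9(x):
    """Rastrigin: minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def f10(x):
    """Ackley (with the + e constant): minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n)
                 + 20.0 + np.e)

def f11(x):
    """Griewank: minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return float(np.sum(x ** 2) / 4000.0
                 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)
```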

of partitions. Therefore, in the experiments, the Jnew + LS segmentation algorithm is first applied to detect the best possible number of partitions $N_R$. Once the number of partitions $N_R$ has been obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.
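Once the mixture parameters are known, the final labeling of pixels into the $N_R$ classes can be sketched as a nearest-mean assignment; this is a simplified stand-in for the thresholding step, not the authors' exact rule:

```python
import numpy as np

def classify_pixels(image, means):
    """Assign each pixel to the class whose mixture mean is closest.

    `means` would come from the Gaussian mixture found by the optimizer;
    here it is just a list of class mean gray levels.
    """
    image = np.asarray(image, dtype=float)
    means = np.asarray(means, dtype=float)
    # Distance of every pixel to every class mean -> nearest-class index.
    return np.argmin(np.abs(image[..., None] - means), axis=-1)

# Toy usage: gray level 120 is closer to the mean 200 than to the mean 10.
print(classify_pixels([[0, 255, 120]], [10.0, 200.0]))  # -> [[0 1 1]]
```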


[35] S M Rogers D A Cullen M L Anstey et al ldquoRapidbehavioural gregarization in the desert locust Schistocercagregaria entails synchronous changes in both activity andattraction to conspecificsrdquo Journal of Insect Physiology vol 65pp 9ndash26 2014

[36] C M Topaz A J Bernoff S Logan and W Toolson ldquoA modelfor rolling swarms of locustsrdquoEuropean Physical Journal SpecialTopics vol 157 no 1 pp 93ndash109 2008

[37] C M Topaz M R DrsquoOrsogna L Edelstein-Keshet and AJ Bernoff ldquoLocust dynamics behavioral phase change andswarmingrdquo PLoS Computational Biology vol 8 no 8 ArticleID e1002642 2012

[38] G Oster and E Wilson Caste and Ecology in the Social InsectsPrinceton University Press Princeton NJ USA 1978

[39] B Holldobler and E O Wilson Journey to the Ants A Story ofScientific Exploration 1994

[40] B Holldobler and E O Wilson The Ants Harvard UniversityPress Cambridge Mass USA 1990

[41] S Tanaka and Y Nishide ldquoBehavioral phase shift in nymphsof the desert locust Schistocerca gregaria special attentionto attractionavoidance behaviors and the role of serotoninrdquoJournal of Insect Physiology vol 59 no 1 pp 101ndash112 2013

[42] E Gaten S J Huston H B Dowse and T Matheson ldquoSolitaryand gregarious locusts differ in circadian rhythmicity of a visualoutput neuronrdquo Journal of Biological Rhythms vol 27 no 3 pp196ndash205 2012

[43] I Benaragama and J R Gray ldquoResponses of a pair of flyinglocusts to lateral looming visual stimulirdquo Journal of ComparativePhysiology A vol 200 no 8 pp 723ndash738 2014

[44] M G Sergeev ldquoDistribution patterns of grasshoppers and theirKin in the boreal zonerdquo Psyche vol 2011 Article ID 324130 9pages 2011

[45] S O Ely P G N NjagiM O Bashir S El-TomEl-Amin andAHassanali ldquoDiel behavioral activity patterns in adult solitariousdesert locust Schistocerca gregaria (Forskal)rdquo Psyche vol 2011Article ID 459315 9 pages 2011

[46] X-S Yang Nature-Inspired Metaheuristic Algorithms LuniverPress Beckington UK 2008

[47] E Cuevas A Echavarrıa and M A Ramırez-Ortegon ldquoAnoptimization algorithm inspired by the States of Matter thatimproves the balance between exploration and exploitationrdquoApplied Intelligence vol 40 no 2 pp 256ndash272 2014

[48] MM Ali C Khompatraporn and Z B Zabinsky ldquoA numericalevaluation of several stochastic algorithms on selected con-tinuous global optimization test problemsrdquo Journal of GlobalOptimization vol 31 no 4 pp 635ndash672 2005

[49] R Chelouah and P Siarry ldquoContinuous genetic algorithmdesigned for the global optimization of multimodal functionsrdquoJournal of Heuristics vol 6 no 2 pp 191ndash213 2000

[50] F Herrera M Lozano and A M Sanchez ldquoA taxonomy forthe crossover operator for real-coded genetic algorithms anexperimental studyrdquo International Journal of Intelligent Systemsvol 18 no 3 pp 309ndash338 2003

[51] E Cuevas D Zaldivar M Perez-Cisneros and M Ramırez-Ortegon ldquoCircle detection using discrete differential evolutionoptimizationrdquo Pattern Analysis amp Applications vol 14 no 1 pp93ndash107 2011

[52] A Sadollah H Eskandar A Bahreininejad and J H KimldquoWater cycle algorithm with evaporation rate for solving con-strained and unconstrained optimization problemsrdquo AppliedSoft Computing vol 30 pp 58ndash71 2015

[53] M S Kiran H Hakli M Gunduz and H Uguz ldquoArtificial beecolony algorithm with variable search strategy for continuousoptimizationrdquo Information Sciences vol 300 pp 140ndash157 2015

[54] F Kang J Li and ZMa ldquoRosenbrock artificial bee colony algo-rithm for accurate global optimization of numerical functionsrdquoInformation Sciences vol 181 no 16 pp 3508ndash3531 2011

[55] F Wilcoxon ldquoIndividual comparisons by ranking methodsrdquoBiometrics Bulletin vol 1 no 6 pp 80ndash83 1945

[56] S Garcıa D Molina M Lozano and F Herrera ldquoA study onthe use of non-parametric tests for analyzing the evolutionaryalgorithmsrsquo behaviour a case study on the CECrsquo2005 SpecialSession on Real Parameter Optimizationrdquo Journal of Heuristicsvol 15 no 6 pp 617ndash644 2009

[57] R Unnikrishnan C Pantofaru and M Hebert ldquoTowardobjective evaluation of image segmentation algorithmsrdquo IEEETransactions on Pattern Analysis and Machine Intelligence vol29 no 6 pp 929ndash944 2007

[58] R Unnikrishnan C Pantofaru and M Hebert ldquoA measurefor objective evaluation of image segmentation algorithmsrdquoin Proceedings of the IEEE Computer Society Conference onComputer Vision and Pattern Recognition (CVPR rsquo05) p 34 SanDiego Calif USA June 2005

[59] Y J Zhang ldquoA survey on evaluation methods for imagesegmentationrdquo Pattern Recognition vol 29 no 8 pp 1335ndash13461996

[60] S Chabrier B Emile C Rosenberger and H Laurent ldquoUnsu-pervised performance evaluation of image segmentationrdquoEURASIP Journal on Advances in Signal Processing vol 2006Article ID 96306 12 pages 2006

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

20 Mathematical Problems in Engineering

[Figure 19: Images employed in the computational cost analysis. Panel rows are labeled J + ABC, J + AIS, J + DE, and Jnew + LS; columns are labeled (a)-(d).]

the J + AIS, the J + DE, and the Jnew + LS algorithms while calculating the parameters of the Gaussian mixture in benchmark images (see Figures 19(a)-19(d)). J + ABC, J + AIS, and J + DE consider Gaussian mixtures composed of 4 functions (K = 4). In the case of the Jnew + LS method, the algorithm finds a reduced Gaussian mixture of four functions despite being initialized with ten functions (K = 10). Table 9 shows the averaged measurements after 30 experiments. It is evident that J + ABC and J + DE are the slowest to converge (iterations), while J + AIS shows the highest computational cost (time elapsed) because it requires operators that demand long execution times. On the other hand, Jnew + LS shows an acceptable trade-off between its convergence time and its computational cost. Therefore, although the implementation of Jnew + LS in general requires more code than most other evolution-based segmentators, such a fact is not reflected in the execution time. Finally, Figure 19 shows the segmented images as they are


[Figure 20: Experimental set used in the evaluation of the segmentation results. Panels (a)-(d).]

Table 9: Iterations and time requirements of the J + ABC, the J + AIS, the J + DE, and the Jnew + LS algorithms as they are applied to segment benchmark images (see Figure 17).

Algorithm | Iterations (a)/(b)/(c)/(d) | Time elapsed (a)/(b)/(c)/(d)
J + ABC | 855 / 833 / 870 / 997 | 2.72 s / 2.70 s / 2.73 s / 3.1 s
J + AIS | 725 / 704 / 754 / 812 | 1.78 s / 1.61 s / 1.41 s / 2.01 s
J + DE | 657 / 627 / 694 / 742 | 1.25 s / 1.12 s / 1.45 s / 1.88 s
Jnew + LS | 314 / 298 / 307 / 402 | 0.98 s / 0.84 s / 0.72 s / 1.02 s

Table 10: Evaluation of the segmentation results in terms of the ROS index.

Image (number of classes) | (a) $N_R = 4$ | (b) $N_R = 3$ | (c) $N_R = 4$ | (d) $N_R = 4$
J + ABC | 0.534 | 0.328 | 0.411 | 0.457
J + AIS | 0.522 | 0.321 | 0.427 | 0.437
J + DE | 0.512 | 0.312 | 0.408 | 0.418
Jnew + LS | 0.674 | 0.401 | 0.514 | 0.527

generated by each algorithm. It can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by an incorrect pixel classification.
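The compared methods approximate the gray-level histogram with a mixture of K Gaussian functions. As a point of reference, such a mixture can be evaluated as in the following sketch; the parameter names and the vectorized style are ours (this is the standard mixture form, not code from the paper):

```python
import numpy as np

def gaussian_mixture(levels, weights, means, sigmas):
    """Evaluate a K-component Gaussian mixture over gray levels (0..255).

    weights, means, and sigmas are length-K arrays; the return value is the
    mixture density at each requested gray level, as used to approximate an
    image histogram.
    """
    levels = np.asarray(levels, dtype=float)[:, None]   # shape (L, 1)
    w = np.asarray(weights, dtype=float)                # shape (K,)
    mu = np.asarray(means, dtype=float)
    s = np.asarray(sigmas, dtype=float)
    # One column per component; broadcasting pairs each level with each component.
    comp = w / (s * np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * ((levels - mu) / s) ** 2)
    return comp.sum(axis=1)                             # shape (L,)
```

With K = 4 one would pass four weights, means, and standard deviations; the candidate mixture is then compared against the normalized histogram by the objective function.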

7.3. Performance Evaluation of the Segmentation Results. This section presents an objective evaluation of the segmentation results produced by all of the algorithms in the comparisons. The ill-defined nature of the segmentation problem makes the evaluation of a candidate algorithm difficult [57]. Traditionally, the evaluation has been conducted by using some supervised criteria [58], which are based on the computation of a dissimilarity measure between a segmentation result and a ground truth image. Recently, unsupervised measures have replaced supervised indexes for the objective evaluation of segmentation results [59]. They

Table 11: Unimodal test functions.

Test function | $\mathbf{S}$ | $f_{\mathrm{opt}}$
$f_1(\mathbf{x}) = \sum_{i=1}^{n} x_i^2$ | $[-100, 100]^n$ | 0
$f_2(\mathbf{x}) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | $[-10, 10]^n$ | 0
$f_3(\mathbf{x}) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | $[-100, 100]^n$ | 0
$f_4(\mathbf{x}) = \max_i \left\{ |x_i|, \ 1 \le i \le n \right\}$ | $[-100, 100]^n$ | 0
$f_5(\mathbf{x}) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | $[-30, 30]^n$ | 0
$f_6(\mathbf{x}) = \sum_{i=1}^{n} (x_i + 0.5)^2$ | $[-100, 100]^n$ | 0
$f_7(\mathbf{x}) = \sum_{i=1}^{n} i x_i^4 + \mathrm{rand}(0, 1)$ | $[-1.28, 1.28]^n$ | 0

enable the quantification of the quality of a segmentation result without a priori knowledge (ground truth image).
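As a concrete illustration, several of the unimodal benchmarks listed in Table 11 translate directly into code. The following Python sketch (the function names and NumPy-based style are our own, not part of the original study) implements $f_1$, $f_4$, and $f_5$:

```python
import numpy as np

def f1_sphere(x):
    """f1: sum of squares; search space [-100, 100]^n, minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def f4_max_abs(x):
    """f4: max_i |x_i|; search space [-100, 100]^n, minimum 0 at x = 0."""
    return float(np.max(np.abs(np.asarray(x, dtype=float))))

def f5_rosenbrock(x):
    """f5: Rosenbrock; search space [-30, 30]^n, minimum 0 at x = (1, ..., 1).

    Requires at least two dimensions; pairs consecutive coordinates.
    """
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2))
```

Evaluating $f_5$ away from its optimum, for example at the origin in two dimensions, gives $100 \cdot 0 + (0 - 1)^2 = 1$, matching the analytic expression.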

Evaluation Criteria. In this paper, the unsupervised index ROS proposed by Chabrier et al. [60] has been used to objectively evaluate the performance of each candidate algorithm. This index evaluates the segmentation quality in terms of the homogeneity within segmented regions and the heterogeneity among the different regions. ROS can be computed as follows:

$$\mathrm{ROS} = \frac{\bar{D} - \hat{D}}{2}, \qquad (24)$$

where $\bar{D}$ quantifies the homogeneity within segmented regions and $\hat{D}$ measures the disparity among the regions. A segmentation result $S_1$ is considered better than $S_2$ if $\mathrm{ROS}_{S_1} > \mathrm{ROS}_{S_2}$. The within-region homogeneity, characterized by $\bar{D}$, is calculated considering the following formulation:

$$\bar{D} = \frac{1}{N_R} \sum_{c=1}^{N_R} \frac{R_c}{I} \cdot \frac{\sigma_c}{\sum_{l=1}^{N_R} \sigma_l}, \qquad (25)$$

where $N_R$ represents the number of partitions into which the image has been segmented and $R_c$ symbolizes the number of


[Figure 21: Segmentation results used in the evaluation. Panel rows are labeled J + ABC, J + AIS, J + DE, and Jnew + LS; columns are labeled (a)-(d).]

pixels contained in partition $c$, whereas $I$ is the number of pixels in the complete image. Similarly, $\sigma_c$ represents the standard deviation of partition $c$. On the other hand, the disparity among the regions, $\hat{D}$, is computed as follows:

$$\hat{D} = \frac{1}{N_R} \sum_{c=1}^{N_R} \frac{R_c}{I} \cdot \left[ \frac{1}{N_R - 1} \sum_{l=1}^{N_R} \frac{|\mu_c - \mu_l|}{255} \right], \qquad (26)$$

where $\mu_c$ is the average gray level in partition $c$.
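To make the computation of (24)-(26) concrete, the following Python sketch evaluates the ROS index for a gray-level image and an integer label map. It is our own minimal implementation of the formulas as reconstructed above, not code from the original study; it assumes at least two partitions and at least one region with nonzero standard deviation.

```python
import numpy as np

def ros_index(image, labels):
    """ROS index of Chabrier et al. for a gray-level image (values in [0, 255])
    and an integer label map of the same shape.

    Follows (24)-(26): the within-region term weighs region standard
    deviations; the disparity term averages normalized differences between
    region mean gray levels.
    """
    image = np.asarray(image, dtype=float)
    regions = np.unique(labels)
    n_r = regions.size                                   # N_R: number of partitions
    total = image.size                                   # I: pixels in the image
    sizes = np.array([np.sum(labels == c) for c in regions])        # R_c
    sigmas = np.array([image[labels == c].std() for c in regions])  # sigma_c
    means = np.array([image[labels == c].mean() for c in regions])  # mu_c

    # (25): within-region homogeneity term (requires sigmas.sum() > 0)
    d_bar = np.sum((sizes / total) * (sigmas / sigmas.sum())) / n_r
    # (26): disparity among regions (requires n_r > 1)
    disp = np.array([np.sum(np.abs(means[c] - means) / 255.0) / (n_r - 1)
                     for c in range(n_r)])
    d_hat = np.sum((sizes / total) * disp) / n_r
    # (24)
    return (d_bar - d_hat) / 2.0
```

Here `n_r`, `sizes`, `total`, `sigmas`, and `means` correspond to $N_R$, $R_c$, $I$, $\sigma_c$, and $\mu_c$ in the notation of (25) and (26).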

Experimental Protocol. In the comparison of segmentation results, a set of four classical images has been chosen to integrate the experimental set (Figure 20). The segmentation methods used in the comparison are J + ABC [19], J + AIS [20], and J + DE [21].

Of all the segmentation methods used in the comparison, the proposed Jnew + LS algorithm is the only one that can automatically detect the number of segmentation partitions (classes). In order to conduct a fair comparison, all algorithms have been tested by using the same number


Table 12: Multimodal test functions.

Test function | $\mathbf{S}$ | $f_{\mathrm{opt}}$
$f_8(\mathbf{x}) = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{|x_i|}\right)$ | $[-500, 500]^n$ | $-418.98 \cdot n$
$f_9(\mathbf{x}) = \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right]$ | $[-5.12, 5.12]^n$ | 0
$f_{10}(\mathbf{x}) = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e$ | $[-32, 32]^n$ | 0
$f_{11}(\mathbf{x}) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | $[-600, 600]^n$ | 0
$f_{12}(\mathbf{x}) = \frac{\pi}{n}\left\{ 10\sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10\sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$ | $[-50, 50]^n$ | 0
where $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k(-x_i - a)^m, & x_i < -a \end{cases}$
$f_{13}(\mathbf{x}) = 0.1\left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 \left[ 1 + \sin^2(3\pi x_{i+1}) \right] + (x_n - 1)^2 \left[ 1 + \sin^2(2\pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | $[-50, 50]^n$ | 0

of partitions. Therefore, in the experiments, the Jnew + LS segmentation algorithm is first applied to detect the best possible number of partitions $N_R$. Once the number of partitions $N_R$ has been obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.
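As with Table 11, the multimodal benchmarks of Table 12 can be written compactly. The sketch below (our own naming, not from the paper) implements $f_9$, $f_{10}$, and the penalty term $u$ used by $f_{12}$ and $f_{13}$; note that the trailing $+\,e$ term of $f_{10}$ is what makes the stated optimum $f_{\mathrm{opt}} = 0$ hold at $\mathbf{x} = \mathbf{0}$.

```python
import numpy as np

def f9_rastrigin(x):
    """f9: Rastrigin; search space [-5.12, 5.12]^n, minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def f10_ackley(x):
    """f10: Ackley; search space [-32, 32]^n, minimum 0 at x = 0.

    The final + e term cancels -exp(1) at the origin, so f10(0) = 0,
    consistent with f_opt in Table 12.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n)
                 + 20.0 + np.e)

def u(xi, a, k, m):
    """Piecewise penalty term of f12 and f13: penalizes coordinates with |x_i| > a."""
    if xi > a:
        return k * (xi - a) ** m
    if xi < -a:
        return k * (-xi - a) ** m
    return 0.0
```

For example, a coordinate just outside the box, such as $x_i = 11$ with $a = 10$, $k = 100$, $m = 4$, incurs a penalty of $100 \cdot 1^4 = 100$.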

Figure 21 presents the segmentation results obtained by each algorithm for the experimental set of Figure 20, and Table 10 shows the evaluation of the segmentation results in terms of the ROS index. Such values represent the averaged measurements after 30 executions. From them, it can be seen that the proposed Jnew + LS method obtains the best ROS indexes. Such values indicate that the proposed algorithm maintains the best balance between the homogeneity within segmented regions and the heterogeneity among the different regions. From Figure 21, it can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts produced by an incorrect pixel classification.

8. Conclusions

Despite the fact that several evolutionary methods have been successfully applied to image segmentation with interesting results, most of them have exhibited two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance.

In this paper, a new swarm algorithm for automatic image segmentation, called Locust Search (LS), has been presented. The proposed method eliminates the typical flaws presented by previous evolutionary approaches by combining a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments have been divided into two parts. First, the proposed LS method has been compared to other well-known evolutionary techniques on a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm has been compared to other segmentation methods based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

Several research directions will be considered for future work, such as the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, $n$ is the dimension of the function, $f_{\mathrm{opt}}$ is the minimum value of the function, and $\mathbf{S}$ is a subset of $\mathbb{R}^n$. The optimum location ($\mathbf{x}_{\mathrm{opt}}$) for the functions in Table 11 is at $[0]^n$, except for $f_5$, whose optimum is at $[1]^n$.

The optimum locations ($\mathbf{x}_{\mathrm{opt}}$) for the functions in Table 12 are at $[0]^n$, except for $f_8$ at $[420.96]^n$ and $f_{12}$-$f_{13}$ at $[1]^n$.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgment

The present research has been supported under grant CONACYT CB 181053.

References

[1] H. Zhang, J. E. Fritts, and S. A. Goldman, "Image segmentation evaluation: a survey of unsupervised methods," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 260–280, 2008.

[2] T. Uemura, G. Koutaki, and K. Uchimura, "Image segmentation based on edge detection using boundary code," International Journal of Innovative Computing, vol. 7, pp. 6073–6083, 2011.

[3] L. Wang, H. Wu, and C. Pan, "Region-based image segmentation with local signed difference energy," Pattern Recognition Letters, vol. 34, no. 6, pp. 637–645, 2013.

[4] N. Otsu, "A thresholding selection method from gray-level histogram," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.

[5] R. Peng and P. K. Varshney, "On performance limits of image segmentation algorithms," Computer Vision and Image Understanding, vol. 132, pp. 24–38, 2015.

[6] M. A. Balafar, "Gaussian mixture model based segmentation methods for brain MRI images," Artificial Intelligence Review, vol. 41, no. 3, pp. 429–439, 2014.

[7] G. J. McLachlan and S. Rathnayake, "On the number of components in a Gaussian mixture model," Data Mining and Knowledge Discovery, vol. 4, no. 5, pp. 341–355, 2014.

[8] D. Oliva, V. Osuna-Enciso, E. Cuevas, G. Pajares, M. Perez-Cisneros, and D. Zaldívar, "Improving segmentation velocity using an evolutionary method," Expert Systems with Applications, vol. 42, no. 14, pp. 5874–5886, 2015.

[9] Z.-W. Ye, M.-W. Wang, W. Liu, and S.-B. Chen, "Fuzzy entropy based optimal thresholding using bat algorithm," Applied Soft Computing, vol. 31, pp. 381–395, 2015.

[10] S. Sarkar, S. Das, and S. Sinha Chaudhuri, "A multilevel color image thresholding scheme based on minimum cross entropy and differential evolution," Pattern Recognition Letters, vol. 54, pp. 27–35, 2015.

[11] A. K. Bhandari, A. Kumar, and G. K. Singh, "Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur's, Otsu and Tsallis functions," Expert Systems with Applications, vol. 42, no. 3, pp. 1573–1601, 2015.

[12] H. Permuter, J. Francos, and I. Jermyn, "A study of Gaussian mixture models of color and texture features for image classification and segmentation," Pattern Recognition, vol. 39, no. 4, pp. 695–706, 2006.

[13] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1–38, 1977.

[14] Z. Zhang, C. Chen, J. Sun, and K. L. Chan, "EM algorithms for Gaussian mixtures with split-and-merge operation," Pattern Recognition, vol. 36, no. 9, pp. 1973–1983, 2003.

[15] H. Park, S.-I. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755–764, 2000.

[16] H. Park and T. Ozeki, "Singularity and slow convergence of the EM algorithm for Gaussian mixtures," Neural Processing Letters, vol. 29, no. 1, pp. 45–59, 2009.

[17] L. Gupta and T. Sortrakul, "A Gaussian-mixture-based image segmentation algorithm," Pattern Recognition, vol. 31, no. 3, pp. 315–325, 1998.

[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213–1219, 2013.

[19] E. Cuevas, F. Sencion, D. Zaldivar, M. Perez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321–336, 2012.

[20] E. Cuevas, V. Osuna-Enciso, D. Zaldivar, M. Perez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.

[21] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.

[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldivar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357–381, 2014.

[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldivar, and M. Perez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.

[24] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805–818, 2011.

[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701–713, 2009.

[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661–673, 2009.

[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.

[28] R. Storn and K. Price, "Differential Evolution – a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.

[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515–4538, 2011.

[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing Journal, vol. 9, no. 3, pp. 1149–1155, 2009.

[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119–135, 2013.

[32] W. Gong, A. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364–5386, 2011.

[33] D. M. Gordon, "The organization of work in social insect colonies," Complexity, vol. 8, no. 1, pp. 43–46, 2002.

[34] S. Kizaki and M. Katori, "Stochastic lattice model for locust outbreak," Physica A: Statistical Mechanics and its Applications, vol. 266, no. 1–4, pp. 339–342, 1999.

[35] S. M. Rogers, D. A. Cullen, M. L. Anstey et al., "Rapid behavioural gregarization in the desert locust Schistocerca gregaria entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9–26, 2014.

[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal: Special Topics, vol. 157, no. 1, pp. 93–109, 2008.

[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.

[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.

[39] B. Hölldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.

[40] B. Hölldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.

[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101–112, 2013.

[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196–205, 2012.

[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723–738, 2014.

[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.

[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskål)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.

[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.

[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256–272, 2014.

[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635–672, 2005.

[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.

[50] F. Herrera, M. Lozano, and A. M. Sanchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309–338, 2003.

[51] E. Cuevas, D. Zaldivar, M. Perez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93–107, 2011.

[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.

[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140–157, 2015.

[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.

[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.

[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.

[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.

[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.

[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335–1346, 1996.

[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.


Mathematical Problems in Engineering 21

Figure 20: Experimental set used in the evaluation of the segmentation results, panels (a)-(d).

Table 9: Iterations and elapsed times required by the J + ABC, J + AIS, J + DE, and Jnew + LS algorithms as they are applied to segment the benchmark images (see Figure 17). Each cell reports iterations (time elapsed).

Algorithm     (a)             (b)             (c)             (d)
J + ABC       855 (2.72 s)    833 (2.70 s)    870 (2.73 s)    997 (3.1 s)
J + AIS       725 (1.78 s)    704 (1.61 s)    754 (1.41 s)    812 (2.01 s)
J + DE        657 (1.25 s)    627 (1.12 s)    694 (1.45 s)    742 (1.88 s)
Jnew + LS     314 (0.98 s)    298 (0.84 s)    307 (0.72 s)    402 (1.02 s)

Table 10: Evaluation of the segmentation results in terms of the ROS index, for each image and its number of classes N_R.

Algorithm     (a) N_R = 4    (b) N_R = 3    (c) N_R = 4    (d) N_R = 4
J + ABC       0.534          0.328          0.411          0.457
J + AIS       0.522          0.321          0.427          0.437
J + DE        0.512          0.312          0.408          0.418
Jnew + LS     0.674          0.401          0.514          0.527

generated by each algorithm. It can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts that are produced by an incorrect pixel classification.

7.3. Performance Evaluation of the Segmentation Results. This section presents an objective evaluation of the segmentation results produced by all algorithms in the comparison. The ill-defined nature of the segmentation problem makes the evaluation of a candidate algorithm difficult [57]. Traditionally, the evaluation has been conducted by using supervised criteria [58], which are based on the computation of a dissimilarity measure between a segmentation result and a ground-truth image. Recently, unsupervised measures have substituted supervised indexes for the objective evaluation of segmentation results [59]. They

Table 11: Unimodal test functions. $S$ is the search domain and $f_{\mathrm{opt}}$ the minimum value.

$f_1(\mathbf{x}) = \sum_{i=1}^{n} x_i^2$;  $S = [-100, 100]^n$;  $f_{\mathrm{opt}} = 0$

$f_2(\mathbf{x}) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$;  $S = [-10, 10]^n$;  $f_{\mathrm{opt}} = 0$

$f_3(\mathbf{x}) = \sum_{i=1}^{n} \bigl( \sum_{j=1}^{i} x_j \bigr)^2$;  $S = [-100, 100]^n$;  $f_{\mathrm{opt}} = 0$

$f_4(\mathbf{x}) = \max_i \{ |x_i|, \ 1 \le i \le n \}$;  $S = [-100, 100]^n$;  $f_{\mathrm{opt}} = 0$

$f_5(\mathbf{x}) = \sum_{i=1}^{n-1} \bigl[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \bigr]$;  $S = [-30, 30]^n$;  $f_{\mathrm{opt}} = 0$

$f_6(\mathbf{x}) = \sum_{i=1}^{n} \bigl( \lfloor x_i + 0.5 \rfloor \bigr)^2$;  $S = [-100, 100]^n$;  $f_{\mathrm{opt}} = 0$

$f_7(\mathbf{x}) = \sum_{i=1}^{n} i x_i^4 + \mathrm{rand}(0, 1)$;  $S = [-1.28, 1.28]^n$;  $f_{\mathrm{opt}} = 0$

enable the quantification of the quality of a segmentation result without a priori knowledge (a ground-truth image).

Evaluation Criteria. In this paper, the unsupervised index ROS proposed by Chabrier et al. [60] has been used to objectively evaluate the performance of each candidate algorithm. This index evaluates the segmentation quality in terms of the homogeneity within the segmented regions and the heterogeneity among the different regions. ROS can be computed as follows:

$$\mathrm{ROS} = \frac{\bar{D} - D}{2} \qquad (24)$$

where $D$ quantifies the homogeneity within the segmented regions, while $\bar{D}$ measures the disparity among the regions. A segmentation result $S_1$ is considered better than $S_2$ if $\mathrm{ROS}_{S_1} > \mathrm{ROS}_{S_2}$. The within-region homogeneity, characterized by $D$, is calculated by considering the following formulation:

$$D = \frac{1}{N_R} \sum_{c=1}^{N_R} \frac{R_c}{I} \cdot \frac{\sigma_c}{\left( \sum_{l=1}^{N_R} \sigma_l \right)} \qquad (25)$$

where $N_R$ represents the number of partitions into which the image has been segmented, $R_c$ symbolizes the number of

Figure 21: Segmentation results used in the evaluation (rows: J + ABC, J + AIS, J + DE, Jnew + LS; columns: images (a)-(d)).

pixels contained in the partition $c$, whereas $I$ denotes the number of pixels in the complete image. Similarly, $\sigma_c$ represents the standard deviation of the partition $c$. On the other hand, the disparity among the regions $\bar{D}$ is computed as follows:

$$\bar{D} = \frac{1}{N_R} \sum_{c=1}^{N_R} \frac{R_c}{I} \cdot \left[ \frac{1}{(N_R - 1)} \sum_{l=1}^{N_R} \frac{\left| \mu_c - \mu_l \right|}{255} \right] \qquad (26)$$

where $\mu_c$ is the average gray level in the partition $c$.
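To make the computation of Eqs. (24)-(26) concrete, the following sketch evaluates the ROS index for a gray-level image and a label map. It is an illustrative implementation written for this article (the function name and the synthetic data are hypothetical), not code from the original authors:

```python
import numpy as np

def ros_index(image, labels):
    """Sketch of the ROS index of Eqs. (24)-(26): ROS = (D_bar - D) / 2."""
    image = np.asarray(image, dtype=float)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    n_r = classes.size          # number of partitions N_R
    n_pix = image.size          # total number of pixels I

    sizes = np.array([np.sum(labels == c) for c in classes])        # R_c
    sigmas = np.array([image[labels == c].std() for c in classes])  # sigma_c
    mus = np.array([image[labels == c].mean() for c in classes])    # mu_c

    # Eq. (25): within-region homogeneity D (lower = more uniform regions)
    d = np.sum((sizes / n_pix) * (sigmas / np.sum(sigmas))) / n_r

    # Eq. (26): disparity among regions D_bar (higher = more distinct regions)
    disp = np.array([np.sum(np.abs(mus[c] - mus)) / (255.0 * (n_r - 1))
                     for c in range(n_r)])
    d_bar = np.sum((sizes / n_pix) * disp) / n_r

    # Eq. (24): a larger ROS indicates a better segmentation
    return (d_bar - d) / 2.0
```

A segmentation with internally uniform, well-separated regions yields a larger ROS than one that mixes the intensity populations, matching the higher-is-better interpretation given above.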

Experimental Protocol. For the comparison of segmentation results, a set of four classical images has been chosen to integrate the experimental set (Figure 20). The segmentation methods used in the comparison are J + ABC [19], J + AIS [20], and J + DE [21].

Of all the segmentation methods used in the comparison, the proposed Jnew + LS algorithm is the only one that has the capacity to automatically detect the number of segmentation partitions (classes). In order to conduct a fair comparison, all algorithms have been tested by using the same number


Table 12: Multimodal test functions. $S$ is the search domain and $f_{\mathrm{opt}}$ the minimum value.

$f_8(\mathbf{x}) = \sum_{i=1}^{n} -x_i \sin\bigl( \sqrt{|x_i|} \bigr)$;  $S = [-500, 500]^n$;  $f_{\mathrm{opt}} = -418.98 \cdot n$

$f_9(\mathbf{x}) = \sum_{i=1}^{n} \bigl[ x_i^2 - 10 \cos(2\pi x_i) + 10 \bigr]$;  $S = [-5.12, 5.12]^n$;  $f_{\mathrm{opt}} = 0$

$f_{10}(\mathbf{x}) = -20 \exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i) \right) + 20 + e$;  $S = [-32, 32]^n$;  $f_{\mathrm{opt}} = 0$

$f_{11}(\mathbf{x}) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$;  $S = [-600, 600]^n$;  $f_{\mathrm{opt}} = 0$

$f_{12}(\mathbf{x}) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \bigl[ 1 + 10 \sin^2(\pi y_{i+1}) \bigr] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$;  $S = [-50, 50]^n$;  $f_{\mathrm{opt}} = 0$,
where $y_i = 1 + \frac{x_i + 1}{4}$ and
$$u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m, & x_i > a \\ 0, & -a < x_i < a \\ k (-x_i - a)^m, & x_i < -a \end{cases}$$

$f_{13}(\mathbf{x}) = 0.1 \left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n} (x_i - 1)^2 \bigl[ 1 + \sin^2(3\pi x_i + 1) \bigr] + (x_n - 1)^2 \bigl[ 1 + \sin^2(2\pi x_n) \bigr] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$;  $S = [-50, 50]^n$;  $f_{\mathrm{opt}} = 0$

of partitions. Therefore, in the experiments, the Jnew + LS segmentation algorithm is first applied to detect the best possible number of partitions N_R. Once the number of partitions N_R has been obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.

Figure 21 presents the segmentation results obtained by each algorithm considering the experimental set from Figure 20. On the other hand, Table 10 shows the evaluation of the segmentation results in terms of the ROS index. Such values represent the averaged measurements after 30 executions. From them, it can be seen that the proposed Jnew + LS method obtains the best ROS indexes. Such values indicate that the proposed algorithm maintains the best balance between the homogeneity within the segmented regions and the heterogeneity among the different regions. From Figure 21, it can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts that are produced by an incorrect pixel classification.

8. Conclusions

Despite the fact that several evolutionary methods have been successfully applied to image segmentation with interesting results, most of them exhibit two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance.

In this paper, a new swarm algorithm for automatic image segmentation, called Locust Search (LS), has been presented. The proposed method eliminates the typical flaws of previous evolutionary approaches by combining a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments were divided into two parts. First, the proposed LS method was compared to other well-known evolutionary techniques on a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm was compared to other segmentation methods based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

Several research directions will be considered for future work, such as the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, n is the dimension of the function, f_opt is the minimum value of the function, and S is a subset of R^n. The optimum location (x_opt) for the functions in Table 11 is at [0]^n, except for f_5, whose x_opt is at [1]^n.

The optimum locations (x_opt) for the functions in Table 12 are at [0]^n, except for f_8, whose optimum is at [420.96]^n, and f_12 and f_13, whose optima are at [1]^n.
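As a quick numerical sanity check of the optima listed above, the sketch below implements a few of the benchmarks from Tables 11 and 12 in Python/NumPy. It is an illustration written for this article, not the authors' code; the Ackley function f10 is written with its customary +e constant, which is what makes f_opt = 0 hold:

```python
import numpy as np

def f1(x):   # sphere, optimum at x = [0]^n
    return np.sum(np.asarray(x, float) ** 2)

def f5(x):   # Rosenbrock, optimum at x = [1]^n
    x = np.asarray(x, float)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def f8(x):   # Schwefel, optimum near x = [420.96]^n, f_opt ~ -418.98 * n
    x = np.asarray(x, float)
    return np.sum(-x * np.sin(np.sqrt(np.abs(x))))

def f9(x):   # Rastrigin, optimum at x = [0]^n
    x = np.asarray(x, float)
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def f10(x):  # Ackley (with the +e term, so f10(0) = 0)
    x = np.asarray(x, float)
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

def f11(x):  # Griewank, optimum at x = [0]^n
    x = np.asarray(x, float)
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0
```

Evaluating f5 at [1]^n and f8 at [420.96]^n reproduces the tabulated optima to within rounding.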

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgment

The present research has been supported under the Grant CONACYT CB 181053.

References

[1] H. Zhang, J. E. Fritts, and S. A. Goldman, "Image segmentation evaluation: a survey of unsupervised methods," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 260-280, 2008.

[2] T. Uemura, G. Koutaki, and K. Uchimura, "Image segmentation based on edge detection using boundary code," International Journal of Innovative Computing, vol. 7, pp. 6073-6083, 2011.

[3] L. Wang, H. Wu, and C. Pan, "Region-based image segmentation with local signed difference energy," Pattern Recognition Letters, vol. 34, no. 6, pp. 637-645, 2013.

[4] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979.

[5] R. Peng and P. K. Varshney, "On performance limits of image segmentation algorithms," Computer Vision and Image Understanding, vol. 132, pp. 24-38, 2015.

[6] M. A. Balafar, "Gaussian mixture model based segmentation methods for brain MRI images," Artificial Intelligence Review, vol. 41, no. 3, pp. 429-439, 2014.

[7] G. J. McLachlan and S. Rathnayake, "On the number of components in a Gaussian mixture model," Data Mining and Knowledge Discovery, vol. 4, no. 5, pp. 341-355, 2014.

[8] D. Oliva, V. Osuna-Enciso, E. Cuevas, G. Pajares, M. Pérez-Cisneros, and D. Zaldívar, "Improving segmentation velocity using an evolutionary method," Expert Systems with Applications, vol. 42, no. 14, pp. 5874-5886, 2015.

[9] Z.-W. Ye, M.-W. Wang, W. Liu, and S.-B. Chen, "Fuzzy entropy based optimal thresholding using bat algorithm," Applied Soft Computing, vol. 31, pp. 381-395, 2015.

[10] S. Sarkar, S. Das, and S. Sinha Chaudhuri, "A multilevel color image thresholding scheme based on minimum cross entropy and differential evolution," Pattern Recognition Letters, vol. 54, pp. 27-35, 2015.

[11] A. K. Bhandari, A. Kumar, and G. K. Singh, "Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur's, Otsu and Tsallis functions," Expert Systems with Applications, vol. 42, no. 3, pp. 1573-1601, 2015.

[12] H. Permuter, J. Francos, and I. Jermyn, "A study of Gaussian mixture models of color and texture features for image classification and segmentation," Pattern Recognition, vol. 39, no. 4, pp. 695-706, 2006.

[13] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1-38, 1977.

[14] Z. Zhang, C. Chen, J. Sun, and K. L. Chan, "EM algorithms for Gaussian mixtures with split-and-merge operation," Pattern Recognition, vol. 36, no. 9, pp. 1973-1983, 2003.

[15] H. Park, S.-I. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755-764, 2000.

[16] H. Park and T. Ozeki, "Singularity and slow convergence of the EM algorithm for Gaussian mixtures," Neural Processing Letters, vol. 29, no. 1, pp. 45-59, 2009.

[17] L. Gupta and T. Sortrakul, "A Gaussian-mixture-based image segmentation algorithm," Pattern Recognition, vol. 31, no. 3, pp. 315-325, 1998.

[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213-1219, 2013.

[19] E. Cuevas, F. Sención, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321-336, 2012.

[20] E. Cuevas, V. Osuna-Enciso, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.

[21] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265-5271, 2010.

[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357-381, 2014.

[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and M. Pérez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.

[24] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805-818, 2011.

[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701-713, 2009.

[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661-673, 2009.

[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, December 1995.

[28] R. Storn and K. Price, "Differential Evolution: a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.

[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515-4538, 2011.

[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing Journal, vol. 9, no. 3, pp. 1149-1155, 2009.

[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119-135, 2013.

[32] W. Gong, A. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364-5386, 2011.

[33] D. M. Gordon, "The organization of work in social insect colonies," Complexity, vol. 8, no. 1, pp. 43-46, 2002.

[34] S. Kizaki and M. Katori, "Stochastic lattice model for locust outbreak," Physica A: Statistical Mechanics and its Applications, vol. 266, no. 1-4, pp. 339-342, 1999.

[35] S. M. Rogers, D. A. Cullen, M. L. Anstey et al., "Rapid behavioural gregarization in the desert locust, Schistocerca gregaria, entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9-26, 2014.

[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal Special Topics, vol. 157, no. 1, pp. 93-109, 2008.

[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.

[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.

[39] B. Hölldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.

[40] B. Hölldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.

[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101-112, 2013.

[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196-205, 2012.

[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723-738, 2014.

[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.

[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskål)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.

[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.

[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256-272, 2014.

[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635-672, 2005.

[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191-213, 2000.

[50] F. Herrera, M. Lozano, and A. M. Sánchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309-338, 2003.

[51] E. Cuevas, D. Zaldívar, M. Pérez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93-107, 2011.

[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58-71, 2015.

[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140-157, 2015.

[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508-3531, 2011.

[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80-83, 1945.

[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617-644, 2009.

[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929-944, 2007.

[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.

[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335-1346, 1996.

[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.


[17] L Gupta and T Sortrakul ldquoA Gaussian-mixture-based imagesegmentation algorithmrdquo Pattern Recognition vol 31 no 3 pp315ndash325 1998

[18] V Osuna-Enciso E Cuevas and H Sossa ldquoA comparison ofnature inspired algorithms for multi-threshold image segmen-tationrdquo Expert Systems with Applications vol 40 no 4 pp 1213ndash1219 2013

[19] E Cuevas F Sencion D Zaldivar M Perez-Cisneros andH Sossa ldquoA multi-threshold segmentation approach based onartificial bee colony optimizationrdquo Applied Intelligence vol 37no 3 pp 321ndash336 2012

[20] E Cuevas V Osuna-Enciso D Zaldivar M Perez-Cisnerosand H Sossa ldquoMultithreshold segmentation based on artificialimmune systemsrdquo Mathematical Problems in Engineering vol2012 Article ID 874761 20 pages 2012

[21] E Cuevas D Zaldivar and M Perez-Cisneros ldquoA novelmulti-threshold segmentation approach based on differentialevolution optimizationrdquo Expert Systems with Applications vol37 no 7 pp 5265ndash5271 2010

[22] D Oliva E Cuevas G Pajares D Zaldivar and V OsunaldquoA multilevel thresholding algorithm using electromagnetismoptimizationrdquo Neurocomputing vol 139 pp 357ndash381 2014

[23] D Oliva E Cuevas G Pajares D Zaldivar and M Perez-Cisneros ldquoMultilevel thresholding segmentation based on har-mony search optimizationrdquo Journal of AppliedMathematics vol2013 Article ID 575414 24 pages 2013

[24] E Cuevas D Zaldivar and M Perez-Cisneros ldquoSeeking multi-thresholds for image segmentation with Learning AutomatardquoMachine Vision and Applications vol 22 no 5 pp 805ndash8182011

[25] K C Tan S C Chiam A A Mamun and C K Goh ldquoBal-ancing exploration and exploitation with adaptive variation forevolutionarymulti-objective optimizationrdquo European Journal ofOperational Research vol 197 no 2 pp 701ndash713 2009

[26] G Chen C P Low and Z Yang ldquoPreserving and exploitinggenetic diversity in evolutionary programming algorithmsrdquoIEEE Transactions on Evolutionary Computation vol 13 no 3pp 661ndash673 2009

[27] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 December 1995

[28] R Storn andK Price ldquoDifferential Evolutionmdasha simple and effi-cient adaptive scheme for global optimisation over continuousspacesrdquo Tech Rep TR-95ndash012 ICSI Berkeley Calif USA 1995

[29] Y Wang B Li T Weise J Wang B Yuan and Q TianldquoSelf-adaptive learning based particle swarm optimizationrdquoInformation Sciences vol 181 no 20 pp 4515ndash4538 2011

[30] J Tvrdık ldquoAdaptation in differential evolution a numericalcomparisonrdquo Applied Soft Computing Journal vol 9 no 3 pp1149ndash1155 2009

[31] HWangH Sun C Li S Rahnamayan and J-s Pan ldquoDiversityenhanced particle swarm optimization with neighborhoodsearchrdquo Information Sciences vol 223 pp 119ndash135 2013

[32] W Gong A Fialho Z Cai and H Li ldquoAdaptive strategyselection in differential evolution for numerical optimizationan empirical studyrdquo Information Sciences vol 181 no 24 pp5364ndash5386 2011

[33] D M Gordon ldquoThe organization of work in social insectcoloniesrdquo Complexity vol 8 no 1 pp 43ndash46 2002

[34] S Kizaki and M Katori ldquoStochastic lattice model for locustoutbreakrdquo Physica A Statistical Mechanics and its Applicationsvol 266 no 1-4 pp 339ndash342 1999

Mathematical Problems in Engineering 25

[35] S M Rogers D A Cullen M L Anstey et al ldquoRapidbehavioural gregarization in the desert locust Schistocercagregaria entails synchronous changes in both activity andattraction to conspecificsrdquo Journal of Insect Physiology vol 65pp 9ndash26 2014

[36] C M Topaz A J Bernoff S Logan and W Toolson ldquoA modelfor rolling swarms of locustsrdquoEuropean Physical Journal SpecialTopics vol 157 no 1 pp 93ndash109 2008

[37] C M Topaz M R DrsquoOrsogna L Edelstein-Keshet and AJ Bernoff ldquoLocust dynamics behavioral phase change andswarmingrdquo PLoS Computational Biology vol 8 no 8 ArticleID e1002642 2012

[38] G Oster and E Wilson Caste and Ecology in the Social InsectsPrinceton University Press Princeton NJ USA 1978

[39] B Holldobler and E O Wilson Journey to the Ants A Story ofScientific Exploration 1994

[40] B Holldobler and E O Wilson The Ants Harvard UniversityPress Cambridge Mass USA 1990

[41] S Tanaka and Y Nishide ldquoBehavioral phase shift in nymphsof the desert locust Schistocerca gregaria special attentionto attractionavoidance behaviors and the role of serotoninrdquoJournal of Insect Physiology vol 59 no 1 pp 101ndash112 2013

[42] E Gaten S J Huston H B Dowse and T Matheson ldquoSolitaryand gregarious locusts differ in circadian rhythmicity of a visualoutput neuronrdquo Journal of Biological Rhythms vol 27 no 3 pp196ndash205 2012

[43] I Benaragama and J R Gray ldquoResponses of a pair of flyinglocusts to lateral looming visual stimulirdquo Journal of ComparativePhysiology A vol 200 no 8 pp 723ndash738 2014

[44] M G Sergeev ldquoDistribution patterns of grasshoppers and theirKin in the boreal zonerdquo Psyche vol 2011 Article ID 324130 9pages 2011

[45] S O Ely P G N NjagiM O Bashir S El-TomEl-Amin andAHassanali ldquoDiel behavioral activity patterns in adult solitariousdesert locust Schistocerca gregaria (Forskal)rdquo Psyche vol 2011Article ID 459315 9 pages 2011

[46] X-S Yang Nature-Inspired Metaheuristic Algorithms LuniverPress Beckington UK 2008

[47] E Cuevas A Echavarrıa and M A Ramırez-Ortegon ldquoAnoptimization algorithm inspired by the States of Matter thatimproves the balance between exploration and exploitationrdquoApplied Intelligence vol 40 no 2 pp 256ndash272 2014

[48] MM Ali C Khompatraporn and Z B Zabinsky ldquoA numericalevaluation of several stochastic algorithms on selected con-tinuous global optimization test problemsrdquo Journal of GlobalOptimization vol 31 no 4 pp 635ndash672 2005

[49] R Chelouah and P Siarry ldquoContinuous genetic algorithmdesigned for the global optimization of multimodal functionsrdquoJournal of Heuristics vol 6 no 2 pp 191ndash213 2000

[50] F Herrera M Lozano and A M Sanchez ldquoA taxonomy forthe crossover operator for real-coded genetic algorithms anexperimental studyrdquo International Journal of Intelligent Systemsvol 18 no 3 pp 309ndash338 2003

[51] E Cuevas D Zaldivar M Perez-Cisneros and M Ramırez-Ortegon ldquoCircle detection using discrete differential evolutionoptimizationrdquo Pattern Analysis amp Applications vol 14 no 1 pp93ndash107 2011

[52] A Sadollah H Eskandar A Bahreininejad and J H KimldquoWater cycle algorithm with evaporation rate for solving con-strained and unconstrained optimization problemsrdquo AppliedSoft Computing vol 30 pp 58ndash71 2015

[53] M S Kiran H Hakli M Gunduz and H Uguz ldquoArtificial beecolony algorithm with variable search strategy for continuousoptimizationrdquo Information Sciences vol 300 pp 140ndash157 2015

[54] F Kang J Li and ZMa ldquoRosenbrock artificial bee colony algo-rithm for accurate global optimization of numerical functionsrdquoInformation Sciences vol 181 no 16 pp 3508ndash3531 2011

[55] F Wilcoxon ldquoIndividual comparisons by ranking methodsrdquoBiometrics Bulletin vol 1 no 6 pp 80ndash83 1945

[56] S Garcıa D Molina M Lozano and F Herrera ldquoA study onthe use of non-parametric tests for analyzing the evolutionaryalgorithmsrsquo behaviour a case study on the CECrsquo2005 SpecialSession on Real Parameter Optimizationrdquo Journal of Heuristicsvol 15 no 6 pp 617ndash644 2009

[57] R Unnikrishnan C Pantofaru and M Hebert ldquoTowardobjective evaluation of image segmentation algorithmsrdquo IEEETransactions on Pattern Analysis and Machine Intelligence vol29 no 6 pp 929ndash944 2007

[58] R Unnikrishnan C Pantofaru and M Hebert ldquoA measurefor objective evaluation of image segmentation algorithmsrdquoin Proceedings of the IEEE Computer Society Conference onComputer Vision and Pattern Recognition (CVPR rsquo05) p 34 SanDiego Calif USA June 2005

[59] Y J Zhang ldquoA survey on evaluation methods for imagesegmentationrdquo Pattern Recognition vol 29 no 8 pp 1335ndash13461996

[60] S Chabrier B Emile C Rosenberger and H Laurent ldquoUnsu-pervised performance evaluation of image segmentationrdquoEURASIP Journal on Advances in Signal Processing vol 2006Article ID 96306 12 pages 2006

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Mathematical Problems in Engineering 23

Table 12: Multimodal test functions.

Test function | S | f_opt

f_8(x) = \sum_{i=1}^{n} -x_i \sin(\sqrt{|x_i|}) | [-500, 500]^n | -418.98 * n

f_9(x) = \sum_{i=1}^{n} [x_i^2 - 10\cos(2\pi x_i) + 10] | [-5.12, 5.12]^n | 0

f_{10}(x) = -20\exp(-0.2\sqrt{(1/n)\sum_{i=1}^{n} x_i^2}) - \exp((1/n)\sum_{i=1}^{n}\cos(2\pi x_i)) + 20 + e | [-32, 32]^n | 0

f_{11}(x) = (1/4000)\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos(x_i/\sqrt{i}) + 1 | [-600, 600]^n | 0

f_{12}(x) = (\pi/n)\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1}(y_i - 1)^2[1 + 10\sin^2(\pi y_{i+1})] + (y_n - 1)^2\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4) | [-50, 50]^n | 0
    where y_i = 1 + (x_i + 1)/4 and
    u(x_i, a, k, m) = k(x_i - a)^m if x_i > a; 0 if -a <= x_i <= a; k(-x_i - a)^m if x_i < -a

f_{13}(x) = 0.1\{\sin^2(3\pi x_1) + \sum_{i=1}^{n-1}(x_i - 1)^2[1 + \sin^2(3\pi x_{i+1})] + (x_n - 1)^2[1 + \sin^2(2\pi x_n)]\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4) | [-50, 50]^n | 0
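As a concrete reference, the benchmarks in Table 12 can be implemented directly. The sketch below (plain NumPy, not taken from the paper's code) evaluates f_9 (Rastrigin) and f_10 (Ackley) and checks that both attain f_opt = 0 at the origin; the "+ 20 + e" constant in f_10 is the standard Ackley formulation, assumed here so that the minimum is exactly zero.

```python
import numpy as np

# Minimal sketch of two Table 12 benchmarks (f9 Rastrigin, f10 Ackley).
# Assumption: the standard "+ 20 + e" constant in Ackley, which makes
# the global minimum exactly 0 at the origin.
def f9(x):
    x = np.asarray(x, dtype=float)
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def f10(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n)
            + 20.0 + np.e)

x0 = np.zeros(30)            # optimum location [0]^n
print(f9(x0), f10(x0))       # both should be (numerically) zero
```

Evaluating either function away from the origin (e.g. at x = [1]^n) returns a strictly positive value, which is what makes these landscapes useful for comparing the exploration behavior of the algorithms.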

of partitions. Therefore, in the experiments, the J^new + LS segmentation algorithm is first applied to detect the best possible number of partitions N_R. Once the number of partitions N_R has been obtained, the rest of the algorithms are configured to approximate the image histogram with this number of classes.
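The two-stage protocol above (first estimate the number of classes, then configure every method with that count) can be illustrated with a toy class-count detector. The sketch below is NOT the paper's J^new + LS operator; it is a simple stand-in that counts the modes of a smoothed gray-level histogram and places thresholds at the valleys between consecutive modes.

```python
import numpy as np

# Illustrative stand-in for automatic class-count detection (not the
# paper's J^new + LS method): count smoothed histogram modes and put
# thresholds at the valleys between consecutive peaks.
def detect_classes(hist, sigma=3.0):
    # Smooth the histogram with a Gaussian kernel to suppress noise.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    smooth = np.convolve(hist, kernel, mode="same")
    # Local maxima: strictly greater than both neighbors.
    peaks = [i for i in range(1, len(smooth) - 1)
             if smooth[i] > smooth[i - 1] and smooth[i] > smooth[i + 1]]
    # Thresholds at the minimum between consecutive peaks.
    thresholds = [int(np.argmin(smooth[a:b]) + a)
                  for a, b in zip(peaks, peaks[1:])]
    return len(peaks), thresholds

# Synthetic tri-modal gray-level histogram (modes near 60, 120, 190).
levels = np.arange(256)
hist = sum(np.exp(-(levels - m)**2 / (2 * s**2))
           for m, s in [(60, 10), (120, 12), (190, 15)])

n_classes, thresholds = detect_classes(hist)
```

On this synthetic histogram the detector reports three classes with two thresholds in the valleys, which mirrors how a detected N_R would then be handed to the competing algorithms.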

Figure 21 presents the segmentation results obtained by each algorithm, considering the experimental set from Figure 20. On the other hand, Table 10 shows the evaluation of the segmentation results in terms of the ROS index. Such values represent the averaged measurements after 30 executions. From them, it can be seen that the proposed J^new + LS method obtains the best ROS indexes. Such values indicate that the proposed algorithm maintains the best balance between the homogeneity within segmented regions and the heterogeneity among the different regions. From Figure 21, it can be seen that the proposed approach generates more homogeneous regions, whereas J + ABC, J + AIS, and J + DE present several artifacts that are produced by an incorrect pixel classification.

8. Conclusions

Despite the fact that several evolutionary methods have been successfully applied to image segmentation with interesting results, most of them have exhibited two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance.

In this paper, a new swarm algorithm for automatic image segmentation, called Locust Search (LS), has been presented. The proposed method eliminates the typical flaws presented by previous evolutionary approaches by combining a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. In order to illustrate the proficiency and robustness of the proposed approach, several numerical experiments have been conducted. Such experiments have been divided into two parts. First, the proposed LS method has been compared to other well-known evolutionary techniques on a set of benchmark functions. In the second part, the performance of the proposed segmentation algorithm has been compared to other segmentation methods based on evolutionary principles. The results in both cases validate the efficiency of the proposed technique with regard to accuracy and robustness.

Several research directions will be considered for future work, such as the inclusion of other indexes to evaluate the similarity between a candidate solution and the image histogram, the consideration of spatial pixel characteristics in the objective function, the modification of the evolutionary LS operators to control the exploration-exploitation balance, and the conversion of the segmentation procedure into a multiobjective problem.

Appendix

List of Benchmark Functions

In Table 11, n is the dimension of the function, f_opt is the minimum value of the function, and S is a subset of R^n. The optimum location (x_opt) for the functions in Table 11 is [0]^n, except for f_5, whose x_opt is [1]^n.

The optimum locations (x_opt) for the functions in Table 12 are [0]^n, except for f_8, with x_opt in [420.96]^n, and f_12-f_13, with x_opt in [1]^n.
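The stated optimum of f_8 can be checked numerically. The short sketch below (an illustration, not code from the paper) evaluates the Schwefel function of Table 12 at [420.9687]^n and confirms a per-dimension value of approximately -418.98.

```python
import numpy as np

# Schwefel function f8 from Table 12: f8(x) = sum_i -x_i * sin(sqrt(|x_i|)).
def f8(x):
    x = np.asarray(x, dtype=float)
    return np.sum(-x * np.sin(np.sqrt(np.abs(x))))

n = 30
x_opt = np.full(n, 420.9687)   # reported optimum location [420.96]^n
print(f8(x_opt) / n)           # approximately -418.98 per dimension
```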

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgment

The present research has been supported under the grant CONACYT CB 181053.

References

[1] H. Zhang, J. E. Fritts, and S. A. Goldman, "Image segmentation evaluation: a survey of unsupervised methods," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 260-280, 2008.

[2] T. Uemura, G. Koutaki, and K. Uchimura, "Image segmentation based on edge detection using boundary code," International Journal of Innovative Computing, vol. 7, pp. 6073-6083, 2011.

[3] L. Wang, H. Wu, and C. Pan, "Region-based image segmentation with local signed difference energy," Pattern Recognition Letters, vol. 34, no. 6, pp. 637-645, 2013.

[4] N. Otsu, "A thresholding selection method from gray-level histogram," IEEE Transactions on Systems, Man and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979.

[5] R. Peng and P. K. Varshney, "On performance limits of image segmentation algorithms," Computer Vision and Image Understanding, vol. 132, pp. 24-38, 2015.

[6] M. A. Balafar, "Gaussian mixture model based segmentation methods for brain MRI images," Artificial Intelligence Review, vol. 41, no. 3, pp. 429-439, 2014.

[7] G. J. McLachlan and S. Rathnayake, "On the number of components in a Gaussian mixture model," Data Mining and Knowledge Discovery, vol. 4, no. 5, pp. 341-355, 2014.

[8] D. Oliva, V. Osuna-Enciso, E. Cuevas, G. Pajares, M. Pérez-Cisneros, and D. Zaldívar, "Improving segmentation velocity using an evolutionary method," Expert Systems with Applications, vol. 42, no. 14, pp. 5874-5886, 2015.

[9] Z.-W. Ye, M.-W. Wang, W. Liu, and S.-B. Chen, "Fuzzy entropy based optimal thresholding using bat algorithm," Applied Soft Computing, vol. 31, pp. 381-395, 2015.

[10] S. Sarkar, S. Das, and S. Sinha Chaudhuri, "A multilevel color image thresholding scheme based on minimum cross entropy and differential evolution," Pattern Recognition Letters, vol. 54, pp. 27-35, 2015.

[11] A. K. Bhandari, A. Kumar, and G. K. Singh, "Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur's, Otsu and Tsallis functions," Expert Systems with Applications, vol. 42, no. 3, pp. 1573-1601, 2015.

[12] H. Permuter, J. Francos, and I. Jermyn, "A study of Gaussian mixture models of color and texture features for image classification and segmentation," Pattern Recognition, vol. 39, no. 4, pp. 695-706, 2006.

[13] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1-38, 1977.

[14] Z. Zhang, C. Chen, J. Sun, and K. L. Chan, "EM algorithms for Gaussian mixtures with split-and-merge operation," Pattern Recognition, vol. 36, no. 9, pp. 1973-1983, 2003.

[15] H. Park, S.-I. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755-764, 2000.

[16] H. Park and T. Ozeki, "Singularity and slow convergence of the EM algorithm for Gaussian mixtures," Neural Processing Letters, vol. 29, no. 1, pp. 45-59, 2009.

[17] L. Gupta and T. Sortrakul, "A Gaussian-mixture-based image segmentation algorithm," Pattern Recognition, vol. 31, no. 3, pp. 315-325, 1998.

[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213-1219, 2013.

[19] E. Cuevas, F. Sencion, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321-336, 2012.

[20] E. Cuevas, V. Osuna-Enciso, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.

[21] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265-5271, 2010.

[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357-381, 2014.

[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and M. Pérez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.

[24] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805-818, 2011.

[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701-713, 2009.

[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661-673, 2009.

[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, December 1995.

[28] R. Storn and K. Price, "Differential Evolution: a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.

[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515-4538, 2011.

[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing Journal, vol. 9, no. 3, pp. 1149-1155, 2009.

[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119-135, 2013.

[32] W. Gong, A. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364-5386, 2011.

[33] D. M. Gordon, "The organization of work in social insect colonies," Complexity, vol. 8, no. 1, pp. 43-46, 2002.

[34] S. Kizaki and M. Katori, "Stochastic lattice model for locust outbreak," Physica A: Statistical Mechanics and its Applications, vol. 266, no. 1-4, pp. 339-342, 1999.


[35] S. M. Rogers, D. A. Cullen, M. L. Anstey, et al., "Rapid behavioural gregarization in the desert locust Schistocerca gregaria entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9-26, 2014.

[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal Special Topics, vol. 157, no. 1, pp. 93-109, 2008.

[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.

[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.

[39] B. Hölldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.

[40] B. Hölldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.

[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101-112, 2013.

[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196-205, 2012.

[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723-738, 2014.

[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.

[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskål)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.

[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.

[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256-272, 2014.

[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635-672, 2005.

[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191-213, 2000.

[50] F. Herrera, M. Lozano, and A. M. Sánchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309-338, 2003.

[51] E. Cuevas, D. Zaldívar, M. Pérez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93-107, 2011.

[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58-71, 2015.

[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140-157, 2015.

[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508-3531, 2011.

[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80-83, 1945.

[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617-644, 2009.

[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929-944, 2007.

[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.

[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335-1346, 1996.

[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.


24 Mathematical Problems in Engineering

Acknowledgment

The present research has been supported under the GrantCONACYT CB 181053

References

[1] H. Zhang, J. E. Fritts, and S. A. Goldman, "Image segmentation evaluation: a survey of unsupervised methods," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 260–280, 2008.

[2] T. Uemura, G. Koutaki, and K. Uchimura, "Image segmentation based on edge detection using boundary code," International Journal of Innovative Computing, vol. 7, pp. 6073–6083, 2011.

[3] L. Wang, H. Wu, and C. Pan, "Region-based image segmentation with local signed difference energy," Pattern Recognition Letters, vol. 34, no. 6, pp. 637–645, 2013.

[4] N. Otsu, "A thresholding selection method from gray-level histogram," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.

[5] R. Peng and P. K. Varshney, "On performance limits of image segmentation algorithms," Computer Vision and Image Understanding, vol. 132, pp. 24–38, 2015.

[6] M. A. Balafar, "Gaussian mixture model based segmentation methods for brain MRI images," Artificial Intelligence Review, vol. 41, no. 3, pp. 429–439, 2014.

[7] G. J. McLachlan and S. Rathnayake, "On the number of components in a Gaussian mixture model," Data Mining and Knowledge Discovery, vol. 4, no. 5, pp. 341–355, 2014.

[8] D. Oliva, V. Osuna-Enciso, E. Cuevas, G. Pajares, M. Pérez-Cisneros, and D. Zaldívar, "Improving segmentation velocity using an evolutionary method," Expert Systems with Applications, vol. 42, no. 14, pp. 5874–5886, 2015.

[9] Z.-W. Ye, M.-W. Wang, W. Liu, and S.-B. Chen, "Fuzzy entropy based optimal thresholding using bat algorithm," Applied Soft Computing, vol. 31, pp. 381–395, 2015.

[10] S. Sarkar, S. Das, and S. Sinha Chaudhuri, "A multilevel color image thresholding scheme based on minimum cross entropy and differential evolution," Pattern Recognition Letters, vol. 54, pp. 27–35, 2015.

[11] A. K. Bhandari, A. Kumar, and G. K. Singh, "Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur's, Otsu and Tsallis functions," Expert Systems with Applications, vol. 42, no. 3, pp. 1573–1601, 2015.

[12] H. Permuter, J. Francos, and I. Jermyn, "A study of Gaussian mixture models of color and texture features for image classification and segmentation," Pattern Recognition, vol. 39, no. 4, pp. 695–706, 2006.

[13] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1–38, 1977.

[14] Z. Zhang, C. Chen, J. Sun, and K. L. Chan, "EM algorithms for Gaussian mixtures with split-and-merge operation," Pattern Recognition, vol. 36, no. 9, pp. 1973–1983, 2003.

[15] H. Park, S.-I. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755–764, 2000.

[16] H. Park and T. Ozeki, "Singularity and slow convergence of the EM algorithm for Gaussian mixtures," Neural Processing Letters, vol. 29, no. 1, pp. 45–59, 2009.

[17] L. Gupta and T. Sortrakul, "A Gaussian-mixture-based image segmentation algorithm," Pattern Recognition, vol. 31, no. 3, pp. 315–325, 1998.

[18] V. Osuna-Enciso, E. Cuevas, and H. Sossa, "A comparison of nature inspired algorithms for multi-threshold image segmentation," Expert Systems with Applications, vol. 40, no. 4, pp. 1213–1219, 2013.

[19] E. Cuevas, F. Sención, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "A multi-threshold segmentation approach based on artificial bee colony optimization," Applied Intelligence, vol. 37, no. 3, pp. 321–336, 2012.

[20] E. Cuevas, V. Osuna-Enciso, D. Zaldívar, M. Pérez-Cisneros, and H. Sossa, "Multithreshold segmentation based on artificial immune systems," Mathematical Problems in Engineering, vol. 2012, Article ID 874761, 20 pages, 2012.

[21] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.

[22] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and V. Osuna, "A multilevel thresholding algorithm using electromagnetism optimization," Neurocomputing, vol. 139, pp. 357–381, 2014.

[23] D. Oliva, E. Cuevas, G. Pajares, D. Zaldívar, and M. Pérez-Cisneros, "Multilevel thresholding segmentation based on harmony search optimization," Journal of Applied Mathematics, vol. 2013, Article ID 575414, 24 pages, 2013.

[24] E. Cuevas, D. Zaldívar, and M. Pérez-Cisneros, "Seeking multi-thresholds for image segmentation with Learning Automata," Machine Vision and Applications, vol. 22, no. 5, pp. 805–818, 2011.

[25] K. C. Tan, S. C. Chiam, A. A. Mamun, and C. K. Goh, "Balancing exploration and exploitation with adaptive variation for evolutionary multi-objective optimization," European Journal of Operational Research, vol. 197, no. 2, pp. 701–713, 2009.

[26] G. Chen, C. P. Low, and Z. Yang, "Preserving and exploiting genetic diversity in evolutionary programming algorithms," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 661–673, 2009.

[27] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.

[28] R. Storn and K. Price, "Differential Evolution – a simple and efficient adaptive scheme for global optimisation over continuous spaces," Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.

[29] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, and Q. Tian, "Self-adaptive learning based particle swarm optimization," Information Sciences, vol. 181, no. 20, pp. 4515–4538, 2011.

[30] J. Tvrdík, "Adaptation in differential evolution: a numerical comparison," Applied Soft Computing Journal, vol. 9, no. 3, pp. 1149–1155, 2009.

[31] H. Wang, H. Sun, C. Li, S. Rahnamayan, and J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119–135, 2013.

[32] W. Gong, A. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364–5386, 2011.

[33] D. M. Gordon, "The organization of work in social insect colonies," Complexity, vol. 8, no. 1, pp. 43–46, 2002.

[34] S. Kizaki and M. Katori, "Stochastic lattice model for locust outbreak," Physica A: Statistical Mechanics and its Applications, vol. 266, no. 1–4, pp. 339–342, 1999.

[35] S. M. Rogers, D. A. Cullen, M. L. Anstey et al., "Rapid behavioural gregarization in the desert locust Schistocerca gregaria entails synchronous changes in both activity and attraction to conspecifics," Journal of Insect Physiology, vol. 65, pp. 9–26, 2014.

[36] C. M. Topaz, A. J. Bernoff, S. Logan, and W. Toolson, "A model for rolling swarms of locusts," European Physical Journal Special Topics, vol. 157, no. 1, pp. 93–109, 2008.

[37] C. M. Topaz, M. R. D'Orsogna, L. Edelstein-Keshet, and A. J. Bernoff, "Locust dynamics: behavioral phase change and swarming," PLoS Computational Biology, vol. 8, no. 8, Article ID e1002642, 2012.

[38] G. Oster and E. Wilson, Caste and Ecology in the Social Insects, Princeton University Press, Princeton, NJ, USA, 1978.

[39] B. Hölldobler and E. O. Wilson, Journey to the Ants: A Story of Scientific Exploration, 1994.

[40] B. Hölldobler and E. O. Wilson, The Ants, Harvard University Press, Cambridge, Mass, USA, 1990.

[41] S. Tanaka and Y. Nishide, "Behavioral phase shift in nymphs of the desert locust, Schistocerca gregaria: special attention to attraction/avoidance behaviors and the role of serotonin," Journal of Insect Physiology, vol. 59, no. 1, pp. 101–112, 2013.

[42] E. Gaten, S. J. Huston, H. B. Dowse, and T. Matheson, "Solitary and gregarious locusts differ in circadian rhythmicity of a visual output neuron," Journal of Biological Rhythms, vol. 27, no. 3, pp. 196–205, 2012.

[43] I. Benaragama and J. R. Gray, "Responses of a pair of flying locusts to lateral looming visual stimuli," Journal of Comparative Physiology A, vol. 200, no. 8, pp. 723–738, 2014.

[44] M. G. Sergeev, "Distribution patterns of grasshoppers and their kin in the boreal zone," Psyche, vol. 2011, Article ID 324130, 9 pages, 2011.

[45] S. O. Ely, P. G. N. Njagi, M. O. Bashir, S. El-Tom El-Amin, and A. Hassanali, "Diel behavioral activity patterns in adult solitarious desert locust, Schistocerca gregaria (Forskål)," Psyche, vol. 2011, Article ID 459315, 9 pages, 2011.

[46] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Beckington, UK, 2008.

[47] E. Cuevas, A. Echavarría, and M. A. Ramírez-Ortegón, "An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation," Applied Intelligence, vol. 40, no. 2, pp. 256–272, 2014.

[48] M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, "A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems," Journal of Global Optimization, vol. 31, no. 4, pp. 635–672, 2005.

[49] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.

[50] F. Herrera, M. Lozano, and A. M. Sánchez, "A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study," International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309–338, 2003.

[51] E. Cuevas, D. Zaldívar, M. Pérez-Cisneros, and M. Ramírez-Ortegón, "Circle detection using discrete differential evolution optimization," Pattern Analysis & Applications, vol. 14, no. 1, pp. 93–107, 2011.

[52] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.

[53] M. S. Kiran, H. Hakli, M. Gunduz, and H. Uguz, "Artificial bee colony algorithm with variable search strategy for continuous optimization," Information Sciences, vol. 300, pp. 140–157, 2015.

[54] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.

[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.

[56] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.

[57] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward objective evaluation of image segmentation algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 929–944, 2007.

[58] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "A measure for objective evaluation of image segmentation algorithms," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), p. 34, San Diego, Calif, USA, June 2005.

[59] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335–1346, 1996.

[60] S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, "Unsupervised performance evaluation of image segmentation," EURASIP Journal on Advances in Signal Processing, vol. 2006, Article ID 96306, 12 pages, 2006.
