
Anytime Perceptual Grouping of 2D Features

into 3D Basic Shapes

Andreas Richtsfeld, Michael Zillich, and Markus Vincze

Automation and Control Institute (ACIN), Vienna University of Technology

Gusshausstrasse 25-29, 1040 Vienna, Austria
{ari,mz,mv}@acin.tuwien.ac.at

Abstract. 2D perceptual grouping is a well studied area which still has its merits even in the age of powerful object recognizers, namely when no prior object knowledge is available. Often perceptual grouping mechanisms struggle with the runtime complexity stemming from the combinatorial explosion when creating larger assemblies of features, and simple thresholding for pruning hypotheses leads to cumbersome tuning of parameters. In this work we propose an incremental approach instead, which leads to an anytime method, where the system produces more results with longer runtime. Moreover, the proposed approach lends itself easily to incorporation of attentional mechanisms. We show how basic 3D object shapes can thus be detected using a table plane assumption.

Keywords: Computer vision, perceptual organization, object detection, basic shapes, proto-objects.

1 Introduction and Related Work

Recognition methods based on powerful feature descriptors have led to impressive results in object instance recognition and object categorization. In some scenarios, however, no prior object knowledge can be assumed and more generic object segmentation methods are required. This is the realm of perceptual grouping, the application of generic principles for grouping certain image features into assemblies that are likely to correspond to objects in the scene.

These principles are of course well studied, starting with the pioneering work in Gestalt psychology by Wertheimer, Köhler, Koffka and Metzger. Gestalt principles (or Gestalt laws) aim to formulate the regularities according to which the perceptual input is organized into unitary forms, also referred to as wholes, groups or Gestalts. Typical Gestalt principles are proximity, continuity, similarity and closure, as well as common fate, past experience and good Gestalt (form) ([22,23,7,6,9]). Common region and element connectedness were later introduced and discussed by Rock and Palmer [12,10,11].

The perceptual grouping literature has largely focused on grouping of edges, especially detecting the enclosing contours of objects. While learning algorithms have been mainly used for object recognition, Sarkar and Soundararajan [16,18]

M. Chen, B. Leibe, and B. Neumann (Eds.): ICVS 2013, LNCS 7963, pp. 73–82, 2013. © Springer-Verlag Berlin Heidelberg 2013


employ a methodology to learn the importance of Gestalt principles to build large salient edge groupings of visual low-level primitives. Grouping principles, such as proximity, parallelity, continuity, junctions and common region are trained to form large groups of edge primitives that are likely to come from a single object. The approach by Estrada and Jepson [4] uses a measure of affinity between pairs of lines to group line segments into perceptually salient contours in complex images. Compact closed region structure is also identified by bidirectional best-first search with backtracking by Saund [19], but similar to the approach by Estrada et al. many parameters and thresholds have to be set. Graph-based methods to extract closed contours were introduced by Wang et al. [21] and Zhu et al. [24]. Unfortunately both methods suffer from the problem of combinatorial explosion leading to polynomial runtime complexity in the number of edge segments.

Sala and Dickinson [15,14] introduce a method for contour grouping by construction of a region boundary graph and use consistent path search to identify a pre-defined vocabulary of simple part models. These models correspond to projections of 3D objects, but the approach stays at the 2D image level. Song et al. [20] propose a novel definition of the Gestalt principle Prägnanz based on Koffka's definition that image descriptions should be both stable and simple. They show a grouping mechanism based on the Gestalt principles proximity and common region, using straight lines as grouping primitives and color image regions to estimate the common region principle. Benchmark results are shown on the Berkeley Segmentation Dataset (BSD) to demonstrate their method.

Focusing on the runtime behavior of perceptual grouping, Mahamud et al. [8] presented an anytime method for finding smooth closed contours bounding objects of unknown shape, using the most salient edge segments for incremental processing. Their approach is able to stop processing at any time and delivers the best results (most salient closures) detected up to that time. However, a drawback of the approach is the high computational cost and that the approach does not scale well to bigger problems. Zillich et al. [25] introduced incremental indexing in the image space (as opposed to the more common indexing into a model data base using geometric features) for parameter-free perceptual grouping. Incrementally extending search lines are used to find intersections of straight lines, and shortest path search is then used to identify closed convex contours, leading again to an anytime approach that yields the most salient closed convex contours at any processing time. Their approach however only uses straight edges.

In this work we also want to focus on anytime processing, as we believe that having control of the runtime behavior of a method is of high importance if that method is to be used within a larger system context. This is especially true for methods prone to suffering from high runtime complexity (e.g. combinatorial explosions), which is often the case with perceptual grouping approaches. The contributions of this work then are a) the extension of the anytime framework of Zillich et al. [25] to support a wider range of feature primitives; b) the construction of 3D objects from grouped 2D features (for a limited class of shapes);


Fig. 1. Our proposed perceptual grouping structure in the classificatory structure introduced by Sarkar and Boyer [17,2]

and c) showing how attention quite naturally fits into this framework and leads to attended objects popping out earlier.

2 Perceptual Grouping Structure

Sarkar and Boyer [17,2] introduced a classificatory structure for perceptual grouping processes with four different levels of data abstraction, namely signal, primitive, structural and assembly level. Edgels, edge chains or uniform regions are extracted first from image pixels (grey level or RGB image) in the signal level. Parametric image features, such as straight lines and arcs, are subsequently estimated in the primitive level, before corners, closed regions, polygons and ribbons are constructed in the structural level. Large arrangements of visual primitives finally form object hypotheses at the assembly level.

Our approach follows the above four-level structure. Edges are extracted at the signal level, followed by the construction of line and arc primitives. In the structural level, incremental processing is initiated by employing incrementally extending search lines in the image space. Whenever search lines intersect, new junctions are created and subsequent modules are triggered to form new higher-level primitives. Bottom-up grouping continues until basic object shapes appear in the assembly level.

3 Implementation

In this section we first present the overall process flow within the structure in Fig. 1. We then outline how incremental indexing drives the process flow, explain the various processing modules in detail, and explain how we arrive at 3D object hypotheses. Finally we show how attentional mechanisms can be tied to the proposed process flow.


3.1 Process Flow

Processing does not happen in a traditional bottom-up pipeline, where each level of primitives is constructed one after another. Instead, following the principle of anytimeness, processing is incremental. A processing module for a primitive (e.g. finding closures) is triggered whenever the module gets informed about a new lower-level primitive (e.g. a junction). This in turn leads to processing at the next higher level (e.g. rectangles), and stops at a level where no more grouping principles can be satisfied, or at the assembly level with the creation of an object hypothesis.
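This event-driven trigger chain can be sketched as a small publish/subscribe loop. The code below is our illustrative sketch, not the authors' implementation: the module names (`on_junction`, `on_closure`) and the `GroupingEngine` class are hypothetical, and the "closure found" step is stubbed out.

```python
from collections import defaultdict

class GroupingEngine:
    """Minimal event-driven sketch of the incremental trigger chain: each
    module subscribes to the primitive type it consumes, and every newly
    created primitive immediately triggers the next-higher module."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # primitive type -> handler modules
        self.created = []                     # all primitives created so far

    def subscribe(self, primitive_type, module):
        self.subscribers[primitive_type].append(module)

    def emit(self, primitive_type, primitive):
        # Record the primitive and inform every consuming module; a module
        # may in turn emit higher-level primitives, recursing up the levels.
        self.created.append((primitive_type, primitive))
        for module in self.subscribers[primitive_type]:
            module(self, primitive)

def on_junction(engine, junction):
    # Placeholder for e.g. the closure module: a new junction triggers a
    # shortest-path search, which in this sketch always "finds" a closure.
    engine.emit("closure", {"from": junction})

def on_closure(engine, closure):
    # Placeholder for e.g. the rectangle module; the chain stops here.
    pass

engine = GroupingEngine()
engine.subscribe("junction", on_junction)
engine.subscribe("closure", on_closure)
engine.emit("junction", {"id": 1})  # one new junction ripples up one level
```

The point of the sketch is that there is no fixed pipeline order: processing only happens where a new primitive actually appeared, which is what makes the method interruptible at any time.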

Processing starts with edge extraction, which is the only non-incremental part, as we use an off-the-shelf edge detection method to detect all edges (lines and arcs) at once. All subsequent processing is controlled by extending search lines of shape primitives (see Sec. 3.2) to find junctions between primitives.

All created primitives are ranked according to significance values derived from geometric constraints (e.g. line length). Ranking serves two purposes. First, primitives at the lower levels, i.e. those triggering higher-level processing, are selected for processing according to rank. For example, a line is chosen to extend its search line by one pixel. To this end we randomly select ranked primitives using an exponential distribution. This leads to the most salient structures popping out first. Second, ranking allows masking to prune unlikely primitives. A higher ranked primitive masks a lower ranked primitive if the two disagree about the interpretation of a common lower-level element (e.g. two overlapping closures sharing an edge). Masking prunes results with poor significance and limits combinatorial explosions in processing modules higher up the hierarchy.
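The rank-based random selection can be sketched as follows. Note the rate of the exponential distribution is our assumption (the paper does not state one); only the mechanism — sampling an index over the ranked list so that high significance wins most of the time — is from the text.

```python
import random

def pick_ranked(primitives, mean_index_fraction=0.2):
    """Select a primitive for processing, favoring high significance.
    Primitives are sorted by rank and an index is drawn from an exponential
    distribution, so the most salient structures are extended most often
    while low-ranked ones still get an occasional turn.
    mean_index_fraction (controlling the rate) is an assumed value."""
    ranked = sorted(primitives, key=lambda p: p["significance"], reverse=True)
    mean_index = max(1e-9, mean_index_fraction * len(ranked))
    idx = int(random.expovariate(1.0 / mean_index))
    return ranked[min(idx, len(ranked) - 1)]

lines = [{"id": i, "significance": s}
         for i, s in enumerate([0.9, 0.5, 0.1])]
counts = {0: 0, 1: 0, 2: 0}
for _ in range(2000):
    counts[pick_ranked(lines)["id"]] += 1
# the most significant line (id 0) is selected far more often than id 2
```

Because low-ranked primitives are delayed rather than discarded, no hard significance threshold is needed — which is exactly the parameter-tuning problem the incremental approach avoids.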

3.2 Incremental Indexing and Anytimeness

Incremental indexing is used in the structural level to form junctions. Search lines emanating from primitives are used to find junctions between primitives. Search lines are defined in the image space for line, arc and ellipse primitives, as shown in Fig. 2. We define tangential and normal search lines for straight lines, tangential search lines at the start and end point for arcs, and normal search lines at the vertex points for ellipses. These search lines are drawn into the so-called vote image one pixel at a time, whenever the respective primitive was selected for processing. Each search line is drawn with the originating primitive's label. Whenever a growing search line intersects another or hits a primitive, a new junction emerges. The different types of junctions for different types of intersecting search lines are shown in Fig. 2. Emanating search lines for different processing times can be seen in the result section in Fig. 4.
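The vote-image mechanism can be sketched in a few lines. This is a simplified illustration under our own assumptions: integer pixel steps, one search line per primitive, and a plain 2D label array, whereas the real system draws several sub-pixel search lines per primitive.

```python
import numpy as np

def extend_search_line(vote_img, label, pos, direction):
    """Grow one search line by a single pixel in the labeled vote image.
    Returns (junction_label, new_pos): junction_label is the label of
    another primitive's search line if the new pixel hits one (a junction
    is created there), else None."""
    x, y = pos[0] + direction[0], pos[1] + direction[1]
    h, w = vote_img.shape
    if not (0 <= x < w and 0 <= y < h):
        return None, pos                      # ran off the image
    hit = int(vote_img[y, x])
    if hit != 0 and hit != label:
        return hit, (x, y)                    # intersection: new junction
    vote_img[y, x] = label                    # leave our label behind
    return None, (x, y)

vote = np.zeros((100, 100), dtype=np.int32)
# search line 1 grows rightwards, line 2 grows upwards; paths cross at (50, 50)
p1, p2 = (40, 50), (50, 60)
junction = None
for _ in range(20):
    hit, p1 = extend_search_line(vote, 1, p1, (1, 0))
    junction = junction or hit
    hit, p2 = extend_search_line(vote, 2, p2, (0, -1))
    junction = junction or hit
```

Interleaving one-pixel extensions across all primitives is what gives the method its anytime character: interrupting the loop at any point simply means the junctions found so far are the result.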

3.3 Gestalt Principles and Primitives

Primitives are grouped using implicitly and explicitly implemented Gestalt principles. E.g. proximity is implicitly implemented by the search lines for finding junctions, while closure, parallelity or connectedness are explicitly implemented


Fig. 2. Definition of search lines (first row) and types of junctions between search lines (second row). Collinearities, T-junctions and L-junctions between lines, arc-junctions between arcs and ellipse-junctions between ellipses and lines.

as geometric constraints within the respective processing modules. In the following we briefly describe the generation of each primitive:

Edge Segments – Edge segments are constructed from the raw color image data with a Canny edge extractor.

Lines and Arcs – Edge segments are split for fitting straight lines and arcs into the segments using the method by Rosin and West [13].

Line Junctions – Junctions between lines are T-junctions, L-junctions and collinearities, as shown in Fig. 2. T-junctions are substituted by two L-junctions and a collinearity for further processing in the grouping framework. Lines form the vertices Vl, and junctions the edges El of a graph Gl = (Vl, El), which is constantly updated as new junctions are created.

Closures (closed convex contours) – Whenever new junctions appear, Dijkstra's algorithm for shortest path search is run on the updated graph Gl = (Vl, El). A constraint on similar turning directions during search ensures creation of convex contours.

Rectangles – With rectangles we refer to geometric structures including trapezoids and parallelograms (i.e. perspective projections of 3D rectangles under one-point projection [3]). Four dominant changes of direction (L-junctions) and at least one parallel opposing line pair are mandatory to create a rectangle.

Flaps – A flap is a geometric structure built from two non-overlapping rectangles which share one edge. All cuboidal structures under generic views consist of flap primitives.

Arc junctions – Arc junctions are created when two arc search lines with the same convexity intersect, as shown in Fig. 2. Again a graph Ga(Va, Ea) is constructed from arcs and their junctions, and updated whenever a new junction comes in.

Convex arc groups – Shortest path search on Ga(Va, Ea) leads to convex arc groups, i.e. groups of pairwise convex arcs.

Ellipses – Ellipses are fitted to convex arc groups using least squares fitting [5] as implemented in OpenCV.

Ellipse junctions – Ellipses trigger initialization of ellipse search lines, as shown in Fig. 2, with the goal of finding lines connected to the ellipse's major vertices.
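The closure search over the line/junction graph can be sketched with a plain Dijkstra cycle search. This is an illustrative simplification: the convexity (turning-direction) constraint and the significance-derived edge costs of the real system are omitted, and the graph encoding is our own.

```python
import heapq

def shortest_closure(graph, start):
    """Find the cheapest closed path through `start` using Dijkstra's
    shortest-path search on the line/junction graph, run whenever a new
    junction appears.  `graph` maps a line to (neighbor_line, cost) pairs."""
    best = None
    for first, cost0 in graph[start]:
        dist, prev = {first: cost0}, {first: start}
        heap = [(cost0, first)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue
            for nxt, c in graph[node]:
                if nxt == start:
                    if prev[node] != start:   # forbid trivial back-and-forth
                        path, n = [], node
                        while n != start:
                            path.append(n)
                            n = prev[n]
                        path.append(start)
                        if best is None or d + c < best[0]:
                            best = (d + c, path[::-1])
                    continue
                if d + c < dist.get(nxt, float("inf")):
                    dist[nxt], prev[nxt] = d + c, node
                    heapq.heappush(heap, (d + c, nxt))
    return best

# four lines a-b-c-d forming a quadrilateral, plus a costly detour line e
g = {"a": [("b", 1), ("d", 1), ("e", 5)],
     "b": [("a", 1), ("c", 1)],
     "c": [("b", 1), ("d", 1), ("e", 5)],
     "d": [("c", 1), ("a", 1)],
     "e": [("a", 5), ("c", 5)]}
cost, cycle = shortest_closure(g, "a")
```

Because the graph is updated incrementally, the search only needs to be re-run locally around each newly created junction rather than over the whole image.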


Fig. 3. Construction of basic object shapes. First row: cuboid from three flaps or from a flap and an L-junction. Second row: cylinder from two extended ellipses and cone from extended ellipse and L-junction.

Extended ellipses – Ellipses with two attached lines (possibly themselves extended via collinearities) form so-called extended ellipses.

Cuboids – Figure 3 shows the two options to construct a cuboid: first, from three flaps sharing three different rectangles, and second, from an L-junction and two line primitives connecting to a flap.

Cones – Cones are built from extended ellipses by finding an L-junction between the connected lines from the ellipse junctions, see Fig. 3.

Cylinders – Cylinders are also built from extended ellipses with ellipse junctions at each vertex, by finding a connection of lines between the ellipse junctions, see again Fig. 3.

Spheres – Spheres are inferred from circles, which are a special type of ellipse.
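The first cuboid rule — three flaps sharing three different rectangles — can be sketched as a combinatorial check. This is our reading of the rule, with an assumed encoding of a flap as a pair of rectangle ids; the actual system additionally verifies the geometric consistency of the junctions, which is omitted here.

```python
from itertools import combinations

def find_cuboids(flaps):
    """Hypothetical check of the first cuboid rule: three flaps (each a
    pair of rectangle ids) covering exactly three distinct rectangles,
    every rectangle shared by exactly two of the flaps."""
    cuboids = []
    for f1, f2, f3 in combinations(flaps, 3):
        rects = set(f1) | set(f2) | set(f3)
        pairwise_shared = all(len(set(a) & set(b)) == 1
                              for a, b in combinations((f1, f2, f3), 2))
        if len(rects) == 3 and pairwise_shared:
            cuboids.append(rects)
    return cuboids

# rectangles 1, 2, 3 pairwise joined into flaps -> one cuboid hypothesis;
# the extra flap (3, 4) does not complete a second cuboid
flaps = [(1, 2), (2, 3), (1, 3), (3, 4)]
hits = find_cuboids(flaps)
```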

3.4 From 2D Shapes to 3D Objects

All primitives so far are groups of 2D image features. But the primitives at the assembly level are highly non-accidental configurations corresponding to projections of views of 3D objects. So with a few additional assumptions we should be able to create 3D object hypotheses.

To this end we assume that the pose of the calibrated camera with respect to the dominant plane on which objects are resting is known. For the indoor robotics scenario we are targeting, this knowledge comes from the known tilt angle and elevation of the camera, together with the assumed (typically standardized) table height. We are then able to calculate the 3D properties of a rectangular cuboid and of upright standing cones and cylinders. We simply intersect view rays through the lowermost junctions with the ground plane and thus obtain the 3D position on the plane as well as the unambiguous size of the basic object shapes. This restriction to simple symmetric shapes also allows us to complete the unseen backside of the object.
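The ray/plane intersection behind this step can be written out explicitly. The sketch below assumes a standard pinhole model where a world point X projects as K (R X + t); the variable names and the toy camera pose are ours, not the paper's.

```python
import numpy as np

def junction_to_3d(pixel, K, R, t, plane_z=0.0):
    """Back-project an image junction onto the ground plane.
    K: 3x3 camera intrinsics; R, t: extrinsics so that X_cam = R X + t.
    Intersecting the viewing ray with the plane z = plane_z yields the 3D
    position of the lowermost junction on the supporting plane."""
    u, v = pixel
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in camera frame
    ray_world = R.T @ ray_cam                            # rotate into world frame
    cam_center = -R.T @ t                                # camera center in world
    s = (plane_z - cam_center[2]) / ray_world[2]         # scale to hit the plane
    return cam_center + s * ray_world

# Toy example: camera 1 m above the plane, looking straight down.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R = np.array([[1.0, 0, 0], [0, -1.0, 0], [0, 0, -1.0]])  # camera z-axis points down
t = -R @ np.array([0.0, 0.0, 1.0])                       # center at (0, 0, 1)
p = junction_to_3d((320, 240), K, R, t)                  # principal ray
```

Once the lowermost junctions are anchored on the plane this way, the metric size of the shape follows from the distances between the back-projected points, which is what makes the 3D hypotheses unambiguous.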

3.5 Adding Attention

In Section 3.1 ranking of primitives was based solely on their significance. Sometimes, however, we might have external clues. The user of our system might provide us with regions of interest (ROIs), perhaps deduced from saliency maps,


Fig. 4. Incremental grouping: line search lines (first row), arc search lines (second row) and resulting basic shapes (third row) after 150, 200, 300 and 500 ms processing time. The last row shows search lines and detected basic shapes when attention (region of interest) is set to the salt box and the tape, respectively.

or change detection. Also higher-level knowledge might be available, such as the gaze direction of a human or the position of a grasping hand.

Including such attentional cues is quite straightforward. To this end we weight the significance of a primitive with its distance to the provided region(s) of interest, using a Gaussian located at the center of the ROI with sigma equal to 0.25 times the image width. Processing is thus concentrated around the region of interest. The ROIs allow us to specify where to look first and more “carefully”. Note that we do not exclude other regions of the image, as would be the case if we simply cut out the ROI and then worked on the sub-image. Strongly prominent structures outside the ROI will still be detected, albeit a bit later.
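The Gaussian weighting is simple enough to state directly. The sigma of 0.25 times the image width is from the text; everything else (function and parameter names, the example coordinates) is our illustration.

```python
import math

def attended_significance(sig, primitive_xy, roi_center, image_width):
    """Weight a primitive's significance by its distance to the ROI center
    with a Gaussian of sigma = 0.25 * image width.  Primitives near the
    ROI are processed first; distant but very salient ones are merely
    delayed, not suppressed."""
    sigma = 0.25 * image_width
    dx = primitive_xy[0] - roi_center[0]
    dy = primitive_xy[1] - roi_center[1]
    d2 = dx * dx + dy * dy
    return sig * math.exp(-d2 / (2.0 * sigma * sigma))

# a primitive near the ROI keeps almost its full significance,
# one far away is strongly (but never completely) down-weighted
near = attended_significance(1.0, (330, 250), (320, 240), 640)
far = attended_significance(1.0, (600, 60), (320, 240), 640)
```

Because the weight never reaches zero, this reproduces the behavior described above: attention reorders processing rather than cropping the image.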

4 Experiments

We demonstrate results on a table top scene containing six objects of different shapes. Figure 4 shows the growing search lines of lines (first row) and arcs (second row) and the resulting 3D shapes of the assembly level, after 150 ms, 200 ms, 300 ms and 500 ms processing time. As can be seen, with the proposed incremental approach object detection depends on processing time: The longer


Table 1. Average true positive detection rate for objects shown in Fig. 4 with different processing times, and results when using an attention point for the sought-after object

                 Mug     Salt-Box  Cube     Peanuts  Ball    Tape    Average
150 ms           0.0%    0.0%      0.0%     3.3%     33.3%   0.0%    7.8%
150 ms with ROI  0.0%    0.0%      36.7%    10.0%    40.0%   13.3%   16.7%
200 ms           0.0%    10.0%     83.3%    13.3%    16.7%   0.0%    20.6%
200 ms with ROI  3.3%    10.0%     100.0%   20.0%    56.6%   50.0%   39.9%
300 ms           0.0%    60.0%     100.0%   40.0%    80.0%   10.0%   48.3%
300 ms with ROI  0.0%    30.0%     100.0%   33.3%    90.0%   56.7%   51.7%
500 ms           6.7%    100.0%    100.0%   33.3%    90.0%   60.0%   65.0%
500 ms with ROI  16.7%   66.7%     100.0%   46.6%    96.6%   80.0%   67.8%

the search lines grow, the more primitives are connected and hence the more shapes are found.

The last row of Fig. 4 shows two examples when using attention by specifying a region of interest (ROI), with 300 ms processing time. Search lines in the ROI are preferred for extension, leading on average to earlier detection. Table 1 shows the average detection rate (over 30 images) for the different objects shown in Fig. 4. The detection rate is estimated with four different processing times, with and without a region of interest (ROI) centered on the object. Note that we are here not concerned with how attention is provided. It could be based on color (“the yellow ball”), generic saliency operators or any other means providing salient locations in the image.

Surprisingly, it can be observed that the detection rate actually decreases for some objects when using attention. This can be explained considering the texture on the objects which, being nearest to the attention point, grabs too much attention and lets the system hallucinate objects into the texture. Note that the actual object is typically still found, but masked by the hallucinated texture object, and thus not reported. The main impact of using ROIs can be observed when using short processing times.

5 Conclusion

We presented an anytime system for detecting 3D basic object shapes from 2D images. Anytimeness is provided by incremental indexing, which drives the bottom-up perceptual grouping process. We also showed how attention can quite naturally be integrated into anytime processing. Experimental evaluation shows encouraging results on example images of moderate complexity with objects of different shapes, though a more extensive evaluation on a broader range of scenes and objects is needed. Further images with prototypical real world scenes are shown in Fig. 5.

One limitation of the system is the fact that we rely solely on edge primitives from a standard Canny edge detector. Recent approaches have shown that edge extraction and region segmentation deliver the best results when done simultaneously [1]. Also results from more sophisticated edge detectors would certainly


Fig. 5. More detected shape primitives: office and living room scenes with detected boxes and cylinders (red) as well as the lower-level primitives rectangles (yellow) and closures (blue)

lead to improved results and should be considered for further investigations. Another limitation is the restricted number of detectable shapes at the assembly level (much like [15,14]), but this restriction on the other hand provides the heuristics to generate 3D shapes from the 2D assemblies. To overcome the problem of defining basic shapes, learning algorithms could be employed at the assembly level to group primitives of the structural level according to the principles of perceptual organization, as shown in [16,18].

Acknowledgement. The research leading to these results has received funding from the Austrian Science Fund (FWF): project TRP 139-N23, InSitu.

References

1. Arbelaez, P., Maire, M., Fowlkes, C., Malik, J.: Contour Detection and Hierarchical Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 33(5), 898–916 (2011)

2. Boyer, K.L., Sarkar, S.: Perceptual organization in computer vision: status, challenges, and potential. Computer Vision and Image Understanding 76(1), 1–5 (1999)

3. Carlbom, I., Paciorek, J.: Planar Geometric Projections and Viewing Transformations. ACM Computing Surveys 10(4), 465–502 (1978)

4. Estrada, F.J., Jepson, A.D.: Perceptual grouping for contour extraction. In: International Conference on Pattern Recognition (ICPR), vol. 2, pp. 32–35. IEEE (2004)

5. Fitzgibbon, A.W., Fisher, R.B.: A Buyer's Guide to Conic Fitting. In: Proceedings of the British Machine Vision Conference (BMVC), pp. 513–522. British Machine Vision Association (1995)

6. Koffka, K.: Principles of Gestalt Psychology. International Library of Psychology, Philosophy, and Scientific Method, vol. 20. Harcourt, Brace and World (1935)

7. Köhler, W.: Gestalt Psychology Today. American Psychologist 14(12), 727–734 (1959)

8. Mahamud, S., Williams, L.R., Thornber, K.K.: Segmentation of multiple salient closed contours from real images. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 25(4), 433–444 (2003)


9. Metzger, W.: Laws of Seeing, 1st edn. The MIT Press (1936)

10. Palmer, S.E.: Common region: a new principle of perceptual grouping. Cognitive Psychology 24(3), 436–447 (1992)

11. Palmer, S., Rock, I.: Rethinking perceptual organization: The role of uniform connectedness. Psychonomic Bulletin & Review 1(1), 29–55 (1994)

12. Rock, I., Palmer, S.: The legacy of Gestalt psychology. Scientific American 263(6), 84–90 (1990)

13. Rosin, P.L., West, G.A.W.: Segmenting Curves into Elliptic Arcs and Straight Lines. In: Proceedings of the Third International Conference on Computer Vision (ICCV), pp. 75–78. IEEE Computer Society Press (1990)

14. Sala, P., Dickinson, S.: Contour Grouping and Abstraction Using Simple Part Models. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010, Part V. LNCS, vol. 6315, pp. 603–616. Springer, Heidelberg (2010)

15. Sala, P., Dickinson, S.J.: Model-based perceptual grouping and shape abstraction. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 1–8 (2008)

16. Sarkar, S.: Learning to Form Large Groups of Salient Image Features. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pp. 780–786 (1998)

17. Sarkar, S., Boyer, K.L.: Perceptual organization in computer vision - A review and a proposal for a classificatory structure. IEEE Transactions on Systems, Man and Cybernetics 23(2), 382–399 (1993)

18. Sarkar, S., Soundararajan, P.: Supervised learning of large perceptual organization: graph spectral partitioning and learning automata. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 22(5), 504–525 (2000)

19. Saund, E.: Finding perceptually closed paths in sketches and drawings. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 25(4), 475–491 (2003)

20. Song, Y.-Z., Xiao, B., Hall, P., Wang, L.: In Search of Perceptually Salient Groupings. IEEE Transactions on Image Processing 20(4), 935–947 (2011)

21. Wang, S., Stahl, J.S., Bailey, A., Dropps, M.: Global Detection of Salient Convex Boundaries. International Journal of Computer Vision (IJCV) 71(3), 337–359 (2007)

22. Wertheimer, M.: Untersuchungen zur Lehre von der Gestalt. II. Psychological Research 4(1), 301–350 (1923)

23. Wertheimer, M.: Principles of perceptual organization. In: Beardslee, D.C., Wertheimer, M. (eds.) A Source Book of Gestalt Psychology, pp. 115–135. Van Nostrand, Inc. (1958)

24. Zhu, Q., Song, G., Shi, J.: Untangling Cycles for Contour Grouping. In: International Conference on Computer Vision (ICCV), pp. 1–8. IEEE (2007)

25. Zillich, M., Vincze, M.: Anytimeness avoids parameters in detecting closed convex polygons. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1–8. IEEE (June 2008)