
Int J Adv Manuf Technol (2009) 45:181–190
DOI 10.1007/s00170-009-1951-9

ORIGINAL ARTICLE

Planar segmentation of data from a laser profile scanner mounted on an industrial robot

J. A. P. Kjellander · Mohamed Rahayem

Received: 29 October 2008 / Accepted: 27 January 2009 / Published online: 24 February 2009
© Springer-Verlag London Limited 2009

Abstract In industrial applications like rapid prototyping, robot vision, and geometric reverse engineering, where speed and automatic operation are important, an industrial robot and a laser profile scanner can be used as a 3D measurement system. This paper is concerned with the problem of segmenting the data from such a system into regions that can be fitted with planar surfaces. We have developed a new algorithm for planar segmentation based on laser scan profiles and robot poses. Compared to a traditional algorithm that operates on a point cloud, the new algorithm is shown to be more effective and faster.

Keywords 3D measurement system · Laser profile scanner · Industrial robot · Segmentation · Planar regions

1 Introduction

Optical measurement systems can be used to rapidly acquire the coordinates of dense sets of 3D points from the surfaces of real-world objects. One such system is the laser profile scanner, see Fig. 1, which projects a straight line on the object while a digital camera captures the image of the projection. Pixels representing the projected line are then joined into a 2D profile. To cover the entire object, the profile scanner is moved along the surface and new images are captured. The result is a series of profiles, each captured with the scanner in a different pose. The profile scanner is usually mounted on a mechanical device that controls the scanner movement, or at least records the scanner pose of each profile in 3D. This makes it possible to map the 2D points in the profiles to a common 3D coordinate system, and the result is then often referred to as a point cloud. Point clouds may be used in applications like inspection, geometric reverse engineering, object recognition, or navigation. In such applications, the point cloud is often processed by a segmentation algorithm that uses a geometrical constraint to group points into regions representing planes, cylinders, or higher-order surfaces. Planar segmentation is thus defined as the problem of identifying points that belong to the same plane.

We believe that the industrial robot, although not widely used for this purpose, is an interesting alternative as a carrier of a laser profile scanner. A robot is fast, flexible, robust, and relatively cheap compared with a coordinate measuring machine (CMM). To test the idea, we have mounted a laser profile scanner on an industrial robot with a turntable, see Fig. 2, and see [17–19] for details on motion control, image processing, and noise filtering. It is important to note that the absolute accuracy of an industrial robot is much lower than the accuracy of a laser profile scanner. In [27], we show that the accuracy of our robot is in the range of ±400 μm, while the accuracy of the laser profile scanner is approximately 10 times better (±50 μm). Individual profiles will thus be relatively accurate, but accuracy is lost when they are mapped to the point cloud.

In the scope of planar segmentation, it should therefore be of interest to investigate if segmentation algorithms can take advantage of the relatively high accuracy of the 2D profiles. It is also likely that an algorithm operating on 2D data would be faster than an algorithm based on 3D point clouds. This has been shown for range images but, as far as we know, not for data from profile scanners. A range image scanner is similar to a profile scanner but is not moved relative to the object; the scanner itself moves the line over the surface of the object, usually by rotating the laser source around a fixed axis. The range camera thus creates a series of profiles, each related to a specific angle of rotation but all in the same coordinate system.

J. A. P. Kjellander · M. Rahayem (B)
School of Science and Technology, Örebro University, SE-701 82 Örebro, Sweden
e-mail: [email protected]
URL: www.oru.se/nt/cad

J. A. P. Kjellander
e-mail: [email protected]

Fig. 1 The laser profile scanner mounted on an industrial robot

The rest of this paper is organized as follows: Section 2 presents a literature review covering laser scanning and segmentation. In Section 3, the implementations of the two segmentation algorithms under consideration are presented. Section 4 presents the results of three different experiments. Finally, in Section 5, we conclude the paper and propose future work.

2 Literature review

2.1 Laser scanning

Laser scanning represents a wide range of related technologies. In [2], Blais presents a review of 3D digitizing techniques with a focus on commercial systems. Pito and Bajcsy [25] present a simple system combining a fixed range camera with a turntable. Two more flexible systems are described in [4, 22], where a CMM is used in combination with a laser profile scanner. A laser profile scanner with two charge-coupled device (CCD) cameras mounted on a three-axis transport mechanism is described in [20]. A motorized rotary table with two degrees of freedom and a laser scanner mounted on a computer numerical control (CNC) machine with four degrees of freedom is described in [33]. Callieri et al. [3] present a system based on a range laser scanner mounted on the arm of an industrial robot in combination with a turntable. For details on view planning and automated 3D object reconstruction and inspection, see [31].

Fig. 2 A laser profile scanner mounted on an industrial robot with a turntable

2.2 Segmentation

Segmentation is a wide and complex task, both in terms of problem formulation and solution approach in different applications. Approaches described in the literature are usually classified into one of the following categories.

2.2.1 Edge-based approaches

Edge-based approaches attempt to detect discontinuities in the surface represented by the point data. Fan et al. [8] used local surface curvature properties to identify significant boundaries in range image data. Chen and Liu [6] segmented data from a CMM by slicing and fitting them with 2D NURBS curves; the boundary points were detected by calculating the maximum curvature of the fitted curve. Milroy et al. [23] used a semiautomatic edge-based approach for orthogonal cross section (OCS) models. Yang and Lee [37] identify edge points as curvature extremes by estimating the surface curvature. Sappa and Devy [30] propose an algorithm that very quickly processes large range images; it consists of two steps: first, a binary edge map is generated, and then a contour detection strategy extracts the different boundaries. Demarsin et al. [7] presented an algorithm to extract closed sharp feature lines, which is necessary to create a closed curve network.

Fig. 3 Main steps of planar segmentation using 2D profiles (flowchart): capture profile data → split profiles and fit line segments → find the longest line and use its neighbours to define a seed plane → grow the region around the seed → find next seed; when no seed is found, remove oversegmented planes and fit planes

2.2.2 Region growing-based approaches

Approaches based on region growing use local surface properties to detect continuous surfaces. Hoffman and Jain [12] segmented range images into surface patches and classified them as planar, convex, or concave based on a nonparametric statistical test. Besl and Jain [1] developed a segmentation method based on variable-order surface fitting. A region-growing algorithm based on numerical curvature estimation of mesh triangles was published by Sacchi et al. [28, 29]. Vieira and Shimada [35] automatically segmented a dense mesh into regions approximated by single surfaces; the algorithm iterates between region growing and surface fitting to maximize the number of connected vertices approximated by a single surface. Rabbani et al. [26] segmented a point cloud using local surface normals and point connectivity.

2.2.3 Hybrid approaches

Hybrid segmentation approaches combine the edge-based and region growing-based approaches. The approach proposed by Yokoya and Levine [38] divided a point cloud into surface primitives using biquadratic surface fitting; the segmented data were homogeneous in differential geometric properties and did not contain discontinuities. The Gaussian and mean curvatures were computed and used to perform the initial region-based segmentation. Then, after employing two additional edge-based segmentations from the partial derivatives and depth values, the final segmentation was applied to the initially segmented data. Checchin et al. [5] used a hybrid approach that combined edge detection based on the surface normals with region growing to merge oversegmented regions. Zhao and Zhang [39] employed a method based on triangulation and region grouping that uses edges, critical points, and surface normals. Gotardo et al. [11] used an improved robust estimator in an iterative process to extract planar and quadric surfaces from range images; additionally, a genetic algorithm was specifically designed to accelerate the process of surface extraction. Finally, general overviews and surveys of segmentation methods are provided by [1] and [24, 32, 36]. In addition, Hoover et al. [13] present a comprehensive experimental comparison of techniques for range image segmentation into planar patches.

Fig. 4 Photograph of object 1

3 Methodology

3.1 Problem formulation

An organized point cloud S is defined as a set of points in 3D, spatially sorted in a topologically triangular or rectangular grid. Planar segmentation of S is the partitioning of S into planar regions {R_1, R_2, R_3, ..., R_n}, where n is the number of planar regions in S, R_i ∩ R_j = ∅ for i ≠ j, and every R_i ⊆ S. See Section 2 for references to such methods. This paper is concerned with planar segmentation of data from a laser profile scanner. If such a scanner is moved along a path, it will output a sequence of M profiles F_j, where j = 1, 2, 3, ..., M. Each profile F_j is defined in its own coordinate system C_j, in 3D, as a sequence of 2D points P_{j,i} = (x_{j,i}, y_{j,i}), where i = 1, 2, 3, ..., N_j and x_{j,i−1} < x_{j,i} < x_{j,i+1}. The number of points in each profile, N_j, is related to the shape of the object and to possible occlusions of the laser source or the camera. Since 2D operations are usually faster than 3D ones, we want to investigate if the P_{j,i} can be used to speed up computations compared to existing methods operating on S, where all data are 3D. We will show that this is possible and also compare the new algorithm with an existing method based on point clouds in three experiments.

Fig. 5 Object 1 after line splitting

Fig. 6 Object 1 after planar segmentation using method 1

3.2 Planar segmentation using 2D profiles

Planar segmentation using 2D profiles is based on the fact that the image of a straight line projected on a planar surface is itself a straight line. Similar algorithms applied to range images are described in [14, 16]. A recursive splitting and line-fitting algorithm, based on scalar thresholds D_max and L_min, is therefore applied as follows (a sketch is given below). For each profile F_j, join P_{j,1} and P_{j,N_j} with a straight line L. If L is longer than L_min, find the point P_{j,dmax} with the largest orthogonal distance D to L. If D > D_max, split the profile at P_{j,dmax} and apply the test recursively on the new point sets until all D < D_max. For each point set that passes the test, fit a straight line L_{j,k}, where k = 1, 2, 3, ..., NL_j, using the least squares method; NL_j is then the number of straight lines found in F_j. So far, calculations involve only 2D data. Having fitted all possible lines in all profiles, we now apply a bottom-up region-growing algorithm where the end points of neighboring lines are tested for coplanarity based on a third scalar threshold, DP_max. As this is a 3D problem, all lines are now mapped to R³ using C_j.
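
The split step is closely related to classical split-and-merge polyline processing [14]. The sketch below is our own minimal rendition under the thresholds D_max and L_min defined above; the paper does not publish an implementation, and the SVD-based least-squares line fit is one standard choice:

```python
import numpy as np

def split_profile(pts, d_max, l_min):
    """Recursively split one profile (pts: (N, 2) array, x increasing) into
    runs of points that each deviate less than d_max from a straight line.
    Runs whose chord is shorter than l_min are discarded."""
    a, b = pts[0], pts[-1]
    chord = b - a
    length = float(np.hypot(chord[0], chord[1]))
    if length < l_min:
        return []                                    # too short: no line here
    t = chord / length                               # unit chord direction
    rel = pts - a
    d = np.abs(rel[:, 0] * t[1] - rel[:, 1] * t[0])  # orthogonal distances to the chord
    i = int(np.argmax(d))
    if d[i] > d_max:                                 # farthest point too far off:
        return (split_profile(pts[:i + 1], d_max, l_min)   # split there and
                + split_profile(pts[i:], d_max, l_min))    # recurse on both parts
    return [pts]                                     # accepted as one straight run

def fit_line(pts):
    """Least-squares line through an accepted run: centroid and unit
    direction (largest principal axis of the centered points)."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[0]
```

Each accepted run yields one line L_{j,k}; its extreme points are then mapped to 3D with the profile pose for the region-growing stage.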

A seed line and an initial plane are then selected using the following steps:

1. Find the profile F_seed containing the longest line L_{seed,lmax}. This is a 2D operation. If lines also exist in F_{seed−1} and F_{seed+1}, a seed is found. If not, apply the test again recursively until a seed is found or all lines have been processed.

2. Use the end points of L_{seed,lmax} and the start point of L_{seed+1,1} to define a plane.

3. Calculate the perpendicular distance DP between the end point of L_{seed+1,1} and the plane.

4. If DP ≤ DP_max, test the two end points of L_{seed−1,1} for coplanarity in the same way.

5. If the end points of L_{seed−1,1} pass the test, create an initial plane using the start point of L_{seed−1,1} and the two end points of L_{seed+1,1}.

6. If step 5 fails and F_{seed−1} or F_{seed+1} includes more than one line, an initial plane may still be found by repeating the steps above using these lines.
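
Steps 2–5 reduce to constructing a plane from three end points and comparing a perpendicular point-to-plane distance DP against DP_max. A minimal sketch under our own naming (the paper specifies the test, not the code):

```python
import numpy as np

def plane_from_points(p, q, r):
    """Plane through three 3D points: unit normal n and offset d such that
    n . x = d for every point x on the plane."""
    n = np.cross(q - p, r - p)
    n = n / np.linalg.norm(n)
    return n, float(n @ p)

def is_coplanar(x, n, d, dp_max):
    """Steps 3-4: perpendicular distance DP of point x to the plane,
    accepted when DP <= DP_max."""
    return abs(float(n @ x) - d) <= dp_max
```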

With an initial plane defined, we let the region grow by adding lines from F_{seed+2}, F_{seed+3}, etc., until a profile is found where no end points are coplanar with the initial plane or all profiles in this direction have been processed. Profiles in the other direction are processed the same way. The result is a set of lines flagged as belonging to a planar region. The remaining lines are then processed again from step 1 above until no further seed can be found.

In most cases, the algorithm could end here, but there is a special case that needs to be handled. A surface with sufficiently small curvature in the direction of the projected line will give rise to a profile that the splitting algorithm identifies as one or more straight lines. If the scanner is then moved in the direction of zero surface curvature, a sequence of lines will be created that seem to belong to a planar region; a cylinder scanned along its axis is such an example, see experiment 3 in Section 4. To avoid oversegmentation for this reason, a final test is applied as follows. For each region R, let P_r be all points used by the splitting algorithm to fit the lines that belong to R, and use each P_r to fit a plane using principal component analysis (PCA), see [21]. The mean perpendicular distance between P_r and the fitted plane quantifies the planarity of P_r; if this distance is larger than D_max, R is rejected. The main steps of the algorithm are shown in Fig. 3.
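
The final planarity test lends itself to a compact sketch. Below, PCA is realized with an SVD of the centered points, a standard equivalent formulation; names and signatures are ours:

```python
import numpy as np

def pca_plane(points):
    """Fit a plane to a 3D point set P_r with PCA [21]: the plane passes
    through the centroid, and its normal is the direction of least variance."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)   # rows of vt: principal directions
    return c, vt[-1]                       # centroid, unit normal

def mean_plane_distance(points, c, n):
    """Mean perpendicular distance between P_r and the fitted plane,
    used to quantify the planarity of the region."""
    return float(np.mean(np.abs((points - c) @ n)))
```

A region R would then be rejected when mean_plane_distance(points, c, n) > D_max.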

Table 1 Input/output for method 1

Objects    Npf    Nls    Np    tp    Dmax    Lmin    DPmax
Object 1   104     70     4     5    0.5     5       0.39
Object 2    90    153     6    39    0.4     5       0.9
Object 3   104    220     3     8    0.15    5       1.0

Npf number of profiles, Nls number of lines after splitting, Np number of planar regions after segmentation, tp processing time, Dmax threshold for line splitting, Lmin threshold for line splitting, DPmax threshold for region growing

Fig. 7 Object 1 after triangulation

3.3 Planar segmentation using 3D point clouds

In order to test the new algorithm, we have also implemented a bottom-up region-growing algorithm based on 3D point clouds and three threshold values: α, κ_max, and NT_min. The first step of this algorithm is to connect all points into a triangular mesh. This is relatively easy, since points from a profile scanner are ordered spatially within each profile and profiles are ordered in the sequence they were captured. The details of this meshing process are described in an earlier publication from the same research project, see [18]. With a triangular mesh, a seed triangle is selected as the triangle with the lowest mean curvature estimate, using an algorithm presented by Sacchi et al. in [28] and further developed in [29]. For more details on curvature estimation, see [9] and [10].

Fig. 8 Object 1 after planar segmentation using method 2, seed triangles in black

Table 2 Input/output for method 2

Objects    Nv       Nf      Np    tp     κmax     NTmin    αmax
Object 1   29,952   5,948    4     20    0.015      55     3.2
Object 2   25,920   6,564    6    157    0.015      55     3.2
Object 3   29,952   9,893    3    174    0.015     526     2.3

Nv number of points in the point cloud, Nf number of triangles after meshing, Np number of planar regions after segmentation, tp processing time, κmax threshold for seed selection, NTmin threshold for region growing, αmax threshold for region growing

The steps to calculate a mean curvature estimate are:

1. Calculate the weighted interpolated vertex normal N⃗_p for each vertex in the mesh using the triangles in its one-ring neighborhood. Assume there are m triangles sharing vertex i and A_j is the area of triangle j; then:

$$\vec{N}_{p_i} = \frac{\vec{N}_j A_j + \vec{N}_{j+1} A_{j+1} + \cdots + \vec{N}_m A_m}{A_{w_i}} \qquad (1)$$

Each triangle has three vertices p_1, p_2, and p_3; therefore:

$$\vec{N}_j = \frac{(p_2 - p_1) \otimes (p_3 - p_1)}{\|(p_2 - p_1) \otimes (p_3 - p_1)\|} \qquad (2)$$

$$A_j = \frac{1}{2}\,\|(p_2 - p_1) \otimes (p_3 - p_1)\| \qquad (3)$$

$$A_{w_i} = A_j + A_{j+1} + \cdots + A_m \qquad (4)$$

2. For each triangle c, calculate a compensated normal N⃗_c:

$$\vec{N}_{c_c} = \frac{\vec{N}_{p_1} A_{w_1} + \vec{N}_{p_2} A_{w_2} + \vec{N}_{p_3} A_{w_3}}{A_{w_1} + A_{w_2} + A_{w_3}} \qquad (5)$$

3. Calculate compensated triangle centers C_c:

$$C_{c_c} = \frac{p_1 A_{w_1} + p_2 A_{w_2} + p_3 A_{w_3}}{A_{w_1} + A_{w_2} + A_{w_3}} \qquad (6)$$

4. With j = 1, 2, 3, for each triangle i, calculate curvature estimates κ_t and κ_v:

$$\kappa_{t_j} = \frac{\|\vec{N}_{c_i} \otimes \vec{N}_{c_j}\|}{\|C_{c_i} - C_{c_j}\|} \qquad (7)$$

$$\kappa_{v_j} = \frac{\|\vec{N}_{c_i} \otimes \vec{N}_{p_j}\|}{\|C_{c_i} - p_j\|} \qquad (8)$$

5. Finally, the mean curvature estimate of triangle i is taken as the average of the maximum and minimum of the six curvature estimates κ_t and κ_v:

$$\kappa_{mean} = \frac{1}{2}(\kappa_{min} + \kappa_{max}) \qquad (9)$$

Fig. 9 Photograph of object 2

Fig. 10 Object 2 after line splitting

We now use the triangle with the lowest κ_mean as a seed and add adjacent triangles to the region as long as the deviation α between N⃗_{c_i} of the examined triangle and N⃗_{c_s} of the seed triangle is within a given threshold:

$$\alpha = \cos^{-1}(\vec{N}_{c_i} \cdot \vec{N}_{c_s}), \qquad 0 \le \alpha \le \pi \qquad (10)$$
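
For reference, a compact sketch of Eqs. 1–9 (our own code, not the authors'; it assumes ⊗ denotes the vector cross product, V is an (nv, 3) vertex array, F is an (nf, 3) index array, and neighbors lists the up-to-three edge-adjacent triangles of each triangle):

```python
import numpy as np

def triangle_normals_and_areas(V, F):
    """Per-triangle unit normals (Eq. 2) and areas (Eq. 3)."""
    p1, p2, p3 = V[F[:, 0]], V[F[:, 1]], V[F[:, 2]]
    c = np.cross(p2 - p1, p3 - p1)
    norms = np.linalg.norm(c, axis=1)
    return c / norms[:, None], 0.5 * norms

def vertex_normals(V, F, N, A):
    """Area-weighted interpolated vertex normals Np and weights Aw
    accumulated over each one-ring neighborhood (Eqs. 1 and 4)."""
    Np = np.zeros_like(V)
    Aw = np.zeros(len(V))
    for t in range(len(F)):
        for v in F[t]:
            Np[v] += N[t] * A[t]
            Aw[v] += A[t]
    return Np / Aw[:, None], Aw

def compensated(V, F, Np, Aw):
    """Compensated triangle normals Nc (Eq. 5) and centers Cc (Eq. 6)."""
    w = Aw[F]                                   # (nf, 3) weights Aw1..Aw3
    wsum = w.sum(axis=1, keepdims=True)
    Nc = (Np[F] * w[..., None]).sum(axis=1) / wsum
    Cc = (V[F] * w[..., None]).sum(axis=1) / wsum
    return Nc, Cc

def kappa_mean(i, neighbors, V, F, Np, Nc, Cc):
    """Mean curvature estimate of triangle i: average of the extreme
    values among the six estimates of Eqs. 7 and 8 (Eq. 9)."""
    ks = []
    for j in neighbors[i]:                      # edge-adjacent triangles
        ks.append(np.linalg.norm(np.cross(Nc[i], Nc[j]))
                  / np.linalg.norm(Cc[i] - Cc[j]))        # Eq. 7
    for v in F[i]:                              # the three corner vertices
        ks.append(np.linalg.norm(np.cross(Nc[i], Np[v]))
                  / np.linalg.norm(Cc[i] - V[v]))         # Eq. 8
    return 0.5 * (min(ks) + max(ks))            # Eq. 9
```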

We also add any rejected triangle that is surrounded on all three sides by triangles that were not rejected. The three corner points of such a triangle already belong to the region, so it is natural to also include the triangle itself. When no more triangles can be added to the current region, the remaining triangles are searched for the one with the lowest κ_mean, and the growing procedure starts again with a new region until all triangles have been processed. The process stops when no triangle can be found with a κ_mean lower than the threshold κ_max. Finally, regions with fewer than NT_min triangles are rejected.

Fig. 11 Object 2 after planar segmentation using method 1

Fig. 12 Object 2 after triangulation
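
A minimal sketch of the growing loop as described above, reusing Nc and the per-triangle estimates κ_mean from the previous sketch (names are ours; the extra rule that re-accepts a rejected triangle enclosed by accepted neighbors is omitted for brevity):

```python
import numpy as np

def grow_planar_regions(neighbors, Nc, kappa, kappa_max, alpha_max, nt_min):
    """Region growing over a triangle mesh. neighbors[i]: edge-adjacent
    triangles of triangle i; Nc: compensated normals (Eq. 5); kappa:
    per-triangle mean curvature estimates (Eq. 9); alpha_max in radians.
    Returns a region label per triangle (-1 = unassigned/rejected)."""
    label = np.full(len(kappa), -1)
    region = 0
    while True:
        free = np.flatnonzero(label == -1)
        if free.size == 0:
            break
        seed = free[np.argmin(kappa[free])]        # flattest remaining triangle
        if kappa[seed] >= kappa_max:
            break                                  # no acceptable seed left
        label[seed] = region
        stack = [seed]
        while stack:                               # grow outward from the seed
            t = stack.pop()
            for nb in neighbors[t]:
                if label[nb] != -1:
                    continue
                c = Nc[nb] @ Nc[seed] / (np.linalg.norm(Nc[nb])
                                         * np.linalg.norm(Nc[seed]))
                if np.arccos(np.clip(c, -1.0, 1.0)) <= alpha_max:  # Eq. 10
                    label[nb] = region
                    stack.append(nb)
        region += 1
    for r in range(region):                       # reject too-small regions
        if np.count_nonzero(label == r) < nt_min:
            label[label == r] = -1
    return label
```

Because every unassigned triangle with κ_mean < κ_max may become a seed, a low-curvature but curved surface such as the cylinder in experiment 3 starts many small growing attempts, which is the behavior that makes method 2 slow in that experiment (see Section 5).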

4 Experimental results

Three test objects have been manufactured from CAD models using high-precision rapid manufacturing technology and scanned using the equipment shown in Fig. 2. The same scan data have then been processed with (1) the method described in Section 3.2 and (2) the method described in Section 3.3.

4.1 Experiment 1

Test object 1 is a simple object with planar surfaces of different orientation, see Fig. 4. The object was measured by moving the scanner from left to right in the picture while capturing 104 profiles, each with a resolution of 288 points. Due to occlusion, only four of the planar surfaces were seen by the scanner. Figure 5 shows the result after line splitting and Fig. 6 after region growing using method 1. See Table 1 for additional data.

Fig. 13 Object 2 after planar segmentation using method 2, seed triangles in black

Fig. 14 Photograph of object 3

Figure 7 shows object 1 after triangulation using method 2 and Fig. 8 shows object 1 after segmentation using method 2. Black triangles are seeds, and surrounding gray triangles are identified by region growing. Both methods 1 and 2 have segmented the four planes correctly, but method 1 is four times faster, see Tables 1 and 2.

4.2 Experiment 2

Test object 2 is a copy of an object introduced by Hoschek et al. [15]. It is well known as a test object, used for example by Sacchi [28], Petitjean [24], and Várady [34] as a reference. It includes a number of planar and nonplanar surfaces, see Fig. 9. The object was measured by moving the scanner from left to right in the picture while capturing 90 profiles, each with a resolution of 288 points. Due to occlusion, only six of the planar surfaces were seen by the scanner. Figure 10 shows the result after line splitting and Fig. 11 after region growing using method 1. See Table 1 for additional data.

Fig. 15 Object 3 after line splitting

Fig. 16 Object 3 after planar segmentation using method 1

Figure 12 shows object 2 after triangulation using method 2 and Fig. 13 shows object 2 after segmentation using method 2. Both methods have segmented the six planes correctly, but method 1 is four times faster, see Tables 1 and 2.

4.3 Experiment 3

Test object 3 was chosen to illustrate how the two methods handle surfaces with relatively large radii. The lower left part of Fig. 14 shows a cylindrical surface with a large radius that blends into a planar surface. The object was measured by moving the scanner from left to right in Fig. 14 while capturing 104 profiles, each with a resolution of 288 points. Due to occlusion, only three of the planes were seen by the scanner.

Figure 15 shows the result after line splitting and Fig. 16 after region growing using method 1. Due to the large radius of the cylinder, the line-splitting algorithm has not rejected the lines in this area, but Fig. 16 shows that they are removed later in the process. See Table 1 for additional data.

Figure 17 shows object 3 after triangulation using method 2 and Fig. 18 shows object 3 after segmentation using method 2. Both methods have segmented the three planes correctly, but method 1 is now 25 times faster, see Tables 1 and 2.

Fig. 17 Object 3 after triangulation

Fig. 18 Object 3 after planar segmentation using method 2, seed triangles in black

5 Conclusions and future work

In this paper, we have presented (1) a new algorithm for planar segmentation of data from a laser profile scanner and (2) a traditional algorithm based on 3D point clouds. We have also presented the results from three experiments where data from the same measurements have been processed with both methods. The results show that both algorithms are equally accurate in the sense that they identify the same planar regions without over- or undersegmentation. The first algorithm, however, is several times faster than the second. If a low-accuracy system like an industrial robot is used as a carrier for the profile scanner, we also believe that the new algorithm has the potential to increase the accuracy of planes fitted to the segmented data. We have investigated this in one of the experiments and found a 10% increase, but more tests are needed to confirm this observation. The main differences between the two methods are:

1. Method 1 is based only on 2D profiles and scanner poses. Method 2 needs an extra pass to create a triangular mesh.

2. Method 1 uses a 2D method for profile splitting, which considerably reduces the amount of data used in the steps that follow. Seed selection is also a 2D operation. Method 2 uses a computationally expensive 3D method for seed selection that does not reduce the amount of data in the following steps.

3. Method 1 performs region growing in 3D on a reduced set of data. Method 2 uses all triangles in the mesh for region growing.

4. Method 1 needs an extra pass after region growing to cope with oversegmentation. Method 2 does not.

From a user point of view, the methods are similar in that they produce the same result and rely on the same number of threshold values with similar geometrical interpretation. The increased speed of the first method is explained by an early data reduction done in 2D. In the third experiment, method 1 is 25 times faster. This is explained by the large radius of the cylindrical surface in Fig. 14: in method 2, the triangles on this surface have a κ_mean less than κ_max and are therefore tested as seeds, and each triangle on the surface starts a growing process that tests all three neighbors before it stops. Lowering κ_max would prevent this but cause undersegmentation in other places.

It is natural to continue the work presented in this paper with profile-based segmentation of curved surfaces as well. This work is currently in progress, and preliminary results indicate that it is possible.

Acknowledgements We would like to thank Dr. Sören Larsson, who contributed to earlier work in this project, for helpful discussions. The project is funded by the School of Science and Technology, Örebro University, Sweden, and the Higher Education Ministry, Libya.

References

1. Besl P, Jain R (1988) Segmentation through variable-order surface fitting. IEEE Trans Pattern Anal Mach Intell 10(2):167–192

2. Blais F (2004) Review of 20 years of range sensor development. J Electron Imaging 13(1):231–243

3. Callieri M, Fasano A, Impoco G, Cignoni P, Scopigno R, Parrini G, Biagini G (2004) RoboScan: an automatic system for accurate and unattended 3D scanning. In: Proceedings of the 2nd symposium on 3D data processing, visualization, and transmission, pp 805–812

4. Chan V, Bradley C, Vickers G (2000) A multi-sensor approach for rapid digitization and data segmentation in reverse engineering. Trans ASME J Manuf Sci Eng 122:725–733

5. Checchin P, Trassoudaine L, Alizon J (1997) Segmentation of range images into planar regions. In: Proceedings of IEEE 3D digital imaging and modeling, Ottawa, pp 156–163

6. Chen YH, Liu CY (1997) Robust segmentation of CMM data based on NURBS. Int J Adv Manuf Technol 13:530–534

7. Demarsin K, Vanderstraeten D, Volodine T, Roose D (2007) Detection of closed sharp edges in point clouds using normal estimation and graph theory. Comput Aided Des 39(4):276–283

8. Fan T, Medioni G, Nevatia R (1987) Segmented description of 3D-data surfaces. IEEE Trans Robot Autom 6:527–538

9. Flynn P, Jain A (1989) On reliable curvature estimation. In: Computer vision and pattern recognition, IEEE proceedings CVPR '89, pp 110–116

10. Gatzke TD (2006) Estimating curvature on triangular meshes. Int J Shape Model 12:1–28

11. Gotardo PFU, Bellon ORP, Boyer KL, Silva L (2004) Range image segmentation into planar and quadric surfaces using an improved robust estimator and genetic algorithm. IEEE Trans Syst Man Cybern Part B Cybern 34(6):2303–2316

12. Hoffman RL, Jain AK (1987) Segmentation and classification of range images. IEEE Trans Pattern Anal Mach Intell 9:608–620

13. Hoover A, Jean-Baptiste G, Jiang X, Flynn PJ, Bunke H, Goldgof D, Bowyer K, Eggert D, Fitzgibbon A, Fisher R (1996) An experimental comparison of range image segmentation algorithms. IEEE Trans Pattern Anal Mach Intell 18(7):673–689

14. Horowitz S, Pavlidis T (1974) Picture segmentation by a directed split-and-merge procedure. In: Proceedings of the second international joint conference on pattern recognition, pp 424–433

15. Hoschek J, Dietz U, Wilke W (1998) A geometric concept of reverse engineering of shape: approximation and feature lines. In: Proceedings of the international conference on mathematical methods for curves and surfaces II, Lillehammer, 1997. Vanderbilt University, Nashville, pp 253–262

16. Jiang X, Bunke H (1994) Fast segmentation of range images into planar regions by scan line grouping. Mach Vis Appl 7(2):115–122

17. Larsson S, Kjellander J (2004) An industrial robot and a laser scanner as a flexible solution towards an automatic system for reverse engineering of unknown objects. In: Proceedings of ESDA04, 7th biennial conference on engineering systems design and analysis

18. Larsson S, Kjellander J (2006) Motion control and data capturing for laser scanning with an industrial robot. Robot Auton Syst 54:419–512

19. Larsson S, Kjellander J (2008) Path planning for laser scanning with an industrial robot. Robot Auton Syst 56:615–624

20. Lee KH, Park H, Son S (2001) A framework for laser scan planning of freeform surfaces. Int J Adv Manuf Technol 17:171–180

21. Lengyel E (2002) Mathematics for 3D game programming and computer graphics. Charles River Media, Hingham

22. Milroy M, Bradley C, Vickers G (1996) Automated laser scanning based on orthogonal cross sections. Mach Vis Appl 9:106–118

23. Milroy M, Bradley C, Vickers G (1997) Segmentation of a wrap around model using an active contour. Comput Aided Des 29(4):299–320

24. Petitjean S (2002) A survey of methods for recovering quadrics in triangle meshes. ACM Comput Surv 2(34):1–61

25. Pito R, Bajcsy R (1995) Solution to the next best view problem for automated CAD model acquisition of free-form objects using range cameras. In: Lumia R (ed) Modeling, simulation, and control technologies for manufacturing, vol 2596. SPIE, Bellingham, pp 78–89

26. Rabbani T, van den Heuvel FA, Vosselman G (2006) Segmentation of point clouds using smoothness constraint. In: Proceedings of ISPRS commission V symposium 'Image engineering and vision metrology', Dresden, pp 248–253

27. Rahayem M, Kjellander J, Larsson S (2007) Accuracy analysis of a 3D measurement system based on a laser profile scanner mounted on an industrial robot with a turntable. In: Proceedings of ETFA'07, 12th IEEE conference on emerging technologies and factory automation, pp 880–883

28. Sacchi R, Poliakoff J, Thomas P (1999) Curvature estimation for segmentation of triangulated surfaces. In: Proceedings of IEEE conference on 3-D digital imaging and modeling, pp 536–543

29. Sacchi R, Poliakoff J, Thomas P, Hafele K (2000) Improved extraction of planar segments for scanned surfaces. In: Proceedings of IEEE international conference on information visualization, pp 325–330

30. Sappa AD, Devy M (2001) Fast range image segmentation by an edge detection strategy. In: Third international conference on 3-D digital imaging and modeling, pp 292–299

31. Scott WR, Roth G, Rivest J-F (2003) View planning for automated three-dimensional object reconstruction and inspection. ACM Comput Surv 35(1):64–96

32. Shamir A (2008) A survey on mesh segmentation techniques. Comput Graph Forum 27(6):1539–1556

33. Son S, Park H, Lee KH (2002) Automated laser scanning system for reverse engineering and inspection. Int J Mach Tools Manuf 42:889–897

34. Várady T, Martin RR, Cox J (1997) Reverse engineering of geometric models—an introduction. Comput Aided Des 29:255–268

35. Vieira M, Shimada K (2005) Surface mesh segmentation and smooth surface extraction through region growing. Comput Aided Geom Des 22(8):771–792

36. Woo H, Kang E, Wang S, Lee KH (2002) A new segmentation method for point cloud data. Int J Mach Tools Manuf 42:167–178

37. Yang M, Lee E (1999) Segmentation of a measured point data using a parametric quadric surface approximation. Comput Aided Des 31:449–457

38. Yokoya N, Levine MD (1989) Range image segmentation based on differential geometry: hybrid approach. IEEE Trans Pattern Anal Mach Intell 11:643–649

39. Zhao D, Zhang X (1997) Range data based object surface segmentation via edges and critical points. IEEE Trans Image Process 6(6):826–830