Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems

October 9-15, 2006, Beijing, China

A Real-Time Wall Detection Method for Indoor Environments

Hadi Moradi, Jongmoo Choi, Eunyoung Kim, Sukhan Lee
Intelligent Systems Research Center

Sungkyunkwan University
Suwon, Korea

{moradi, jmchoi, lsh}[email protected]

Abstract - This paper presents an effective, real-time approach for detecting walls in indoor environments. The approach relies on the fact that the space behind an opaque wall is not visible. Thus, to detect the walls in an indoor environment, a set of hypothetical walls, based on ceiling edges or ground-level edges, is considered, and their validity is checked against the point cloud generated by a sensor. A certainty factor is calculated for each detected wall and is updated continuously based on newly gathered sensory information. Furthermore, the certainty of the walls can be updated using other sources of information for better and more reliable wall detection. The novelty of this approach lies in its capability to handle environments with texture-less walls in real time. The algorithm has been implemented in simulation and tested in a real environment, and has shown effective, reliable, real-time performance.

Index Terms - Wall detection, scene analysis, map building.

I. INTRODUCTION

Wall detection is an issue that has been addressed in many areas such as Simultaneous Localization and Mapping (SLAM) [7] and scene interpretation. Wall detection, as part of scene interpretation and 3D modeling of the environment, is needed in many applications such as facility management, architecture, and search and rescue. In other words, wall detection is needed for two purposes: 1) Feature registration: given a map of an area, registering the sensory data corresponding to a wall in the real environment against the wall presented in the provided map allows the robot to localize itself. 2) Feature detection/extraction: detecting features in the vast sensory data allows better scene interpretation and reduces the amount of data that needs to be stored. For instance, 5,000 points detected in a scene by a stereo camera or a laser range finder can be replaced by the corresponding wall, which can be represented by a very small number of points (typically, three points for a rectangular wall).

In this paper, we present an intuitive, yet very effective and reliable method to detect walls. The core idea is based on the fact that the objects and environment behind an opaque wall are not visible. Consequently, any hypothetical wall in the environment can be verified using this simple fact. Fig. 1 shows an example in which data are captured and the walls are detected by a robot using this verification method, the so-called Walls without Contradiction method. The verified walls are represented in the environment (Fig. 1(c)). The hypothetical walls are generated based on: a) edges in the ceiling, and b) edges detected in a ground-level scan using sonar or a laser range finder. The major strength of this method is that it works in environments that include texture-less walls. Furthermore, using the edges in the ceiling, which are few in number, gives fast processing times. Finally, ceiling features suffer less from obstruction, since the ceiling area is usually visible.

In Section II, we review the work related to this topic. The definitions and the problem statement are given in Section III. The drop down and rise up wall concepts are discussed in Sections IV and V, respectively. Wall certainty analysis is discussed in Section VI. The confidence map, describing the area in which the robot is confident in its wall detection, is explained in Section VII. Extensions and discussion, and the experimental results, are addressed in Sections VIII and IX, respectively. The paper ends with conclusions and future work.


Fig. 1 a) The point cloud generated by a stereo vision camera, in which there is no texture on the walls themselves; b) the edges detected in 2D; c) the walls without contradiction.


II. BACKGROUND

Feature detection and extraction has received a great deal of attention over the past two decades. Researchers have used passive sensors such as stereo vision [5] or active sensors such as laser range finders [2, 3] to capture 3D point clouds and interpret them as features in the environment.

Delaunay triangulation has been one of the methods that uses 3D points to produce a surface model [9]. Faugeras et al. [10] used a constrained 3D triangulation to retrieve polyhedral models. The major drawbacks of this approach are the need for adequate texture on the surface and the running time.

A typical method for plane extraction is the region growing approach [2], in which an initial mesh is iteratively grown by adding neighbouring triangles. This approach requires many computational procedures. Furthermore, it will not work if the walls do not have enough texture (e.g. Fig. 1).

Another widely used algorithm is RANSAC [1], in which a model is fitted to the data. Lee et al. [6] used a combination of 3D point clouds (ICP), photometric information (SIFT), and RANSAC to extract the planes in a workspace using a stereo vision camera. However, this approach fails in texture-less environments that do not produce enough point-cloud data. Consequently, in indoor environments, where texture-less walls are dominant, a dense depth map cannot be acquired for effective wall detection. On the other hand, the point clouds generated from paintings or other artificial features on the walls would cover only a small bounded region of the wall, and cannot be easily expanded to the full-scale wall.

Expectation Maximization (EM) based plane extraction was proposed by Liu et al. [3]; it reduces the plane extraction problem to a computation of eigenvalues by introducing Lagrange multipliers. Unfortunately, this method cannot determine the number of planes in the data set [2].

Nuchter et al. [4] used a semantic scene analysis that applies the RANSAC and ICP algorithms to extract planes from the point cloud provided by a 3D laser range finder. The major drawback of this method is the need for a 3D laser range finder, which may not be feasible for a large number of robots. Furthermore, this approach would not be real-time due to the scanning procedure needed by the laser range finder.

Structure from motion is another approach; it starts with local features, such as Harris corners, SIFT, etc., and the matched features in consecutive views provide the fundamental matrix. The other views in the sequence are related to the estimated structure by matching. This method also relies on texture and may not be feasible in real time [11].

Zhang [12] has recently proposed a stereo method for texture-less environments, which would be very useful for all wall detection methods. This method combines both geometric (optical flow) and photometric (intensity change) cues to compute dense shape that is accurate even in completely uniform, untextured environments.

In this paper, we propose a method that uses stereo vision to detect a point cloud that includes the ceiling, and extracts the edges in the ceiling. Hypothetical planes are then generated from the ceiling edges, and a wall is contradicted if a point cloud is detected behind it. This method is advantageous over other methods because: a) it works in texture-less environments, b) it is less sensitive to obstruction, since it relies on ceiling information, which is generally unobstructed, and c) it runs in real time. Furthermore, we show a similar approach that detects ground-level edges using sonar or a laser range finder and generates the corresponding hypothetical walls. This approach is suitable for hallways and corridors with little wall obstruction.

III. DEFINITIONS AND PROBLEM STATEMENT

We use the following terms in the course of this paper:

Hypothetical wall: A wall that is hypothetically considered to stretch from a ceiling edge down to the floor.

Real wall: A wall that physically exists in the environment, stretching from the ceiling to the floor.

In this study, we assume that the walls are opaque, not transparent. In other words, the walls block any light from traveling through them. Also, we consider indoor environments, not outdoor ones. Finally, "texture-less environment" is used throughout the paper to refer to an environment with texture-less walls. Of course, no texture at all, e.g. even at the intersections of walls, means there is no way to detect edges, and no algorithm would work. Fig. 1(a) shows a texture-less environment, in which the 3D point cloud is only generated around the ceiling edge, not on the wall.

Problem Statement: Given the 3D image sequence provided by a stereo camera, determine the walls in the image(s).

IV. DROP DOWN WALLS

Consider the set of edges in the ceiling, which may belong to walls, ceiling lamps, air conditioning vents, etc. Among all the edges in the ceiling, only the edges that belong to walls or columns have attached surfaces that block the view, assuming that the walls are not transparent. Consequently, from the set of all hypothetical walls, only real walls would block the view; the rest are effectively transparent. In other words, no point cloud would be detected behind a real wall, while some points or objects should be seen behind the rest. It is rarely possible that a hypothetical wall is accepted merely because no point happened to be seen behind it. Such a wall would easily be disqualified in future captures by seeing behind it. Furthermore, keeping such a wall would not be dangerous to the robot's operation (a conservative approach).

Fig. 2(a) shows a hypothetical ceiling in which there are many edges. All the hypothetical walls based on the ceiling edges are shown in Fig. 2(b) (the front walls are not shown for clarity). Fig. 2(c) shows that all the virtual walls are rejected except one, the one with no point cloud behind it. This may happen due to missing points, a bad view angle, etc. Future captures of the scene would eliminate this wall too.

Fig. 3 shows the basic walls without contradiction algorithm, in which SHW stands for the Set of Hypothetical Walls and SRW stands for the Set of Real Walls.
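To make the flow of Fig. 3 concrete, the following minimal Python sketch implements the hypothesize-and-verify loop. The vertical-plane representation and all names (HypotheticalWall, is_behind, etc.) are our illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

# Minimal sketch of the basic Walls without Contradiction loop (Fig. 3).
# The vertical-plane wall representation is an illustrative assumption.

@dataclass
class HypotheticalWall:
    nx: float  # unit normal of the vertical plane (x-y components)
    ny: float
    d: float   # plane offset: points with nx*x + ny*y > d lie behind

    def is_behind(self, point):
        x, y, _z = point
        return self.nx * x + self.ny * y > self.d

def walls_without_contradiction(shw, point_cloud):
    """SHW -> SRW: keep only the hypothetical walls with no 3D point
    visible behind them (a real, opaque wall blocks the view)."""
    return [w for w in shw
            if not any(w.is_behind(p) for p in point_cloud)]
```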



Fig. 3 Basic walls without contradiction algorithm

Fig. 2 a) The edges in the ceiling can belong to the actual walls or to the lamps, air conditioning vents, etc.; b) the hypothetical walls are shown, and the objects or point cloud visible in the environment are shown in red; c) the walls with contradiction are rejected and all other hypothetical walls are kept.


V. RISE UP WALLS

Similar to the drop down walls, the edges detected at ground level using a sonar or a laser range finder can be used to generate a set of hypothetical walls rising up to the ceiling. The main drawback of this approach is the huge number of edges that may be detected at ground level. To avoid this computational cost, only the edges vertically corresponding to ceiling edges are considered. This filters out the majority of the ground-level edges, especially in crowded environments.
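As a rough sketch of this filtering step (our illustration; the edge representation as endpoint pairs and the tolerance value are assumptions), a ground edge is kept only when both of its endpoints fall vertically under some ceiling edge:

```python
import math

def rise_up_candidates(ground_edges, ceiling_edges, xy_tol=0.1):
    """Keep ground-level edges that lie vertically below some ceiling
    edge; edges are pairs of (x, y, z) endpoints, and xy_tol is an
    illustrative tolerance in metres."""
    def xy(seg):  # drop the height component of both endpoints
        (x1, y1, _), (x2, y2, _) = seg
        return (x1, y1), (x2, y2)

    kept = []
    for g in ground_edges:
        g1, g2 = xy(g)
        for c in ceiling_edges:
            c1, c2 = xy(c)
            # accept either endpoint ordering of the ceiling edge
            if (math.dist(g1, c1) < xy_tol and math.dist(g2, c2) < xy_tol) or \
               (math.dist(g1, c2) < xy_tol and math.dist(g2, c1) < xy_tol):
                kept.append(g)
                break
    return kept
```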

The rise up walls method is very effective especially in corridors or hallways, in which there are not many obstacles in front of the walls. Furthermore, we show how this approach may be used to complement the drop down method for better wall detection (Sec. VIII.B).

VI. CERTAINTY OF THE WALLS

In Section IV, the basic approach for accepting and rejecting hypothetical walls was discussed. This approach works if the captured point clouds are accurate. However, considering the error in stereo vision, a wall with a point cloud detected behind it may not be rejected outright; rather, a certainty value may be assigned to it, representing how certain we are that it is a real wall. Consequently, it is crucial to know the approximate error of the 3D points, since the positions of the points are used to verify the validity of a wall in this approach. Furthermore, the probability of outliers or missing points should be considered. Finally, this certainty can be updated using other methods and sensors, which facilitates sensor fusion.

The error in stereo vision has been well studied, and it has been shown that the major error occurs in the depth measurement. For instance, in our experiments the x and y errors are less than 1 cm at a 5 m distance, while the z (depth) error increases dramatically, up to around 15 cm.

The above discussion can be formalized as follows. The certainty of a hypothetical wall, i.e. $c_i$, at time t conditionally depends on all the sensory readings from time 1 to t, i.e. $r_1, \ldots, r_t$:

$p(c_i \mid r_1, \ldots, r_t)$    (1)

Since the walls are static, the static environment condition is satisfied and the sensory readings are considered independent, i.e.:

$p(r_t \mid r_1, \ldots, r_{t-1}, c_i) = p(r_t \mid c_i)$    (2)

By inserting (2) into the conditional probability formula, (3) results:

$p(c_i \mid r_1, \ldots, r_t) = \dfrac{p(r_t \mid c_i)\, p(c_i \mid r_1, \ldots, r_{t-1})}{p(r_t \mid r_1, \ldots, r_{t-1})}$    (3)

For computational reasons the log-odds $l_t$ is used, resulting in (4):

$l_t^i = \log\dfrac{p(c_i \mid r_t)}{1 - p(c_i \mid r_t)} + \log\dfrac{1 - p(c_i)}{p(c_i)} + l_{t-1}^i$    (4)

in which $p(c_i)$ is the prior probability of the hypothesis being a wall, and $p(c_i \mid r_t)$ is the probability of a hypothetical wall being a real wall based on the current sensory reading.
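A direct transcription of the recursive update (4) into Python might look as follows; this is a sketch under the usual log-odds conventions, and the helper names are ours. With the common prior p(c_i) = 0.5 the prior term vanishes, so contradiction-free captures steadily raise the certainty.

```python
import math

def logit(p):
    """Log-odds of a probability, clamped away from 0 and 1."""
    p = min(max(p, 1e-6), 1.0 - 1e-6)
    return math.log(p / (1.0 - p))

def update_wall_logodds(l_prev, p_c_given_r, p_c_prior=0.5):
    """Eq. (4): l_t = logit(p(c_i | r_t)) - logit(p(c_i)) + l_{t-1}."""
    return logit(p_c_given_r) - logit(p_c_prior) + l_prev

def prob_from_logodds(l):
    """Recover p(c_i | r_1, ..., r_t) from the accumulated log-odds."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))
```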


Fig. 4 The projection of a point on a wall, shown in the world frame (wf) with the projected height (PH). The higher the projection, the lower the chance of rejecting the wall, since the point is closer to the wall edge.

If there were a perfectly valid point detected behind a hypothetical wall, then $p(c_i \mid r_t) = 0$. However, due to the vision error the detected points are not accurate, and $p(c_i \mid r_t)$ should be calculated based on the accuracy of the vision system. Fig. 4 shows the point cloud around an edge in the ceiling; these points belong to a wall or the ceiling and are not really behind the wall. Consequently, the points around the edge are not reliable for contradicting a hypothetical wall. In contrast, the points away from the ceiling, i.e. closer to the floor, or farther behind the wall, are more reliable for rejecting a hypothetical wall. Intuitively, two factors are considered when using the points: a) the further the point is from the wall, the higher the chance that it is a point separate from the wall; b) the closer the point is to the floor, the higher the chance that it does not belong to the wall or ceiling.

In general, the certainty of the wall based on the given point can be calculated as follows:

$p(c_i \mid r_t) = 1 - p_{behind\ wall}\,(1 - p_{noise})$    (5)

in which $p_{noise}$ is the probability of the point being noise and $p_{behind\ wall}$ is the probability of the point being behind a wall.

Considering the above factors, we propose two different methods to calculate $p_{behind\ wall}$: a) the Projected Point Method (PPM) and b) the Non-Ceiling Point Method (NCPM).

PPM relies on the above intuitions, which are combined into a single parameter called the Projected Point (PP), representing how far and how high the point is relative to the wall. Fig. 4 shows a sample point cloud and its corresponding PP. PP can be simply calculated using (6), in which s is the parametric value for the intersection of the P1P2 line segment with the projection onto the ceiling of the line connecting wf to P, and $P_z$ is the height of P, i.e. the point behind the hypothetical wall:

$PP = P_z \cdot s$    (6)

A smaller PP means that the point behind the hypothetical wall is either further away from the wall or further from the ceiling. In both cases, it is more probable that the point is a valid point behind the wall. Consequently, PP is used as a measure to calculate the certainty of a point being behind the wall. To simplify the calculation without losing generality, PP is calculated based on the average of all the points behind the wall. Finally, $p_{behind\ wall}$ is calculated as shown in (7), in which CeilingHeight is the height of the current hypothetical wall and $PP_t$ is the Projected Point at time t:

$p_{behind\ wall} = \dfrac{CeilingHeight - PP_t}{CeilingHeight}$    (7)

In the second method, i.e. NCPM, the average of the points behind the hypothetical wall that are below a certain height is calculated. Then the certainty of the hypothetical wall is calculated using (8), in which l is the distance of the average point to the hypothetical wall¹, z is the distance to the point, and k is a coefficient:

$p_{behind\ wall} = \begin{cases} l / (k z^2), & l < k z^2 \\ 1, & l \ge k z^2 \end{cases}$    (8)

¹ These two formulas, given to calculate the probability of a point being behind a hypothetical wall, are simplified for fast processing. A more accurate probability calculation may be done at the expense of CPU time; future work will investigate the trade-off between the simplified and accurate versions.
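The two measures transcribe directly from (5), (7), and (8). The sketch below follows our reading of the reconstructed formulas, with all parameter names ours:

```python
def p_behind_wall_ppm(pp_t, ceiling_height):
    """Projected Point Method, eq. (7): the lower the (average)
    projected point, the more likely the points lie behind the wall."""
    return (ceiling_height - pp_t) / ceiling_height

def p_behind_wall_ncpm(l, z, k):
    """Non-Ceiling Point Method, eq. (8): l is the distance of the
    average sub-ceiling point to the hypothetical wall, z its range,
    and k a sensor-dependent coefficient; the value is capped at 1."""
    return min(l / (k * z * z), 1.0)

def p_wall_given_reading(p_behind, p_noise):
    """Eq. (5): p(c_i | r_t) = 1 - p_behind_wall * (1 - p_noise)."""
    return 1.0 - p_behind * (1.0 - p_noise)
```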

PPM has an advantage over NCPM in that the points belonging to the edge of a real wall are used to reject a hypothetical wall in front of that real wall. On the other hand, NCPM reduces the effect of windows and reflective objects (explained in the next section).

VII. CONFIDENCE MAP

While the robot constructs the walls in the environment, it is possible to overlook some of the walls, since the edges belonging to those walls are not visible or the walls are behind other walls. Consequently, it is important to formulate the robot's confidence about which parts of the area have been covered and their walls detected. For this purpose, the confidence map is defined as a grid-based map uniformly partitioning the environment into 2D cells. Each grid cell carries a confidence value ranging from 0, no confidence at all, to 1, highly confident that there is no wall.

The sources of information used to calculate the confidence map are: a) the detected point cloud: it shows how far the environment is visible. Outliers will limit the confidence up to the outlier point, while missing points may increase the confidence. b) The camera field of view: only the confidence of the cells in the camera field of view is changed. c) The hypothetical walls: they implicitly represent the areas in which there are no walls.





Fig. 5 The confidence map: the blue cells (a1) show zero confidence, since they are not visible. The camera view area (a2) receives close to 100% confidence (not 100%, since there could be missing points). The areas behind the walls (a4) receive confidence proportional to their certainty.

The area in front of a low-certainty wall has higher confidence, since some points have been seen behind it and there could not be any wall in front of it. On the other hand, the area in front of a high-certainty wall, let us call it Wh, also has higher confidence, because if there were a wall in front of it, it would block the view of the ceiling edge of Wh. Thus there can be no wall between the camera and a high-certainty wall. Finally, the area behind a high-certainty wall is obscured, and the confidence drops there. In contrast, we have no further information about the area behind low-certainty walls; thus they provide no further information.

Based on the above discussion, the confidence for the area in front of wall k is calculated using (9):

$Conf_k = \begin{cases} 1 - p(c_k), & p(c_k) < 0.5 \\ p(c_k), & p(c_k) \ge 0.5 \end{cases}$    (9)

The confidence for the area in front of the point cloud is fixed at $1 - p_{missing\ point}$ (note that the effect of distance error is ignored). The total confidence Conf(i, j) for cell (i, j) at time t should be calculated using the conditional probability from time 1 to t, combining these two different confidence values². Fig. 5 shows a sample environment with two walls, the left wall with 30% and the right one with 60% certainty. The higher the certainty of a wall, the lower the confidence in the area behind it, because the space behind such a wall is not visible (e.g. area a4 in Fig. 5).

² Assuming independent walls and captures, a formula similar to (4) can be derived to calculate the overall confidence.
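As an illustration only, the following sketch maintains a per-cell confidence grid using eq. (9) for the areas in front of walls and a fixed value in front of the point cloud. The max-combination rule and all names are our assumptions; the paper fuses captures with a conditional-probability update analogous to (4).

```python
def front_confidence(p_ck):
    """Eq. (9): cells in front of a wall are likely free whether the
    wall is very uncertain (points were seen through it) or very
    certain (a closer wall would have occluded its ceiling edge)."""
    return 1.0 - p_ck if p_ck < 0.5 else p_ck

def update_confidence_map(conf, visible_cells, front_cells_by_wall,
                          wall_certainty, p_missing_point=0.05):
    """One capture's update of a 2D confidence grid `conf` (a dict or
    array indexed by (i, j) cells); a simple max-combination stands in
    for the paper's probabilistic fusion."""
    for cell in visible_cells:  # cells inside the camera field of view
        conf[cell] = max(conf[cell], 1.0 - p_missing_point)
    for k, cells in front_cells_by_wall.items():
        c_front = front_confidence(wall_certainty[k])  # eq. (9)
        for cell in cells:
            conf[cell] = max(conf[cell], c_front)
    return conf
```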

VIII. EXTENSIONS AND DISCUSSIONS

In this section, we address a few issues with the basic Walls without Contradiction method and discuss solutions.

A. Merging Walls

Since the edge detection produces many small edges instead of a single accurate edge corresponding to each wall, the produced walls need to be merged for a better and more efficient representation of the environment.



Fig. 6 Merging planes based on their lengths, certainty values, and angle. P1P2 represents the merged plane. The original planes are extended to match each other before calculating the merged wall.

Consequently, all planes which: a) are closer to each other than a given threshold, b) have normals closer than a given threshold, and c) have certainties closer than a given threshold, are merged together. The certainty factor of such a merged wall is calculated using (10):

$p(c_{combined}) = \dfrac{p(c_i)\,l_i + p(c_j)\,l_j}{l_i + l_j}$    (10)

in which $l_i$ and $l_j$ represent the lengths of walls i and j. To calculate the new merged wall, a naive approach is to simply take the ends of each wall and interpolate between them. However, a more reliable approach is to calculate the end points considering the certainty of each wall, and then interpolate between the two end points. Fig. 6 shows a case in which two planes are to be merged. The planes are extended to match each other's sides. The new merged plane's sides are calculated using (11):

$P_1 = (l_1\,p(c_1)\,P_{12} + l_2\,p(c_2)\,P_{22}) \,/\, (l_1\,p(c_1) + l_2\,p(c_2))$
$P_2 = (l_1\,p(c_1)\,P_{11} + l_2\,p(c_2)\,P_{21}) \,/\, (l_1\,p(c_1) + l_2\,p(c_2))$    (11)

B. Missing or Disjoint Walls

The Walls without Contradiction method has limitations that should be considered and dealt with accordingly. For instance, the method assumes opaque walls, which may not hold in reality, where there are glass walls, doors, and windows. The space behind such a wall is visible, so its certainty would be very low. To overcome this problem, other sensors that are not sensitive to transparent walls, such as sonars, can be used to further improve the accuracy of wall detection. In the case of windows, the rise up wall method can also be used to determine the possibility of a window's existence. Doors should be handled using semantic scene analysis.

Another limitation is dealing with slanted edges, which stems from the vertical-wall assumption. One method to overcome this problem is to use sonar or laser to measure the distance of the hypothetical wall from the camera about 0.5 meters below ceiling level. The measured distance can belong to: a) a slanted wall, or b) a wall or object behind a vertical wall. In the case of a slanted wall, another measurement at a different height would confirm the existence of the slanted wall. Semantic scene analysis and corner formation are needed to deal with slanted walls.

C. Time-based Certainty Decay

It is possible to register a wall based on a noisy edge, which does not belong to a real wall. This situation mainly creates walls behind the real walls. In other words, such a wall would not be visible in future captures. This fact is the basis for reducing the certainty of a wall over time: if an already-registered wall is not viewable in the current capture while it is in the camera's field of view, its certainty is reduced accordingly.
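In log-odds form this decay can be a fixed subtraction per unobserved capture; the decay constant below is an illustrative choice, not a value from the paper:

```python
def decay_certainty(l_wall, in_fov, observed, decay=0.2):
    """Sec. VIII.C sketch: if a registered wall lies in the camera's
    field of view but its ceiling edge is not re-observed, lower its
    log-odds certainty; otherwise leave it unchanged."""
    return l_wall - decay if (in_fov and not observed) else l_wall
```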

D. Selecting Candidate Edges

As mentioned earlier, the algorithm uses the edges in the ceiling to create the hypothetical walls. The following four criteria are used to select candidate edges; a code sketch applying them follows below.

- Being above a certain height: edges below a given height threshold do not belong to the ceiling.

- Being closer than a given distance: due to the camera's accuracy, edges farther away than a given distance are very noisy and are rejected. This criterion eliminates many edges created by reflective objects.

- Limited tilt: it is assumed that the ceiling has limited tilt. Thus edges with high tilt, which are normally created by noise and especially by lights, are rejected.

- Not being along the camera's view: edges aligned with the camera's viewing direction cannot belong to walls, because the camera cannot observe such edges reliably. In other words, these edges are created by noise and are rejected.

Using the above four criteria generates a much smaller set of edges out of the total edges in a given picture. On average, there was an 88% reduction in the number of edges (from 328 edges in the whole image to 37 candidate edges).
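The four criteria map onto a straightforward filter. The sketch below assumes edges given as pairs of 3D endpoints, and every threshold value is an illustrative parameter rather than the paper's setting:

```python
import math

def select_candidate_edges(edges, camera_pos, min_height=2.0,
                           max_range=5.0, max_tilt_deg=10.0,
                           max_view_align_deg=10.0):
    """Apply the four candidate-edge criteria of Sec. VIII.D.
    Edges are ((x, y, z), (x, y, z)) endpoint pairs."""
    kept = []
    for p, q in edges:
        mid = tuple((a + b) / 2 for a, b in zip(p, q))
        length = math.dist(p, q)
        rng = math.dist(camera_pos, mid)
        if mid[2] < min_height:          # 1) below the ceiling level
            continue
        if rng > max_range:              # 2) too far away -> too noisy
            continue
        tilt = math.degrees(math.asin(abs(q[2] - p[2]) / length))
        if tilt > max_tilt_deg:          # 3) ceiling has limited tilt
            continue
        view = tuple(m - c for m, c in zip(mid, camera_pos))
        dot = abs(sum((b - a) * v for a, b, v in zip(p, q, view)))
        if dot / (length * rng) > math.cos(math.radians(max_view_align_deg)):
            continue                     # 4) edge lies along the view ray
        kept.append((p, q))
    return kept
```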

IX. EXPERIMENTAL RESULTS

We implemented the proposed algorithm using a Bumblebee camera installed on the gripper of a mobile manipulator. The scenes were captured from the ceiling, and the hypothetical planes were constructed based on the captured scene. Also, a SICK laser range finder was used to capture the 2D ground-level point cloud.

Since the edge detection is not perfect and produces many small edges, it is possible to have gaps between the detected sections of a long wall. These gaps can be patched using other methods, such as semantic analysis of the scene [4].

Fig. 1 shows an example of building the walls for the corner of a room. The detection and verification of each hypothetical wall took about 70 ms, which includes edge detection, wall generation, and wall certainty calculation. Fig. 7 shows several steps of the wall detection process for the case shown in Fig. 1. Figs. 7(a) and (b) show all the hypothetical walls, followed by the rejection of the low-certainty walls (Figs. 7(c) and (d)).


Fig. 7 a) and b) All the hypothetical walls detected in the environment; c) and d) the hypothetical walls above 70% certainty; e) and f) the hypothetical walls after merging. The two small sections of wall at the corner are represented as one, since the walls are very small, 15 cm each, and the accuracy of the camera is not enough to distinguish between them.

Finally, the walls that satisfied the merging condition were merged (Figs. 7(e) and (f)). The results show successful wall detection in a texture-less environment.

Fig. 8 shows a complex corridor with some transparent walls; in addition, the right-side walls are covered by a pipe. Fig. 8(b) shows the rise up walls, generated using laser range data, and the drop down walls, generated using stereo vision. It can be seen that the vision-generated walls are fairly close to the laser-generated walls.

Fig. 9 shows the confidence map for the hallway example, i.e. Fig. 8. Fig. 9(a) shows the confidence based on the detected point cloud. The confidence map is further improved using the detected wall information (Fig. 9(b)). Fig. 9(c) shows the confidence map after six captures from different angles.

Fig. 9 Confidence map: a) the confidence map, for 76% and higher confidence, based on the point cloud. As can be seen, there were some outlier points resulting in a lack of confidence in front of the robot (a1); combining the wall information eliminated this problem. b) The confidence map after including the wall information (a2 points to the location of a low-certainty wall). c) The confidence map after a few captures: the small cells on the right side are generated due to the gap between the walls; they were eliminated after a few captures.

X. CONCLUSION AND FUTURE WORK

An effective and intuitive approach for detecting walls in indoor environments is proposed, which works in texture-less environments in real time. A certainty factor is calculated for the detected walls and can be updated continuously based on newly gathered sensory information.



Fig. 8 Drop down and rise up walls for a corridor: a) the initial image of the corridor; b) the rise up walls using the laser point cloud; c) the rise up walls (gray) compared to the drop down walls (blue); d) the rise up walls merged with the drop down walls. The drop down walls are generated and merged based on multiple captures.

Furthermore, the detected walls can be fused with other sources of information for better and more reliable wall detection. The algorithm has been implemented in simulation and tested in a real environment, and has shown effective and reliable results.

Future work will include, but is not limited to, handling windows, doors, tilted walls, and curved corners. Furthermore, the wall verification can be performed on graphics hardware, which can greatly improve the speed of the process. Finally, it is important to study the effect of newly sensed data on an already-detected wall: if the wall is long and the sensed data is gathered in a specific area, then the wall should be split rather than completely rejected.

ACKNOWLEDGMENT

This work was performed for the Intelligent Robotics Development Program, one of the 21st Century Frontier R&D Programs funded by the Ministry of Science and Technology of Korea.

REFERENCES

[1] The RANSAC (Random Sample Consensus) Algorithm, http://www.dai.ed.ac.uk/CVonline/LOCALCOPIES/FISHER/RANSA.
[2] D. Hahnel, W. Burgard, and S. Thrun, "Learning Compact 3D Models of Indoor and Outdoor Environment with a Mobile Robot," Robotics and Autonomous Systems, 44(1):15-27, 2003.
[3] Y. Liu, R. Emery, D. Chakrabarti, W. Burgard, and S. Thrun, "Using EM to Learn 3D Models of Indoor Environments with Mobile Robots," in Proceedings of the 18th Conf. on Machine Learning, July 2001.
[4] A. Nuchter, H. Surmann, K. Lingemann, and J. Hertzberg, "Semantic Scene Analysis of Scanned 3D Indoor Environments," VMV 2003, Munich, Germany, Nov. 19-21, 2003.
[5] D. Murray and J. J. Little, "Using Real-time Stereo Vision for Mobile Robot Navigation," Autonomous Robots, 8(2):161-171, 2000.


[6] S. Lee, D. Jang, E. Kim, S. Hong, and J. Han, "Stereo vision based real-time workspace modeling for robotic manipulation," IROS 2005.
[7] J. J. Leonard and H. F. Durrant-Whyte, "Simultaneous map building and localization for an autonomous mobile robot," Proc. IEEE/RSJ Int. Workshop on Intelligent Robots and Systems, pp. 1442-1447, May 1991.
[8] www.ptgrey.com
[9] S. Kumar, "Surface Triangulation: A Survey," Technical Report, Department of Computer Science, University of North Carolina, 1996.
[10] O. D. Faugeras, E. Le Bras-Mehlman, and J. D. Boissonnat, "Representing Stereo Data with the Delaunay Triangulation," Artificial Intelligence Journal, 44(1-2):41-87, 1990.
[11] M. Pollefeys, R. Koch, and L. J. Van Gool, "Self-Calibration and Metric Reconstruction in Spite of Varying and Unknown Internal Camera Parameters," IEEE International Conference on Computer Vision, 1998.
[12] L. Zhang, B. Curless, A. Hertzmann, and S. M. Seitz, "Shape and Motion under Varying Illumination: Unifying Structure from Motion, Photometric Stereo, and Multi-view Stereo," IEEE International Conference on Computer Vision, 2003.

