
Transcript of Contents SESSION TU-M-1 - INDUSTRIAL ROBOTS IMPROVEMENTS

Proceedings Book

40th International Symposium on Robotics

Edited by:

Luis Basañez - Raúl Suárez - Jan Rosell

Organized by: Asociación Española de Robótica y Automatización Tecnologías de la Producción – AER-ATP

International Federation on Robotics – IFR

Sponsors:

Institutional Sponsors:

Ministerio de Industria, Turismo y Comercio

Ministerio de Ciencia e Innovación

ACC1Ó Generalitat de Catalunya

Ajuntament de Barcelona

Asociación Española de Normalización - AENOR

Global Sponsors:

Asea Brown Boveri, S.A. (ABB)

Adept Technology Ibérica, S.L.

Fanuc Robotics Ibérica, S.L.

Kuka Robots Ibérica, S.A.

Motoman Robotics Ibérica, S.L.

Schunk Intec, S.L.

Stäubli Española, S.A.

Published by: Asociación Española de Robótica y Automatización Tecnologías de la Producción – AER-ATP

Copyright © March 2009

All articles published in these proceedings are protected by copyright, which covers the exclusive right to reproduce and distribute the article as well as translation rights. No material published in these proceedings may be reproduced photographically or stored on microfilm, in electronic data bases, video, DVD, etc. without first obtaining written permission from the publisher.

No liability is assumed by the publisher with respect to the use of information contained herein. While every precaution has been taken in the preparation of this book, the publisher assumes no responsibility for errors or omissions.

ISBN 978-84-920933-8-0

Technical and Administrative Management: G&A, S.L.

Editorial Production: Rosa Esteve i Associats - www.rosaesteve.com

Printed by: Jomalsa

Depósito Legal: B-11.608-2009

Contents

SESSION TU-M-1 - INDUSTRIAL ROBOTS IMPROVEMENTS

A Symmetric Parallel Schönflies-Motion Manipulator for Pick and Place.......................................................................... 21

O. Altuzarra, Ch. Pinto, V. Petuya, B. Sandru, M. Diez

University of the Basque Country, Spain

Open architecture controller for an ABB IRB2000 industrial robot.................................................................................... 29

J. L. González, R. Sanz, F. Gayubo, J. C. Fraile, J. P. Turiel

Fundación CARTIF, Spain

ILC Filter Design for Input-Delay Robot Manipulators....................................................................................................... 35

T.-J. Ha, J. S. Yeon, J. H. Park

Hanyang University, Korea

S.-W. Son, S. Lee

Hyundai Heavy Industries Co., Ltd., Korea

Analysis on Dynamic Characteristics of an Industrial Robot Wrist................................................................................... 41

P.-J. Kim, H.-C. Cho, S.-H. Lee, S.-H. Jung, J.-S. Hur

Hyundai Heavy Industries Co., Ltd., Korea

SESSION TU-M-2 - INTELLIGENT MANUFACTURING SYSTEMS

Obstacle Avoidance Algorithm for Safe Human-Robot Cooperation in Small Medium Enterprise Scenario..................... 49

N. Pedrocchi, M. Malosio, L. Molinari Tosatti, G. Ziliani

National Research Council, Italy

Distributed Flexible Transport System by Cooperation of Flexible AGVs......................................................................... 55

D. Herrero-Pérez

University Carlos III of Madrid, Spain

H. Martínez-Barberá

University of Murcia, Spain

Development of the Highly-Efficient End effecter of Robot Control Cell Production Systems for the

Productivity Improvement in Multi-product Production in Varying Volumes ..................................................................... 61

H. Yonezawa, T. Nishiki, N. Higuchi, Y. Sugano, H. Hayashi, K. Ida, T. Takagi, M. Noshino, W. Tokumoto,

M. Yamada, T. Fujita

IDEC Corporation, Japan

CAM/Robotics Integrated Postprocessing in Redundant Workcells by means of Fuzzy Control .................................... 67

J. Andres, L. Gracia, J. Tornero

Technical University of Valencia, Spain

SESSION TU-A-1 - SLAM

Map Fusion of Visual Landmark-based Maps ................................................................................................................. 75

M. Ballesta, O. Reinoso, A. Gil, L. Payá, M. Julià

Miguel Hernandez University, Spain

An Extension to ICP Algorithm and its Application to the Scan Matching Problem ......................................................... 81

L. Armesto

Technical University of Valencia, Spain

L. Domenech

Asociación de investigación en Diseño y Fabricación, Spain

J. Tornero

Technical University of Valencia, Spain


Visual Self-Localization for Small Mobile Robots with Weighted Gradient Orientation Histograms................................. 87

M. Hofmeister, M. Liebsch, A. Zell

University of Tübingen, Germany

A Mobile Robot For On-Line Registration of Indoor 3D Models ...................................................................................... 93

J. Pulido Fentanes, S. Marcos Pablos, Domínguez Quijada

Fundación CARTIF, Spain

E. Zalama, J. Gómez García-Bermejo, J. R. Perán González

Universidad de Valladolid, Spain

SESSION TU-A-2 - VISION

Robotic Quality Assurance using 3D Laser Rotating Scanner.........................................................................................101

A. Picón, A. Bereciartua, J. A. Gutiérrez, J. Pérez

ROBOTIKER-TECNALIA, Spain

Tracking based on Hue-Saturation Features with a Miniaturized Active Vision System ..................................................107

J. A. Corrales, P. Gil, F. A. Candelas, F. Torres

University of Alicante, Spain

Hybrid Collaborative Stereo Vision System for Mobile Robots Formation Navigation .....................................................113

F. Roberti, J. M. Toibero, C. Soria

Universidad Nacional de San Juan, Argentina

R. F. Vassallo

Universidade Federal do Espírito Santo, Brasil

R. Carelli

Universidad Nacional de San Juan, Argentina

Robotics Platform Integrating Visual Recognition and RFID for Classification and Tracking Applications ......................119

C. Cerrada, J. J. Escribano, J. L. Bermejo, I. Abad, J. A. Cerrada, R. Heradio

UNED, Spain

SESSION WE-M-1 - MODELLING AND CONTROL

On Evolving a Robust Force-Tracking Neural Network-Based Impedance Controller.....................................................127

J. de Gea, Y. Kassahun

University of Bremen, Germany

F. Kirchner

DFKI (German Research Center for Artificial Intelligence), Germany

Safe Avoidance of Online-detected Obstacles for Robot Manipulators...........................................................................133

L-P. Ellekilde, H. G. Petersen

University of Southern Denmark, Denmark

Real Time Distributed Control of parallel robots using redundant sensors .....................................................................139

I. Cabanes, A. Zubizarreta, M. Marcos, Ch. Pinto

University of the Basque Country, Spain

Force Amplifier Mechanism for Small Manipulator...........................................................................................................145

Y. Aiyama, T. Kudou

University of Tsukuba, Japan

Searching a valid hand configuration to perform a given grasp ......................................................................................151

R. Suárez, J.-A. Claret

Universitat Politècnica de Catalunya, Spain

The Role of Simulation Tools in the Teaching of Robot Control and Programming........................................................157

A. Romeo

Universidad de Zaragoza, Spain


SESSION WE-M-2 - HUMAN ROBOT INTERACTION

Haptic Interaction with a Virtual Hand in Collaborative Tasks..........................................................................................165

Héctor A. Moreno, Roque Saltaren, M. Ferre, R. Aracil

Universidad Politécnica de Madrid, Spain

Robotics Teleoperation Using Bilateral Control by State Convergence and a Haptic System of Assistance ..................171

C. Peña

Universidad Politécnica de Madrid / Universidad de Pamplona, Spain / Colombia

R. Aracil, R. Saltaren, M. Ferre

Universidad Politécnica de Madrid, Spain

A Multimodal Teleoperation Framework: Implementation and Experiments ....................................................................177

E. Nuño

Technical University of Catalonia / University of Guadalajara, Spain / México

L. Basañez, L. Palono, A. Rodríguez

Technical University of Catalonia, Spain

Control Strategies for Human-Robot Interaction using EOG ...........................................................................................183

E. Iañez, J. M. Azorín, E. Fernandez, J. M. Sabater

Universidad Miguel Hernández de Elche, Spain

An augmented reality interface for training robotics through the web..............................................................................189

C. A. Jara, F. A. Candelas, P. Gil, M. Fernández, F. Torres

University of Alicante, Spain

A Power Assist Device for handling Heavy loads.............................................................................................................195

P. González de Santos, E. García, J. F. Sarria, R. Ponticelli, J. Reviejo

Institute of Industrial Automation - CSIC, Spain

SESSION WE-A-1 - NEW ROBOT APPLICATIONS

Advanced HMI for robot programming: An Industrial application for the ceramic industry ..............................................203

G. Veiga

University of Coimbra, Portugal

R. Cancela

Motofil Robotics / University of Aveiro, Portugal / Portugal

Use of CAD/CAM/ROB integration for Scaled Orography Modelling. A case study: Mijares River.................................209

J. Andrés, L. Gracia, J. Tornero

Technical University of Valencia, Spain

Welding Process Control for Rapid Manufacturing: Two Different Approaches Using Tin and Stainless Steel ..............213

G. Muscato, G. Spampinato, L. Cantelli

Università degli Studi di Catania, Italy

Evolution of the Robotics in the NDT Inspection .............................................................................................................219

R. Alberdi, M. Aguado, J. L. Rembado

Tecnatom, S.A., Spain

SESSION WE-A-2 - AERIAL ROBOTS

TG2M: Trajectory Generator and Guidance Module for the Aerial Vehicle Control Language AVCL...............................225

A. Barrientos, J. Colorado, P. Gutiérrez

Universidad Politécnica de Madrid, Spain

Development of an Adaptive Wing System for the Aquila Disseminative Multilayered Unmanned Aerial System..........233

A. Khan, R. Molfino

University of Genova, Italy


RF-based Particle Filter localization for Wildlife Tracking by using an UAV.....................................................................239

P. Soriano, F. Caballero

University of Seville, Spain

A. Ollero

University of Seville / Centro Avanzado de Tecnologías Aeroespaciales, Spain / Spain

Method based on a particle filter for UAV trajectory Prediction under uncertainties........................................................245

R. Conde, J. A. Cobano

Universidad de Sevilla, Spain

A. Ollero

Universidad de Sevilla / Centro Avanzado de Tecnologías Aeroespaciales, Spain

SESSION TH-M-1 - MOBILE ROBOT NAVIGATION

Comparative Study of Localization Techniques for Mobile Robots based on Indirect Kalman Filter...............................253

R. González, F. Rodríguez, J. L. Guzmán, M. Berenguel

Universidad de Almería, Spain

Adaptive Extended Kalman Filtering for Mobile Robot Localization.................................................................................259

R. Caballero

Universidad Tecnológica de Panamá, Panamá

D. Rodríguez-Losada, F. Matia

Universidad Politécnica de Madrid, Spain

Stereo Image Registration Based on RANSAC with Selective Hypothesis Generation...................................................265

R. Cupec

University of Osijek, Croatia

A. Kitanov, I. Petrovic

University of Zagreb, Croatia

Manoeuvres for Autonomous Mobile Robot Navigation using the Optical Flow and Time-to-Contact Techniques................271

P. Nebot, E. Cervera

Jaume I University, Spain

Control Structure of a Mobile Robot for Maintenance of Material Flow Systems ................................................................277

T. Brutscheck, M. Bücker, B. Kuhlenkötter

Dortmund University of Technology, Germany

Speaker Localization and Tracking in Mobile Robot Environment Using a Microphone Array ........................................283

I. Markovic, I. Petrovic

University of Zagreb, Croatia

SESSION TH-M-2 - PLANNING IN ROBOTICS

An Hybrid Architecture for Multi-Robot Integrated Exploration .......................................................................................291

M. Juliá, A. Gil, L. Payá, O. Reinoso

Miguel Hernández University, Spain

A Constraint-based Probabilistic Roadmap Planner .......................................................................................................297

A. Pérez, A. Rodríguez, J. Rosell, L. Basañez

Technical University of Catalonia, Spain

Error Adaptive Tracking for Back Stepping Controllers: Application to Mobile Robots ...................................................303

F. Diaz-del-Rio, D. Cagigas, J. L. Sevillano, S. Vicente, D. Cascado

Universidad de Sevilla, Spain

Simultaneous Task Subdivision and Assignment in the FRACTAL Multi-robot System ..................................................309

C. Rossi, L. Aldama, A. Barrientos

Universidad Politécnica de Madrid, Spain


A Dynamic Model-Based Acceleration Setting Method for Industrial Robots Trajectories ..............................................315

J. -Y. Kim, D.-H. Kim, S.- R. Kim

Hyundai Heavy Industries Co., Ltd., Korea

Path planning using sub-and-super-harmonic functions .................................................................................................319

P. Iñiguez

Rovira i Virgili University, Spain

J. Rosell

Technical University of Catalonia, Spain

SESSION TH-A-1 - SERVICE ROBOTICS

An Underwater Robotic System for Sea-bottom Reclamation ........................................................................................327

R. Molfino, M. Zoppi

University of Genova, Italy

Decentralized Control for Robot Teams in Unknown Environments ................................................................................333

R. Falconi, C. Melchiorri

University of Bologna, Italy

“HUMI” - a mobile Robot for Humanitarian Demining ......................................................................................................339

P. Kopacek

Vienna University of Technology, Austria

Robot Technology to Support Workplace Ergonomic Adaptation ...................................................................................345

J. de Nó, P. González de Santos

Instituto de Automática Industrial - CSIC, Spain

Developing a strategic research agenda for robotics in Europe .....................................................................................351

R. Bischoff, T. P. Guhl

KUKA Roboter GmbH, Germany

O. Schwandner

Fraunhofer-Institut für Produktionstechnik und Automatisierung (IPA), Germany

SESSION TH-A-2 - COGNITIVE ROBOTICS

Improved Vibration based Terrain Classification using Temporal Coherence .................................................................359

P. Komma, C. Weiss, A. Zell

University of Tübingen, Germany

A cognitive System based on Emotions for Autonomous Robots ...................................................................................365

I. Chang

Universidad Tecnológica de Panamá, Panamá

M. Álvarez, R. Galán

Universidad Politécnica de Madrid, Spain

An Evolutionary Approach to Maximum Likelihood Estimation for Generative Stochastic Models ..................................371

R. C. Kelley, M. Nicolescu, M. Nicolescu, S. Louis

University of Nevada, USA

3D Internet Simulation of the Collective Behaviour of Robot Swarms ...........................................................................377

F. P. Bonsignorio

Heron Robots s.r.l., Italy


Hybrid Collaborative Stereo Vision System for Mobile Robots Formation Navigation

F. Roberti†, J.M. Toibero†, C. Soria†, R.F. Vassallo‡ and R. Carelli†

†Instituto de Automática, Universidad Nacional de San Juan, Av. San Martín Oeste 1109 - J5400ARL- ARGENTINA (e-mail: [email protected])

‡Dpto. de Engenharia Elétrica, Universidade Federal do Espírito Santo, Av. Fernando Ferrari 514, Vitória, ES, BRASIL

Abstract: This paper presents the use of a hybrid collaborative stereo vision system (3D distributed visual sensing using different kinds of cameras) for the autonomous navigation of a wheeled robot team. A triangulation-based method is proposed for computing the 3D posture of an unknown object with the collaborative hybrid stereo vision system, and this information is used to steer the robot team to a desired position relative to that object while maintaining a given robot formation. Experimental results with real mobile robots are included to validate the proposed vision system.

1. INTRODUCTION

Artificial vision systems have been widely used as external sensors in mobile robotics applications due to the large amount of information they can provide. For this reason, they have nowadays become the most used sensors in tasks such as surveillance, search, exploration, rescue and mapping, and they have even been used for autonomous navigation (Soria et al., 2007; Carelli et al., 2006b; Okamoto and Grassi, 2002; Vassallo et al., 2005). Moreover, the simultaneous use of two or more cameras gives the system 3D perception, allowing it to successfully perform different tasks in completely unknown environments. Additional advantages can be obtained by including omnidirectional cameras in the system (Correa and Okamoto, 2005). These cameras increase the horizontal field of view to 360º, but at the cost of image resolution. Another choice is to combine catadioptric cameras with perspective projection cameras, constructing a hybrid stereo vision system that has the advantages of both kinds of cameras (Adorni et al., 2001; Sturm, 2002). It is a well known fact that most of the previously mentioned tasks can be carried out more efficiently by two or more robots working cooperatively (Carelli et al., 2006a; Das et al., 2002; Roberti et al., 2007; Toibero et al., 2008). A similar idea can be applied to the stereo vision system: if each camera is mounted on a different robot, a new degree of collaboration between the robots of the team is introduced. Hence, the robots not only execute a cooperative task, but also help to collect the environment information needed to carry out that task. This distribution of the vision sensors among the robots of the team not only reduces the computational effort by dividing the image processing tasks, but also yields a reconfigurable stereo vision system able to adapt to the requirements imposed by the robot surroundings (Zhu et al., 2004; Cervera, 2005).

In this paper, the use of a collaborative hybrid stereo vision system (3D distributed visual sensing using different types of cameras) for the autonomous navigation of a mobile robot team is considered. A triangulation-based method is proposed for computing the posture of an unknown object in three-dimensional space by using the hybrid collaborative stereo vision system, steering the robot team to a desired goal position relative to that object (Soria et al., 2007) while maintaining an a priori known robot formation (Roberti et al., 2007).

Other previously published papers have considered reconfigurable stereo vision systems. In the work of Zhu et al. (2004), a reconfigurable vision system composed of two omnidirectional cameras is introduced, and its use for a person-following task within a surveillance scope is proposed. In the work of Cervera (2005), the vision system is composed exclusively of perspective projection cameras, and its use in object-manipulation tasks with a mobile manipulator is presented. Unlike these papers, this work proposes the construction of a hybrid collaborative vision system and its use for the autonomous navigation of a robot team.

The remainder of this paper is organized as follows: Section 2 summarizes the different vision system models employed along the paper and presents the proposed hybrid stereo vision system. Sections 3 and 4 deal with the control strategies considered for the autonomous navigation of the robot team. Section 5 presents the experimental results obtained and, finally, Section 6 states the conclusions and describes future related work.

2. VISION SYSTEMS MODELS

A vision camera transforms a 3D space into a 2D projection on the image plane, where the vision sensor is located. This


projection causes the loss of depth perception, which means that each point on the image plane corresponds to a ray in the 3D space.

2.1 Perspective projection camera model

Several projection models for the representation of the image formation process have been proposed. The most used is the perspective projection or "pin-hole" model. In this model, a coordinate system (O_P, X_P, Y_P, Z_P) attached to the camera is defined in such a way that the X and Y axes define a base for the image plane and the Z axis is parallel to the optic axis. The origin of the framework (O_P, X_P, Y_P, Z_P) is located at the focus of the camera lens. From Fig. 1.a, a fixed point P in the 3D space with coordinates P_P = [X_P \; Y_P \; Z_P]^T in the framework attached to the perspective camera will be projected on the image plane as a point with coordinates \xi_P = [u_P \; v_P]^T given by

\xi_P = [u_P \; v_P]^T = \frac{f_P}{Z_P} [X_P \; Y_P]^T    (1)

where f_P is the focal length of the camera expressed in pixels.
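A minimal numerical sketch of the pin-hole projection (1) is given below; the focal length and the test point are arbitrary values assumed only for illustration.

```python
import numpy as np

def pinhole_project(P, f_p):
    """Perspective (pin-hole) projection of a point P = [X, Y, Z] expressed in
    the camera framework, following (1): xi = (f_p / Z) * [X, Y]."""
    X, Y, Z = P
    return np.array([f_p * X / Z, f_p * Y / Z])

# Assumed example values: focal length in pixels and a 3D point in metres
f_p = 600.0
P = np.array([0.3, -0.1, 2.0])
print(pinhole_project(P, f_p))   # image coordinates [u_P, v_P]
```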

2.2 Omnidirectional vision system

Omnidirectional vision systems allow obtaining 360º field-of-view images. These images could be obtained by using a single camera rotating around its own Y-axis, by using multiple cameras, each one oriented in a different direction or by using a catadioptric system (Yagi, 1999). Central catadioptric cameras combine convex reflective surfaces and perspective cameras with the aim of attaining a 360º field-of-view with a single image. These vision systems are formed by mounting a camera in front of a convex mirror in such a way that the camera sees the image reflected by the mirror (Baker and Nayar, 1998), as Fig.1b shows. The catadioptric camera used in this work was built with a conventional perspective projection color CCD camera and a hyperbolic mirror, by aligning the camera optic axis with the mirror axis, and making the optic centre of the camera coincident with the focus F’ of the hyperbola (Svoboda et al. 1998).

Fig.1.a) Perspective projection camera model and 1.b) Catadioptric vision system

Fig.2. Stereo vision system

In the framework (O_O, X_O, Y_O, Z_O) attached to the perspective camera of the catadioptric system, the equation that describes the hyperbolic mirror geometry is

\frac{(z - e)^2}{a^2} - \frac{x^2 + y^2}{b^2} = 1, \quad \text{with} \;\; e = \sqrt{a^2 + b^2}    (2)

Both vision systems briefly described in the above Sections could be combined with the aim of constructing a stereo vision system in order to obtain depth perception (Adorni et al., 2001). With this stereo vision system, it will be possible to get the 3D coordinates of an object without any previous knowledge about it. The structure of the proposed vision system is shown in Fig. 2. In this figure the coordinates of the relevant points are,

F_O = [0 \;\; 0 \;\; 2e]^T ; \quad P_O = [X_O \;\; Y_O \;\; Z_O]^T

P_P = [X_P \;\; Y_P \;\; Z_P]^T ; \quad P_{mO} = [X_m \;\; Y_m \;\; Z_m]^T

\xi_O = [u_O \;\; v_O]^T ; \quad \xi_P = [u_P \;\; v_P]^T    (3)

where F_O is the focus of the hyperbolic mirror expressed in the omnidirectional vision system framework, P_O is the interest point P expressed in the omnidirectional vision system framework, P_{mO} is the point where the ray P F_O crosses the mirror, expressed in the omnidirectional vision system framework, \xi_O is the projection of P_{mO} on the omnidirectional image plane, P_P is the interest point P expressed in the perspective projection camera framework, and \xi_P is the projection of P on the perspective projection camera image plane.

The objective of this Section is to find the equation system that allows getting the 3D coordinates of the interest point P in the framework attached to the perspective projection camera. Initially, it is necessary to express the point P_{mO} as a function of the P_O coordinates, by finding the expression of the ray P F_O in the omnidirectional vision system framework,

x = X_O \frac{z - 2e}{Z_O - 2e}, \qquad y = Y_O \frac{z - 2e}{Z_O - 2e}    (4)



and introducing (4) in (2), a second-order equation in z, which represents the z-coordinates of the two points where the ray P F_O crosses the hyperbola, is obtained:

\frac{(z - e)^2}{a^2} - \frac{\left(X_O^2 + Y_O^2\right)(z - 2e)^2}{b^2 (Z_O - 2e)^2} = 1    (5)

Operating and reorganizing (5),

A z^2 + B z + C = 0    (6)

with

A = \frac{1}{a^2} - \frac{X_O^2 + Y_O^2}{b^2 (Z_O - 2e)^2} ; \quad
B = -\frac{2e}{a^2} + \frac{4e \left(X_O^2 + Y_O^2\right)}{b^2 (Z_O - 2e)^2} ; \quad
C = \frac{e^2}{a^2} - \frac{4e^2 \left(X_O^2 + Y_O^2\right)}{b^2 (Z_O - 2e)^2} - 1 .

Then, (6) can be solved by using

z_{1,2} = \frac{-B \pm \sqrt{B^2 - 4AC}}{2A}    (7)

The root corresponding to the physical intersection of the ray with the mirror gives the z-coordinate Z_m of the point P_{mO}. By introducing it in (4), the 3D coordinates of the point P_{mO} can be obtained as functions of the P_O 3D coordinates,

X_m = X_O \frac{Z_m - 2e}{Z_O - 2e}    (8.1)

Y_m = Y_O \frac{Z_m - 2e}{Z_O - 2e}    (8.2)

Z_m = z \;\; \text{(the selected root of (7))}    (8.3)

Now, the pin-hole model of the perspective camera in the omnidirectional system,

\xi_O = [u_O \;\; v_O]^T = \frac{f_O}{Z_m} [X_m \;\; Y_m]^T    (9)

can be used to get the projection of the point P_{mO} on the omnidirectional image. Next, by introducing (8.1), (8.2) and (8.3) in (9), the equations that relate the projection of the interest point P, after its reflection on the mirror, with its 3D coordinates in the framework attached to the omnidirectional vision system are obtained,

u_O = \frac{f_O (Z_m - 2e)}{Z_m (Z_O - 2e)} X_O ; \qquad v_O = \frac{f_O (Z_m - 2e)}{Z_m (Z_O - 2e)} Y_O    (10)
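The projection chain (2)-(10) can be summarized in a short Python sketch. The mirror parameters, the focal length and the test point below are assumed values, and the rule used to select the physical root of (7) (the intersection lying between the mirror focus and the scene point) is an assumption of this sketch rather than something stated explicitly in the paper.

```python
import numpy as np

def omni_project(P_o, a, b, f_o):
    """Project a 3D point P_o = [X_O, Y_O, Z_O], given in the omnidirectional
    camera framework, through a hyperbolic mirror onto the catadioptric image,
    following the chain (2) -> (4) -> (6) -> (8) -> (10).  a and b are the
    mirror parameters and f_o the focal length of the camera (in pixels)."""
    e = np.sqrt(a**2 + b**2)
    X, Y, Z = P_o
    s = (X**2 + Y**2) / (Z - 2*e)**2             # squared slope of the ray (4)
    # Coefficients of the second-order equation (6) in z
    A = 1/a**2 - s/b**2
    B = -2*e/a**2 + 4*e*s/b**2
    C = e**2/a**2 - 4*e**2*s/b**2 - 1
    z1, z2 = np.roots([A, B, C]).real            # the two roots of (7)
    # Root selection (an assumption of this sketch): the physical mirror point
    # lies between the focus F_O and the scene point, so (z - 2e) must have the
    # same sign as (Z_O - 2e).
    Zm = z1 if (z1 - 2*e) * (Z - 2*e) > 0 else z2
    Xm = X * (Zm - 2*e) / (Z - 2*e)              # mirror point, (8.1)-(8.2)
    Ym = Y * (Zm - 2*e) / (Z - 2*e)
    return np.array([f_o * Xm / Zm, f_o * Ym / Zm])   # pin-hole projection (9)-(10)

# Example with assumed mirror/camera parameters and an arbitrary test point
print(omni_project(np.array([0.5, 0.2, -0.3]), a=0.03, b=0.04, f_o=400.0))
```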

By taking into account that P_P and P_O are the same interest point represented in two different coordinate systems, it is possible to relate them by

P_P = R (P_O + T)    (11)

where R = [r_{ij}] \in \mathbb{R}^{3 \times 3} and T = [t_i] \in \mathbb{R}^{3 \times 1} are the rotation matrix and the translation vector that represent the relative position between both frameworks. Equation (11) can be split into the following three equations,

X_P = r_{11}(X_O + t_1) + r_{12}(Y_O + t_2) + r_{13}(Z_O + t_3)

Y_P = r_{21}(X_O + t_1) + r_{22}(Y_O + t_2) + r_{23}(Z_O + t_3)

Z_P = r_{31}(X_O + t_1) + r_{32}(Y_O + t_2) + r_{33}(Z_O + t_3)    (12)

Now, X_P, Y_P, X_O and Y_O in (12) can be replaced using (1) and (10), obtaining the following three-equation system in the three variables (\lambda, Z_O, Z_P),

(r_{11} u_O + r_{12} v_O)\,\lambda + r_{13} Z_O - \frac{u_P}{f_P} Z_P = -(r_{11} t_1 + r_{12} t_2 + r_{13} t_3)

(r_{21} u_O + r_{22} v_O)\,\lambda + r_{23} Z_O - \frac{v_P}{f_P} Z_P = -(r_{21} t_1 + r_{22} t_2 + r_{23} t_3)

(r_{31} u_O + r_{32} v_O)\,\lambda + r_{33} Z_O - Z_P = -(r_{31} t_1 + r_{32} t_2 + r_{33} t_3)    (13)

where \lambda = \frac{Z_m (Z_O - 2e)}{f_O (Z_m - 2e)}, so that X_O = \lambda u_O and Y_O = \lambda v_O.

The three-equation system (13) allows getting the z-coordinate of the interest point P in both coordinate systems (Z_O, Z_P) and the parameter \lambda. Then, by using (1) and (10), the complete 3D position of the point in both frameworks can be obtained. As is usual in stereo vision systems, it is necessary to know the extrinsic parameters (R and T), the focal length of the perspective projection camera f_P, and the coordinates of the interest point on the image planes (\xi_O and \xi_P), which are measured directly from the images using some image processing method. Although feature extraction and point correspondence between images are very interesting problems, they are not addressed here, and it is assumed that \xi_O and \xi_P are obtained by some image processing technique.
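Seen as a linear system in (\lambda, Z_O, Z_P), (13) can be solved directly; the following Python sketch does so under the reconstruction of (13) written above, with R and T defined as in (11). The function name and the way the inputs are packed are illustrative choices, not the authors' implementation.

```python
import numpy as np

def triangulate_hybrid(xi_o, xi_p, f_p, R, T):
    """Solve the linear system (13) for (lambda, Z_O, Z_P) and recover the 3D
    point in both camera frameworks.  xi_o = [u_O, v_O] is the omnidirectional
    image point, xi_p = [u_P, v_P] the perspective image point, f_p its focal
    length, and (R, T) the extrinsic parameters relating both frameworks as in
    (11)."""
    u_o, v_o = xi_o
    u_p, v_p = xi_p
    c = np.array([u_p / f_p, v_p / f_p, 1.0])
    M = np.column_stack((R[:, 0] * u_o + R[:, 1] * v_o,   # coefficient of lambda
                         R[:, 2],                          # coefficient of Z_O
                         -c))                              # coefficient of Z_P
    rhs = -R @ T
    lam, Z_o, Z_p = np.linalg.solve(M, rhs)
    P_o = np.array([lam * u_o, lam * v_o, Z_o])            # point in the omni framework
    P_p = np.array([c[0] * Z_p, c[1] * Z_p, Z_p])          # point in the perspective framework
    return P_o, P_p
```

A poorly conditioned matrix M (for example, when both cameras and the interest point are nearly aligned) signals an unreliable triangulation.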

Remark. Note that (13) has been obtained considering a hyperbolic mirror, and its application is therefore restricted to this kind of catadioptric vision system. Nevertheless, it can be generalized to any mirror shape by considering the general projection model (Geyer and Daniilidis, 2000). In this case, (13) becomes,

\frac{l (r_{11} u_O + r_{12} v_O)}{l + m}\,\lambda + \left(r_{13} - \frac{r_{11} u_O + r_{12} v_O}{l + m}\right) Z_O - \frac{u_P}{f_P} Z_P = -(r_{11} t_1 + r_{12} t_2 + r_{13} t_3)

\frac{l (r_{21} u_O + r_{22} v_O)}{l + m}\,\lambda + \left(r_{23} - \frac{r_{21} u_O + r_{22} v_O}{l + m}\right) Z_O - \frac{v_P}{f_P} Z_P = -(r_{21} t_1 + r_{22} t_2 + r_{23} t_3)

\frac{l (r_{31} u_O + r_{32} v_O)}{l + m}\,\lambda + \left(r_{33} - \frac{r_{31} u_O + r_{32} v_O}{l + m}\right) Z_O - Z_P = -(r_{31} t_1 + r_{32} t_2 + r_{33} t_3)    (14)

where l and m are the parameters of the general projection model, and \lambda = \sqrt{X_O^2 + Y_O^2 + Z_O^2}, Z_P and Z_O are the unknown variables of the three-equation system (14).

3. FORMATION CONTROL

The coordinated navigation of the robot team is achieved by means of the formation control proposed by (Roberti et al., 2007), which allows the robots to reach a specific formation and to maintain it while they navigate in the workspace. This formation controller is based on a decentralized leader-following technique, i.e., the leader robot navigates under its own control law while sensing the followers' postures (relative to its own reference framework). With this information, the leader


computes the control actions to be sent to each follower in order to successfully accomplish the navigation objective under formation. Follower robots are considered as unicycle-like robots navigating with linear velocity v and orientation \theta in the coordinate system (O_L, X_L, Y_L) attached to the leader robot. By considering the robot as the point object C, the following equation set describes this movement:

\dot{x} = v \cos\theta - v' + \omega' d \sin\alpha

\dot{y} = v \sin\theta - \omega' d \cos\alpha    (15)

where v’ y are the leader robot linear and angular velocities (and hence, the velocities that rule its associated framework movement); is the follower robot angular velocity; d-distance and -angle define the follower robot position with respect to the leader robot according to Fig. 3.

In order to compute an error between the actual position of each robot and its desired position in the formation, let L_i = [x_i \;\; y_i]^T be the position vector of the i-th robot, and L_{di} = [x_{di} \;\; y_{di}]^T be the i-th desired position, with i = 1, 2, ..., n; both vectors are defined in the framework (O_L, X_L, Y_L) attached to the leader robot, as Fig. 4 shows. For each case, the n position vectors can be arranged in the global vectors

L = [L_1^T \;\; L_2^T \;\; \cdots \;\; L_n^T]^T \quad \text{and} \quad L_d = [L_{d1}^T \;\; L_{d2}^T \;\; \cdots \;\; L_{dn}^T]^T .

The difference between the actual and the desired robot positions is

\tilde{L} = L_d - L .    (16)

The formation error is defined as follows,

\tilde{h} = h_d - h ; \quad h = h(L) ; \quad h_d = h(L_d)    (17)

where h is the output variable, which captures information on the actual conditions of the robot team, and h_d represents the desired output variable. The function h(L) must be defined in such a way that it is continuous and differentiable, and the Jacobian matrix J that relates \dot{h} with \dot{L},

\dot{h} = J(L)\,\dot{L} ; \qquad J(L) = \frac{\partial h(L)}{\partial L} \in \mathbb{R}^{2n \times 2n} ,    (18)

has full rank. Vector \dot{L} has two different components, that is, \dot{L} = \dot{L}_s + \dot{L}_l, where \dot{L}_s is the time variation of L due to the velocities of the follower robots, and \dot{L}_l is the time variation of L due to the velocities of the leader robot.

Fig.3. Robot kinematic model

Fig.4. Actual and desired positions

Now, the first equation of (18) can be written as:

\dot{h} = J (\dot{L}_s + \dot{L}_l) .    (19)

The control objective is to guarantee that the multi-robot system asymptotically reaches the desired formation defined by h_d. Formally, the designed control system must satisfy \lim_{t \to \infty} \tilde{h}(t) = 0. First, from (19) a vector of reference velocities for the follower robots is defined as

\dot{L}_r = J^{-1}\left( \dot{h}_d + K f_{\tilde h}(\tilde h) \right) - \dot{L}_l    (20)

where K is a diagonal positive definite matrix and f_{\tilde h}(\tilde h) is a continuous saturation function applied to the output error, such that \tilde{h}^T f_{\tilde h}(\tilde h) > 0 for all \tilde{h} \neq 0; for example, f_{\tilde h}(\tilde h) can be selected as \tanh(\tilde h). Vector \dot{L}_r represents the velocities of the follower robots in the framework attached to the leader robot that allow them to reach the desired formation and keep it while following the leader. It can be proved that the following commands for the linear and angular velocities

\omega_{ci} = \dot{\theta}_{ri} + k_{\omega i}\, f_{\omega}(\tilde{\theta}_i)    (21)

v_{ci} = \lVert \dot{L}_{ri} \rVert \cos \tilde{\theta}_i    (22)

guarantee that the follower robots asymptotically achieve the reference velocities defined in (20). In (21) and (22), k_{\omega i} is a positive constant; \tilde{\theta}_i = \theta_{ri} - \theta_i is the angular error between the heading of the i-th robot and the heading of its reference velocity; \dot{\theta}_{ri} is the time derivative of the reference velocity heading for the i-th robot; f_{\omega}(\tilde{\theta}_i) is a continuous saturation function applied to the angular error, such that \tilde{\theta}_i f_{\omega}(\tilde{\theta}_i) > 0 for all \tilde{\theta}_i \neq 0.

4. LEADER ROBOT NAVIGATION CONTROL

The leader robot navigates according to the control laws proposed by (Soria et al., 2007). This controller generates the linear and angular velocity commands (v'_c and \omega_c) in order to place the leader (and consequently the whole team) in front of an unknown object. Such commands are obtained by computing the robot-object relative posture. This posture is defined by the distance \rho and the angles \alpha and \varphi, as can be seen in Fig. 5.

The control objective is to keep the robot at a certain fixed distance \rho_d behind the object, with \alpha = \alpha_d, considering only the robot-object information provided by the vision system.



Fig.5. Robot-object relative posture

In this way, some characteristic problems due to odometry errors can be avoided. Nevertheless, the vision system must be fast and precise, in order to guarantee the controller quality. Defining \tilde{\rho} = \rho - \rho_d and \tilde{\alpha} = \alpha - \alpha_d, the control objective can be specified as

\tilde{\rho} \to 0 ; \quad \tilde{\alpha} \to 0 \quad \text{with} \;\; t \to \infty    (23)

The evolution of the robot-object relative position is given by the time derivatives of these variables. The distance error variation is given by the difference between the projections of the velocities of the robot (v') and of the object (v_T) on the line O_L P_3 (Fig. 5),

\dot{\tilde{\rho}} = v_T \cos\varphi - v' \cos\alpha .    (24)

Analogously, the variation of the angle \alpha has three terms: the leader robot angular velocity, and the rotational effects produced by the linear velocities of both the robot and the object. This can be written as

\dot{\tilde{\alpha}} = -\omega + \frac{v' \sin\alpha - v_T \sin\varphi}{\rho} .    (25)

Next, the following controller, which satisfies the control objective (23), is proposed:

v'_c = \frac{v_T \cos\varphi + f_{\rho}(\tilde{\rho})}{\cos\alpha}    (26)

\omega_c = f_{\alpha}(\tilde{\alpha}) + \frac{v'_c \sin\alpha - v_T \sin\varphi}{\rho}    (27)

where f_{\rho}(\tilde{\rho}) and f_{\alpha}(\tilde{\alpha}) are continuous saturation functions such that x f(x) > 0 for all x \neq 0. For this paper, f_{\rho}(\tilde{\rho}) = k_{\rho} \tanh(\lambda_{\rho} \tilde{\rho}) and f_{\alpha}(\tilde{\alpha}) = k_{\alpha} \tanh(\lambda_{\alpha} \tilde{\alpha}), with k_{\rho}, \lambda_{\rho}, k_{\alpha} and \lambda_{\alpha} positive constants. The distance \rho and the angles \alpha and \varphi needed to compute the control laws can be obtained from the (not vertically aligned) unknown object corner positions P_1 and P_2, as shown in Fig. 5,

\rho = \frac{1}{2}\sqrt{(x_{P1} + x_{P2})^2 + (y_{P1} + y_{P2})^2}    (28)

\alpha = \tan^{-1}\!\left(\frac{y_{P1} + y_{P2}}{x_{P1} + x_{P2}}\right)    (29)

\varphi = \tan^{-1}\!\left(\frac{y_{P1} - y_{P2}}{x_{P1} - x_{P2}}\right)    (30)

where (x_{P1}, y_{P1}) and (x_{P2}, y_{P2}) are the coordinates of the points P_1 and P_2 obtained through the hybrid stereo vision system.
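A compact Python sketch of the leader navigation controller, using the symbols \rho, \alpha and \varphi adopted in the reconstruction above, is given below; the gain values are assumed, and v_T = 0 corresponds to the static object used in the experiments of Section 5.

```python
import numpy as np

def leader_commands(p1, p2, rho_d, alpha_d, v_T=0.0, k_rho=0.3, k_alpha=0.8):
    """Leader navigation controller of Section 4 (sketch).  p1 and p2 are the
    object corner positions (x, y) measured in the leader framework by the
    hybrid stereo system; rho_d and alpha_d are the desired distance and
    bearing; v_T is the object velocity (zero for a static object); the gains
    k_rho and k_alpha are assumed values."""
    x1, y1 = p1
    x2, y2 = p2
    rho = 0.5 * np.hypot(x1 + x2, y1 + y2)        # distance to the object, (28)
    alpha = np.arctan2(y1 + y2, x1 + x2)          # bearing of the object, (29)
    phi = np.arctan2(y1 - y2, x1 - x2)            # orientation of the segment P1-P2, (30)
    rho_t = rho - rho_d                           # distance error
    alpha_t = alpha - alpha_d                     # angular error
    v_c = (v_T * np.cos(phi) + k_rho * np.tanh(rho_t)) / np.cos(alpha)                    # (26)
    w_c = k_alpha * np.tanh(alpha_t) + (v_c * np.sin(alpha) - v_T * np.sin(phi)) / rho    # (27)
    return v_c, w_c

# Example call with assumed corner measurements (metres) and set-points
print(leader_commands((2.0, 0.4), (2.1, -0.3), rho_d=1.0, alpha_d=0.0))
```

Under the reconstructed error dynamics (24)-(25), these commands yield \dot{\tilde{\rho}} = -f_{\rho}(\tilde{\rho}) and \dot{\tilde{\alpha}} = -f_{\alpha}(\tilde{\alpha}), so both errors vanish asymptotically, satisfying (23).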

5. EXPERIMENTAL RESULTS

In order to validate the proposed method for reconstructing the 3D position of an unknown object, a collaborative sensing experiment using the hybrid stereo vision system was carried out. The experimental setup is a team of two Pioneer mobile robots (manufactured by Mobile Robots Inc.). The leader robot carries the catadioptric vision system and the follower robot carries the conventional perspective projection camera. In the experiment, the team of robots must navigate autonomously, maintaining a desired formation, until it reaches a desired posture relative to a static unknown object (v_T = 0). Figure 6 shows the complete structure of the proposed control system.

By using the hybrid collaborative stereo vision system, the team obtains the posture of the unknown object relative to the leader robot, which is necessary in (26) and (27); and with the catadioptric vision system, the leader robot obtains the posture of the follower robot relative to its own framework, which is required in (20), (21) and (22). Additionally, the posture of the follower robot allows the leader to determine the matrix R and the vector T that define the relative posture between both vision systems, needed in (13).

Fig.6. Proposed control system

Figures 7, 8 and 9 show the results of the experiment. Figure 7 shows the evolution of the distance error \tilde{\rho}, Fig. 8 shows the evolution of the angular error \tilde{\alpha}, and the trajectories described by the robots are shown in Fig. 9.

Fig.7. Catadioptric Vision System: Robot-object position error



Fig.8. Catadioptric Vision System: Robot-object angular error

Fig.9. Trajectories described by the robots

6. CONCLUSIONS

In this paper, a collaborative hybrid stereo vision system has been presented, i.e., a stereo vision system composed of a perspective projection camera and a catadioptric camera, each one mounted on a different mobile robot. In this way, both robots collaborate in extracting the environment information needed to satisfy the proposed control objectives. Also, the stereo vision system can be reconfigured with the aim of obtaining environment information of the best quantity and quality. Furthermore, experimental results that clearly show the good performance of this vision system when applied to mobile robot navigation have been presented.

Future work on this subject will address the implementation of collaborative vision systems with more than two cameras, and the proposal of new algorithms to compute the best vision system configuration and, consequently, the desired position of each robot within the formation. Furthermore, the use of Scale Invariant Feature Transform (SIFT) algorithms for image feature extraction and for determining the correspondence between points in different images would add robustness to the vision system.

REFERENCES

Adorni, G.; L. Bolognini, S. Cagnoni and M. Mordonini (2001). A non-traditional omnidirectional vision system

with stereo capabilities for autonomous robots, Proc. of Congress of the Italian Assoc. for Artif. Intell., Bari, Italy.

Baker, S., and S.K. Nayar (1998). A Theory of Catadioptric Image Formation, Proc. of Int. Conf. on Computer Vision,pp. 35-42, Bombay, India.

Carelli, R., C. De la Cruz and F. Roberti (2006a). Centralized formation control of non-holonomic mobile robots, Latin American Applied Research, 36(2):63-69.

Carelli, R., J. Santos-Victor, F. Roberti and S. Tosetti (2006b). Direct visual tracking control of remote cellular robots, Rob&Autonomous Systems, 54(10):805-814.

Cervera, E. (2005). Distributed visual servoing: a cross-platform agent-based implementation. Proc. of IEEE/RSJ Int. Conf. on Intell.Rob&Syst, pp. 319-324, Edmonton, Alberta, Canada.

Correa, F.R, and J. Okamoto (2005). Omnidirectional stereovision system for occupancy grid, Proc. of IEEE Int. Conf. on Advanced Rob, pp. 628-634, Seattle, WA, USA.

Das, A. K., R. Fierro, V. Kumar, J.P. Ostrowski, J. Spletzer and C.J. Taylor (2002). A vision-based formation control framework, IEEE Trans. on Rob & Autom, 18(5):813-825.

Geyer, C., and K. Daniilidis (2000). Equivalence of catadioptric projections and mappings of the sphere, Proc. of IEEE Workshops on Omnidirectional Vision, pp. 91–96, Hilton Head Island, SC, USA.

Okamoto J., and V. Grassi (2002). Visual Servo Control of a Mobile Robot using Omnidirectional Vision, Proc. of Mechatronics 2002, pp. 413-422, Netherlands.

Roberti, F., J.M. Toibero, R. Carelli and R. Vassallo (2007). Stable formation control for a team of wheeled mobile robots, Reunión de Trabajo en Procesamiento de la Información y Control, Rio Gallegos, Argentina.

Soria, C., L. Pari, R. Carelli and J.M. Sebastian (2007). Homography-Based Tracking Control for Mobile Robot, Proc. of IEEE International Symposium on Intelligent Signal Processing, pp. 1-6, Alcalá de Henares, Spain.

Sturm, P. (2002). Mixing Catadioptric and Perspective Cameras, Proc. of IEEE Workshop on Omnidirectional Vision, Copenhagen, Denmark.

Svoboda, T., T. Pajdla and V. Hlavac (1998). Central panoramic cameras: Geometry and design, Proc. of Computer Vision Winter Workshop, pp. 120-133, Gozd Martuljek, Slovenia.

Toibero, J.M., F. Roberti, R. Carelli and P. Fiorini (2008). Hybrid Formation Control for Non-Holonomic Wheeled Mobile Robots, LNCIS: Recent Progress in Robotics; Viable Robotic Service to Human, 370, pp. 21-34. Springer

Vassallo, R., A. Franca and H. Schneebeli (2005). Detecção de obstáculos através de um Fluxo Óptico Padrão obtido a partir de imagens omnidirecionais, Proc. of Latin America IEEE Robotics Symp. (SBAI/IEEE-LARS), São Luis,Brasil.

Yagi, Y., (1999). Omnidirectional sensing and its applications, IEICE Trans.on Inf&Syst, E82D(3):568-579.

Zhu, Z., D.R. Karuppiah, E.M. Riseman and A.R. Hanson. (2004). Keeping smart, omnidirectional eyes on you, IEEE Rob&Aut Magazine, 11(3):67-78.
