A Simplified Control for Robot Convoy

Pham Duy Hung, Tran Quang Vinh, and Ngo Trung Dung



Abstract— This paper presents a simplified control for a robot convoy. The control is a multi-level “bang-bang” control derived from the integration of a Sensory Input Map (SIM) and a Control Output Map (COM). We demonstrate that the robots are able to follow the leader while keeping a safe constant distance between them. The developed control is validated in simulation and tested with up to five mini-robots.

I. INTRODUCTION

Robot convoy driving is an interesting research topic in multi-robot systems because it has potential applications for military purposes, e.g., a leader-following (LF) convoy of High Mobility Multipurpose Wheeled Vehicles (HMMWVs) has been conducted by the US Army [1], and for civilian purposes, e.g., intelligent product transportation with Automatic Guided Vehicles (AGVs) [2]. A robot convoy can be seen as a special case of platooning or formation control of multi-robot systems [3], [4]. Several convoy driving models are reviewed in [5]: the Gazis-Herman-Rothery model, the safety-distance or collision-avoidance model, the linear model, the psychophysical or action-point model, and the fuzzy-logic-based model. In this paper, the robots are controlled so as to follow the leading robot while keeping a safe constant distance between them. Figure 1 shows a convoy of N robots in the Cartesian coordinate system. The i-th robot is denoted by R_i, where i = 0, ..., N-1, and the pair (x_i, y_i) gives the coordinates of robot R_i with reference to the Cartesian coordinate system.

Manuscript received May 14, 2012. This work was supported by the Project of Teaching and Research Improvement Grant, No. 4328 (TRIG).

Pham Duy Hung and Tran Quang Vinh are with the Faculty of Electronic and Telecommunication, University of Engineering and Technology (UET-VNU), Hanoi, Vietnam (e-mail: [email protected]).

Ngo Trung Dung was with Aalborg University, Denmark. He is now with the Faculty of Science, University of Brunei Darussalam (UBD), Bandar Seri Begawan, Brunei (e-mail: [email protected]).

Previous works have designed many controllers to drive a convoy. In [6], a theoretical framework for convoy control of wheeled mobile robots is presented. The control strategy is derived from guidance laws based on geometrical rules. These guidance laws are used to issue decentralized control laws for the angular and linear velocities. The control law for the orientation angle is directly derived from the guidance law equations after elaboration of the kinematics equations between successive robots, while the control law for the linear velocity aims to keep a constant distance between robots. In [7], the authors discussed how the position and motion of the other robots in the convoy can be measured by using visual indication and/or communication between the robots. Two leader-follower models are addressed: following without communication and following with exchange of orientation angles between the robots. They showed that the performance of the convoy is enhanced if the leader is able to communicate its intended orientation before it starts moving. Another multi-robot convoying behavior utilizing neuro-fuzzy control is presented in [8]. The NiF-T architecture has been applied to control the speed of the following robot. The follower pursues the leader at a close distance with instant changes of its velocity and stops at the desired distance. More interestingly, the follower can move backward when the leader reverses. In [9], a control solution using a Mamdani fuzzy controller for guidance of robot units in a convoy, even along non-linear trajectories, is presented. Each follower unit implements a decentralized fuzzy control considering several sources of information: the motion state of its own unit and of the preceding one, as well as the trajectory coordinates taken by the leader. The controller has been designed so that the linear and angular velocities are calculated independently. An adaptive non-linear control of the following vehicle that allows the follower to track the trajectory of the leader in the presence of unknown leader linear and angular velocities is shown in [10], together with a mathematical model of a two-vehicle convoy.

In this paper, we develop a velocity-pursuit-based control that reduces complexity by using a “bang-bang” control model. The control is simply derived from the integration of a Sensory Input Map (SIM) and a Control Output Map (COM), which works in switching modes to maintain the distance between the follower and the leader.

This paper is organized as follows. Section II describes the modeling and control of the robot convoy. The hardware and software of the mini-robots used for the experiments are presented in Section III. Experimental results are addressed and analyzed in Section IV. Finally, Section V concludes the paper.

Fig. 1. Robotic convoy in the Cartesian coordinate system

II. MODELING AND CONTROL OF ROBOT CONVOY

In [6], the authors presented a kinematics model for the robot convoy using velocity pursuit for constant distance maintenance between robots, as follows.

$$v_{i,i+1} = v_i \cos(\theta_i - \lambda_{i,i+1}) - v_{i+1} \cos(\theta_{i+1} - \lambda_{i,i+1}) \qquad (1)$$

$$\theta_{i+1} = \delta_{i+1} + \lambda_{i,i+1} \qquad (2)$$

Equation (1) represents the relative linear velocity v_{i,i+1} of the leading robot R_i and the following robot R_{i+1} along the line of sight L_{i,i+1}, which is the virtual straight line starting from robot R_{i+1} and directed toward robot R_i. Equation (2) represents the orientation angle θ_{i+1} of robot R_{i+1} with respect to the positive x-axis. δ_{i+1} denotes the angle between the line of sight and the velocity vector of robot R_{i+1}, and λ_{i,i+1} is the angle between the positive x-axis and the line of sight, called the line-of-sight angle (see figure 2).
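As a concrete illustration of the line-of-sight geometry in equations (1) and (2), the following Python sketch (ours, not part of the original work) computes λ_{i,i+1}, δ_{i+1}, and the relative velocity along the line of sight from two planar poses; the pose tuple (x, y, θ, v) is an assumed representation.

```python
import math

def line_of_sight(leader, follower):
    """Line-of-sight quantities for a leader/follower pair.

    Each robot is given as an assumed tuple (x, y, theta, v), with theta the
    heading w.r.t. the positive x-axis and v the linear speed.
    Returns (lam, delta, v_rel): the line-of-sight angle, the angle between
    the follower's velocity vector and the line of sight, and the relative
    velocity along the line of sight as in equation (1).
    """
    xi, yi, thi, vi = leader
    xj, yj, thj, vj = follower
    lam = math.atan2(yi - yj, xi - xj)   # line of sight points from follower to leader
    delta = thj - lam                    # equation (2) rearranged
    v_rel = vi * math.cos(thi - lam) - vj * math.cos(thj - lam)
    return lam, delta, v_rel

# Example: leader 50 cm ahead of the follower, both heading along +x.
print(line_of_sight((0.5, 0.0, 0.0, 0.20), (0.0, 0.0, 0.0, 0.16)))
```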

When the robots are controlled based on the velocity pursuit guidance law as in [6], the orientation angle of R_{i+1} is given by:

$$\theta_{i+1} = \lambda_{i,i+1} \qquad (3)$$

Thus, we have:

$$\delta_{i+1} = \theta_{i+1} - \lambda_{i,i+1} = 0 \qquad (4)$$

$$v_{i,i+1} = v_i \cos(\theta_i - \lambda_{i,i+1}) - v_{i+1} \qquad (5)$$

Two successive robots must always maintain their distance d, so the linear velocity of the following robot is set proportionally to the relative distance. Thus, we can represent v_{i+1} as a function of the distance d:

$$v_{i+1} = f_v(d) \qquad (6)$$

The relative angle is usually determined by the positions of the sensors and the relative distance. To build a map relating the relative angle to the sensor readings, we measured the relative distance at each sensor position. Equation (3) can then be represented as a function of the sensor status s:

$$\lambda_{i,i+1} = f_\lambda(s) \qquad (7)$$

Graphs of the functions f_v(d) and f_λ(s) are shown in figure 3. Figure 4 illustrates the robot equipped with three ranging sensors.
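To make the shapes in figure 3 concrete, here is a rough Python sketch of f_v(d) and f_λ(s) as piecewise-constant (multi-level “bang-bang”) functions, using the distances, duty cycles, angles, and sensor patterns listed later in Section III.B; how intermediate distances are binned and the index pairing s_i → λ_i are our assumptions.

```python
def f_v(d_cm):
    """Speed map f_v(d): the farther the leader, the higher the follower's
    PWM duty cycle (threshold values taken from Section III.B)."""
    if d_cm <= 20:
        return 40   # % duty cycle
    elif d_cm <= 30:
        return 50
    elif d_cm <= 40:
        return 60
    else:
        return 70

def f_lambda(s):
    """Angle map f_lambda(s): sensor-status pattern -> line-of-sight angle in
    degrees ('m' marks the sensor reporting the minimum distance)."""
    table = {'001': 60, '01m1': 45, '011m': 30, '010': 0,
             '11m0': -30, '1m10': -45, '100': -60}
    return table[s]
```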

In short, the active area of each robot is represented by the set of distance values and combinations of sensor statuses, called the Sensory Input Map (SIM), as shown in figure 5:

$$SIM = \begin{pmatrix}
S_0 d_0 & S_0 d_1 & \cdots & S_0 d_n \\
S_1 d_0 & S_1 d_1 & \cdots & S_1 d_n \\
\vdots & \vdots & \ddots & \vdots \\
S_n d_0 & S_n d_1 & \cdots & S_n d_n
\end{pmatrix} \qquad (8)$$

where S_i, for i = 0, ..., n, is a combination of sensor statuses and d_i, for i = 0, ..., n, is the distance from the robot to the detected object. For instance, if the robot is equipped with three distance-measuring sensors, there are seven sensor status combinations.

By using the pair of functions f_v(d) and f_λ(s), we estimate the output values of the look-up table for the “bang-bang” control corresponding to the input values. The set of control output values forms a Control Output Map (COM), given in (9).

Fig. 2. Geometry of two successive robots

Fig. 3. Graphs of the functions: (a) f_v(d) and (b) f_λ(s)

Fig. 4. The relative angle is determined by the combination of sensor statuses: (a) λ_{i,i+1} = 30°, (b) λ_{i,i+1} = 15°, (c) λ_{i,i+1} = 0°, (d) λ_{i,i+1} = -15°, (e) λ_{i,i+1} = -30°

Fig. 5. Sensory Input Map of the robot

$$COM = \begin{pmatrix}
v_{00} & v_{01} & \cdots & v_{0n} \\
v_{10} & v_{11} & \cdots & v_{1n} \\
\vdots & \vdots & \ddots & \vdots \\
v_{n0} & v_{n1} & \cdots & v_{nn}
\end{pmatrix} \qquad (9)$$

With these two maps, the control model of the robot convoy can be represented as shown in figure 6.
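To illustrate the scheme of figure 6, the sketch below is a minimal Python rendering of the sense-lookup-actuate loop; build_lookup, control_step, and the read_sensors/drive callbacks are hypothetical names, not part of the original AVR firmware.

```python
def build_lookup(sim_rows, com_rows):
    """Pair the maps entry by entry: SIM[i][j] -> COM[i][j],
    i.e., equation (8) mapped onto equation (9)."""
    return {sim: com
            for sim_row, com_row in zip(sim_rows, com_rows)
            for sim, com in zip(sim_row, com_row)}

def control_step(lookup, read_sensors, drive):
    """One cycle of the multi-level 'bang-bang' controller."""
    status, distance = read_sensors()          # SIM input (s, d)
    angle, speed = lookup[(status, distance)]  # COM output (lambda, v)
    drive(angle, speed)                        # switch to the selected mode
```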

III. ROBOT CONVOY WITH MINI-ROBOTS

A. Configuration of Mini-Robots

In this paper, we use five mini-robots, whose configuration schematic is shown in figure 7, for robot convoy driving.

Each mini-robot is equipped with four ranging infra-red sensors, as depicted in figure 9. The ranging sensors, GP2D12, provide measurements over an effective range from 0 to 80 cm through an analog output. The analog signal of each sensor is converted to a digital value by the analog-to-digital converter (ADC) of the AVR microcontroller using an averaging filter algorithm; the sensor characteristic has been calibrated and measured as depicted in figure 8.
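A small Python sketch of the sensing pipeline just described: average several ADC samples and convert the result to a distance with the linear fit of figure 8. The read_adc callback and the fit coefficients A and B are hypothetical placeholders, since the paper does not list them (the real implementation runs in C on the AVR).

```python
def average_filter(read_adc, n=8):
    """Return the mean of n raw ADC samples from one GP2D12 channel."""
    return sum(read_adc() for _ in range(n)) / n

def adc_to_distance_cm(adc_value, A=-0.1, B=80.0):
    """Linear calibration d = A * adc + B (placeholder coefficients)."""
    return A * adc_value + B
```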

The SIM is built from the three front-facing sensors of the mini-robot, installed with a 60° tilt toward the front direction, as illustrated in figure 9. The mini-robot samples the distance values of the sensors at a frequency of 100 Hz.

The AVR microcontroller maps a value in the SIM to one of the control output values in the COM in order to determine the control signal. The output of the controller is sent to an L298N motor driver to steer the two differential wheels of the mini-robot. The velocity of the mini-robot is managed by a PWM module with a frequency of 600 Hz.
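The paper does not specify how a COM entry (angle, duty cycle) is converted into the two wheel commands; one plausible scheme, given here purely as an assumption, is to split the base duty cycle between the wheels in proportion to the desired steering angle.

```python
def com_to_wheels(angle_deg, duty_pct, K=0.5):
    """Split the base duty cycle between the wheels to steer toward the
    line-of-sight angle. The sign convention (positive angle = leader to
    the left) and the steering gain K are assumptions."""
    correction = K * duty_pct * (angle_deg / 60.0)   # normalize by the maximum angle
    left = max(0.0, min(100.0, duty_pct - correction))
    right = max(0.0, min(100.0, duty_pct + correction))
    return left, right

# Example: leader about 30 degrees to the left at medium range.
print(com_to_wheels(30, 50))   # -> (37.5, 62.5)
```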

B. The Integration of the Sensory Input Map (SIM) and the Control Output Map (COM)

In this section, we describe the setup of the robot convoy with the mini-robots in an indoor environment. The mini-robots are controlled to follow the leading robot while keeping a safe distance from each other, without communication. The robots are able to avoid obstacles when driving in the convoy.

In the experiments, the followers use the SIM to determine the control output in the COM at each time step, under the constraints that the orientation angle of the leader varies slowly and the linear velocity of the leader is constant.

In the experiments, we use a SIM with d_0 = 20 cm, d_1 = 30 cm, d_2 = 40 cm, d_3 = 50 cm and seven combinations of sensor statuses: s_0 = 001, s_1 = 01m1, s_2 = 011m, s_3 = 010, s_4 = 11m0, s_5 = 1m10, s_6 = 100, where m marks the sensor with the minimum distance value. The COM uses motor speeds v_1 = 40%, v_2 = 50%, v_3 = 60%, v_4 = 70% duty cycle of the PWM and angles λ_0 = 60°, λ_1 = 45°, λ_2 = 30°, λ_3 = 0°, λ_4 = -30°, λ_5 = -45°, λ_6 = -60°. The SIM and COM can be represented as follows:

$$SIM = \begin{pmatrix}
(001,20) & (001,30) & (001,40) & (001,50) \\
(01m1,20) & (01m1,30) & (01m1,40) & (01m1,50) \\
(011m,20) & (011m,30) & (011m,40) & (011m,50) \\
(010,20) & (010,30) & (010,40) & (010,50) \\
(11m0,20) & (11m0,30) & (11m0,40) & (11m0,50) \\
(1m10,20) & (1m10,30) & (1m10,40) & (1m10,50) \\
(100,20) & (100,30) & (100,40) & (100,50)
\end{pmatrix}$$

and

Fig. 6. Schematic of the control model for the robot convoy

Fig. 7. Configuration schematic of the mini-robot

Fig. 8. Characteristic of the infra-red sensor: digital signal samples are fitted with a linear function of the relative distance

Fig. 9. Front side of the mini-robot

$$COM = \begin{pmatrix}
(60^\circ,40) & (60^\circ,50) & (60^\circ,60) & (60^\circ,70) \\
(45^\circ,40) & (45^\circ,50) & (45^\circ,60) & (45^\circ,70) \\
(30^\circ,40) & (30^\circ,50) & (30^\circ,60) & (30^\circ,70) \\
(0^\circ,40) & (0^\circ,50) & (0^\circ,60) & (0^\circ,70) \\
(-30^\circ,40) & (-30^\circ,50) & (-30^\circ,60) & (-30^\circ,70) \\
(-45^\circ,40) & (-45^\circ,50) & (-45^\circ,60) & (-45^\circ,70) \\
(-60^\circ,40) & (-60^\circ,50) & (-60^\circ,60) & (-60^\circ,70)
\end{pmatrix}$$

The control is automatically derived as a hierarchical multi-level “bang-bang” control by linearly mapping a value of the SIM into a value of the COM. With reference to the control architectures of mobile robots, e.g., reactive control, deliberative control, behavior-based control, and learning control, this control is extremely simple because it works as a multi-level reactive control whose switching modes are issued by the Sensory Input Map.
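For reference, the concrete SIM and COM above can be collapsed into a single look-up table; the Python sketch below assumes the row/column correspondence SIM[i][j] → COM[i][j] implied by the two matrices.

```python
STATUSES  = ['001', '01m1', '011m', '010', '11m0', '1m10', '100']
ANGLES    = [60, 45, 30, 0, -30, -45, -60]   # degrees
DISTANCES = [20, 30, 40, 50]                 # cm
DUTIES    = [40, 50, 60, 70]                 # % PWM duty cycle

# (sensor status, distance) -> (angle, duty cycle)
LOOKUP = {(s, d): (a, v)
          for s, a in zip(STATUSES, ANGLES)
          for d, v in zip(DISTANCES, DUTIES)}

# Example: only the middle sensor sees the leader at about 30 cm.
print(LOOKUP[('010', 30)])   # -> (0, 50): go straight at 50% duty cycle
```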

IV. EXPERIMENTS AND RESULTS

A. Simulation

We simulated the mobile robots as differential-drive wheeled vehicles with three sensor beams (in a flower-like shape) on the front side of each robot. The angular and linear velocities are controlled through the rate equations for the motor speeds. Experiments of the robot convoy using the multi-level “bang-bang” control are carried out with up to five robots, as shown in figure 10. The leading robot is set on a pre-defined circular trajectory as the reference for the control verification.
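A minimal sketch of the differential-drive update one could use to reproduce this kind of simulation; the wheel base and time step are arbitrary choices, not values taken from the paper.

```python
import math

WB, DT = 0.10, 0.01   # assumed wheel base [m] and time step [s]

def step(x, y, theta, v_left, v_right):
    """Advance one robot pose by DT given left/right wheel speeds [m/s]."""
    v = 0.5 * (v_left + v_right)      # linear velocity
    omega = (v_right - v_left) / WB   # angular velocity
    return (x + v * math.cos(theta) * DT,
            y + v * math.sin(theta) * DT,
            theta + omega * DT)

# Example: turn gently to the left from the origin.
print(step(0.0, 0.0, 0.0, 0.14, 0.18))
```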

Initially, the leading robot is set at a reasonable speed, and we ran experiments with one up to four followers. The paths traced by the following robots show that they are able to pursue the leading robot accurately (see figure 11). However, when we suddenly speeded up the leading robot, the convoy could not maintain itself.

The results in figures 12 and 13 show that the relative orientation between the leading and following robots varies within an average range of 25°. Because this variation is less than half of the 60° angle between the side sensor and the front sensor, the robots in the convoy stably maintain their formation.

B. Real Experiments

We repeated the simulation experiments with five real mini-robots. The multi-level “bang-bang” control, with the parameters described in Section III.B, was installed on all robots. We conducted four types of experiments, as depicted in figure 14, and executed the robots several times for observation and measurement.

With only one follower, the convoy works quite well, as the follower can catch up with the leader even when the leader turns randomly at a speed of 16 cm/s (at 40% duty cycle of the PWM) to avoid static obstacles. The stability of the control gradually decreases when we add more following robots to the convoy or when we make the leader drive faster. When the speed of the leader is greater than 24 cm/s (at 60% duty cycle of the PWM), the formation cannot be maintained.

Fig. 10. Simulation of the robot convoy: (a) with two robots, (b) with three robots, (c) with four robots, and (d) with five robots

Fig. 11. The traced paths show that the following robots can follow their leader: (a) two robots, (b) three robots, (c) four robots, and (d) five robots

Fig. 12. Orientation of the robots in the convoy: (a) two robots, (b) three robots, (c) four robots, and (d) five robots

Fig. 13. Offset of the relative orientation among robots

Fig. 14. Real experiments of the robot convoy: (a) with two robots, (b) with three robots, (c) with four robots, and (d) with five robots

C. Results and Analyses

We have demonstrated the robot convoy under the following constraints:

- No dynamic obstacles in the environment.
- The leading robot moves gently according to the speed of the followers.

The results of the simulation and real experiments 1 in the test-bed show that the multi-level “bang-bang” control can be used for a robot convoy with simple ranging sensors, without the need for communication between the robots. Using only ranging sensors, the robots can maintain the distance between the leader and up to five followers at a maximum speed of 24 cm/s.

1 Video clips of the simulated and real experiments can be seen on the website: www.morelab.org

However, through the experiments, we found the following drawbacks of the “bang-bang” control in the robot convoy:

- The control cannot fully adapt to the mobility of the leader, especially when the convoy works in dynamic environments. Even though we use a multi-level control with auto-switching modes, it is not sufficient to handle unpredictable maneuvers of the leading robot.

- The control is only stable if the leader moves gently and slowly. Once the mobility of the leader changes rapidly or its magnitude varies over a large range, the control cannot properly maintain the distance between robots. This indicates that the control is not fully adaptive to changes in the system.

V. CONCLUSION

In this paper, we addressed a simplified control for the robot convoy. The multi-level “bang-bang” control is directly derived from the velocity pursuit model using the Sensory Input Map (SIM) and the Control Output Map (COM). The following robots can maintain the distance to the leading robot without mutual communication. While related works on robot convoys use more complex sensors, such as vision sensors, and communication mechanisms [7], [8], our robots are equipped only with ranging sensors and use a simplified multi-level “bang-bang” control. We aimed at demonstrating that it is possible to build a robot convoy, a complicated system, using only ranging sensors and a simple control. The convoy is also used for education.

ACKNOWLEDGMENT

This work has been supported by Project B of the Teaching and Research Improvement Grant, No. 4328 (TRIG).

REFERENCES

[1] K. C. Cheok, G. E. Smid, J. L. Overholt, and P. Lescoe, "A Fuzzy Logic Intelligent Control System Architecture for an Autonomous Leader-Following Vehicle," in Proceedings of the 1997 American Control Conference, vol. 1, pp. 522-526.

[2] F. L. Almeida, B. M. Terra, P. A. Dias, and G. M. Goncalves, "Transport with Automatic Guided Vehicles in the Factory of the Future," in Proceedings of the 15th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA 2010), pp. 1-4.

[3] F. Belkhouche, K. Bendjilali, and B. Belkhouche, "Robot Formation Modeling and Control Based on the Relative Kinematics Equation," International Journal of Robotics and Automation, DOI: 10.2316/Journal.206.2009.1.206-3220.

[4] L. E. Barnes, "A Potential Field Based Formation Control Methodology for Robot Swarms," Theses and Dissertations, University of South Florida, 2008.

[5] M. Brackstone and M. McDonald, "Car-Following: A Historical Review," Transportation Research Part F, vol. 2, pp. 181-196, 1999.

[6] F. Belkhouche and B. Belkhouche, "Modeling and Controlling a Robotic Convoy Using Guidance Laws Strategies," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 35, no. 4, August 2005.

[7] G. Dudek, M. Jenkin, E. Milios, and D. Wilkes, "Experiments in Sensing and Communication for Robot Convoy Navigation," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems: Human Robot Interaction and Cooperative Robots, 1995.

[8] K. C. Ng and M. M. Trivedi, "Multi-robot Convoying Using Neuro-Fuzzy Control," in Proceedings of the 13th International Conference on Pattern Recognition, 1996.

[9] C. Santos, F. Espinosa, D. Pizarro, F. Valdés, E. Santiso, and I. Díaz, "Fuzzy Decentralized Control for Guidance of a Convoy of Robots in Non-Linear Trajectories," in IEEE Conference on Emerging Technologies and Factory Automation (ETFA), 2010.

[10] P. Petrov, "Nonlinear Adaptive Control of a Two-Vehicle Convoy," The Open Cybernetics and Systemics Journal, vol. 3, pp. 70-78, 2009.