Behavior Control Architecture for a Life-Like Creature: “The Robotaurus”

I. Lafoz, A. Mora, D. Rodríguez-Losada, M. Hernando, A. Barrientos Universidad Politécnica de Madrid

José Gutiérrez Abascal 2, 28006 Madrid, Spain

E-mail: [email protected]

Abstract

A novel application of a life-like robot is presented in this paper. The "Robotaurus", a vision-controlled, bull-like robotic creature, has been built and tested in an educational contest based on bullfighting, a traditional Spanish spectacle. Problems from several different research fields had to be solved to develop the whole system. The interaction capabilities obtained through a behavior control architecture have been borne out by the "ole!" cheering of the audience.

1 Introduction

The main purpose of this paper is to describe a behavior control architecture that allows a life-like robot to imitate the behavior of a real creature. This control architecture has been implemented in a robot, the Robotaurus, that imitates the form and behavior of a fighting bull. Life-like robot behavior is a broad research field with several interesting approaches: the behaviors of the pet robot AIBO [1][2], the fish-like microrobot described in [3], creatures with qualities of other aquatic animals [4], and some legged insect approaches [5].

Another objective of this work is to study the interaction of this system with other robots and with human beings. There is increasing interest in studying the influence of robots and their psychological effects on people [6]. The behavior control architecture described in this paper has been designed so that a human observer can recognize actual behaviors in the bull-robot.

To study the interaction with other robots and with the audience, a contest has been organized at Universidad Politécnica de Madrid. The students must build robots to take part in the contest: the bullfighter-robots, the robots that have to test the behavior of the Robotaurus. The motivation for this contest was taken from a Spanish spectacle, the "corrida de toros", in which a bullfighter has to confront a bull with the sole help of a small piece of red fabric. The students have to build the bullfighter-robots and confront them with the bull-robot. The contest is a great opportunity for the students to apply some of the theoretical knowledge learnt during their studies. The benefits of this kind of contest are clear [7][8], and it is a good way to improve learning.

There are many kinds of prototype robot contests, but this one is based on an original idea: the participants have to fight against a life-like adversary.

Entertainment is an important component of the contest. In this sense, the robots built for it can be considered entertainment robots [9]. This is another interesting research field aimed at introducing robots into people's daily life.

The contest is a robotic version of the real spectacle. The bullfighter-robot wears a red balloon attached around its body and has to prevent the bull-robot from pricking the balloon with its horns. This is the most basic behavior of the Robotaurus: to pursue and attack red things in motion. The longer the bullfighter-robot survives in the arena with the balloon intact, the more points it wins. The bullfighter-robot can also carry a red cape to fool the Robotaurus, and it can stick banderillas into the back of the bull-robot. For each banderilla correctly placed, the bullfighter-robot scores 15 additional points. At the end of the fight between the bull-robot and the bullfighter-robot, the public can request the ears and the tail of the bull-robot by shaking white handkerchiefs. A panel decides whether the behavior of the bullfighter deserves these trophies: the first ear counts 5 points, the second ear 10 points and the tail 20 points. The criteria for awarding these points are:

- The worst bullfighter is the one that runs away from the bull-robot.

- The best bullfighter is the one that correctly uses the red cape to fool the Robotaurus.

More information about the Robotaurus contest can be found at www.disam.upm.es/cybertech (in Spanish).

This paper is structured as follows. Section 2 presents a global description of the system. Section 3 describes the bull-robot control architecture and its elements, which are detailed in Sections 4, 5 and 6. Section 6 in particular presents the bull-robot behavior control that generates the life-like behavior of the robot. Conclusions are drawn in Section 7.

2 System Description

As in any other contest, the challenge is defined by a set of specifications. These specifications give a good description of the kind of bout that is carried out. Four different elements make up the system: the bullring, the bull, the bullfighter, and the communications and control system.

2.1 The Bullring

The whole bout takes place on a platform like the one shown in Figure 1; in bullfighting terms, it constitutes the bullring. It is a circular platform 4 meters in diameter. The surface is divided into two zones: a white circular zone 3 meters in diameter and an external black ring. Both the bull and the bullfighter must carry out their movements on the white circle; any action carried out over the black zone is not evaluated.

All the movements of the bull and the bullfighter are captured by the overall camera placed on top of a four-meter-high structure. By means of a color code and artificial vision algorithms, the position of the bullfighter and the position and orientation of the bull are extracted. This information is used for controlling the bull and is sent to the bullfighters through a wireless serial link.

Figure 1. System Layout

2.2 The Bull

A commercial electric toy has been used as the base of the bull robot. The electronics needed to control the bull remotely have been added to this platform. It has four independent motors, one driving each of the four wheels. The body is divided into two parts linked flexibly by a spring. This physical configuration is important for achieving more realistic movements of the bull.

Figure 2. The Bull

As shown in Figure 2, a bull head that slyly hides a camera has been built on the forward part. The image obtained by this camera is mainly used by the control system to decide the attack movements of the bull robot; in this way the bullfighter can mislead the bull by using the red cape. These images are transmitted through a radio link (UHF channel 12) to a computer that carries out the image processing.

To ease the development of the control algorithms, all the image processing and control computation are carried out on personal computers. The bull is controlled like any RC model car, but the radio transmitter is driven by the control computer. This is made possible by the teacher/pupil mode of some radio transmitters. The computer transmits the commands to a microcontroller that emulates the internal signals of a radio transmitter (the pupil, at 35.100 MHz) connected to a real radio transmitter (the teacher) that directly controls the speed of the bull motors. The teacher radio transmitter supervises the commands received from the microcontroller, allowing a switch to manual mode if needed. Therefore the bull carries a radio receiver, controllers and drivers for the motors, a wireless video link for the onboard camera, and a power supply system (30 minutes of autonomy are obtained with a 7.2 V, 2400 mAh NiCd battery pack).
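The paper does not specify the byte format used between the control computer and the microcontroller that feeds the teacher transmitter; the following is only a minimal C++ sketch of how a pair of left/right control values might be pushed out over such a link (the sync byte, the 3-byte frame and the writeSerial helper are assumptions made for illustration).

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Hypothetical helper: on the real system this would write to the serial port
// connected to the microcontroller; here it just prints the bytes.
void writeSerial(const std::uint8_t* data, std::size_t len) {
    for (std::size_t i = 0; i < len; ++i)
        std::printf("%02X ", static_cast<unsigned>(data[i]));
    std::printf("\n");
}

// Send one pair of open-loop control values (0..255) for the left and right
// wheel sides. The 0xAA sync byte and the frame layout are assumptions.
void sendWheelCommands(std::uint8_t left, std::uint8_t right) {
    const std::uint8_t frame[3] = {0xAA, left, right};
    writeSerial(frame, sizeof(frame));
}

int main() {
    sendWheelCommands(128, 128);  // mid-scale values, roughly "no motion"
    return 0;
}
```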

2.3 The Bullfighter

Figure 3 shows one of the robotic bullfighters developed by the participants. The bullfighter must comply with a basic set of requirements: its dimensions cannot exceed 20 x 30 x 25 cm, and the robot must have a smooth surface around its body to which a red balloon can be tied.

Figure 3. The Bullfighter


The bullfighter can use the red cape to deceive the bull, and an additional arm to thrust the banderillas into the bull. The whole system must be fully autonomous; it may only use an external serial port to connect the radio receiver that receives the data obtained from the image processing.

The whole system is represented schematically in Figure 4, which shows the nature of the communications among the different elements.

Figure 4. Relation among elements

3 Control Architecture

Once the physical platform was available, the biggest challenge was to obtain realistic behavior from the bull. A significant effort has been made in the implementation of the high-level control of the robot to achieve a simulated animal intelligence.

The control is distributed, running on two PCs because of hardware and image processing requirements.

All the software has been written in C++, with Windows 2000 as the operating system. Modularity has been a key issue for fast and easy simultaneous development by four programmers.

GUIs have been developed to provide easy control and supervision of the system: setting image processing, bull "personality" and low-level controller parameters, and visualizing the video, the image processing results, the control status, etc.

Figure 5 shows a simplified scheme of the implemented control.

4 Low Level Control

This controller directly interfaces with the radio transmitter via the microcontroller unit, so the output signals from this module to the robot are open-loop speed references for the left-side and right-side wheels.

It is important to highlight that there is no feedback at this stage, so all the references given to the robot by the low-level controller are open-loop references.

It was decided to control the bull with linear advance and turning speed references. An easy and functional interface was required for the development of the basic behaviors, so that these behaviors could produce correct movements without dealing with low-level control problems (i.e. they issue the commands and "know" that the robot is going to perform approximately the required movement).
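As a rough illustration of that separation (the class and method names below are ours, not taken from the paper), the basic behaviors only need to see the low-level controller through an interface as small as this sketch:

```cpp
// Minimal sketch with assumed names: a basic behavior only requests an advance
// speed and a turning speed; how these open-loop references are turned into
// left/right wheel commands is entirely the low-level controller's job.
class LowLevelController {
public:
    virtual ~LowLevelController() = default;
    virtual void setSpeedReferences(double advance, double turn) = 0;
};
```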

Figure 5. Control architecture.

There are several problems in developing a linear open-loop controller:

- The four-wheel differential-drive locomotion system is extremely nonlinear, due to the large slippage its movements require.

- The gear reduction ratio is low, so slow movements are difficult to obtain because of static-dynamic friction.

- The drive behaves differently for forward and backward movements.

- The left and right sides of the bull behave differently, due to differences in motors, mechanics and electronics.

- Different surfaces are used, so friction parameters change from one place (lab floor) to another (bullring).

A discrete control was implemented to solve these problems. Two Motor Control Matrices (MCMs) were defined, one for the left side of the robot and another for the right side. The inputs to these MCMs are the advancing speed (column) and the turning speed (row). Each element of the MCMs is a control value directly transferred to the radio transmitter.

Figure 6. Left and right side MCMs.

The control values are discretized rather than interpolated between the fixed matrix values. It was noticed that interpolating between two close commands results in a movement that is not the interpolation of the two respective movements. For example, the interpolation between a command that drives the robot in a straight line and another that makes it describe a large-radius (almost straight) circle may stop the robot or make it turn without advancing.

Figure 6 shows the control maps of the left and right sides of the robot for the actual bullring surface. The nonlinearities described above can be observed.

The MCM values are experimentally tuned for each surface and for a medium battery charge, and they are stored in text files so they can be easily loaded and reused.
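A minimal C++ sketch of this discrete control is given below. It assumes a grid size, a speed range and a plain whitespace-separated file layout that the paper does not specify; only the idea of per-side matrices indexed by turning speed (row) and advancing speed (column), read from text files and looked up without interpolation, comes from the text.

```cpp
#include <algorithm>
#include <cmath>
#include <fstream>
#include <string>
#include <vector>

// Sketch of a Motor Control Matrix: rows indexed by turning speed, columns by
// advancing speed, each cell holding the raw value sent to the transmitter.
class MotorControlMatrix {
public:
    // Assumed file layout: rows of whitespace-separated cell values.
    bool loadFromFile(const std::string& path, int rows, int cols) {
        std::ifstream in(path);
        if (!in) return false;
        values_.assign(rows, std::vector<double>(cols, 0.0));
        for (int r = 0; r < rows; ++r)
            for (int c = 0; c < cols; ++c)
                if (!(in >> values_[r][c])) return false;
        return true;
    }

    // Map a (turn, advance) reference to the nearest stored cell. No
    // interpolation: two close commands can produce very different motions.
    double lookup(double turn, double advance,
                  double turnMin, double turnMax,
                  double advMin, double advMax) const {
        const int rows = static_cast<int>(values_.size());
        const int cols = static_cast<int>(values_[0].size());
        const int r = nearestIndex(turn, turnMin, turnMax, rows);
        const int c = nearestIndex(advance, advMin, advMax, cols);
        return values_[r][c];
    }

private:
    static int nearestIndex(double v, double lo, double hi, int n) {
        const double t = (v - lo) / (hi - lo);                    // normalize
        const int i = static_cast<int>(std::lround(t * (n - 1))); // nearest bin
        return std::max(0, std::min(n - 1, i));                   // clamp
    }

    std::vector<std::vector<double>> values_;
};
```

In the real system one such matrix would be loaded for the left side and another for the right side, for the surface currently in use.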

5 Vision System

The vision system is the only way to close the control loop, because the Robotaurus does not have any other sensors. This vision system is divided into two parts: the overall vision system and the onboard one.

5.1 The overall vision system

The overall vision system consists of a fixed camera over the arena that provides the position and orientation of the bull-robot and the position of the bullfighter-robot. This system detects two color marks on the bull-robot (a blue one on the head and a green one on the back), which easily allows its position and orientation to be determined. The bullfighter position is obtained from a red mark on top of it.
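The paper does not give the formula it uses, but a natural way to recover the bull pose from the two mark centroids is sketched below; the struct names and the choice of the midpoint as the position are assumptions.

```cpp
#include <cmath>

// Once the image processing has produced the centroids of the blue (head) and
// green (back) marks, the bull pose can be taken as the midpoint of the two
// marks plus the direction from the back mark to the head mark.
struct Point { double x, y; };
struct Pose  { double x, y, theta; };

Pose bullPoseFromMarks(const Point& blueHead, const Point& greenBack) {
    Pose p;
    p.x = 0.5 * (blueHead.x + greenBack.x);
    p.y = 0.5 * (blueHead.y + greenBack.y);
    p.theta = std::atan2(blueHead.y - greenBack.y,
                         blueHead.x - greenBack.x);  // heading of the bull
    return p;
}
```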

A robust color detection algorithm has been developed in order to deal with the changing lighting conditions.

The image processing algorithm performs two types of actions: actions for achieving a robust color thresholding, and actions for correctly computing the center of gravity of each color.

Firstly, three thresholds have been fixed in order to reject erroneous pixels (a sketch of these tests is given after the list):

- A threshold on the minimum RGB component, to reject white pixels, in which all three components have high values.

- A threshold on the maximum RGB component, to reject black pixels, in which the red, green and blue components all have low values.

- A threshold on the difference between the largest and the smallest RGB component of each pixel. A difference lower than this threshold means that the pixel does not belong to any of the three basic colors searched for.
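The following C++ sketch applies the three tests above to one RGB pixel. The concrete threshold values are assumptions; only the structure of the tests comes from the paper.

```cpp
#include <algorithm>
#include <cstdint>

struct RGB { std::uint8_t r, g, b; };

// Returns true if the pixel survives the three rejection tests and may belong
// to one of the basic colors. The numeric defaults are illustrative only.
bool isCandidateColorPixel(const RGB& p,
                           int whiteThreshold = 230,  // assumed
                           int blackThreshold = 40,   // assumed
                           int minSpread      = 50) { // assumed
    const int hi = std::max({int(p.r), int(p.g), int(p.b)});
    const int lo = std::min({int(p.r), int(p.g), int(p.b)});
    if (lo > whiteThreshold) return false;  // near-white: all components high
    if (hi < blackThreshold) return false;  // near-black: all components low
    if (hi - lo < minSpread) return false;  // too gray: not a saturated color
    return true;
}
```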

Secondly, in order to obtain the correct positions of the color marks, a modified center-of-gravity computation has been implemented. It rejects isolated pixels of the basic colors by considering the neighborhood of each pixel.
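The paper describes the idea but not the exact neighborhood rule, so the sketch below simply requires a minimum number of like-colored 8-neighbors before a pixel contributes to the center of gravity; the neighborhood size and the minimum count are assumptions.

```cpp
#include <vector>

// "Modified centre of gravity": a pixel classified as one of the basic colors
// only contributes to the centroid if enough of its 8-neighbors share the same
// classification, so isolated pixels are rejected.
struct Centroid { double x = 0.0, y = 0.0; bool valid = false; };

Centroid robustCentroid(const std::vector<std::vector<bool>>& mask,
                        int minNeighbours = 3) {  // assumed minimum count
    const int h = static_cast<int>(mask.size());
    const int w = h ? static_cast<int>(mask[0].size()) : 0;
    double sx = 0.0, sy = 0.0;
    long count = 0;
    for (int y = 1; y + 1 < h; ++y) {
        for (int x = 1; x + 1 < w; ++x) {
            if (!mask[y][x]) continue;
            int n = 0;                         // count same-color neighbors
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    if ((dy || dx) && mask[y + dy][x + dx]) ++n;
            if (n < minNeighbours) continue;   // isolated pixel: rejected
            sx += x; sy += y; ++count;
        }
    }
    Centroid c;
    if (count > 0) { c.x = sx / count; c.y = sy / count; c.valid = true; }
    return c;
}
```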

5.2 The onboard vision system

The onboard vision system consists of a small CMOS color camera mounted in the head of the bull-robot. This camera allows the robot to locate the red objects it sees, but not to identify them. This feature allows a realistic bull behavior, because it makes it possible to deceive the bull. The onboard image processing computer sends all the data to the control computer through a TCP/IP socket connection.

The image analysis is a simple thresholding to detect the red color. Small detected red objects are rejected, which avoids the noise caused by interference on the radio link. A manual adjustment of the threshold is required beforehand.
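A simple sketch of this onboard processing is shown below: a per-pixel red test followed by a minimum-area check so that small red specks (for instance, radio-link noise) are discarded. The thresholds and the minimum area are assumptions; only the "threshold, then reject small detections" structure is described in the paper.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct RGB8 { std::uint8_t r, g, b; };
struct Detection { bool found = false; double x = 0.0, y = 0.0; };

// Threshold the image for red, reject detections smaller than minArea pixels,
// and return the centroid of the remaining red region.
Detection detectRedTarget(const std::vector<std::vector<RGB8>>& img,
                          int redMin = 120, int otherMax = 80,  // assumed
                          long minArea = 50) {                  // assumed
    double sx = 0.0, sy = 0.0;
    long area = 0;
    for (std::size_t y = 0; y < img.size(); ++y) {
        for (std::size_t x = 0; x < img[y].size(); ++x) {
            const RGB8& p = img[y][x];
            if (p.r >= redMin && p.g <= otherMax && p.b <= otherMax) {
                sx += static_cast<double>(x);
                sy += static_cast<double>(y);
                ++area;
            }
        }
    }
    Detection d;
    if (area >= minArea) { d.found = true; d.x = sx / area; d.y = sy / area; }
    return d;
}
```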

6 Behavior Control

The robot behavior control is divided into two layers. The lower layer is composed of several basic behaviors. The higher layer is a supervisor that uses the basic behaviors of the lower layer.

6.1 Basic Behaviors Layer

Several basic behaviors have been implemented in this layer. These behaviors correspond to basic real bull behaviors, such as scratching, going to the bullring center, attacking, etc. Fifteen basic behaviors were developed.

The basic behaviors are:

1. "To go away from the bullfighter"
2. "To nod"
3. "To turn around the bullring"
4. "To scratch"
5. "To back down"
6. "To spin"
7. "To stop"
8. "To pester the bullfighter"
9. "To scan with the overall camera"
10. "To scan and track with the overall camera"
11. "To attack with the overall camera"
12. "To scan with the onboard camera"
13. "To scan and track with the onboard camera"
14. "To attack with the onboard camera"
15. "To enter the bullring"

These basic behaviors send velocity commands to the low-level control. Some of these behaviors do not use the visual feedback, such as: to nod, to scratch, to back down, to spin, to stop. Routines such as to scan, to scan and track, and to attack have two implementations: one using the onboard camera and another using the overall camera. With the onboard camera the Robotaurus acts like a real bull, searching for its target, tracking it and then attacking it. The overall camera is used when it is necessary to know the accurate position of the bullfighter or to go to a specified point of the bullring.

When the onboard camera is used to attack, the bull looks for the bullfighter by spinning. If a red object is detected, the bull runs towards the center of the object.

All the basic behaviors inform the high-level control layer whether the behavior task is active, has finished, or cannot be completed.

A "velocity" parameter has been defined for each basic behavior. This parameter sets the movement speed. For example, a low velocity value for the behavior "To spin" will produce a slow spinning that simulates a tired animal, while a high value makes the bull spin very quickly, looking like an angry and strong animal.
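Putting the status reporting and the velocity parameter together, a basic behavior can be pictured through an interface like the sketch below. The names are ours; the paper only states that behaviors report whether they are active, finished or impossible to complete, and that each one has a velocity parameter.

```cpp
// Sketch with assumed names: a basic behavior is stepped periodically, reports
// its state to the supervisor, and exposes the "velocity" parameter that
// scales how briskly the movement is performed.
enum class BehaviorStatus { Active, Finished, Failed };

class BasicBehavior {
public:
    virtual ~BasicBehavior() = default;
    virtual void start() = 0;
    virtual BehaviorStatus step() = 0;             // called at the control rate
    void setVelocity(double v) { velocity_ = v; }  // low = tired, high = angry
protected:
    double velocity_ = 0.5;
};
```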

6.2 Behavior Supervisor

The supervisor decides which basic behavior is executed at each moment, trying to emulate real bull behavior. The decision is based on the following parameters:

• Time: the bull becomes more aggressive as time elapses.

• Bravery: it indicates the bull's strength.

• Madness: this parameter introduces a crazy or "animal" component into the bull's behavior.

The Robotaurus "personality" is defined by the bravery and madness parameters. With a low bravery value, the bull behavior is calm and not aggressive, and it rarely attacks the bullfighter. High bravery values produce a very aggressive bull that repeatedly attacks the bullfighter. The purpose of the madness parameter is to introduce the unexpected behavior typical of live creatures, like a dog that inexplicably does something and provokes the question: "Why does the dog do that?".

In order to make the bull movement more realistic, several sequences of basic behaviors were defined. Each of these sequences consists of six basic behaviors, each with an assigned completion time. Three types of sequences were created: "aggressive" sequences, "madness" sequences and "calm" sequences. The aggressive sequences are composed of attack basic behaviors. In the "madness" sequences the Robotaurus performs some crazy movements such as spinning or scratching. Finally, the "calm" sequences include behaviors such as scanning and tracking.

A behavior probability function has been defined. This function returns one of three possible behavior values (aggressive, mad or calm), each with a different probability defined by the personality parameters described above. Given the selected type, a sequence is randomly chosen among the sequences of the corresponding type. The sequence selection algorithm is shown in Figure 7.
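The exact mapping from the personality parameters to the three probabilities is not given in the paper, so the following sketch simply uses a weighted random draw in which bravery and elapsed time favor aggressive sequences and madness favors "mad" ones; the weighting itself is an assumption.

```cpp
#include <cstdlib>

enum class SequenceType { Aggressive, Mad, Calm };

// Weighted random choice among the three sequence types. The weights below are
// assumptions; only the dependence on bravery, madness and elapsed time is
// stated in the paper.
SequenceType drawSequenceType(double bravery, double madness, double elapsed) {
    const double wAggressive = bravery * (1.0 + elapsed);  // grows with time
    const double wMad        = madness;
    const double wCalm       = 1.0;
    const double total = wAggressive + wMad + wCalm;
    const double u = total * (std::rand() / (RAND_MAX + 1.0));
    if (u < wAggressive)        return SequenceType::Aggressive;
    if (u < wAggressive + wMad) return SequenceType::Mad;
    return SequenceType::Calm;
}
```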

Figure 7. Supervisor flowchart

To execute a sequence, the supervisor behaves as a state machine. Each state corresponds to the execution of one basic behavior. The elapsed time and the value returned by the basic behavior routines trigger the change to the next state. When a sequence is finished, a new random behavior type is generated by the supervisor and a new sequence is selected and executed.
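This state machine can be sketched as below (types and names are assumptions, reusing the shape of the basic-behavior sketch given earlier): each of the six steps runs until its behavior reports that it has finished or failed, or until its assigned time runs out, and the executor then moves on to the next step.

```cpp
#include <array>
#include <cstddef>
#include <memory>
#include <utility>

// Re-declared here so the sketch stands alone; same shape as the earlier sketch.
enum class BehaviorStatus { Active, Finished, Failed };
class BasicBehavior {
public:
    virtual ~BasicBehavior() = default;
    virtual void start() = 0;
    virtual BehaviorStatus step() = 0;
};

struct SequenceStep {
    std::shared_ptr<BasicBehavior> behavior;
    double maxTime;  // completion time assigned to this step (seconds)
};

// Runs one sequence of six basic behaviors; the supervisor would pick a new
// random sequence when update() returns false.
class SequenceExecutor {
public:
    explicit SequenceExecutor(std::array<SequenceStep, 6> steps)
        : steps_(std::move(steps)) { steps_[0].behavior->start(); }

    // Advance the state machine by dt seconds; true while still running.
    bool update(double dt) {
        elapsed_ += dt;
        const BehaviorStatus s = steps_[current_].behavior->step();
        const bool timeOut = elapsed_ >= steps_[current_].maxTime;
        if (s != BehaviorStatus::Active || timeOut) {
            if (++current_ >= steps_.size()) return false;  // sequence done
            elapsed_ = 0.0;
            steps_[current_].behavior->start();
        }
        return true;
    }

private:
    std::array<SequenceStep, 6> steps_;
    std::size_t current_ = 0;
    double elapsed_ = 0.0;
};
```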

At the start of the bullfight, the supervisor assigns random values to the bravery and madness parameters in order to obtain different "bulls", as in a real bullfight; each of these bulls behaves differently from the others. The initial state of the Robotaurus is "To stop". When the bullfight starts, its state changes to "To enter the bullring": it goes to the center of the bullring and performs some fast movements like a real bull. Then the supervisor starts the time count and the state change routine described above. Additionally, the supervisor increases the velocity parameter of the basic behaviors as time elapses.

The result of this control is a bull robot with very realistic and coordinated movements. It is able to perform a complete attack movement: it moves away from the bullfighter, then scans and observes its opponent, waits and scratches, and finally attacks the bullfighter's red cape. Like an actual bull, sometimes it only observes and tracks the bullfighter, turns around the arena, or simply spins.

7 Conclusions

The system has been fully installed and shown on two occasions, accumulating a total of 5 days of intensive use:

- "Madrid por la Ciencia 2002", educational exhibit at IFEMA.

- "Cybertech 2002", robot contest at Universidad Politécnica de Madrid organized by our lab, DISAM. http://www.disam.upm.es

The mechanical platform chosen has proved ideal for simulating a bull: the central-axis spring and the four-motor drive are a good base for building a robot bull.

The low-level controller has worked properly, and the discrete control has shown correct performance. This component has been very useful for the development of the upper control levels.

The image processing (both overall and onboard) has been robust under different environmental conditions. The algorithms, resolutions and parameters used have provided enough information for the control modules. Even though additional sensory information (encoders, range sensors, touch sensors, etc.) would not be worthless, it has been shown that vision alone is enough to achieve the desired results.

The basic behaviors and the behavior supervisor are the key components of the system. An actual bull behavior has been obtained, and the parameters defining the bull "personality" have been well chosen.

The system has drawn high interest not only from the audience but also from the national community, as the main national TV broadcasting companies and newspapers have reported news about our "Robotaurus".

It can be concluded that our system has been a complete success. The goal of interacting with people through a biomimetic behavior has been reached, and people have reacted to our robotic bull as they would to an actual one. The simulation of an actual bullfight has been completed, with all the elements that appear on such occasions: the ceremony of dedication by the bullfighters, the audience asking for the bull's ears for the bullfighter as a prize, the "ole" cheers, etc.

Figure 8. Bullfighting. Audience asking for bull’s ear with white handkerchiefs.

Acknowledgments

This project has been funded by the Sociedad Amigos de la ETS Ingenieros Industriales (Universidad Politécnica de Madrid), Indra, ETSII, and Comunidad de Madrid.

References

[1] M. Fujita, "Digital Creatures for Future Entertainment Robotics", in Proceedings of the 2000 IEEE International Conference on Robotics & Automation, San Francisco, April 2000.

[2] M. Fujita "AIBO: Toward the Era of Digital Creatures", in The International Journal of Robotics Research, Vol. 20, No. 10, Sage Publications, October 2001, pp. 781-794.

[3] S. Guo et al., "Fish-like Underwater Microrobot with Multi DOF", in 2001 International Symposium on Micromechatronics and Human Science, 2001.

[4] K.A. McIsaac, J.P. Ostrowski "A Geometric Approach to Gait Generation for Eel-like Locomotion", in Proceedings of the 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2000.

[5] J.E. Clark et al., "Biomimetic Design and Fabrication of a Hexapedal Running Robot", in Proceedings of the 2001 IEEE International Conference on Robotics & Automation, Seoul, Korea, May 2001.

[6] T. Nakata et al., "Producing Animal-like and Friendly Impressions on Artifacts and Analyzing their Effect on Human Behavioral Attitudes", IEEE, 1999.

[7] R.R. Murphy "Competing for a Robotics Education", in IEEE Robotics & Automation Magazine, Vol. 8, No. 2, June 2001, pp. 44-55.

[8] M. Asada et al., "Robotics in Edutainment", in Proceedings of the 2000 IEEE International Conference on Robotics & Automation, San Francisco, April 2000.

[9] J. Lee et al., "Development of a Remote-Controlled Mobile Entertainment Robot System", in SICE 2001, Nagoya, July 2001.