Effective Implementation of a Mapping Swarm of Robots

Christopher Churavy, Maria Baker, Samarth Mehta, Ishu Pradhan, Nina Scheidegger, Steven Shanfelt, Rick Rarick, and Dan Simon1

Cleveland State University
Department of Electrical and Computer Engineering
2121 Euclid Avenue
Cleveland, OH 44115

December 31, 2007

There are several driving factors involved in the development of a robotic swarm. On the hardware side, assembly should be adaptable, easily reproduced, and relatively inexpensive. Motors, mounts, wheels, and chassis material must be chosen, as well as appropriate sensors. A workable communication protocol must be determined. Software should be concise and common to each robot in the swarm, while meeting the requirements of the desired function. In the summer of 2007, Cleveland State University provided funding for various undergraduate research projects. One such project was to involve engineering students in the development of a robotic swarm. Under the supervision of one faculty member and one graduate student, six undergraduate students spent three months on the design and implementation of a mapping robotic swarm. This paper provides a description of the process and an explanation of individual hardware and software components.

Introduction

The main objective of this project was to produce a swarm of mapping robots that could be sent into an unknown building to produce a floor map. To accomplish this task, a square robot with differential steering was designed. The square design simplifies the manufacturing process, and the wheel configuration allows for easy control of each robot's movements. Each robot is outfitted with an array of three ultrasonic range finders, an angular rate sensing gyro, one proximity switch, a wireless camera, and two wheel encoders. Other sensors were considered, but were either deemed unnecessary or unreliable. A Microchip PIC18F4520 microcontroller, programmed with the CCS C compiler, provides the brains. This particular microcontroller provides low cost, good speed, and a sufficient set of peripherals for use in various robotics applications. Communication is accomplished via radio, with a transmitter-receiver mounted on each robot. Each robot also has an LCD that can be used for debugging or displaying messages during operation.

1 Corresponding author. Email address [email protected], phone number 216-687-5407



The motivation for undertaking this project was to build an infrastructure with which future work could be performed. Cleveland State University is notable for the hands-on education that it provides both undergraduate and graduate students. As we develop new theory in our research programs in areas like controls, estimation, image processing, and artificial intelligence, we want to have a hardware platform available for testing and demonstration. Robotic vehicles provide an ideal platform because of their multidisciplinary nature.

Assembly

The assembly of each robot is designed to allow for expansion and quick construction. Built on a 16 cm × 16 cm × 0.60 cm square piece of HDPE (high density polyethylene, otherwise known as plastic), the robots have three tiers of development space separated by 4 cm metal standoffs. Two Solarbotics GM-8 gear motors are mounted to the base with a hand-fashioned brass motor mount. The mounts are made from inexpensive 1-inch brass strips that are drilled out, bent, and screwed to the chassis to provide strong support for the motors. Two rechargeable AA battery packs, eight batteries per pack, provide power to the motors and to the onboard electronics. The microcontroller, voltage regulators, and other required electronics are mounted on a two-layer printed circuit board (PCB) designed with the ExpressPCB software package. The dimensions of the PCB are 10.1 cm × 10.3 cm, and the design is simple enough that a board can be fully populated in about an hour by anyone handy with a soldering iron. The encoders mount easily to the motor casing and were fitted with an adaptor to connect directly to the board. The ultrasonic sensors are mounted on the middle tier, with the majority of space on that level left for future development. The third tier houses the radio module, PCB, camera, and LCD. Figure 1 shows two completely assembled robots ready to begin mapping.

The robots weigh approximately 3.27 lb and cost about $584 US each, with total assembly and testing time of about two hours per robot.

Figure 1 - Two assembled robots, ready to swarm.

Communication

Communication between the robots and the base station is accomplished using the MaxStream 9XTend RF transmitter/receiver. This particular device boasts an indoor range of up to 900 meters, a selectable output power of between 1 mW and 1 W, and selectable RF and serial interface baud rates. At about $180 US, this model falls within an acceptable price range for use on multiple robots. The base station and each of the robots in the swarm are outfitted with their own radios. The radio at the base station is connected with a serial cable to the PC's USART (RS232) and is used to transmit and receive data to and from the robots. Each robot in the swarm is assigned a robot ID number, which is stored in memory. When the base station wishes to address a robot, that robot's ID number is loaded into the base microcontroller program and a wireless connection is established between the base station and that robot. When the robot is instructed to report, it sends a packet of encoded sensor data back to the base station over the previously established connection. This connection remains open until the base station wishes to address another robot, whereupon it loads another robot ID number into its program and establishes a new connection. As the swarm moves throughout the building, each robot is addressed in succession by the base station, continually providing information to the mapping program. For all of this to work, the robots must be able to carry out the instructions they are given.
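As a concrete illustration of this round-robin addressing scheme, a C sketch of the base station's polling loop follows. The packet layout, command codes, and checksum are hypothetical — the project's actual over-the-air format is not documented here:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical packet layout for addressing one robot.  The real
 * format used over the 9XTend link is not documented in this paper;
 * this only illustrates the ID-based addressing described above. */
typedef struct {
    uint8_t robot_id;   /* which robot the base station is addressing */
    uint8_t command;    /* e.g. CMD_MOVE_FORWARD, CMD_REPORT          */
    uint8_t checksum;   /* simple additive checksum over id + command */
} packet_t;

enum { CMD_MOVE_FORWARD = 1, CMD_REPORT = 2 };

static packet_t make_packet(uint8_t robot_id, uint8_t command)
{
    packet_t p = { robot_id, command, (uint8_t)(robot_id + command) };
    return p;
}

static int packet_valid(const packet_t *p)
{
    return p->checksum == (uint8_t)(p->robot_id + p->command);
}

/* Round-robin polling: address each robot in succession, as the text
 * describes.  transmit() would write the packet to the radio's serial
 * port; here it is left as a caller-supplied stub. */
void poll_swarm(const uint8_t *ids, int n,
                void (*transmit)(const packet_t *))
{
    for (int i = 0; i < n; i++) {
        packet_t cmd = make_packet(ids[i], CMD_REPORT);
        transmit(&cmd);
    }
}
```

A checksum of this kind lets a robot discard packets corrupted in transit rather than act on a garbled command.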

Sensors

Each robot in the swarm has the ability to perform any of a number of different movements. Every time that a robot is contacted by the base program, the robot is instructed to perform one of these movements. Generally, when a robot is following along a wall, it will receive the "move forward" instruction. When this instruction is received, the robot moves forward five centimeters, stops, and then takes readings from the rangefinders, sending that data back to the base station. The next instruction given to the robot depends on the data the robot has just sent. If the data indicates an obstacle or a corner, the robot will be instructed to act accordingly. In order to do this effectively, the robots must consistently be able to gather accurate data from their sensors and communicate with the base station. To build a map, the robots must move the distance that they are instructed to move and they must make precise turns.

• Wheel encoders

Traveling a specified distance is accomplished by reading the wheel encoders, the Nubotics Wheel Watcher 2, and averaging the two values. As can be seen in Figure 2, the encoders are unobtrusive and easily attached to the motors. When moving forward, both motors are engaged using the PIC's PWM (pulse width modulated) output pins, with the signal sent through a 75441 motor driver chip to step the current up to an appropriate level. The PIC18F4520 has four onboard timers to choose from when programming for this task. The PIC's PWM output uses timer2, and the encoders use timer0 and timer3, which are configured as counters to keep track of wheel ticks. Each Wheel Watcher 2 unit has three signal outputs: two standard quadrature outputs and one decoded output that produces a pulse on any change of either quadrature pin. This decoded output is ideal for controlling the forward and backward movements of the robots.
The distance represented by one encoder count is determined by dividing the wheel circumference (6.83π cm, for a wheel diameter of 6.83 cm) by the number of encoder counts per revolution (128). Multiplying the result by one-half gives the encoder accuracy of about 0.8 mm. In practice, some wheel slippage and skidding occurs, which is a typical problem with dead reckoning navigation. Mapping experiments typically showed navigation errors of about 8 cm every 8 meters, for a relative accuracy of 1%. Straight movement is achieved by continually monitoring and correcting the error between the two encoder values. The encoders can be used to feed the robot information during turns as well. With the simple knowledge of each robot's turning radius and wheel size, the angle of rotation at any given instant can be determined. The drawback to this approach is that undesired wheel slippage and other forms of interference can prevent the robots from making fully accurate turns. For this reason, another method was employed in the execution of turns, as discussed in the following two sections.
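As a sanity check on these numbers, the per-count distance and the averaged odometry can be computed directly. This is a sketch of the arithmetic only; the function names are ours, not the project's:

```c
#include <assert.h>
#include <math.h>

#define PI              3.14159265358979323846
#define WHEEL_DIAM_CM   6.83     /* wheel diameter, so circumference = 6.83*pi cm */
#define COUNTS_PER_REV  128.0    /* decoded encoder counts per wheel revolution   */

/* Distance in cm represented by one decoded encoder count. */
double cm_per_count(void)
{
    return PI * WHEEL_DIAM_CM / COUNTS_PER_REV;   /* about 0.168 cm */
}

/* Encoder resolution quoted in the text: half of one count spacing,
 * about 0.8 mm. */
double accuracy_mm(void)
{
    return 0.5 * cm_per_count() * 10.0;
}

/* Distance travelled, averaging the left and right encoder counts as
 * the robots do when driving straight. */
double distance_cm(long left_counts, long right_counts)
{
    return 0.5 * (double)(left_counts + right_counts) * cm_per_count();
}
```

One full revolution of both wheels (128 counts each) should come out to the circumference, about 21.5 cm.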

Figure 2 - Encoder mounted on motor

• Compass

Initially, turning of the robot was to be controlled by a digital compass. The compass being considered was the CMPS03 built by Devantech, as shown in Figure 3. This particular model has two methods of operation, both of which were easily realized with the PIC18F4520. The compass has a PWM pin that outputs a square wave with a maximum cycle time of 167 ms. The positive pulse width is proportional to the direction the compass is facing with respect to magnetic north. Each positive pulse is separated by a 65 ms period in which the output is set low. The other option provided by the compass is an inter-integrated circuit (I2C) interface. This method is lightning fast compared to PWM, and can be configured for either 8-bit or 16-bit resolution. I2C is a protocol that was developed by Philips Semiconductor for communication between two or more integrated circuits. On the workbench this approach seemed to work fine, providing accurate readings with relative consistency. However, once tested in an open hallway outside the lab, the digital compass quickly revealed itself as a potential snag in the robot design. The compass is designed to function in a mostly constant magnetic field such as that of the earth, and the interior of most buildings does not meet this requirement, so the performance of the compass proved to be dicey, to say the least. If the compass were to be used in a wide open space free of large metal objects, such as outdoors or in a large warehouse, it could be expected to work fine. But when designing a swarm of precisely controlled robots for indoor use, the compass was not an ideal choice. Another option had to be considered, as discussed in the following section.
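For reference, decoding the compass's PWM output amounts to timing the positive pulse and scaling. The scaling used below (1 ms at 0° plus 0.1 ms per degree) is our reading of the CMPS03 data sheet and should be treated as an assumption to verify:

```c
#include <assert.h>
#include <math.h>

/* CMPS03 PWM decode (a sketch).  Assumed data-sheet convention: the
 * positive pulse is 1 ms at 0 degrees and grows by 0.1 ms per degree,
 * reaching about 37 ms just below 360 degrees. */
double pulse_to_heading_deg(double pulse_ms)
{
    return (pulse_ms - 1.0) * 10.0;
}
```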

Figure 3 - Devantech electronic compass

• Angular rate gyro

To replace the compass, an angular rate sensing gyro was deemed to be an appropriate choice. The Analog Devices ADXRS401 provides an output voltage that is proportional to the rate of change of the sensor's orientation about an axis normal to the sensor's top. This device, shown in Figure 4, seemed to be an inexpensive and effective solution to the orientation problem. The device is available in a less-than-ideal Ball Grid Array (BGA) package for $22. Luckily, Analog Devices sells a development board version of the device that is available on a simple 20-pin DIP chip for about $50. The evaluation board includes all the assembly and filtering capacitors that are recommended on the gyro's data sheet. The gyro noise is advertised as 0.2°/second (RMS). The gyros were used as a feedback signal to determine when the robot had completed a turn of a desired angle. In practice all robot turn commands were 90°. The robots were hard-coded to complete the turn in about 5 seconds. Based on the advertised gyro accuracy, this would give a turn accuracy of 1°. In practice we found that the gyro accuracy was closer to 5°. The difference between theory and practice could be due to such things as wheel slippage, and electrical noise from suboptimal wiring layouts and other sensors. However, this 5° error did not limit the mapping performance. This is because the encoders were used by the robot to follow the wall (as described earlier), and the mapping algorithm assumed that the walls were at 90° angles. If this assumption were not valid for a given environment, then the gyro accuracy would need to be improved to get reasonable maps.

The ADXRS401 null, or the output voltage when the gyro is not in motion, is supposed to be 2.5 V. Unfortunately, though, the actual null typically falls somewhere between 2.3 V and 2.8 V, with plenty of noise. The data sheet presents a method for correcting the null with a resistor, but this resistor value would have to be calculated for each robot and would have to be connected to either Vcc or ground, depending on the initial null. Measuring the null of every single gyro and selecting a different configuration for each robot in the swarm would be tedious and would make large scale production impractical. Therefore, deviations from the nominal null value are dealt with in the software by simply averaging a large number of samples before each time the gyro is used. Averaging the samples takes care of any noise and returns a fairly constant value for the null.

Another problem with the gyro is the output scale factor. The data sheet claims that the typical output is 15 mV/°/second measured in either direction from the null voltage. In other words, if the null value is 2.5 V and the gyro is rotated at 5°/second clockwise, stopped, and then rotated at 5°/second counterclockwise, the output of the gyro should be 2.575 V, then 2.5 V, and finally 2.425 V. This scale factor proved to be different for clockwise and counterclockwise motion, and was in need of some type of correction. The solution to this problem was to calibrate each gyro with a simple program that calculates the actual scale factor and stores it in memory. Once calibrated, the gyros function properly. Getting an orientation from the gyro output is as simple as measuring the output voltage through the analog-to-digital converter (ADC) on the PIC, calculating the rate of change, and integrating to acquire a heading value. This is achieved with one of the PIC's timer interrupts. When the gyro is to be used, the PIC's timer1 is set with a prescaler of 4, allowing it to overflow with a frequency of 38 Hz with a 40 MHz oscillator. On each overflow the program reads the ADC, converts the bit value into a voltage, converts the voltage into an angular rate using the scale factor, divides the rate by the overflow frequency, and then adds that value to a running tally to determine the direction of the robot at any given instant. The tendency for the gyro output to drift is eliminated by disregarding any rate changes under a certain threshold. However, disregarding small voltage fluctuations prevents the code from detecting slow robot turns.
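The null averaging and the per-overflow integration step can be sketched as follows. The 38 Hz sample rate and 15 mV/°/s nominal scale factor come from the text; the drift threshold value is an illustrative assumption:

```c
#include <assert.h>
#include <math.h>

#define SAMPLE_HZ        38.0    /* timer1 overflow rate                    */
#define SCALE_V_PER_DPS  0.015   /* nominal 15 mV per deg/s; each gyro is   */
                                 /* actually calibrated, as the text notes  */
#define DRIFT_THRESH_DPS 1.0     /* illustrative drift gate, not the        */
                                 /* project's documented value              */

/* Null estimate: average many samples before each use, as the text
 * describes, instead of trimming each gyro with a resistor. */
double estimate_null(const double *samples, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++) sum += samples[i];
    return sum / n;
}

/* One integration step, called on each timer overflow: convert the
 * ADC voltage to an angular rate, gate out drift, and accumulate. */
double integrate_step(double heading_deg, double adc_volts, double null_volts)
{
    double rate_dps = (adc_volts - null_volts) / SCALE_V_PER_DPS;
    if (fabs(rate_dps) < DRIFT_THRESH_DPS)
        return heading_deg;                 /* disregard drift */
    return heading_deg + rate_dps / SAMPLE_HZ;
}
```

The drift gate is also where slow turns get lost, as noted above: any rotation under the threshold is indistinguishable from drift.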

Figure 4 - Angular rate gyro

• Ultrasonic sensors

The location of walls and other obstacles is obtained by three ultrasonic sensors mounted on each robot. Parallax's Ping sensor was used for this project. When the microcontroller is instructed to acquire a distance measurement via an ultrasonic sensor, the PIC's PWM module outputs a 10 microsecond positive pulse to the signal pin of the sensor. Upon receiving this pulse, the sensor produces a 40 kHz ultrasonic burst and sets its signal pin to logic high. The signal pin remains high until the ultrasonic burst reflects off of an object and returns to the sensor. The width of this pulse therefore corresponds to the round-trip distance between the sensor and the object. With the burst traveling through the air at 344.4 m/s, the one-way distance is calculated by multiplying the pulse width by the velocity and dividing by two. The width of the pulse is determined by again employing one of the PIC's onboard timers, in this case timer1. Sensors are mounted on the front and both sides of each robot. Figure 5 shows an ultrasonic sensor; its ultrasonic burst is transmitted by one side and received by the other. Upon completion by the robot of each move instruction, the three sensors are fired in succession and the retrieved data is sent to the base station. These data points are then plotted with respect to the position of each robot, thus producing a floor map.
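In code, the echo-to-distance conversion is one line; the factor of one-half accounts for the round trip:

```c
#include <assert.h>
#include <math.h>

/* Speed of sound used in the text: 344.4 m/s = 0.03444 cm per microsecond. */
#define SOUND_CM_PER_US 0.03444

/* Convert a Ping echo pulse width (microseconds) to a one-way distance
 * in cm.  The pulse width spans the burst's full round trip, hence the
 * halving. */
double echo_to_distance_cm(double pulse_width_us)
{
    return 0.5 * SOUND_CM_PER_US * pulse_width_us;
}
```

A wall one meter away, for example, corresponds to an echo pulse of roughly 5.8 ms.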

Figure 5 - Ultrasonic sensor

• Other sensors

The robots also have a front-mounted proximity switch that will open if the floor in front of the robot drops off. For instance, if the robot were to approach a descending flight of stairs, the proximity switch would open and generate an interrupt in the microcontroller code. At this point the robot would automatically back up and then report the location of the drop-off to the base station. Each robot is additionally equipped with a 2.4 GHz wireless camera. Initially, the cameras were to be used in the mapping algorithm. However, as the project progressed, it became apparent that the cameras were not immediately necessary for implementation of the swarm. The cameras were included in the final design for the purpose of further development of the robots' image processing capabilities.

Mapping algorithm and software

Robotic mapping has been extensively studied since the 1980s. Mapping can be classified as either metric or topological. A metric approach is one that determines the geometric properties of the environment, while a topological approach is one that determines the relationships of locations of interest in the environment. Mapping can also be classified as either world-centric or robot-centric. World-centric mapping represents the map relative to some fixed coordinate system, while robot-centric mapping represents the map relative to the robot. Many mapping approaches also use probabilistic techniques (such as Kalman filtering) in order to account for noise in sensor readings and navigation information. Robotic mapping continues to be an active research area. The robots discussed here use a metric, world-centric approach, in order to use the simplest possible mapping algorithm.

The actions of the swarm are controlled from a base station PC over a wireless radio link with software written in the free Express Edition of Microsoft Visual Basic. Each robot moves forward from a known starting point until it detects a wall with one of its three ultrasonic sensors. Once a robot arrives at a wall, it follows the wall while sending sensor data to the base station. Wall following is controlled by the base station program through various commands that call functions within the robots' microcontroller code. The microcontroller contains all communication, sensor, and motor control code, as well as a means of decoding the commands received from the base station. The robots transmit their relative position and orientation data (obtained from encoders and gyroscopes) to the base station PC, along with distances to nearby walls and obstacles measured by the ultrasonic sensors. The PC takes its knowledge of the robots' starting positions and uses simple trigonometry, along with the robots' data transmissions, to calculate the robots' absolute positions and orientations. It then calculates the absolute positions of walls and obstacles from those robot poses and the ultrasonic sensor readings. The PC includes various ad-hoc methods for dealing with erratic or erroneous sensor data, and it keeps track of the absolute positions and orientations of all of the robots.
The PC also keeps track of the relative positions and orientations of each robot with respect to each other in order to merge the information from the robots into a coherent map. The PC communicates with each robot one at a time, recalculating robot and obstacle positions after every communication cycle. The end result is a map created by the robotic swarm. The robots were programmed to transmit information every 5 cm of movement, so wall and obstacle locations are mapped with a precision of 5 cm. When drawn on a map of the scale used in this project (tens of meters), points that are 5 cm apart appear as a connected line. Figure 6 shows an example of a partial map created by the base station PC after receiving robot transmissions. Future work could involve the use of linear regression with less frequent robot transmissions in order to create the map lines.

Figure 6 – Map created by a mobile robot. This map is actually a sequence of ultrasonic sensor readings, one pixel per reading, each 5 cm apart. (The figure labels a 46-meter corridor with two doorways, two trash cans, and the robot's starting and ending points.)

Conclusion

All of these components come together to form the basis for a robotic swarm. This particular project was limited by the usual constraints, mainly time and money. About three months were allotted for successful design and implementation of a mapping swarm. However, the door has now been opened for further development of the swarm. In the future, control of the robots' actions will be ported to the robots themselves, and inter-robot communication will be necessary. Task optimization and robot learning can be implemented, as can an increased role for the onboard camera (which at this point is all for show) through various image processing techniques. New sensors may be added, and others may be removed or have their roles altered. More complex tasks, aside from mapping, can be developed and perfected. Overall, the success of a swarm project depends on the foundation set by its early development. An effective, simple design allows not only for fast implementation, but also for further exploration into the world of possibilities that exist with robotic swarms.

Read more about it

• The web site for this project is http://embeddedlab.csuohio.edu/RoboticSwarms/.
• The WW-02 encoders were purchased from Nubotics (nubotics.com).
• The ADXRS401 gyroscopes were purchased from Analog Devices (analog.com).
• The CMPS03 compasses were purchased from Acroname (acroname.com).
• The XTend radios and A09-HASM-675 antennas were purchased from MaxStream (maxstream.net).
• The PIC18F4520 microcontrollers were purchased from Microchip (microchip.com).
• The proximity switches were purchased from Cherry Electrical (cherrycorp.com).
• The serial LCDs were purchased from Parallax (parallax.com).
• The GM8 motors were purchased from Solarbotics (solarbotics.com).
• The plastic chassis platform material was purchased from Solarbotics (solarbotics.com) and McMaster (mcmaster.com).
• The wireless cameras were purchased from Superdroid (superdroid.com).
• RJ45 and serial PCB adapters were purchased from Winford (winford.com).
• The ultrasonic sensors were purchased from Lynxmotion (lynxmotion.com).
• The boards were manufactured by ExpressPCB (expresspcb.com).
• An overview of robotic mapping algorithms can be found in: S. Thrun, "Robotic mapping: A survey," in: Exploring Artificial Intelligence in the New Millennium (G. Lakemeyer and B. Nebel, eds.), Morgan Kaufmann, 2002. Also see: J. Castellanos, J. Neira, and J. Tardos, "Map building and SLAM algorithms," in: Autonomous Mobile Robots (S. Ge and F. Lewis, eds.), CRC Press, 2006.

Part list

Part                              Manufacturer            Cost $US   Quantity per robot
Angular rate gyro                 Analog Devices          $50        1
Wireless camera                   Superdroid              $45        1
Radio                             MaxStream               $179       1
Antenna                           MaxStream               $20        1
Wheel encoders                    Nubotics                $22        2
PIC18F4520                        Microchip               $10        1
Motors                            Solarbotics             $7         2
Chassis material                  Various                 $10        1
Ultrasonic sensors                Parallax                $30        3
Proximity switch                  Cherry Electrical       $2         1
Battery packs (AA)                Eagle Plastic Devices   $2         2
LCD display                       Parallax                $30        1
Batteries (AA)                    Various                 $2         8
PCB manufacture                   ExpressPCB              $50        1
Misc. electronics and connectors  Various                 $20        1

The total parts cost of each robot was approximately $584.