
Procedia Computer Science 00 (2012) 1–9


2013 Iberoamerican Conference on Electronics Engineering and Computer Science

Experimental validation of forward kinematics of a leg of the walking-legged robot Hex-piderix, using camera calibration technique

X.Y. Sandoval-Castro a,*, P.M. Diaz-Juarez b, H.A. Sanchez-Flores b, E. Castillo-Castaneda a

a Centro de Investigación en Ciencia Aplicada y Tecnología Avanzada (CICATA)—IPN, México
b Instituto Tecnológico de Tuxtla Gutiérrez, México

Abstract

This paper presents a technique to evaluate the performance of the forward kinematics of a leg of the legged-walking robot "Hex-piderix". The technique is based on well-known camera calibration methods. In addition to estimating the camera intrinsic parameters, it is also able to estimate the extrinsic parameters by means of the calibration pattern. The extrinsic parameters describe the pose (orientation and position) of the calibration pattern through its rotation matrix and its translation vector. Since the calibration pattern is attached to the end effector, the leg displacements can be estimated using the extrinsic parameters. The measured displacements are compared with the theoretical results of the forward kinematics, obtaining the error between both methods.

© 2013 Published by Elsevier Ltd.

Keywords: Forward kinematics, camera calibration, Hex-piderix, norm, error, displacement

1. Introduction

Legged robots have better performance on rough terrain than wheeled robots, thanks to the possibility of coordinated movements that provide greater flexibility and adaptability. Legged robots can be classified as bipeds, quadrupeds, hexapods, and octopods. Most climbing robots are hexapods or quadrupeds; the former provide better static stability than the latter. Hexapods can perform static gaits by supporting the robot's body on five legs at any time, while quadrupeds can only walk steadily on a minimum of three legs. This feature makes hexapods much more stable than quadrupeds, since they can use a bigger support polygon. Notice that stability is a fundamental issue for mobile robots. Speed is an important factor for robot locomotion; a quadruped robot is three times slower than a hexapod. In terms of reliability, a hexapod can continue walking even when one of its legs fails [1].

In [2], the inverse position problem for each end-effector (EE) of the legs of the quadruped robot CLIBO is solved by homogeneous transformation matrices. The same approach is presented in [3] for a hexapod robot, assuming forward motion at a constant velocity in a straight path on a flat surface with an alternating tripod gait. In this case, the thorax is held at a constant height and kept parallel to the ground plane during motion, and the thorax gravity center is assumed to be located at its geometric center.

* Corresponding author. Email address: [email protected] (X.Y. Sandoval-Castro)


In [4], the kinematics problem of a hexapod robot is solved as in [3], but without any restriction.

Robots should be designed and built with high accuracy and repeatability for both positioning and trajectory-tracking tasks. Therefore, performance evaluation has become a necessity for most robots, and a variety of methods and devices have been developed for evaluating the performance of robots and machine tools. Most of these devices are expensive, and many of them can only measure linear displacements; that is, they cannot measure the orientation of the robot end-effector. On the other hand, computer vision techniques have developed considerably in recent years, mainly due to low-cost cameras and acquisition systems, and also to the increasing availability of open-source image processing algorithms. A vision system consists mainly of three parts: the camera, the frame grabber, and the lighting devices. In robotics, applications of vision systems have focused on fields such as recognition of the robot's environment, pattern recognition for object identification, classification and counting of objects, quality assessment, and 3D vision/perception. An important part of these efforts has focused on inspection and quality control, for example to check the fill level of product in bottles, to verify quality in IC mass fabrication, and as an alternative tool for measuring the dimensions of objects. While it is now possible to detect and classify objects through cameras and image processing, measuring their dimensions is not a fully validated technique. The camera optics introduces distortion in the images, and the location of the CCD sensor may not have fully known geometric characteristics or symmetry [5]. Therefore, some works have been devoted to techniques for camera calibration [6], [7], and some others to estimating object dimensions.

2. Kinematics analysis

2.1. Description of Hex-piderix

The basic configuration proposed for the robot Hex-piderix is shown in Fig. 1. The frame 0 represents the origin, while Q indicates the local frame attached to the robot thorax. The robot consists of six identical legs, L1, ..., L6, distributed symmetrically around the thorax of the robot. Each leg has 3 degrees of freedom (DOF) and consists of three links, coxa, femur, and tibia, connected by rotational joints. At the tip of the tibia there is a suction cup to adhere to vertical surfaces.

Figure 1: Robot configuration.


2.2. Position analysis

The analysis is performed with the D-H algorithm for one leg. In Fig. 2 the joint variables of the leg mechanism are defined (q1, q3, and q5 in the D-H numbering of the figure), as well as the link lengths of the coxa, femur, and tibia, a1, a3, and a5, and the offsets d2 and d4.

[Figure 2 shows the i-th leg with coordinate frames X0-X5 and Z0-Z5, the joint angles q1, q3, q5, and the link parameters a1, a3, a5, d2, d4, together with the D-H parameter table below.]

D-H Parameters

  θi    αi    di    ai
  q1     0     0    a1
   0    90   -d2     0
  q3   -90     0    a3
   0    90   -d4     0
  q5   -90     0    a5

Figure 2: D-H parameters of the i-th leg.

The limbs described above are similar to 3-DOF serial manipulators. From this, the homogeneous transformation matrices between frames are obtained, taking into account the D-H parameters shown in Fig. 2: link length (ai), link twist (αi), joint distance (di), and joint angle (θi), required to completely describe the three joints of the leg.

The homogeneous transformation matrix that describes the translation and rotation between frames i and i−1 [8] is shown in Eq. 1.

\[
T^{i-1}_{i} =
\begin{bmatrix}
\cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\
\sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\
0 & \sin\alpha_i & \cos\alpha_i & d_i \\
0 & 0 & 0 & 1
\end{bmatrix}
\tag{1}
\]
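For reference, a minimal C sketch of Eq. 1 is given below. The function name, the 4x4 array layout, and the assumption that angles are passed in radians are illustrative choices and do not come from the paper.

```c
#include <math.h>

/* Fill T with the homogeneous transform of Eq. 1 for one D-H row.
 * theta and alpha are given in radians; d and a in millimeters. */
void dh_transform(double theta, double alpha, double d, double a,
                  double T[4][4])
{
    double ct = cos(theta), st = sin(theta);
    double ca = cos(alpha), sa = sin(alpha);

    double M[4][4] = {
        { ct, -st * ca,  st * sa, a * ct },
        { st,  ct * ca, -ct * sa, a * st },
        { 0.0,      sa,       ca,      d },
        { 0.0,     0.0,      0.0,    1.0 }
    };
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            T[i][j] = M[i][j];
}
```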

Hence, the pose of frame 5 with respect to frame 0 can be expressed as in Eq. 2.

\[
T^{0}_{5} = T^{0}_{1}\, T^{1}_{2}\, T^{2}_{3}\, T^{3}_{4}\, T^{4}_{5} =
\begin{bmatrix}
n_x & o_x & a_x & p_x \\
n_y & o_y & a_y & p_y \\
n_z & o_z & a_z & p_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\tag{2}
\]

where

nx = C4C5(C1C2C3 − C3S1S2) − C5S4(C1S2 + C2S1) − S5(C1C2S3 − S1S2S3)
ny = C4C5(C1C3S2 + C2C3S1) − C5S4(S1S2 − C1C2) − S5(C1S2S3 + C2S1S3)
nz = C3S5 + C4C5S3
ox = S4S5(C1S2 + C2S1) − C5(C1C2S3 − S1S2S3) − C4S5(C1C2C3 − C3S1S2)
oy = S4S5(S1S2 − C1C2) − C5(C1S2S3 + C2S1S3) − C4S5(C1C3S2 + C2C3S1)
oz = C3C5 − C4S3S5
ax = S4(C1C2C3 − C3S1S2) + C4(C1S2 + C2S1)
ay = S4(C1C3S2 + C2C3S1) + C4(S1S2 − C1C2)
az = S3S4
px = C1a1 + (d4 − S5a5)(C1C2S3 − S1S2S3) + C4C5a5(C1C2C3 − C3S1S2) + C1C2C3a3 − C5S4a5(C1S2 + C2S1) − C3S1S2a3
py = S1a1 + (d4 − S5a5)(C1S2S3 + C2S1S3) + C4C5a5(C1C3S2 + C2C3S1) + C1C3S2a3 + C2C3S1a3 − C5S4a5(S1S2 − C1C2)
pz = S3a3 − d2 − C3(d4 − S5a5) + C4C5S3a5

where Ci = cos(qi) and Si = sin(qi).
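Rows 2 and 4 of the D-H table in Fig. 2 fix the corresponding joint angles to zero (so S2 = S4 = 0 and C2 = C4 = 1). Substituting these values, which is a simplification not written out above but follows by direct substitution, the position components reduce to

\[
p_x = C_1\bigl[a_1 + C_3(a_3 + C_5 a_5) + S_3(d_4 - S_5 a_5)\bigr]
\]
\[
p_y = S_1\bigl[a_1 + C_3(a_3 + C_5 a_5) + S_3(d_4 - S_5 a_5)\bigr]
\]
\[
p_z = S_3(a_3 + C_5 a_5) - d_2 - C_3(d_4 - S_5 a_5)
\]

This compact form makes clear that q1 only sets the direction of the leg in the X0-Y0 plane, while q3 and q5 determine its radial reach and height.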

3. Experimental validation

3.1. Camera calibration technique

In order to validate the forward kinematics of the end-effector, camera calibration was performed to determine its orientation and position. Camera calibration, in the context of three-dimensional machine vision, is the process of determining the internal camera geometric and optical characteristics (intrinsic parameters) and the 3-D position and orientation of the camera frame relative to a certain world coordinate system (extrinsic parameters) [9]. The intrinsic parameters of a camera are the focal length, principal point, skew coefficient, and distortions. The extrinsic parameters are the rotation matrix and the translation vector; with all of this information it is possible to obtain the position and orientation of an object. The technique only requires the camera to observe a planar pattern shown at a few different orientations.

Figure 3 shows the main elements considered in the geometrical model of the camera. The point M has coordinates (x, y, z) in the real world; its corresponding point m has coordinates (u, v) in the image plane.

Figure 3: Elements of the camera geometrical model.

In this way, the coordinates of M undergo two transformations:

• A projection that transforms a 3D point into a 2D point on the image.

• A transformation from the camera frame (in metric units) to the image frame (in pixels).

These two transformations are represented mathematically in [6] and [7]; a compact statement of the resulting model is given below.
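Following the standard pinhole formulation used in [6] (the symbols s, A, R, and t below follow that reference and are not introduced elsewhere in this paper):

\[
s\,\tilde{m} = A\,[\,R \;\; t\,]\,\tilde{M},
\qquad
A =
\begin{bmatrix}
\alpha & \gamma & u_0 \\
0 & \beta & v_0 \\
0 & 0 & 1
\end{bmatrix}
\]

where \(\tilde{M} = [x, y, z, 1]^T\) is the world point, \(\tilde{m} = [u, v, 1]^T\) its projection in pixel coordinates, s an arbitrary scale factor, R and t the extrinsic rotation and translation from the world frame to the camera frame, and A the intrinsic matrix containing the focal lengths α and β (in pixels), the skew γ, and the principal point (u0, v0).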


3.2. Numerical example of forward kinematics

In this section a numerical example is provided to show how to deal with the kinematics of this model. Hereafter the units used are millimeters, unless otherwise stated. In this case, the link lengths are a1 = 57.61, a3 = 64.54, and a5 = 94.7, and the joint distances are d2 = 38.5 and d4 = 22.5.

The forward kinematics equations were programmed in embedded C using the AVR Studio compiler for ATMEL microcontrollers; a simplified desktop sketch of such a routine is given below.
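The firmware itself is not listed in the paper, so the following C sketch is only an illustration. It evaluates the simplified position expressions given after Eq. 2 and assumes that the three motor angles of Table 1 drive the D-H angles q1, q3, and q5 of Fig. 2; the function names and degree-to-radian handling are also assumptions.

```c
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846
#define DEG2RAD(x) ((x) * PI / 180.0)

/* Link dimensions of Section 3.2, in millimeters. */
static const double a1 = 57.61, a3 = 64.54, a5 = 94.7;
static const double d2 = 38.5,  d4 = 22.5;

/* Forward kinematics of one leg: joint angles in degrees, EE position in mm.
 * Assumes the motor angles map onto the D-H angles q1, q3, q5 of Fig. 2. */
static void leg_fk(double q1, double q3, double q5, double p[3])
{
    double C1 = cos(DEG2RAD(q1)), S1 = sin(DEG2RAD(q1));
    double C3 = cos(DEG2RAD(q3)), S3 = sin(DEG2RAD(q3));
    double C5 = cos(DEG2RAD(q5)), S5 = sin(DEG2RAD(q5));

    /* Radial reach of the leg in the X0-Y0 plane. */
    double r = a1 + C3 * (a3 + C5 * a5) + S3 * (d4 - S5 * a5);

    p[0] = C1 * r;                                          /* px */
    p[1] = S1 * r;                                          /* py */
    p[2] = S3 * (a3 + C5 * a5) - d2 - C3 * (d4 - S5 * a5);  /* pz */
}

int main(void)
{
    double p[3];
    /* First configuration of trajectory 1 in Table 1. */
    leg_fk(-25.0, -25.0, -65.0, p);
    printf("EE position: (%.1f, %.1f, %.1f) mm\n", p[0], p[1], p[2]);
    return 0;
}
```

Compiled with a standard C compiler (e.g. gcc fk.c -lm), this prints a position within roughly a millimeter of the first EE position of trajectory 1 in Table 1; the small residual difference may come from rounding of the link dimensions.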

The kinematics was validated for one leg, which is driven by three Dynamixel AX-12+ motors energized with 12 V. Three trajectories were programmed and are shown in Table 1. Each trajectory is formed by ten positions (EE positions), and the joint angles are found for each position. Results are also shown in Table 1.

Table 1: Trajectories of the leg.

EE pos.   Trajectory 1: (q1, q2, q3) -> (x, y, z)        Trajectory 2: (q1, q2, q3) -> (x, y, z)       Trajectory 3: (q1, q2, q3) -> (x, y, z)
 0        (-25, -25, -65) -> (97.4, -45.4, -181.2)       (-25, -25, -65) -> (97.4, -45.4, -181.3)      (-25, -65, -65) -> (97.4, -45.4, -181.3)
 1        (-20, -25, -65) -> (101.0, -36.7, -181.3)      (-20, -23, -65) -> (105.6, -38.4, -179.4)     (-20, -65, -67) -> (97.9, -35.6, -181.3)
 2        (-15, -25, -65) -> (103.8, -27.8, -181.2)      (-15, -21, -65) -> (113.3, -30.3, -177.4)     (-15, -65, -69) -> (97.4, -26.1, -181.0)
 3        (-15, -25, -65) -> (105.8, -18.6, -181.2)      (-10, -19, -65) -> (120.2, -21.2, -175.2)     (-10, -65, -71) -> (96.1, -16.9, -180.7)
 4        (-5, -25, -65)  -> (107.0, -93.6, -181.2)      (-5, -17, -65)  -> (126.4, -11.0, -172.9)     (-5, -65, -73)  -> (93.9, -8.2, -180.3)
 5        (0, -25, -65)   -> (107.5, 0, -181.2)          (0, -15, -65)   -> (131.5, 0, -170.4)         (0, -65, -75)   -> (91.0, 0, -179.8)
 6        (5, -25, -65)   -> (107.0, 93.6, -181.2)       (5, -13, -65)   -> (135.5, 11.8, -167.7)      (5, -65, -77)   -> (87.4, 7.6, -179.2)
 7        (10, -25, -65)  -> (105.8, 18.6, -181.2)       (10, -11, -65)  -> (138.4, 24.4, -164.9)      (10, -65, -79)  -> (83.3, 14.68, -178.4)
 8        (15, -25, -65)  -> (103.8, 27.8, -181.2)       (15, -9, -65)   -> (139.9, 37.5, -162.0)      (15, -65, -81)  -> (78.6, 21.0, -177.6)
 9        (20, -25, -65)  -> (101.017, 36.767, -181.290) (20, -7, -65)   -> (140.1, 51.0, -158.8)      (20, -65, -83)  -> (73.5, 26.7, -176.6)
10        (25, -25, -65)  -> (97.4, 45.4, -181.2)        (25, -5, -65)   -> (138.9, 64.7, -155.6)      (25, -65, -85)  -> (68.0, 31.7, -175.5)

Fig. 4 shows the plot of the positions listed in Table 1 that form the three trajectories.

3.3. Experimental technique validation

A MATLAB calibration toolbox (open source code) was used to estimate the intrinsic and extrinsic parameters. The software can be downloaded from: http://www.vision.caltech.edu/bouguetj/calib_doc/

In order to validate the forward kinematics of the leg, camera calibration was performed to determine the position of the end-effector.

The technique only requires the camera to observe a planar pattern shown at a few different orientations. Fig. 5 shows the experimental setup: the image on the left shows how the pattern is attached to the leg, and the other image shows the displacement of the pattern through the first trajectory.


Figure 4: Trajectories.

Figure 5: Experimental set up for technique validation.


Materials and equipment.

• Checkerboard pattern, model PT036-056 from OPTO ENGINEERING. The technical specifications are shown in Table 2.

Table 2: Pattern technical specifications.

Part number                        PT036-056
Compatible telecentric types       36.48 and 56
Compatible pattern mounts          CMPH036-056
Dimensions (w x h) (mm x mm)       66 x 52
Thickness (mm)                     3.0
Active area (w x h) (mm x mm)      51 x 64
Squares (width & spacing) (mm)     1.4
Dimensional accuracy (micron)      1.90
Surface quality                    60 - 40
Flatness                           λ/2 per cm²
Material                           Soda lime glass
Coating                            Chrome

• USB CMOS color camera, model DFK 72AUC02, with a megapixel lens, model M0814-MP2, both from IMAGING SOURCE. The software used to acquire images is IC-CAPTURE version 2.2 from the same company. The size of the images was 2256 x 1504 pixels.

Procedure.

Images of the pattern were taken at different positions and orientations in order to calibrate, with the camera set around 14.1 inches in front of it. Fig. 6 shows the 25 acquired images used for calibration.

Figure 6: Calibration images.


To calibrate the camera, it is necessary to extract the grid corners manually by clicking on the four extreme corners of the rectangular checkerboard pattern in all images, and to define the size of each square only for the first image (Fig. 7). The table in this figure displays the results of the camera calibration.

Figure 7: Corners extraction results.

The pattern was set on the end-effector's center; then 180 images were obtained from its 3 trajectories, sampling 10 positions and 6 measurements for each trajectory. Fig. 8 shows an analyzed image and a table with its extrinsic parameters.

Figure 8: Extrinsic parameters of a trajectory point.

The positions of the six measurements were averaged for the three trajectories in order to obtain the standard deviation of the end effector. Point i-1 was subtracted from point i, where i = 1, ..., 10, the norm of the displacement from one point to the next was computed, and this result was compared against the theoretically calculated norm (a sketch of this step is given below).
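As an illustration of this comparison step, the following C sketch computes the consecutive displacement norms and their error, as reported in Table 3. The array names and layout are assumptions; the arrays are placeholders to be filled with the theoretical positions of Table 1 and the averaged measured positions obtained from the extrinsic parameters.

```c
#include <math.h>
#include <stdio.h>

#define N_POINTS 11  /* positions 0..10 of one trajectory */

/* Euclidean norm of the displacement between two 3-D points. */
static double norm3(const double a[3], const double b[3])
{
    double dx = b[0] - a[0], dy = b[1] - a[1], dz = b[2] - a[2];
    return sqrt(dx * dx + dy * dy + dz * dz);
}

int main(void)
{
    /* Placeholders: theoretical positions (Table 1) and averaged
     * measured positions from the extrinsic parameters. */
    double theo[N_POINTS][3] = {{0}};
    double meas[N_POINTS][3] = {{0}};

    for (int i = 1; i < N_POINTS; i++) {
        double nt = norm3(theo[i - 1], theo[i]);   /* theoretical norm */
        double nm = norm3(meas[i - 1], meas[i]);   /* measured norm    */
        printf("%d to %d: theoretical %.4f  measured %.4f  error %.4f\n",
               i - 1, i, nt, nm, fabs(nt - nm));
    }
    return 0;
}
```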

Table 3 shows the results of the experimental validation: the calculated and measured norms and the error between them. In general terms, the error is less than 1 mm; only in some positions are there higher errors, but without exceeding 3 mm. The error is mainly due to the backlash that exists in the robot joints, because the joints were manufactured in plastic.


Taking into account the results obtained in this study, some elements of the robot will be redesigned in more robust materials.

Table 3: End effector norm errors.

            Trajectory 1                      Trajectory 2                      Trajectory 3
Norm        theoretical  measured  error     theoretical  measured  error      theoretical  measured  error
0 to 1      9.3782      10.1727   0.7945     10.9488     11.6913   0.7425      9.8067      10.1764   0.3697
1 to 2      9.3782       8.6576   0.7206     11.3268     10.4595   0.8673      9.5359       8.4089   1.127
2 to 3      9.3782      10.1074   0.7292     11.70294    12.5106   0.80766     9.2666       9.7335   0.4669
3 to 4      9.3782      10.1585   0.7803     12.0762     12.3207   0.2445      8.9993       9.304    0.3047
4 to 5      9.3782       8.7048   0.6734     12.4458     11.12     1.3258      8.7343       7.6764   1.0579
5 to 6      9.3782       9.2372   0.141      12.8108     12.8048   0.006       8.4722       8.8895   0.4173
6 to 7      9.3782      10.5649   1.1867     13.1706     13.1726   0.002       8.2132       8.8411   0.6279
7 to 8      9.3782       8.5224   0.8558     13.5244     11.2601   2.2643      7.9579       7.4259   0.532
8 to 9      9.3782      10.04     0.6618     13.8715     13.9623   0.0908      7.7066       7.9589   0.2523
9 to 10     9.3782       9.8559   0.4777     14.2112     14.0408   0.1704      7.4598       8.4116   0.9518

4. Conclusions

An experimental validation of the technique to estimate the pose of robots using camera calibration has been performed. The measurement technique is based on camera calibration methods, without contact between the measurement system and the end-effector. The technique requires only a high-resolution camera and a calibration pattern. It was used to evaluate the forward kinematics of one leg of the robot Hex-piderix. The results show errors of less than 3 mm in the worst case. The errors are mainly due to backlash at the joint level of the leg.
This technique is an inexpensive way to validate the theoretical equations and compare their results against the real displacements of the end effector.
The error can be reduced by redesigning some elements of the robot, which will reduce the backlash.

References

[1] P. Gonzalez de Santos, E. Garcia, and J. Estremera. Quadrupedal Locomotion: An Introduction to the Control of Four-Legged Robots. Springer-Verlag, London, 2006, pp. 3-32.

[2] A. Sintov, T. Avramovich, and A. Shapiro. "Design and motion planning of an autonomous climbing robot with claws." Robotics and Autonomous Systems, vol. 59, pp. 1008-1019, June 2011.

[3] S. Shekhar Roy, A. Kumar Singh, and D. Kumar Pratihar. "Estimation of optimal feet forces and joint torques for on-line control of six-legged robot." Robotics and Computer-Integrated Manufacturing, vol. 27, pp. 910-917, March 2011.

[4] M.C. Garcia-Lopez, E. Gorrostieta-Hurtado, E. Vargas-Soto, J.M. Ramos-Arreguin, A. Sotomayor-Olmedo, and J.C. Moya-Morales. "Kinematic analysis for trajectory generation in one leg of a hexapod robot." Procedia Technology, vol. 3, pp. 342-350, 2012.

[5] D.C. Brown. "Decentering distortion of lenses." Photogrammetric Engineering, 32(3): 444-462, 1966.

[6] Z. Zhang. "A flexible new technique for camera calibration." IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11): 1330-1334, 2000.

[7] J. Heikkila and O. Silven. "A four-step camera calibration procedure with implicit image correction." In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 1106-1112, 1997.

[8] Lung-Wen Tsai. Robot Analysis: The Mechanics of Serial and Parallel Manipulators. New York: John Wiley & Sons, Inc., 1999, pp. 54-109.

[9] R.Y. Tsai. "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses." IEEE Journal of Robotics and Automation, RA-3(4): 323-344, 1987.
