DOI 10.1007/s00170-003-1963-9

ORIGINAL ARTICLE

Int J Adv Manuf Technol (2005) 25: 1218–1231

M.H. Korayem · K. Khoshhal · H. Aliakbarpour

Vision Based Simulation and Experiment for Performance Tests of Robot

Received: 7 May 2003 / Accepted: 27 September 2003 / Published online: 25 March 2004
© Springer-Verlag London Limited 2004

Abstract The feature-based visual servoing approach has been used to control a robot through vision. In order to find the position of the end effector by vision and to carry out robot performance tests, a computational kinematic approach has been used. The software simulates the environment and the operation of an industrial robot. The issues related to image capturing, image processing, target recognition, and how to control the robot through the vision system have been addressed in the simulation tests. The vision-based program has been designed in such a way that it can be run on a real robot with minimal changes.

In the experiment, the vision system recognizes the target and controls the robot by acquiring images from the environment and processing them. In the first stage, the images are converted to grayscale; objects and noise are then distinguished and identified by thresholding, the segmented objects are stored in separate frames, and the main object is recognized. This information is used to drive the robot toward the target. Finally, robot performance tests based on the two standards ISO 9283 and ANSI/RIA R15.05-2 have been carried out with the simulator program, using the vision system on a 3P robot to evaluate its path-related characteristics. To evaluate the performance of the proposed method, an experimental test is also carried out.

Keywords Performance tests · Vision robot · Simulation · Experiment

1 Introduction

To automate the derivation of the Jacobian, most researchers have used neural networks in vision systems for robot control. Miller used a neural network to demonstrate the

M.H. Korayem (✉) · K. Khoshhal · H. Aliakbarpour
Robotic Research Laboratory, Mechanical Engineering Department,
Iran University of Science & Technology, Narmak, Tehran, Iran
Professor
E-mail: [email protected]

feasibility of the approach [1]. Wu and Stanley used a fuzzy decision network to manage a hierarchy of back-propagation networks to approximate the Jacobian over the entire workspace [2]. Other researchers have attempted to automate feature selection. Feddema has shown that geometric feature selection can be aided by measuring the properties of the Jacobian [3]. Anguita used an averaging compression to create a feature vector, which was used to train a neural network to approximate the inverse Jacobian [4]. Stanley and Wu have demonstrated the use of neural networks to perform PCX and vector quantization compression on images for data input, removing the need for feature extraction entirely [5]. Their research was used to automatically compute new features for different targets.

Calibration of the kinematic transforms between the image and the world coordinate system has been examined by many researchers. The calibration-based method has been used to determine the coefficients of the transformation that maps images captured by the camera paired with the robot. Wang has described in detail the relationships between different frames and has applied three different methods to approximate the transform, ranging from the case of an identified target and position to an unidentified target and position [6]. Wei has outlined an approach for computing the transformation based on active vision principles [7]. Zhuang described a system where both robot and camera could be calibrated simultaneously [8].

The robot behaviour is simulated in order to show the robot positions at different moments. Pan et al. presented their robots at different moments through a computer program written in Turbo C [9]. Yun and Sarkar simulated the behaviour of a vehicle with two steerable wheels using Matlab Simulink [10]. Hirose et al. used software called ADAMS to show robot motion [11]. Software called ROBOLAB has been used to simulate a laboratory ATLAS II robot [12].

Early visual servoing systems date back to 1980, when the processing power available for visual control was low, but after a short time much research was done in this field. J. Muligan used such a system for controlling the spade of loader machines, by processing


Fig. 1. Robot simulator program

the pictures, especially by edge detection, which is one of the most common methods for determining positions and controlling a machine's arm [13]. One of the most important matters in image processing is edge detection, which is usually done in software. A. Hajjan performed it in hardware using VLSI; although processing was very fast, it increased the cost of the system, and the flexibility of hardware was lower than that of software systems [14]. Robotic systems, like vision systems, are becoming more compact, cheaper, and applicable to cases that were previously impractical. The SVM1 is a small and cheap image-processing unit, which is a basic tool for using a vision system with a computer. Kurt Konolige explained the software and hardware of an SVM construction [15].

The main purpose of this article is the development and application of a vision-based system for controlling industrial robots and performing laboratory tests of path-related performance characteristics. First, the structure and components of the 3P Cartesian robot are explained. Then an improved labeling algorithm, which decreases the required processing time, is presented. The detailed implementation of the simulation software is explained, as shown in Fig. 1. Two different versions of the simulation software are then presented to investigate the robot error based on two international standards; ISO 9283 and ANSI/RIA R15.05-2 are used to determine the path-related parameters [16]. Finally, an experiment involving a 3P Cartesian robot is considered and analysed in a situation where the robot is required to follow a specified trajectory.

1 – Small Vision Module

2 Labeling process

The main aim of this process is to allocate a unique index to each region of a binary image in which all non-zero pixels are connected to each other. The iterative labeling algorithm scans the image and, each time a free, unconnected pixel is found, allocates a new label to it. If two connected regions with different labels are found, a relabeling pass is propagated and both regions receive a new label, so that the smallest number of labels is used. The iterative algorithm is very simple but very slow, especially when there are U-shaped objects in the picture. The equivalence-table labeling algorithm instead uses a table that registers the equivalences between labels: whenever two connected regions with different labels are found, a new entry is added to the equivalence table. The final step reallocates the labels using graph-search techniques. The computational costs of the two approaches differ greatly: for instance, an image of 700×356 pixels takes about 50 seconds to label with the iterative algorithm, while the equivalence-table algorithm needs about 12 seconds.
Two passes are needed in the equivalence-table algorithm: the first pass performs the primary labeling and the second merges neighbouring labels using the equivalence table. Because this work is carried out on-line, the 12-second period is still too long for the labeling task. Therefore, another algorithm was created which checks, for every non-zero pixel, whether a neighbouring label already exists. If the response is positive, the merge is done in the same


place, and after merging, labeling continues. The time spent by this algorithm is reduced to about two seconds.
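
The single-pass merge idea described above can be illustrated with a short connected-component labeling routine. The following C++ sketch is only an illustration under assumed data structures (a flat 8-bit image buffer and a union-find table of label equivalences); it is not the authors' code.

```cpp
#include <vector>
#include <cstdint>

// On-the-fly label merging for a binary image: "image" holds 0 (background)
// or non-zero (object). A union-find forest keeps label equivalences, so no
// second full relabeling pass over the image is required.
class Labeler {
public:
    std::vector<int> parent;                       // union-find forest

    int findRoot(int x) {                          // path-compressing find
        while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
        return x;
    }
    void unite(int a, int b) { parent[findRoot(a)] = findRoot(b); }

    // 4-connected labeling; returns a label image (0 = background).
    std::vector<int> label(const std::vector<uint8_t>& image, int width, int height) {
        std::vector<int> labels(image.size(), 0);
        parent.assign(1, 0);                        // label 0 reserved for background
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                int idx = y * width + x;
                if (image[idx] == 0) continue;      // skip background pixels
                int left = (x > 0) ? labels[idx - 1] : 0;
                int up   = (y > 0) ? labels[idx - width] : 0;
                if (left == 0 && up == 0) {         // new region: allocate a label
                    int newLabel = static_cast<int>(parent.size());
                    parent.push_back(newLabel);
                    labels[idx] = newLabel;
                } else if (left != 0 && up != 0) {  // both neighbours labeled: merge in place
                    labels[idx] = findRoot(left);
                    if (findRoot(left) != findRoot(up)) unite(left, up);
                } else {
                    labels[idx] = (left != 0) ? left : up;
                }
            }
        }
        for (int& l : labels) if (l != 0) l = findRoot(l);   // resolve final labels
        return labels;
    }
};
```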

3 Design and implementation of the simulator software

The simulator software is designed and implemented in an object-oriented style. The high flexibility of this approach, and the improved readability for those who want to study the system and apply possible changes to it, make it quite useful. The Visual C++ 6 programming language is used to implement this design. In this software, a bitmap picture is taken from the view of a camera installed on the end effector by the capture-frame module and returned as an array of pixels. For the real mode, only the contents of this module need to be changed so that the picture is captured from the physical camera.
The real program code can be implemented exactly according to the simulator program code in the other phases of picture processing. Because of the object-oriented design, this important step can be done quite easily, and the only necessary change is to switch the simulator program to real mode.

3.1 Introducing the CArray class

In order to hold one or more frames of a picture consisting of different elements, multi-dimensional arrays with many elements are needed. One of the benefits of object-oriented design is that, for every variable being defined, a combination of operators and functions related to that variable can be encapsulated with it. Therefore, in this research we created a class named CArray whose main member is an array of n × m × k elements. This class carries out all activities necessary for image processing, including thresholding, image segmentation, labeling, extraction of the features of the objects present in the picture (such as area, height, width, centre of gravity and bounding rectangle), cloning of the image frame, and storing and restoring frames to and from file.
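
Only the class name CArray and its list of responsibilities come from the text; the following C++ interface sketch shows one plausible shape for such a class, with member names and signatures that are purely illustrative assumptions (most method bodies are omitted).

```cpp
#include <vector>
#include <string>
#include <cstdint>

// Per-object features as listed in the text (area, size, centre of gravity).
struct ObjectFeatures {
    int    area = 0;                        // number of object pixels
    int    width = 0, height = 0;           // bounding-rectangle size
    double centroidX = 0, centroidY = 0;    // centre of gravity
};

// Sketch of an interface in the spirit of the paper's CArray class.
class CArray {
public:
    CArray(int n, int m, int k) : n_(n), m_(m), k_(k), data_(n * m * k, 0) {}

    uint8_t& at(int i, int j, int plane) {              // element access
        return data_[(plane * n_ + i) * m_ + j];
    }

    void threshold(uint8_t low, uint8_t high);           // binarize the frame
    int  labelRegions();                                  // segmentation + labeling
    ObjectFeatures extractFeatures(int label) const;      // per-object features
    CArray clone() const { return *this; }                // clone of the image frame

    bool saveToFile(const std::string& path) const;       // store frame
    bool loadFromFile(const std::string& path);           // restore frame

private:
    int n_, m_, k_;
    std::vector<uint8_t> data_;    // n x m x k elements, row-major
};
```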

4 Different versions of the software

Two separate versions of the simulator software have been prepared.

4.1 The first version

We have taken advantage of two cameras in this version of the software. The user can switch between the two cameras at any moment. The task of the processor camera, which is installed on the end effector, is to take pictures of the workspace and send them to the vision system for processing. The second camera is the observer camera. The aim of installing this camera is to observe the whole space in which the robot works and to supervise the robot's operation and its reaction as seen by the observer. This camera is movable, and the user can place it at any point in the space.

This version of the program can carry out performance tests on the robot in the simulator software without any limitation and show the results quite naturally, as well as find the target object automatically through the vision system and direct the end effector to reach the target.

The robot can read and perform its motion in two ways. The first is point to point: in this approach, the displacements of the end effector in space are given in a file loaded beforehand by the user and are read and executed line by line by the program. The second approach is a continuous path, where the user can specify paths such as a circle (parameters: centre and radius), a rectangle (parameters: side lengths and the coordinates of a corner) or a line (parameters: coordinates of the start and end points) for the robot motion. In the second part of the program, attention has been paid to testing the error of the robot motion along these paths.
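
The instruction strings used later in the paper (e.g. R −1,−2, 2, 1 for a rectangle and C 0,−1.5, 0.5 for a circle) suggest a simple one-letter command format. The parser below is a hypothetical sketch of how such lines might be read; the actual file format of the software is not documented, and one simulator unit corresponds to 40 mm.

```cpp
#include <sstream>
#include <string>
#include <vector>
#include <iostream>

// Illustrative sketch only: 'R' rectangle, 'C' circle, 'L' line, followed by
// numeric parameters (e.g. corner and side lengths, or centre and radius).
struct PathCommand {
    char type = '?';
    std::vector<double> params;
};

PathCommand parseCommand(const std::string& line) {
    PathCommand cmd;
    std::string body = line;
    for (char& c : body) if (c == ',') c = ' ';   // accept comma separators
    std::istringstream in(body);
    in >> cmd.type;
    double value;
    while (in >> value) cmd.params.push_back(value);
    return cmd;
}

int main() {
    PathCommand rect = parseCommand("R -1,-2, 2, 1");   // rectangle command from the paper
    PathCommand circ = parseCommand("C 0,-1.5, 0.5");   // circle command from the paper
    std::cout << rect.type << " has " << rect.params.size() << " parameters\n";
    std::cout << circ.type << " has " << circ.params.size() << " parameters\n";
    return 0;
}
```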

4.1.1 Vision based algorithm

The first step is providing a buffer to hold the picture frame. This buffer is of CArray type. It is also necessary to keep a series of frames; therefore, we create a CArray-type object called "obj" in which, after segmentation, the separated objects are kept as a collection of frames. In the next step, the FrameCapture module is called. The main duty of this module is taking an image from the camera connected to the end effector and keeping the picture contents in the buf1 frame. After the thresholding, segmentation and labeling operations, the objects present in the buf1 picture are extracted, and each one is stored, with its number, in a different level of the obj frame collection. After separation of the objects, the desired object must be recognized among them. This is done by surveying the features of the target object.

Next, the controller algorithm computes the displacements needed to correct delta X and delta Y, and the corresponding commands are given to the X and Y axes of the robot. After these displacements, another image is taken from the workspace and the operations are repeated. This continues until the errors of the robot's X and Y axes reach their minimum. The controller algorithm tries to keep the object in the centre of the image at every step, and the error in bringing the end effector to the object is based on the distance of the object's centre from the centre of the image. Then delta Z is computed and corrected. This is done just like the former case, with the difference that the error of this axis is based on the difference between the object size observed in the image and the desired size.
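
The control loop described above can be summarized in a short sketch. The structure (centre offset drives delta X and delta Y, apparent size drives delta Z) follows the text, while the function names, gains and stopping tolerance are assumptions.

```cpp
#include <cmath>

// Sketch of the image-based control loop. All names, gains and thresholds
// are assumptions; only the overall structure comes from the text.
struct ObjectObservation {
    double centerU = 0, centerV = 0;   // object centre in the image (pixels)
    double size = 0;                   // apparent object size (e.g. area)
};

// Assumed to be provided elsewhere: grab a frame and find the target in it,
// and command an incremental robot move.
ObjectObservation captureAndRecognize();
void moveRobot(double dx, double dy, double dz);

void visualServoLoop(double imageCenterU, double imageCenterV,
                     double desiredSize, int maxSteps) {
    const double gainXY = 0.01;    // pixels -> robot units (assumed)
    const double gainZ  = 0.001;   // size difference -> robot units (assumed)
    const double tol    = 1.0;     // stop when the pixel error is small

    for (int step = 0; step < maxSteps; ++step) {
        ObjectObservation obs = captureAndRecognize();
        double errU = obs.centerU - imageCenterU;   // horizontal offset
        double errV = obs.centerV - imageCenterV;   // vertical offset
        double errS = desiredSize - obs.size;       // size (depth) error

        if (std::fabs(errU) < tol && std::fabs(errV) < tol) break;

        // Correct X and Y from the centre offset, Z from the size difference.
        moveRobot(-gainXY * errU, -gainXY * errV, gainZ * errS);
    }
}
```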

4.1.2 Recognition of the object and bringing the end effector to the target

There are different kinds of objects in the robot workspace. The robot identifies the desired object, which is a spherical


Fig. 2. Robot picture after the first step; left picture: observer camera view, right picture: processor camera view

Fig. 3. Robot picture: after the last step (34th step); left picture: observer camera view, right picture: processor camera view

Fig. 4. Robot picture after the first step; left picture: observer camera view, right picture: processor camera view

object, through the visual system and leads the end effector to it. The visual system carries out this process in 34 steps. Figure 2 shows the Gantry robot in the first step and Fig. 3 shows the robot from different views after the work is finished (34th step).

Similarly, simulation tests can be done for the bridge robot. Figure 4 shows the robot in the first step, and Figs. 5 and 6 show the bridge robot from different views after the work is finished (34th

step). The robot motion diagram is also drawn in Fig. 7.


Fig. 5. Robot picture: after the last step (34th step); left picture: observer camera view, right picture: processor camera view

Fig. 6. Another view of the robot in a situation where the end effector has reached the target (observer camera view)

Fig. 7. Robot motion diagram when the end effector reaches the target (by vision system)


Fig. 8. Neural network convergence diagram

4.2 The second version

This version of the software is devoted to the performance tests of the robot according to the international standards ISO 9283 and ANSI/RIA R15.05-2. Two cameras at a certain distance from each other look at the end effector. Both cameras are fixed and zoomed on the end effector; from the distance of the end effector to the cameras, its coordinates in the image plane are computed. However, to do the performance tests, the end effector positions are needed with respect to the reference coordinate system. Usually, this mapping from the image plane to the reference coordinate system is done by solving the calibration equations, which in turn is complicated. In this program, in order to perform this non-linear mapping, we have used a neural network (Fig. 8).

5 Evaluation of the robot errors through the simulator system

In an actual robot, because of physical effects and imprecision in the manufacture of mechanical parts, mechanical errors arise while the axes move. Therefore, in the simulator program these errors are introduced artificially in the moving axes. Despite recent international efforts by many standards committees and research and industrial laboratories, many robot users still suffer losses as a result of the lack of standard technical procedures and specifications for robot testing. This is caused by the complexity of most robot designs and their wide variety. In this research we try to carry out some of these procedures by using a camera and a visual system, according to standards such as ISO 9283 and ANSI/RIA that apply specifically to robots.

5.1 Robot performance test according to ISO-9283 standard

The test procedures in this standard are divided into eight general levels. In this research, two of these eight procedures are implemented to measure the path-related parameters.

5.1.1 Error detection in the point-to-point approach through the visual system (single camera)

To examine this approach, we place the target at a specific point in the robot's 3D workspace. Then we command the robot to reach the target position through the point-to-point approach. Because of mechanical errors, the robot deviates from the desired point. In such a situation, we activate the visual system to determine the amount of error in order to correct it. When the robot motion finishes, the visual system shows errors of 40, 30 and 50 pixels along the Y, X and Z axes, respectively, in reaching the specified point. As shown in Fig. 9, the visual system corrects these errors after only 9 steps. In this approach we use a single camera to recognize and correct the errors. An eye-in-hand camera reveals the errors, and it shows that error correction is performed from the moment the work starts.

5.1.2 Determination of the error through the visual system on a continuous path (two cameras)

In this approach, the robot end effector is moved along a specified path. The position of the end effector is recognized at every moment by two cameras fixed at a certain distance from each other. In this approach, contrary to the former one, the camera is not mounted on the end effector. Camera No. 1 determines the target coordinates in the 2D image plane, and the depth is computed with the help of the second camera.

In this way, all target coordinates in the image plane are computed. But in order to obtain the target coordinates with respect to the reference frame, a series of operations is needed. Usually, non-linear programming is used to determine the transformation matrix; we prefer neural networks to non-linear programming, since neural networks behave like non-linear estimating functions. To estimate the transformation, a training set was obtained by moving the robot end effector over a 330-point grid in x, y, z inside a cuboid space of 1 × 1 × 2 units (each unit equals 40 mm). The neural network used is a back-propagation perceptron network with 5 layers, in which the first layer is a 3-node input consisting of the image plane coordinates,


Fig. 9. Investigation and error elimination through the vision system by a single camera (units are pixels)

layers 2, 3 and 4 have 20 nodes each, and the output layer has 3 nodes giving the coordinates with respect to the earth reference system. The activation functions used for layers 2 to 4 are logarithmic sigmoid, and the input and output layers are linear. The convergence diagram in Fig. 8 shows that the network converges quickly to a small error and then, after 2000 iterations, converges for every axis with a final error of 0.0009 units, or about 0.036 mm.
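
For illustration, the forward pass of such a 3–20–20–20–3 network with log-sigmoid hidden layers and a linear output can be sketched as follows. The weights are assumed to have been trained beforehand by back-propagation, and the exact meaning of the three inputs (two image-plane coordinates plus a depth cue from the second camera) is an assumption.

```cpp
#include <array>
#include <vector>
#include <cmath>

// One weight layer of the mapping network; four such layers connect the
// 3-node input, three 20-node log-sigmoid hidden layers, and the 3-node
// linear output (reference-frame coordinates).
struct Layer {
    std::vector<std::vector<double>> w;   // w[j][i]: weight from input i to node j
    std::vector<double> b;                // bias of each node
    bool sigmoid = true;                  // log-sigmoid hidden layer or linear output

    std::vector<double> forward(const std::vector<double>& x) const {
        std::vector<double> y(b.size());
        for (size_t j = 0; j < b.size(); ++j) {
            double s = b[j];
            for (size_t i = 0; i < x.size(); ++i) s += w[j][i] * x[i];
            y[j] = sigmoid ? 1.0 / (1.0 + std::exp(-s)) : s;   // log-sigmoid or linear
        }
        return y;
    }
};

// Maps an image-plane measurement (u, v, depth cue) to (x, y, z) in the
// reference frame, assuming the four layers have already been trained.
std::array<double, 3> imageToReference(const std::array<Layer, 4>& net,
                                       double u, double v, double d) {
    std::vector<double> a = {u, v, d};
    for (const Layer& layer : net) a = layer.forward(a);
    return {a[0], a[1], a[2]};
}
```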

Test-1 Error determination on the rectangular path. In this section, the robot error when moving on a rectangular path is obtained. To do this, we move the robot end effector on the desired path using the R −1,−2, 2, 1 instruction and, as the end effector moves along this path, we take images of it with the two cameras, which are permanently connected to the computer. The

Fig. 10. Error investigation on the rectangular path (by two cameras)

end effector coordinates are determined in the image plane. Using the neural network, we convert the image plane coordinates to the reference frame. The desired path and the actual path are drawn in Fig. 10.

Test-2 Error determination on the circular path. We investigate the error on a circular path for the continuous motion case. The robot executes the C 0,−1.5, 0.5 instruction; a circle with 32 continuous points is considered. According to the command, the centre of the circle is (0,−1.5) and its radius 0.5 units, where each unit in the simulator equals 40 mm. As in the previous cases, during the continuous motion along the path, 32 images of the end effector are taken with the two cameras, and the end effector coordinates in the image plane are collected. Using the neural network, the image plane coordinates of the points are transformed to the reference


Fig. 11. Error investigation on the circular path

frame. The desired path and the actual path of the robot are shown in Fig. 11.

5.2 Robot performance test based on ANSI/RIA R15.05-2

The aim of this standard is to provide technical information to help users select the most suitable robot for their case. The standard defines important path-based principles and then the different aspects used to evaluate them. These principles are: approximate path accuracy, absolute path accuracy, path repeatability, speed specifications, and corner variables. Evaluation of these principles is one of the most convenient ways to evaluate the whole path-based performance of industrial robots. Also, measuring these principles makes it possible to compare the operation of similar robots. To make the tests more applicable, statistical analyses according to the ANSI/RIA standard are performed on the outputs obtained in the former sections [16].

Maximum path deviation AC: AC is the maximum distance between any attained path and the corresponding reference path. This deviation can occur at any one of the measured evaluation points, over a minimum of ten measurement cycles, as follows:

AC = \max_{i=1}^{n} \max_{j=1}^{m} \sqrt{(u_{aij} - u_{rj})^2 + (v_{aij} - v_{rj})^2}    (1)

where (u_{aij}, v_{aij}) and (u_{rj}, v_{rj}) are the coordinates of the attained path and the reference path on the evaluation plane, respectively, n is the number of measurement cycles (repetitions) and m is the number of points on the evaluation plane.

Average path deviation \bar{AC}: \bar{AC} is the average of the distances between the barycenter of the attained paths and the corresponding reference path:

\bar{AC} = \frac{1}{m} \sum_{j=1}^{m} \sqrt{(\bar{u}_{aj} - u_{rj})^2 + (\bar{v}_{aj} - v_{rj})^2}    (2)

where \bar{u}_{aj} and \bar{v}_{aj} are the coordinates of the barycenter path, defined as:

\bar{v}_{aj} = \frac{1}{n} \sum_{i=1}^{n} v_{aij}    (3)

\bar{u}_{aj} = \frac{1}{n} \sum_{i=1}^{n} u_{aij}    (4)

Calculation of the path repeatability FOM: The mean (barycenter) path is used in the evaluation of the path repeatability figure of merit. This process involves calculating the deviation D_{ij} between each evaluated point and its corresponding barycenter point:

D_{ij} = \sqrt{(u_{aij} - \bar{u}_{aj})^2 + (v_{aij} - \bar{v}_{aj})^2}    (5)

PR = \max_{j=1}^{m} \max_{i=1}^{n} D_{ij}    (6)

\bar{PR} = \max_{j=1}^{m} \frac{1}{n} \sum_{i=1}^{n} D_{ij}    (7)

where D_{ij} is the path deviation, PR is the maximum path repeatability and \bar{PR} is the average path repeatability.


Fig. 12. Gravity centers \bar{u}_{aj}

Cornering round-off error: The cornering round-off CR is defined as the minimum distance between a corner point and any point on the attained path. For each of the three corner points, the value of CR is calculated as follows:

CR = \min_{k} \sqrt{(X_e - X_{ak})^2 + (Y_e - Y_{ak})^2 + (Z_e - Z_{ak})^2}    (8)

where X_e, Y_e, Z_e are the position coordinates of the reference corner point and X_{ak}, Y_{ak}, Z_{ak} are the coordinates of the points along the attained path.

Cornering overshoot: The cornering overshoot CO is defined as the largest deviation outside the reference path after the robot has passed the corner. Its value is the maximum distance between the reference path and the attained path; this distance is determined in a plane perpendicular to the straight line between two adjacent reference points and is calculated as:

CO = \max_{k} \sqrt{(X_{rk} - X_{ak})^2 + (Y_{rk} - Y_{ak})^2 + (Z_{rk} - Z_{ak})^2}    (9)

where X_{ak}, Y_{ak}, Z_{ak} are the position coordinates of discrete data points on the attained path and X_{rk}, Y_{rk}, Z_{rk} are the coordinates of the sample data points along the reference path.
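
Equations 1–7 are simple aggregations over the sampled evaluation points. A minimal C++ sketch of their computation is given below; the data layout and function names are assumptions, not the standard's reference implementation (CR and CO, Eqs. 8 and 9, would be computed analogously with a min/max over the points near each corner).

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

// attained[i][j] is evaluation point j of cycle i (n cycles, m points per
// cycle); reference[j] is the corresponding reference point.
struct Point2D { double u = 0, v = 0; };

static double dist(const Point2D& a, const Point2D& b) {
    return std::hypot(a.u - b.u, a.v - b.v);
}

struct PathMetrics {
    double acMax = 0;   // Eq. 1: maximum path deviation AC
    double acAvg = 0;   // Eq. 2: average deviation of the barycenter path
    double prMax = 0;   // Eq. 6: maximum path repeatability PR
    double prAvg = 0;   // Eq. 7: average path repeatability
};

PathMetrics evaluatePath(const std::vector<std::vector<Point2D>>& attained,
                         const std::vector<Point2D>& reference) {
    const size_t n = attained.size();       // measurement cycles
    const size_t m = reference.size();      // evaluation points per cycle
    PathMetrics out;

    // Barycenter path (Eqs. 3 and 4).
    std::vector<Point2D> bary(m);
    for (size_t j = 0; j < m; ++j) {
        for (size_t i = 0; i < n; ++i) {
            bary[j].u += attained[i][j].u / n;
            bary[j].v += attained[i][j].v / n;
        }
    }

    for (size_t j = 0; j < m; ++j) {
        double maxDij = 0, sumDij = 0;
        for (size_t i = 0; i < n; ++i) {
            out.acMax = std::max(out.acMax, dist(attained[i][j], reference[j]));  // Eq. 1
            double dij = dist(attained[i][j], bary[j]);                           // Eq. 5
            maxDij = std::max(maxDij, dij);
            sumDij += dij;
        }
        out.acAvg += dist(bary[j], reference[j]) / m;        // Eq. 2
        out.prMax = std::max(out.prMax, maxDij);             // Eq. 6
        out.prAvg = std::max(out.prAvg, sumDij / n);         // Eq. 7
    }
    return out;
}
```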

5.2.1 Evaluation of the ANSI/RIA standard principles for the 3P robot through the simulator software

In this part, we compute the principles and their relations. The robot end effector moves over the rectangular path of the standard test platform (Fig. 10); the number of evaluation points is m = 34 and the path is repeated 10 times (n = 10).

Using the two cameras, which look at the end effector from a certain distance, images of the end effector are taken at different instants and its coordinates are obtained in the image plane. The coordinates obtained in this step give the end effector position in the image plane (the unit is the pixel); to carry out the tests and analyses, the

Table 1. Cornering round-off (CR) and cornering overshoot (CO) for the reference corners

Reference   CR       CO
R1          0.0693   2.1736
R2          0.0286   2.3201
R3          0.0776   2.1675

position of the end effector must be related to the reference coordinate system (a fixed point in the workspace).

As mentioned before, in this work we have used neural networks to transform the coordinates from the image plane to the reference system, in order to obtain real distances.

R1 = (1,−2), R2 = (1,−1), R3 = (−1,−1)

The gravity centers of the paths are computed from Eqs. 3 and 4 (Fig. 12), and the path deviation D_{ij} is computed from Eq. 5. Then the repeatability values are determined from Eqs. 6 and 7: the average repeatability is \bar{PR} = 0.0154 and the maximum repeatability PR = 0.0273 (units are ×40 mm). Also, the cornering round-off error CR and the cornering overshoot CO for the above-mentioned test, according to Eqs. 8 and 9, are listed in Table 1.

6 Description of the system

In order to operate a visual system for controlling a robot, some facilities are needed, which may differ according to the application. The camera is the main hardware of the visual system, and in different visual systems the only common part is the camera. In this system the following components are needed (Fig. 13):

1. 3P robot
2. Robot control system
3. Image processing system
4. Connecting cable
5. Two interface boards for controlling the systems
6. Camera


Fig. 13. 3P Cartesian bridge robot

6.1 Method of operating the whole system

We first prepare the robot system and the vision system, and then the control program. If the aim is for the end effector to reach an object, the vision system first takes an image of the environment. That image is processed and, if the main object is recognized in it and processing is successful, its position is considered. The necessary displacements in x, y and z for moving the end effector are then sent from the vision system to the robot system through the interface board. The robot's control program receives this information and moves the robot. Meanwhile, the vision system waits for information from the robot system in order to be sure that the robot's motion has finished and it has stabilized; in other words, after the robot completes its task and settles, a signal is sent to the vision system so that it can continue its operation. This continues until the target is placed exactly in the middle of the image.
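
The alternation described above (send a displacement, then wait for the robot's "settled" signal before taking the next image) can be sketched as follows; the I/O functions are placeholders for the interface-board communication and are assumptions.

```cpp
#include <cstdlib>

// Assumed interface-board helpers: command an incremental move, and poll
// the signal the robot system sends back when it has finished and settled.
void sendDisplacement(int dxPulses, int dyPulses, int dzPulses);
bool robotSettled();

// Assumed image-processing helper: target offset from the image centre.
struct PixelError { int du = 0, dv = 0, dsize = 0; };
PixelError measureTargetError();

void runUntilCentered(int tolerancePixels) {
    for (;;) {
        PixelError e = measureTargetError();
        if (std::abs(e.du) <= tolerancePixels &&
            std::abs(e.dv) <= tolerancePixels) break;   // target centred in the image

        // Convert the pixel error to axis pulses elsewhere, then command the move.
        sendDisplacement(e.du, e.dv, e.dsize);

        while (!robotSettled()) {
            // wait for the robot to finish its motion and stabilize
        }
    }
}
```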

6.2 Controlling software of vision system

This software, written in Visual C++ 6, controls the vision system. The algorithms applied in it, such as thresholding and segmentation, are similar to those of the 3P robot simulation software [17]. However, some algorithms had to be changed to obtain proper results, because using them unchanged caused problems in the experimental tests. For example, in the experiment the images of the environment are not ideal; sometimes there was considerable noise in the image and it was difficult to remove all of it.

6.3 Controlling software of robot system

The robot control software is written in C under MS-DOS. In this program there are two base addresses: the 300 Hex base address refers to the base address of the interface board, and the 240 Hex address refers to the base address of the robot actuator controller. A control word is used to set the direction of the registers (input or output); here it is 0x99 (10011001), which makes the A and C registers input ports and the B register an output port, with the system initialized in mode 0.
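
A minimal sketch of this initialization, in the Turbo C / MS-DOS style of the robot control software, is shown below. The base addresses and the control word 0x99 come from the text; the register offsets follow the usual 8255 layout and are an assumption about this particular board.

```cpp
/* Sketch only: 0x300 (interface board) and 0x240 (actuator controller) and
   the control word 0x99 are from the text; the port offsets (A, B, C at
   +0..+2, control register at +3) are the usual 8255 layout and are assumed. */
#include <dos.h>            /* outportb / inportb in Borland / Turbo C */

#define BOARD_BASE   0x300  /* interface board base address            */
#define ACT_BASE     0x240  /* robot actuator controller base address  */

#define PORT_A       (BOARD_BASE + 0)   /* input  (per control word)   */
#define PORT_B       (BOARD_BASE + 1)   /* output (per control word)   */
#define PORT_C       (BOARD_BASE + 2)   /* input  (per control word)   */
#define CTRL_REG     (BOARD_BASE + 3)   /* 8255 control register       */

void init_interface_board(void)
{
    /* 0x99 = 10011001b: mode 0, ports A and C as inputs, port B as output. */
    outportb(CTRL_REG, 0x99);
}
```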

7 Experimental tests

In this section, we control the robot with the vision system and measure the errors. First, the vision system assigned to taking images and reaching the object is tested. Before testing the vision system on the robot, we test the camera alone to evaluate the effect of different lighting conditions on the image and to learn about the object recognition process. With some tests we can specify the robot's errors; the standard robot test platform is used to determine the error rate [16]. Path accuracy evaluation points for both the circular and rectangular reference paths are used (Fig. 14).

7.1 Processing of images under different environmental lighting conditions

Two similar images under different lighting conditions give different results. Under good lighting, the probability of identifying an object in an image is high,


Fig. 14. The diagram of error correction using vision (units are pixels)

Fig. 15. a Gray-scale and threshold of the image under very high light from one side. b Gray-scale and threshold of the image under good light conditions

and vice versa. Here the appropriate threshold range is determined (Fig. 15a,b).
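
A minimal sketch of this grayscale-and-threshold step is shown below; the luminance weights are the common RGB approximation and the threshold window is an assumed parameter that has to be tuned to the lighting.

```cpp
#include <vector>
#include <cstdint>

// Convert an interleaved RGB buffer to grayscale using the usual
// luminance approximation.
std::vector<uint8_t> toGray(const std::vector<uint8_t>& rgb) {
    std::vector<uint8_t> gray(rgb.size() / 3);
    for (size_t p = 0; p < gray.size(); ++p) {
        double y = 0.299 * rgb[3 * p] + 0.587 * rgb[3 * p + 1] + 0.114 * rgb[3 * p + 2];
        gray[p] = static_cast<uint8_t>(y);
    }
    return gray;
}

// Keep pixels whose intensity falls inside [low, high]; everything else
// becomes background. Under strong light the window must be raised,
// under dim light lowered.
std::vector<uint8_t> thresholdRange(const std::vector<uint8_t>& gray,
                                    uint8_t low, uint8_t high) {
    std::vector<uint8_t> bin(gray.size(), 0);
    for (size_t p = 0; p < gray.size(); ++p)
        if (gray[p] >= low && gray[p] <= high) bin[p] = 255;
    return bin;
}
```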

7.2 Controlling robot by camera

Once an image is available and the main object is recognized, the number of pixels between the object and its desired location can be calculated from the image. This number is then given to the robot control system so that the robot can move and reach the object. The number of pulses corresponding to a pixel is different for each of the x, y, z axes of the 3P robot; the difference is the result of the gearwheel sizes, which differ from axis to axis, so traversing an equal distance on each axis needs a different number of pulses. These pulse counts have to be obtained by trial and error and, because of the robot's mechanical limitations, the robot may not reach the target exactly (Fig. 14).
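
As a rough illustration of the pixel-to-pulse conversion, the per-axis factors below are placeholders that would have to be found by trial and error, as described above.

```cpp
// Sketch of the pixel-to-pulse conversion. The per-axis factors are
// assumed calibration values: each axis has different gearwheel sizes,
// so an equal distance needs a different pulse count.
struct AxisGains {
    double pulsesPerPixelX = 12.0;
    double pulsesPerPixelY = 15.0;
    double pulsesPerPixelZ = 20.0;
};

struct PulseCommand { long x, y, z; };

PulseCommand pixelsToPulses(int dxPix, int dyPix, int dzPix, const AxisGains& g) {
    PulseCommand c;
    c.x = static_cast<long>(dxPix * g.pulsesPerPixelX);   // different gearing per axis
    c.y = static_cast<long>(dyPix * g.pulsesPerPixelY);
    c.z = static_cast<long>(dzPix * g.pulsesPerPixelZ);
    return c;
}
```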

7.3 Performing standard tests on 3P robot

Path accuracy evaluation points for both the rectangular and circular reference paths are obtained according to the ANSI/RIA R15.05-2 standard path, as shown in Figs. 16 and 17.

7.3.1 Rectangular and circular reference paths

In this standard, a rectangular path should be evaluated at no fewer than 20 points of the robot's actual path and reference path, as shown in Fig. 16. Similarly, at least 12 points should be used for the circular path (Fig. 17). The diagrams of the U and V results of the experiment on the rectangular paths, repeated 10 times and processed using Eqs. 3 and 4, are shown in Fig. 18. Similarly, the diagrams of the U and V results of the experiment on the circular paths, repeated 10 times, are shown in Fig. 19.


Fig. 16. Diagram of rectangular tests of the robot

Fig. 17. Diagram of circular tests of the robot


Fig. 18. Diagram of the mean of U and V for each evaluation point on the rectangular paths over 10 runs

Fig. 19. Diagram of the mean of U and V for each evaluation point on the circular paths over 10 runs

After performing these tests, the results in Tables 2 and 3 are obtained. The results show that the error on the rectangular path is increasing. Most of the error is related to the x axis, whose mechanical drive system is weaker than the others. It should be noted that the results of these tests are influenced not only by the robot's mechanics and hardware, but also by the controlling software.

Table 2. Result of path-related parameters

                 Rectangular path   Circle path
AC (maximum)     20.0998            8
AC (average)     11.8209            0
PR (maximum)     7.3055             5.5009
PR (average)     3.5495             1.6180

Table 3. Results of corner errors and corner overshoot on the rectangular reference path

       R1      R2      R3
CR     18      5       5.385
CO     32.2    9.84    35.9

8 Discussion

In this paper, the use of vision sensors in controlling a robot has been discussed. After addressing the different issues related to the vision system, two separate versions of the simulator software were introduced, in such a way that these programs, using a 3D graphic space, could carry out a realistic control operation


of the robot by the visual system. Because of the importance of robot standards, the second simulator version is devoted to the performance test analysis of the robot. Among the existing standards, ISO 9283 and ANSI/RIA have been studied in order to determine the path-related parameters.

In the first version of the software, two cameras have been used: the first camera is used for the processing phase, and the second one allows the user to supervise the robot operation from different places and different views. In this research the feature-based visual servoing approach has been used for servoing; a single camera is used in the processing phase to lead the robot, following this feature-based approach. But to do the performance tests, a second camera is needed for information processing and for determining the end effector position with respect to the reference frame.

In this research, we could consider the details of the robot's operations by conducting simulation tests to compute the errors, using a 3D animation environment to create the models and scenes with the desired quality and bring the vision system close to the real situation, in such a way that all installed cameras operate exactly like real ones. Real parameters such as focal distance, magnification rate and field of view are defined. Therefore, before spending any money, the user can study and investigate, in this environment, how to control the robot and the visual system, together with other related issues.

9 Conclusion

In this article, it is shown how to use a vision system to control an industrial robot. It was observed that, before using this vision system for controlling a robot, general information about the robot and the camera is required, and the effects of lighting conditions on the captured images must be known. With this system we can achieve better and more accurate control of the robot and its surroundings. In this system there is no need to know the initial location of the robot in order to calculate the required movement toward the goal, because the captured images tell us the distance from the object to the end effector; this is one of the advantages of the system.

A vision system can also be used to determine the path-related characteristics of a robot. In this article, by applying the vision system, the path-related parameters were found for an industrial robot. The calculation of the path accuracy is simplified by finding the intersection of the attained path with the evaluation plane.

Acknowledgement The authors gratefully acknowledge the support of the Department of Mechanical Engineering at the Iran University of Science & Technology. Support for this research project is also provided by the IUST Research Program.

References

1. T.W. Miller III, "Neural Networks for Sensor Based Control of Robots with Vision", IEEE Trans. on Systems, Man and Cybernetics, Vol. 19, No. 4, 1989, pp. 826–831.

2. J. Wu, K. Stanley, "Modular Neural-Visual Servoing Using a Neural-Fuzzy Decision Network", IEEE Conference on Robotics and Automation, Albuquerque, 1997, pp. 3238–3243.

3. J.T. Feddema, C.S.G. Lee, O.R. Mitchell, "Weighted Selection of Image Features for Resolved Rate Visual Feedback Control", IEEE Transactions on Robotics and Automation, Vol. 7, No. 1, 1991, pp. 31–47.

4. D. Anguita, G. Parodi, R. Zunino, "Neural Structures for Visual Motion Tracking", Machine Vision and Applications, August 1995.

5. J. Wu, K. Stanley, "Modular Neural-Visual Servoing Using a Neural-Fuzzy Decision Network", IEEE Conference on Robotics and Automation, Albuquerque.

6. Ching-Cheng Wang, "Extrinsic Calibration of a Vision Sensor Mounted on a Robot", IEEE Trans. on Robotics and Automation, Vol. 8, No. 2, April 1992, pp. 161–175.

7. Guo-Qing Wei, Klaus Arbter, Gerd Hirzinger, "Active Self-Calibration of Robotic Eyes and Hand-Eye Relationships with Model Identification", IEEE Trans. on Robotics and Automation, Vol. 14, No. 1, February 1998, pp. 158–166.

8. Hanqi Zhuang, Kuanchih Wang, Zvi S. Roth, "Simultaneous Calibration of a Robot and a Hand-Mounted Camera", IEEE Trans. on Robotics and Automation, Vol. 11, No. 5, 1995, pp. 649–660.

9. T.J. Pan, R.C. Luo, "Motion obstacle", Proc. IEEE Int. Conf. on Robotics and Automation, Cincinnati, Ohio, pp. 573–583, 1990.

10. X. Yun, N. Sarkar, "Dynamic Feedback Control of Vehicles with Two Steerable Wheels", Proc. IEEE Int. Conf. on Robotics and Automation, Minneapolis, Minnesota, pp. 3105–3110, 1996.

11. S. Hirose, E.F. Fukushima, S. Tsukagoshi, "Basic Steering Control Methods for the Articulated Body Mobile Robot", Proc. IEEE Int. Conf. on Robotics and Automation, San Diego, California, pp. 2384–2390, 1994.

12. M.H. Korayem, "A Robotic Software for Autonomous Wheeled Mobile Robot", Int. J. of Engineering, Vol. 12, No. 2, pp. 151–162, 2001.

13. D.N. Green, J.Z. Sasiadek, G.S. Vukovich, "Path Tracking, Obstacle Avoidance and Position Estimation by an Autonomous, Wheeled Planetary Rover", Proc. IEEE Int. Conf. on Robotics and Automation, San Diego, California, pp. 1300–1305, 1994.

14. A. Hajjan, "A New Real Time Edge Linking Algorithm and its VLSI Implementation", Colorado State University.

15. Kurt Konolige, "Small Vision Module".

16. American National Standard for Industrial Robots and Robot Systems – Path-Related and Dynamic Performance Characteristics – Evaluation, ANSI/RIA R15.05-2, Apr. 2000.

17. M.H. Korayem, H. Aliakbarpour, "Simulating of Vision System and Presenting Image Processing Algorithms for Robot", 2nd Vision Conference, 2003.