Exploring Human Hand Capabilities into Embedded Multifingered Object Manipulation

Honghai Liu, Senior Member, IEEE

Abstract—This paper provides a comprehensive computational account of hand-centred research: the principles, methodologies and practical issues behind human hands, robot hands, rehabilitation hands, prosthetic hands and their applications. To help readers understand hand-centred research, the paper presents recent scientific findings and technologies, including human hand analysis and synthesis, hand motion capture, and recognition algorithms and applications; it explains how to transfer human hand manipulation skills to related hand-centred applications in a computational context. The concluding discussion assesses the progress thus far and outlines research challenges and future directions whose solution is essential to achieving the goals of human hand manipulation skill transfer. It is expected that the survey will also provide insights into an in-depth understanding of real-time hand-centred algorithms, human perception-action and potential hand-centred healthcare solutions.

Index Terms—Motion recognition, human hand modeling, motion capturing, multifingered robot manipulation and cognitive robotics.

I. INTRODUCTION

It is evident that the dexterity and multi-purpose manipulation capability of the human hand inspires cross-disciplinary research and applications in robotics and artificial intelligence, driven by the dream of developing an artificial hand with the human hand's properties [1]. However, five decades on, it is clear that priority in realising this dream has been given to computational hand models [2], [3]. The primary challenge is how to transfer human hand capabilities into multi-fingered object manipulation, especially in a real-time context; this challenge underlies advanced artificial intelligence, robotics and their related disciplines and applications.

Recent innovations in motor technology and robotics have achieved impressive results in the hardware of robotic hands, such as the Southampton Hand, DLR hands, Robonaut hand, Barrett hand, DEKA hand, Emolution hand, Shadow hand and iLimb hand [4]. In particular, the ACT hand [5] has not only the same kinematics but also a similar anatomical structure to the human hand, providing a good start for the new generation of anatomical robotic hands. However, anatomically correct robotic hands still have a long way to go, due to the lack of appropriate sensory systems, unsolved human-robot interaction problems, open neuroscience issues, and so on. In recent decades, owing to significant innovations in multifingered robot hands and mature algorithms in robot planning, priority has been given to multifingered robot object manipulation. As the hardware of multifingered robotic systems has developed, there have been parallel advances in the three most important engineering challenges for the robotics community, namely the optimal manipulation synthesis problem, the real-time grasping force optimisation problem, and coordinated manipulation with finger gaiting [6], [7]. Stable multifingered robot manipulation is determined by engineering criteria, that is, force closure. Computer scientists have made significant advances in computational intelligence for robot manipulation. Gomez et al. [8] developed an adaptive learning mechanism that allows a tendon-driven robotic hand to explore its own movement possibilities, to interact with objects of different shapes, sizes and materials, and to learn how to grasp and manipulate them. It is recognised that the state of the art in the hardware of artificial hands confirms that existing platforms can accommodate advanced computational models for adapting human hand manipulation skills, though operating them in a real-time context remains a challenge [9]–[11].

H. Liu is with the Intelligent Systems and Biomedical Robotics Group, School of Creative Technologies, University of Portsmouth, England, PO1 2DJ UK. Email: [email protected]

The author would like to acknowledge the projects under grant No. EP/G041377/1 funded by the Engineering and Physical Sciences Research Council and grant No. IJP08/R2 by the Royal Society.

Copyright (c) 2009 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to [email protected].

However, the manipulation systems of current robotic hands are hardcoded to handle specific objects with their corresponding robot hands. It is evident that robot hand control and optimisation problems are very difficult to resolve in mathematical terms, yet humans solve their hand manipulation tasks easily using skills and experience. Object manipulation algorithms are needed that have human-like manipulation capabilities, are independent of robot hand hardware, and run in real time. Hence, the main challenge that researchers now face is how to enable robot hands to use what can be learned from human hands to manipulate objects with the same degree of skill and delicacy as human hands, in a reasonably fast manner. For instance, given the location and shape of a cup obtained by off-the-shelf image processing algorithms, a robotic hand is required, inspired by human hand biological capabilities, to reach and manipulate the cup by continuously responding with appropriate shape configurations and force distributions among the fingers and palm.

Transferring human manipulation skills to artificial hands involves modeling and understanding human hand motion capabilities, as well as advanced multifingered manipulation planning and control systems, and the manipulation abilities, sensory perception and motion algorithms of artificial hand systems. It connects user manipulation commands/intentions to the motion of a highly articulated and constrained human hand, which contains 27 bones, giving it roughly 27 degrees of freedom, or ways of moving. Chella et al. [12] confirmed that prior knowledge should be introduced in order to achieve fast and reusable learning of behavioural features and to integrate it into the overall knowledge base of a system; this serves requirements such as reusability, scalability, explainability and software architecture. Carrozza et al. [13] designed artificial hand systems for dexterous manipulation; they showed that human-like functionality can be achieved even if the structure of the system is not completely biologically inspired. Learning from human hand motion is preferred for human-robot skill transfer in that, unlike teleoperation-based methods, it provides non-contact skill transfer from human motions to robot motions through a paradigm that can endow artefacts with the capacity for skill growth and life-long adaptation without detailed programming [14]. In principle, not only does this provide a natural, user-friendly means of implicitly programming the robot, but it also makes the learning problem significantly more tractable by separating redundancies from the important characteristics of a task.

This paper presents a survey of the most recent work in the procedure of transferring human hand skills to artificial hand systems; the detailed modules are illustrated in Fig. 1, namely hand analysis and synthesis, hand motion capture, hand skill transfer and hand-centred applications. Though a variety of problems in multifingered artificial hand systems have been addressed, it is evident that research communities and practitioners require a unified account of recent progress in exploring human hand capabilities for multi-fingered robot manipulation, with an emphasis on computational models of hand motion recognition. Not only does this paper provide a comprehensive computational account of hand-centred research, but it also assesses the progress thus far and outlines research challenges and future directions whose solution is essential to achieving the goals of human hand manipulation skill transfer. Computational issues related to neuroscience, health sciences and developmental robotics are not addressed, in order to keep the focus on the computational aspects of transferring hand manipulation skills to artificial hands. The remainder of this paper is organized as follows: Section II presents human hand anatomy and summarizes hand motion capturing devices, covering hand gesture capture based on gloves, vision and electromyography (EMG). Section III discusses various recognition methods, with particular emphasis on Hidden Markov Models (HMMs), Finite State Machines (FSMs) and connectionist approaches. Section IV overviews hand-centred applications, and the last section concludes the paper and indicates existing challenges and future research possibilities.

II. THE HUMAN HAND MODELING

Every day human hands perform a huge number of dexterous grasps to fetch, move and use different tools instinctively, thanks to an innate sense of goal attainment and sensorimotor control. However, everyday tasks in human environments remain relatively difficult for multi-fingered artificial hands, mainly due to the lack of appropriate sensor systems and unsolved problems involving human-robot interaction (HRI) and neuroscience. Though artificial hands may perform stronger and faster grasps than the human hand, the high dimensionality makes it hard to program embedded controllers that manipulate a human-like robotic hand for dexterous grasps as humans do.

Fig. 1. A schematic of exploring human hand capabilities in multifingered artificial manipulation

A. The Human Hand

Fig. 2. Anatomical structure of the human hand

The human hand has complex kinematics and a highly articulated mechanism with 27 degrees of freedom (DOF): 4 in each finger (3 for extension and flexion and 1 for abduction and adduction); 5 DOF for the thumb, which is more complicated; and 6 DOF for the rotation and translation of the wrist. Natural anatomical restrictions, subject to the muscle-tendon control mechanism, give the human hand its super dexterity and its powerful use of a wide range of tools. We perform some 1000 different functions daily with the 19 bones in each hand, as shown in Fig. 2. The wrist itself contains eight small bones called carpals. The carpals join with the two forearm bones, the radius and ulna, forming the wrist joint. Further into the palm, the carpals connect to the metacarpals. There are five metacarpals forming the palm of the hand; one metacarpal connects to each finger and thumb. Small bone shafts called phalanges line up to form each finger and thumb. The main knuckle joints are formed by the connections of the phalanges to the metacarpals. These joints are called the metacarpophalangeal joints (MCP joints); the MCP joints work like a hinge when you bend and straighten your fingers and thumb. The three phalanges in each finger are separated by two joints, called interphalangeal joints (IP joints). The one closest to the MCP joint (knuckle) is called the proximal IP joint (PIP joint); the joint near the end of the finger is called the distal IP joint (DIP joint). The joints of the hand, fingers and thumb are covered on the ends with articular cartilage to absorb shock and provide an extremely smooth surface that facilitates motion.
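For readers implementing hand models, the DOF budget above can be captured directly in code. The following minimal Python sketch encodes one common per-joint convention consistent with the counts given; the exact split, particularly for the thumb, varies across the literature and is an assumption here.

    # A minimal sketch of the 27-DOF hand model described above.
    # Joint names follow the anatomy in Fig. 2; the per-joint split
    # is one common convention, not the only one in the literature.

    HAND_DOF = {
        # each finger: 2 DOF at the MCP (flexion + abduction),
        # 1 at the PIP, 1 at the DIP -> 4 DOF per finger
        "index":  {"MCP": 2, "PIP": 1, "DIP": 1},
        "middle": {"MCP": 2, "PIP": 1, "DIP": 1},
        "ring":   {"MCP": 2, "PIP": 1, "DIP": 1},
        "little": {"MCP": 2, "PIP": 1, "DIP": 1},
        # the thumb is more complicated: 5 DOF in total (assumed split)
        "thumb":  {"CMC": 2, "MCP": 2, "IP": 1},
        # 6 DOF for the rotation and translation of the wrist
        "wrist":  {"rotation": 3, "translation": 3},
    }

    total = sum(sum(joints.values()) for joints in HAND_DOF.values())
    assert total == 27, total  # matches the 27-DOF count above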

The human hand is a most profound system, capable of performing more complicated, dexterous tasks than any existing artificial system. In terms of artificial intelligence and robotics, all existing artificial hands can only roughly mimic what the human hand does, far from reflecting its inherent motion capabilities, let alone supporting an in-depth understanding of its connection to the human brain, though there is substantially growing interest in related research in neuroscience, health science and cognitive robotics [15].

B. Hand Motion Capturing

Sensory information such as hand position, angle, force and related parameters must be available in order to build a computational model of the human hand and, further, to transfer the model to hand-centred applications. The complexity and dexterity of the human hand, and its involvement in handling various tools, make it challenging to handle the tradeoff between acquiring sufficient data to represent hand manipulation capabilities and the precision of hand computational models.

1) Hand Gesture Capturing: Hand gestures are usually represented by combinations of parameters captured by glove-based systems, marker-based capturing systems and conventional clinical measurement tools. Vision-based hand capturing systems are classified as a separate group, partially due to their wide use and promising future applications. A comprehensive review intensively summarizes the historical development of glove-based systems and their applications [2]; though there have been very few innovations in dataglove systems since, an update focused on hand-capture-oriented glove systems [16] is provided in Table I. Marker-based capturing systems are widely developed and employed for hand motion capturing in order to compensate for the drawbacks of glove-based systems. For example, surface markers are used to capture wrist, metacarpal arch, finger and thumb movements, as shown in Fig. 3. Note, however, that capturing systems in this category are inconvenient to use, since they must always cling to the hands and require time-consuming calibration.

Fig. 3. Hand gesture capturing based on surface markers [17]

2) Vision Based Capturing: Computer vision-based human motion capture devices and techniques before 2001 are intensively reviewed in [18], though that overview is focused on a taxonomy of system functionalities broken down into initialisation, tracking, pose estimation and recognition. The priority in human motion capturing, especially hand motion, has moved to capturing devices based on multiple cameras, 3-D cameras and innovative devices such as the Kinect [19], though it is evident that the latter two types of capturing methods suffer from low precision. Multiple cameras and depth cameras have been arranged for developing motion recognition algorithms subject to individual applications; publicly available multi-camera databases are summarized in [19]. For instance, the HumanEva-I dataset contains 7 calibrated video sequences that are synchronized with 3D body poses obtained from a motion capture system. Additionally, the Kinect's depth sensor consists of an infrared laser projector combined with a monochrome CMOS sensor; it captures video data in 3D under a variety of ambient light conditions. Since the human hand is highly articulated and constrained, vision-based capturing devices are used to build both 2D and 3D hand gesture models.
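To make the depth-based route concrete, the Python sketch below segments a hand from a single depth frame by simple thresholding. The frame format and the near/far band are assumptions; practical systems track the band per frame rather than fixing it.

    import numpy as np

    def segment_hand(depth_mm: np.ndarray, near=400, far=700) -> np.ndarray:
        """Return a binary mask of pixels in the hand's depth band.

        depth_mm: HxW depth frame in millimetres (e.g. from a
        Kinect-style sensor); near/far are assumed thresholds
        bracketing the hand, tuned or tracked per frame in practice.
        """
        mask = (depth_mm > near) & (depth_mm < far)
        return mask.astype(np.uint8)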

3) Haptics Capturing: Haptics, or touch feedback, refers in this paper to providing touch feedback to human hands or artificial hands. Data gloves and cameras can obtain sufficient information about human hand joint angles or positions, but they are incapable of capturing the forces of human manipulation. Tactile sensors are employed to collect data for tactile discrimination, identifying differences in the texture, temperature and shape of objects through touch alone [20]. Haptic sensory information is usually captured either by force displays such as the PHANToM, which can render actual friction forces, or by changing the friction properties of the contact surface. Of the wide variety of tactile sensors, very few can provide accurate hand motion information; the FingerTPS system, shown in Fig. 4, is widely adopted with positive feedback. Each FingerTPS system supports two hands with up to 6 sensors per hand and includes the sensors, a wrist-mounted interconnect harness, a rechargeable wireless interface module and a USB Bluetooth transceiver. In addition, Tekscan Inc. provides a wide range of FlexiForce sensors, which can be tailored into different shapes for individual applications.

TABLE I
ADVANCED DATA GLOVE PRODUCTS SELECTED TO HIGHLIGHT THE STATE OF THE ART

Device        | Technology                                         | Sensors and locations                                                                                                                 | Precision | Speed
DG5-VHand     | accelerometer and piezo-resistive                  | 6 (a 3-axis accelerometer in the wrist; one bend sensor per finger)                                                                   | 10 bit    | 25 Hz
5DT Glove 14  | fiber optic                                        | 14 (2 sensors per finger and abduction sensors between fingers)                                                                       | 8 bit     | minimum 75 Hz
X-IST Glove   | piezo-resistive, pressure sensor and accelerometer | 14 (sensor selection: 4 bend sensor lengths, 2 pressure sensor sizes, 1 two-axis accelerometer)                                       | 10 bit    | 60 Hz
CyberGlove II | piezo-resistive                                    | 22 (three flexion sensors per finger, four abduction sensors, a palm-arch sensor, and sensors to measure wrist flexion and abduction) | 8 bit     | minimum 90 Hz
Humanglove    | Hall-effect sensors                                | 20/22 (three sensors per finger and abduction sensors between fingers)                                                                | 0.4 deg   | 50 Hz
ShapeHand     | bend sensors                                       | 40 (flexions and adductions of wrist, fingers and thumb)                                                                              | n.a.      | maximum 100 Hz

Fig. 4. Finger tactile pressure sensor

4) EMG Signal Capturing: Electromyography (EMG) is a technique for testing, evaluating and recording the activation signals of muscles, captured either non-invasively (sEMG) or invasively (iEMG), and it can be used to indirectly estimate the manipulation force a human hand applies. sEMG signals have been used mainly for interacting with machines, since they represent global muscle activity. For instance, hand gestures are captured using sEMG sensors in order to evaluate and record the physiologic properties of muscles at rest and while contracting [21]. On the other hand, with recent innovations in reliable implantable electrodes, iEMG has gained more interest in myoelectric control. Additionally, Delsys Inc. provides commercial surface EMG sensors, shown in Fig. 5, which provide more reliable signals than conventional surface EMG sensors. It is worth noting that there are inherent difficulties in deriving a general model of the relationship between the recorded EMG and muscle output force in humans performing static and dynamic contractions.
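As a concrete illustration of basic sEMG conditioning, the sketch below computes a linear envelope via the classic band-pass, rectify, low-pass recipe. The cut-off frequencies are common textbook choices, not values prescribed by the works cited here.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def emg_envelope(emg: np.ndarray, fs: float = 1000.0) -> np.ndarray:
        """Estimate a linear envelope of a raw sEMG channel."""
        # band-pass to the usable sEMG band (removes drift and artefact)
        b, a = butter(4, [20, 450], btype="bandpass", fs=fs)
        filtered = filtfilt(b, a, emg)
        rectified = np.abs(filtered)               # full-wave rectification
        # low-pass (~6 Hz) to obtain the muscle activation envelope
        b, a = butter(4, 6, btype="lowpass", fs=fs)
        return filtfilt(b, a, rectified)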

III. HAND SKILL TRANSFER

Fig. 5. Surface EMG sensors [22]

Generally speaking, hand skill transfer is the process of learning a new skill using aspects of other mastered hand-centred manipulation skills. In the context of this paper, hand skill transfer is defined as the act of learning/retrieving a skill for object manipulation by artificial hands based on the knowledge/demonstrations learned from human hand motion capabilities. Skill transfer can be considered a three-stage process: a) construct a knowledge base, consisting of human hand motion primitives and manipulation scenarios, from human hand manipulation demonstrations or datasets captured from hand movements; b) given a specific hand manipulation scenario, apply the retrieved human hand skill to artificial hands; c) provide a software or hardware solution that bridges the gap between the constructed knowledge base and the adaptation of retrieved hand skills to artificial hands [23]. Thanks to engineering robotics, adapting motion trajectories to artificial hands is relatively mature [24]. It is evident that bridging the gap in transferring hand skills to artificial hands is a long-standing problem; in practice, hardcoded methods are probably the only solution on an ad hoc basis. The problem has an inherent connection with symbol grounding in a psychological context. Priority in this section is given to the core technique of how to construct the motion knowledge base, i.e., how to recognize hand motion.
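The three-stage process can be summarized structurally as in the following sketch, in which every class and method name is a hypothetical placeholder rather than an interface from the cited works.

    # A structural sketch of the three-stage transfer process above;
    # all names are illustrative placeholders.

    class SkillTransferPipeline:
        def __init__(self, recognizer, retargeter):
            self.knowledge_base = {}      # motion primitives per scenario
            self.recognizer = recognizer  # e.g. an HMM-based classifier
            self.retargeter = retargeter  # maps human joints to robot joints

        def learn(self, demonstrations):
            """Stage a): build the knowledge base from captured demos."""
            for scenario, motion in demonstrations:
                primitive = self.recognizer.extract_primitive(motion)
                self.knowledge_base.setdefault(scenario, []).append(primitive)

        def retrieve(self, scenario):
            """Stage b): pick a stored skill for the given scenario."""
            return self.knowledge_base[scenario][0]

        def execute(self, scenario, robot_hand):
            """Stage c): adapt the retrieved skill to the artificial hand."""
            primitive = self.retrieve(scenario)
            trajectory = self.retargeter.map(primitive, robot_hand)
            robot_hand.follow(trajectory)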

A. Hand Motion Recognition

It is evident that recognizing hand motion is challenging, mainly due to coupled temporal and spatial characteristics [25]. Human hand motion can be categorized into static gestures, dynamic gestures and in-hand manipulation. The former two refer to hand poses and continuous hand motion; the latter involves manipulating an object within one hand, in which the fingers and thumb are used to best position the object for the activity. Due to the difficulty of in-hand manipulation recognition, very little research has been reported, and it is even more challenging to recognize in-hand motion in a real-time context. A set of algorithms has been developed for this purpose and achieves it on a suboptimal real-time basis [26]. It is evident that the difficulties confronting hand motion recognition can be addressed efficiently by fusing multiple information sources such as tactile sensing and vision.

Fig. 6. Grasp recognition with a five-state, four-observation HMM [26]

Hand gesture recognition, in general, draws on probabilistic graphical models, neural networks, finite state automata, rule-based reasoning and ad hoc methods. Probabilistic graphical models such as hidden Markov models (HMMs) have demonstrated high potential in hand gesture recognition; an example is given in Fig. 6. A survey of general gesture recognition can be found in [25]. In order to give readers a comparative overview of hand gesture recognition for the purpose of hand skill transfer, motion recognition algorithms are organised in this paper by capture device, namely marker/vision-based sensors, tactile sensors and EMG-based capture devices. Note that recognition algorithms in principle mimic the process by which humans recognize hand gestures: a gesture is first segmented in terms of the start and end spatial-temporal positions of the gesture sequence; the segmented portion is then compared with the 'experience' in one's mind to determine the grasp type. Reflecting the state of the art in hand motion recognition, this paper focuses on the recognition of static and dynamic gestures; in theory the algorithms should also work for in-hand manipulation recognition, though with rather poor recognition rates due to the complexity of in-hand motion.
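To illustrate the HMM route concretely, the sketch below scores an observation sequence against one grasp model with the scaled forward algorithm; recognition then amounts to picking the grasp model with the highest score. Discrete observations (e.g., quantized joint-angle codewords) are assumed.

    import numpy as np

    def log_likelihood(obs, pi, A, B):
        """Scaled forward algorithm for a discrete HMM (cf. Fig. 6).

        obs: observation symbol indices; pi: (N,) initial distribution;
        A: (N, N) transitions; B: (N, M) emissions.
        Returns log P(obs | model).
        """
        alpha = pi * B[:, obs[0]]
        c = alpha.sum()
        alpha, log_p = alpha / c, np.log(c)
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            c = alpha.sum()            # rescale to avoid underflow
            alpha, log_p = alpha / c, log_p + np.log(c)
        return log_p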

B. Marker based Gesture Recognition

Marker-based capture devices usually consist of glove-based systems and capture systems such as Vicon motion capture. The corresponding recognition methods can be categorized into model-based and model-free methods; model-based methods can further be divided into those with and without hand motion constraints.

Model-free approaches such as neural networks, rule-based methods, etc. have been effective for dealing with hand dexterity and with uncertainty arising from a wide range of factors such as high-dimensional raw sensory data. The Glove-Talk project first successfully demonstrated a neural network interface mapping hand gestures to sign language [27]; hand gesture recognition with a recognition rate of 90.5% was achieved based on a Self-Growing and Self-Organized Neural Gas network [28]; support vector machines have also been introduced to recognize hand gestures with competitive results; and a comparison of classification methods for grasp recognition was studied intensively in [29], which introduced a systematic approach for evaluating classification techniques for recognizing grasps performed with a data glove. In particular, it distinguishes between 6 settings that make different assumptions about the user groups and the objects to be grasped. On the other hand, fuzzy systems have been outstandingly successful in recent years. For instance, Rainer Palm et al. [30] employed a CyberGlove to record human grasp data, captured grasp primitives, and modeled the hand trajectories of grasp primitives by T-S fuzzy modelling; the research results have been partially implemented in prosthetic hands.
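As a concrete illustration of the model-free route, the following sketch trains a support vector machine on glove readings; the data here are random stand-ins for a recorded data-glove session, and the sensor/grasp counts are merely plausible values.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # X: one row of joint angles per glove sample; y: grasp-type labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 22))      # e.g. 22 CyberGlove II sensors
    y = rng.integers(0, 6, size=200)    # e.g. 6 grasp types, as in [29]

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X[:150], y[:150])
    print("held-out accuracy:", clf.score(X[150:], y[150:]))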

Model-based methods are defined by matching the true geometry of the human hand; a trade-off between hand models and hand motion constraints has to be handled properly in order to achieve a suitable combination of recognition accuracy and algorithmic efficiency. For instance, Sudderth et al. introduced a kinematic model in which nodes correspond to rigid bodies and edges to joints, in contrast with deformable point-distribution hand models [31]. It is accepted that a whole-hand kinematic structure cannot be represented without constraints; the primary advantage of considering the motion constraints among fingers and finger joints is to greatly reduce the size or dimensionality of the hand gesture search space. Motion constraints have been investigated for different reductions of hand DoFs and for learning from a large and representative set of training samples [32]: a) limits of finger motions as static constraints; b) limits of dynamic constraints imposed on joints during motion, and limits in performing natural motion.

Furthermore, Chua et al. reduced the 27 DoFs of the human hand model to 12 DoFs by analyzing hand constraints of eight different types; these eight constraints can be regarded as "weak constraints" in both static and dynamic analysis [33]. One of the drawbacks is that these constraints can lead to invalid solutions. In addition, a graph-based approach was presented to understand the meaning of hand gestures by associating dynamic hand gestures with known concepts and relevant knowledge: hand gestures are understood by comparing concepts represented by sets of hand-gesture elements with existing knowledge in the form of conceptual graphs, combining graph morphisms and projection techniques [34].
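As an example of how a static constraint removes search dimensions, the sketch below applies the widely cited intra-finger coupling in which the DIP flexion angle is taken as two thirds of the PIP flexion angle, so the DIP angle is derived rather than estimated, removing one DoF per finger.

    # One widely cited intra-finger constraint (attributed to studies
    # such as [32]): theta_DIP = (2/3) * theta_PIP.

    def apply_intrafinger_constraint(pip_angle_deg: float) -> float:
        """Derive the DIP flexion angle from the PIP flexion angle."""
        return (2.0 / 3.0) * pip_angle_deg

    # Example: a PIP flexion of 60 degrees implies a DIP flexion of 40.
    print(apply_intrafinger_constraint(60.0))  # -> 40.0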

It is evident that marker-based methods offer much faster and more precise recognition than vision-based motion recognition. It would also be interesting to see whether motion constraints could help model-free approaches achieve better gesture recognition accuracy and efficiency. Misusing hand motion constraints can, however, lead to rather poor performance, since a large number of hand motion constraints are difficult and computationally expensive to represent in closed form.


C. Vision based Motion Recognition

Hand recognition is treated in this section as two stages: hand motion tracking and recognition. Motion tracking separates hand motion from the contextual background; the recognition algorithms of Section III-B can, in theory, be applied directly to vision-based hand motion recognition. Vision-based recognition is organized here in terms of static models and dynamic models; these two types overlap with the more generally accepted terms of generative and discriminative recognition models.

Static methods are dominated by template matching and its variants. 2D, 3D and depth-involved static hand models are categorized in terms of multiple factors such as camera types, camera viewpoints and occlusion [35]. 2D hand models usually depend on extracted image parameters, which are derived from hand image properties including contours and edges, image moments, image eigenvectors and other properties. Eigenvalues indicating hand width and length were extracted in [36] to build a real-time hand gesture recognition system for American Sign Language in unconstrained environments, while Haar-like features have been extracted from 2D hand images to recognize hand gestures. Additionally, other features such as the 'bunch graph' in [37], 'conducting feature points' and a 'grid of neurons' have been extracted from 2D image frames. Note that 2D hand models are robust to self-occlusion, since they extract no 3D features and directly compare 2D image properties between the input images and the registered ones; however, images without shadowed fingers can enhance the validity of 2D hand models [38]. 3D hand models may incur computational complexity due to inverse kinematics and 3D reconstruction; for instance, reduction of hand DoFs has been employed to efficiently estimate 3D hand postures from a set of eight 2D projected feature points. On the other hand, static hand models have been studied intensively using multiple cameras [39]. Not only does employing multiple cameras overcome difficulties such as self/object occlusion, but it also provides a wider range of poses and achieves more reliable 3D hand configurations. Combining multiple-view information usually means one of: a) grouping all the shape contexts from all the images together before clustering to build the histograms; b) estimating the pose from each view individually and combining the results at a high level using graphical models; or c) fusing the information in a hybrid of the other two multi-view combinations. Additionally, infrared cameras are used to retrieve depth information in 3D hand reconstruction via techniques such as stereo triangulation, sheet-of-light triangulation, coded aperture, etc.
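As a concrete instance of 2D image properties, the sketch below extracts the seven Hu moment invariants of a hand silhouette with OpenCV; the assumption that the hand is the brightest, largest blob stands in for a proper skin-colour or depth segmentation stage.

    import cv2
    import numpy as np

    def hand_shape_features(gray: np.ndarray) -> np.ndarray:
        """Extract a simple 2D shape descriptor for a hand silhouette.

        gray: 8-bit single-channel image with the hand brighter than
        the background (a simplifying assumption). Returns the 7 Hu
        moment invariants of the largest contour, one classic choice
        of 2D image property.
        """
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # assumes at least one contour; take the largest blob as the hand
        hand = max(contours, key=cv2.contourArea)
        return cv2.HuMoments(cv2.moments(hand)).flatten()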

Fig. 7. Graphical model based hand gesture recognition [43]

Dynamic hand models involving spatial-temporal features can be generated by a range of methods such as probabilistic graphical models, finite state automata, etc. [40]; an example is given in Fig. 7. It is practically feasible to adapt static hand models into dynamic ones by introducing temporal parameters into the hand modelling. Probabilistic graphical models, e.g., HMMs, and finite state automata are the dominant methods for dynamic models. HMMs were proposed to model dynamic processes in nature; selected HMM-based research is reported in [19]. A recognition strategy was proposed that combines static shape recognition, Kalman-filter-based hand tracking and an HMM-based temporal characterization scheme, and a pseudo-3D Hidden Markov Model was introduced to recognize hand motion, gaining a higher recognition rate than a 2D HMM [41]. Note that HMMs are limited in handling three or more independent processes efficiently; to alleviate this problem, researchers have generalized HMMs into dynamic Bayesian networks. Dynamic Bayesian networks are directed graphical models of a stochastic process, representing hidden and observed states with complex interdependencies that can be represented efficiently by the structure of the directed graph. On the other hand, a finite state automaton is a model of behavior composed of a finite number of states, the transitions between those states, and the associated actions. A finite state automaton can be represented by a state transition diagram, in which the continuous stream of sensory data of a gesture is represented as a sequence of states. For example, Yeasin et al. employed a finite state automaton to automatically interpret a gesture accurately while avoiding the computationally intensive task of image sequence warping [42]. Note that using only vision sensors such as conventional cameras usually leads to intensive computation; marker-based systems enhance the precision and efficiency of gesture recognition.
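A toy finite state automaton for one dynamic gesture is sketched below; the states and quantized observations are illustrative rather than taken from the cited systems.

    # Each (state, observation) pair maps to the next state; the
    # gesture is recognized when the end state is reached.

    TRANSITIONS = {
        ("start", "open_hand"): "opened",
        ("opened", "closing"):  "closing",
        ("closing", "fist"):    "grasped",   # end state
    }

    def recognize(observations) -> bool:
        state = "start"
        for obs in observations:
            state = TRANSITIONS.get((state, obs), state)  # else stay put
        return state == "grasped"

    print(recognize(["open_hand", "closing", "closing", "fist"]))  # True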

D. Haptics based Motion Recognition

The human hand utilizes the distributed receptors both on the surface and inside the fingers, where the stimuli change with finger movement, to acquire rich information about the manipulated object. Haptic devices have been introduced to capture such information during dynamic interaction; this information can also be used as a unique or additional motion feature for discriminating hand motions [44]. Haptic devices have also been integrated with data gloves to collect multiform information, as in the haptic data glove and SlipGlove [45]. These gloves with haptic-IO capability provide the motion information of human hands and enhance the capture of human hand skills. Kondo et al. used contact state transitions to recognize complex hand motions captured by a CyberGlove and a tactile sensor sheet, a Nitta BIG-MAT quarter, attached to the lateral side of a cylindrical object [46]. Kawasaki et al. employed hand motions consisting of contact points, grasp force, and hand and object positions to explore the maximized manipulability of a robotic hand by scaling the virtual hand model [44].

On the other hand, the contact interaction between the fingertip and the object surface associates different manipulated objects with different hand motions. Some researchers have studied object recognition with complicated static/dynamic tactile patterns obtained from multiple contacts, ultimately to differentiate the manipulations [47]. A dynamic haptic pattern refers to the time series of haptic sensing changes and is capable of providing more haptic information than a static haptic pattern during interactions between the hand and objects. Watanabe et al. utilized tactile spatiotemporal differential information from a soft tactile sensor array attached to a robot hand to classify object shapes [48], while Hosoda et al. described robust haptic recognition learning for a bionic hand, through dynamic interaction with objects, using a regrasping strategy and a neural network [49].

One of the remaining challenges in haptics-based recognition, however, is how to model hand friction and its effects. Existing approaches are based on point-contact friction models: the point friction model, the friction cone model, or compensating for friction via a control strategy. These models are idealistic for practical robot manipulation in that the contact between an object and the corresponding robot hand is a surface rather than a point. Force-distribution sensors have been introduced to capture the magnitude of the force applied on a surface, but most of them are incapable of recovering the force directions [50]. The area and complexity of the hand interaction space are limited by the small size and resolution of haptic sensors. Additionally, the locality of a haptic sensor is another challenge, as it depends heavily on the contact conditions [51]. Although compliant joints and soft fingertips [49] have been proposed to simulate the hand's soft tissues, it is still an open problem to imitate the nonlinearly viscoelastic properties of the finger's soft tissues, including the inner skin and subcutaneous tissue.
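For reference, the point-contact Coulomb model that the above critique targets reduces to a simple cone test: the tangential component of the contact force must not exceed the friction coefficient times the normal component, as in this sketch.

    import numpy as np

    def in_friction_cone(f: np.ndarray, n: np.ndarray, mu: float) -> bool:
        """Coulomb point-contact check: the contact force f (3-vector)
        lies inside the friction cone if its tangential component does
        not exceed mu times its normal component along unit normal n.
        """
        f_n = float(np.dot(f, n))            # normal component
        f_t = np.linalg.norm(f - f_n * n)    # tangential magnitude
        return f_n > 0 and f_t <= mu * f_n

    # Example: mostly-normal force with mild shear, mu = 0.5 -> holds.
    print(in_friction_cone(np.array([0.2, 0.0, 1.0]),
                           np.array([0.0, 0.0, 1.0]), 0.5))  # True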

E. EMG based Hand Motion Recognition

There are two types of electromyogram: intramuscular EMG (iEMG, recorded with needle or implanted electrodes) and surface EMG (sEMG). The former involves inserting an electrode through the skin into the muscle whose electrical activity is to be measured; the latter involves placing electrodes on the skin overlying the muscle to detect its electrical activity. iEMG is predominantly used to evaluate motor unit function in clinical neurophysiology; it can provide focal recordings from deep muscles and independent signals relatively free of crosstalk. With the improvement of reliable implantable electrodes, the use of iEMG for human hand movement studies has been explored more widely. Farrell et al. [52] showed that intramuscular electrodes achieve the same pattern classification accuracy as surface electrodes for prosthesis control. Ernest et al. [53] proved that a selective iEMG recording is representative of the applied grasping force and can potentially be suitable for proportional control of prosthetic devices.

sEMG signals have been used as a dominant method of interaction with machines [54]. In an EMG-based interaction system, hand gestures are captured using sEMG sensors, which evaluate and record the physiologic properties of muscles at rest and while contracting [21]. Various classification methodologies have been proposed for processing and discriminating sEMG signals for hand motions. As a computational technique that evolved from mathematical models of neurons and systems of neurons, the neural network has become one of the most useful methods; neural-network-based classification algorithms include the log-linearized Gaussian mixture network, the probabilistic neural network, the fuzzy min-max neural network and the radial basis function neural network [55]. Statistical classifiers such as HMMs, Gaussian mixture models and support vector machines have also been used intensively in sEMG recognition. A few studies have compared several different methods [56]: e.g., Claudio et al. [57] reported that a support vector machine achieved higher recognition than neural networks and locally weighted projection regression, while Liu proposed the Cascaded Kernel Learning Machine, which has been compared with other classifiers such as k-nearest neighbour, multilayer neural networks and support vector machines [58]. However, none of these studies has explained why the performance is enhanced. In addition, there is a lack of consideration of sEMG's uncertainties, such as its non-stationary nature, muscle wasting, electrode position, inter-subject differences and temperature effects. Muscle wasting or muscle fatigue can be considered a decrease in the force-generating capacity of a muscle and has been evidenced in numerous studies; for the same hand motion, muscle fatigue results in a different sEMG signal, which may cause the recognition method to fail. Electrode position is also critical to obtaining a valid sEMG signal and leads to estimates of sEMG variables that differ from those obtained at other nearby locations [59]. Temperature has additionally been shown to have an important effect on nerve conduction velocities and muscle actions [60]. These uncertainties need more consideration when extracting sEMG features, which determine the performance of classifiers.
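As an illustration of typical inputs to such classifiers, the sketch below computes the classic time-domain sEMG features often attributed to Hudgins et al.; the window length and sampling rate are assumptions.

    import numpy as np

    def time_domain_features(window: np.ndarray, eps: float = 1e-6):
        """Classic per-window time-domain sEMG features: one common
        choice of classifier input, not tied to a specific cited paper.
        """
        mav = np.mean(np.abs(window))               # mean absolute value
        wl = np.sum(np.abs(np.diff(window)))        # waveform length
        zc = np.sum(np.diff(np.sign(window)) != 0)  # zero crossings
        d = np.diff(window)
        ssc = np.sum(d[:-1] * d[1:] < -eps)         # slope sign changes
        return np.array([mav, wl, zc, ssc])

    # Example: features for 200 ms windows at an assumed 1 kHz rate.
    emg = np.random.randn(1000)
    feats = [time_domain_features(emg[i:i + 200]) for i in range(0, 800, 200)]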

IV. HAND-CENTRED APPLICATIONS

In recent decades, with developments and innovations in motor technology and robotics, exciting results have been seen in the design of physical artificial hands aiming to improve the flexibility of robotized systems. Artificial hands can be generally categorized into mechanical grippers, robotic hands, rehabilitation hands and prosthetic hands. Mechanical grippers, which usually differ from the mechanism of the human hand, have been widely used in industrial applications for fast and effective grasping and handling of a limited set of known objects [61], [62]. They are usually designed for a specific task, execute a preprogrammed motion trajectory, and feature low anthropomorphism and low manipulation capability [63]. Robotic hands and prosthetic hands are anthropomorphically designed to mimic the performance of human hands, aiming to learn human hand skills, adapt in dynamic unstructured environments, and even be competent at work of which humans are incapable. A survey of artificial hands focusing on manipulative dexterity, grasp robustness and human operability can be found in [64]; an attempt to summarize up-to-date applications of artificial hands is provided below.

Fig. 8. Robotic hands: (a) DLR-HIT hand; (b) Robonaut Hand; (c) Shadow hand C6M

Various anthropomorphic robotic hands have been developed or improved in the past ten years. Not only does the anthropomorphic design make skill transfer from the human hand to the robotic hand easier, but it also aims to equip artificial hands with the natural heritage of adaptation to human living environments. Newly developed or improved robotic hands have more haptic sensors and feedback, such as the DLR-HIT and GIFU III hands with 6-axis force sensors and the Shadow hand with force and temperature sensors on the fingertips [65], [66]. They are relatively smaller and lighter but more powerful; e.g., a "smart motor" actuation system has been introduced to the Shadow hand C6M in place of the pneumatic air-muscle actuation system, integrating force and position control electronics, motor drive electronics, motor, gearbox, force sensing and communications into a compact unit. Appearance has also been improved; for example, the bionic hand in [49] has soft skin with distributed receptors. These robotic hands can be generally categorized into two types, dexterous hands and under-actuated hands, in terms of the number of actuators per finger. Dexterous hands have multiple actuators for each finger, which enables the robotic hand to control every degree of freedom with an individual actuator; for example, the DLR-HIT hand has four fingers with four joints and three actuators each, as shown in Fig. 8. However, the large number of actuated DoFs makes the automatic determination of their movement very difficult, and the high-dimensional search space for a valid joint path makes the computational cost extremely high [67]. Under-actuated hands have been developed as one solution to this problem, using fewer actuators in a finger than its degrees of freedom; such a mechanism aims to decrease the number of active degrees of freedom by means of connected differential mechanisms in the system [68]. How to achieve a proper mechanical design ensuring both hand effectiveness and efficiency remains a bottleneck problem.
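Under-actuation can be illustrated as a fixed coupling from one actuator command to several joint angles, as in the sketch below; the ratios are illustrative and do not describe the differential mechanism of any specific hand in Fig. 8.

    import numpy as np

    # One actuator value drives all three flexion joints of a finger
    # through fixed ratios (an illustrative synergy).
    COUPLING = np.array([1.0, 0.8, 0.5])  # MCP, PIP, DIP per unit actuation

    def finger_joints(actuator_angle_deg: float) -> np.ndarray:
        """Map a single actuator command to three coupled joint angles."""
        return COUPLING * actuator_angle_deg

    print(finger_joints(45.0))  # -> [45.  36.  22.5]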

Fig. 9. Prosthetic hands: (a) Myohand; (b) i-Limb; (c) Bebionic hand

As one of the first application fields envisaged for artificial anthropomorphic hands, for obvious aesthetic as well as functional reasons [63], the prosthetic hand has gained intensive attention in the past decade. The available commercial prosthetic hands mainly include the Myohand from Otto Bock Ltd, the i-Limb from Touch Bionics [69] and the Bebionic hand [70], shown in Fig. 9. They are usually controlled by sEMG signals extracted from the user's residual functional muscles; the user needs a period of time to get used to the system and learn to generate recognizable muscle contractions. So far only a few simple but robust motions, such as opening and closing, have been implemented in these systems, though the i-Limb can generate different hand gestures through different combinations of its active actuators. Variable force can be applied to grasp objects of different weights using limited actuators and surface electrodes. Though the control is simple and robust, a large proportion of amputees do not use their prosthetic hand regularly, mainly because of its weight, appearance/cosmetics and functionality [71]. In addition, these prosthetic hands provide no haptic or proprioceptive feedback to the wearer: tasks are carried out automatically along a pre-programmed or pre-mapped desired trajectory triggered by the continuous output of the EMG classifier, and the only sensory feedback is the user's direct vision, with which the user can stop or reset a task if it is not successful [72]. Besides the commercial prosthetic hands, significant examples can also be seen in research applications: the Cyberhand, the DEKA arm, the AR hand and so on. Since it is evident that force feedback systems have a wide variety of effects on users [73], force sensors have been widely applied to prosthetic hands by means of vibrotactile or electrotactile stimulation. For example, a force-resistive sensor has been integrated into the index finger and the thumb of FLUIDHAND III, whose signals are used to drive a vibration motor attached to the user's skin [74]; the user can sense the grasping force through the strength of the vibration.
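The vibrotactile feedback loop described for FLUIDHAND III can be pictured as a simple mapping from measured fingertip force to motor drive strength; the linear form and the full-scale force below are assumptions for illustration.

    def vibration_duty_cycle(force_n: float, f_max: float = 20.0) -> float:
        """Map a fingertip force reading (newtons) to a vibration motor
        duty cycle in [0, 1]; a linear sketch of the vibrotactile
        feedback scheme, with f_max an assumed full-scale force.
        """
        return max(0.0, min(force_n / f_max, 1.0))

    # 5 N of grasp force -> 25% duty cycle on the skin-mounted motor.
    print(vibration_duty_cycle(5.0))  # 0.25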

It should be noted that applications of artificial hands in virtual reality and gaming are relatively mature in comparison with physical artificial hands; one of the main reasons is that animating virtual hands faces far fewer constraints, such as mechanical constraints. Vision-based methods have been proposed to estimate joint locations and creases on the palmar surface based on extracted features and analysis of surface anatomy [75]. It is expected that simulating natural hand motion and producing more realistic hand animation would greatly help physical artificial hand manipulation and the synthesis of sign languages. Since the human hand is an important interface with complex shape and movement, it is also expected that the use of an individualized rather than generic hand representation can increase the sense of immersion and in some cases may lead to more effortless and accurate interaction with the virtual world.

V. CONCLUDING REMARKS

Programming multifingered artificial hands to solve complex manipulation tasks has remained an elusive goal; the complexity and unpredictability of the interactions of multiple effectors with objects is an important reason for this difficulty. This paper has reviewed the cycle of hand skill transfer, from analyzing human hands to hand-centred applications. The primary challenge that researchers now confront is how to enable artificial hands to use what can be learned from human hands to manipulate objects with the same degree of skill and delicacy as human hands. The challenging issues in hand-centred research can be summarized as follows: a) there is a lack of a generalized framework that dynamically merges hybrid representations for hand motion description, rather than employing hardcoded methods; for instance, how to autonomously interpret the semantic meanings of a hand motion; b) there is no scheme that provides hand sensory feedback to artificial hands; for instance, a sensor or model is required that generates the sensory feedback information necessary for haptic perception understanding and view-invariant hand motion of artificial hands, e.g., a feasible slip model; c) existing algorithms fail to recognize and model human manipulation intention due to a variety of uncertainties, e.g., the quality of sensory information, individual manipulation habits and clinical issues; d) reliable interfacing with, or implanting into, the peripheral sensory nervous system, together with contextual information such as environmental models, is still missing for further bridging the gap between artificial hands and human/environment interaction; e) feasible embedded algorithms, in terms of sensory hand/context information fusion, are crucial to making artificial hands operational and functional in human environments [9], [76]. It is expected that this paper has provided a relatively unified feasibility account of the problems, challenges and technical specifications of hand-centred research, considering all the major disciplines involved, namely human hand analysis and synthesis, hand motion capture, hand skill transfer and hand-centred applications. This account of the state of the art has also provided partial insights into an in-depth understanding of human perception-action and potential healthcare solutions.

REFERENCES

[1] N. Palastanga, D. Field, and R. Soames, “Anatomy and Human Move-ment - Structure and Function, Fifth Edition,”Elsevier Press, 2006.

[2] L. Dipietro, A. Sabatini, and P. Dario, “A Survey of Glove-BasedSystems and Their Applications,”IEEE Transactions on Systems, Manand Cybernetics, Part C, vol. 38, no. 4, pp. 461–482, 2008.

[3] B. Argall and A. Billard, “A Survey of Tactile Human-Robot Inter-actions,” Robotics and Autonomous Systems, vol. 58, pp. 1159–1176,2010.

[4] A. Muzumdar, “Powered upper limb prostheses,”Springer, p. 208, 2004.[5] Y. Matsuoka, P. Afshar, and M. Oh, “On the design of robotic hands for

brain–machine interface,”Neurosurgical Focus, vol. 20, no. 5, pp. 1–9,2006.

[6] X. Zhu and H. Ding, “Computation of force-closure grasps: An iterativealgorithm,” IEEE Transactions on Robotics, vol. 22, no. 1, pp. 172–179,2006.

[7] ——, “An efficient algorithm for grasp synthesis and fixture layoutdesign in discrete domain,”IEEE Transactions on Robotics, vol. 23,no. 1, pp. 157–163, 2007.

[8] G. Gomez, A. Hernandez, P. Eggenberger Hotz, and R. Pfeifer, “Anadaptive learning mechanism for teaching a robot to grasp,”Proc.International Symposium on Adaptive Motion of Animals and Machines,pp. 1–8, 2005.

[9] A. Malinowski and H. Yu, “Comparison of embedded system designfor industrial application,”IEEE Transactions on Industrial Informatics,vol. 7, no. 2, pp. 244–254, 2011.

[10] S. Fischmeister and P. Lam, “Time-aware instrumentation of embeddedsoftware,” IEEE Transactions on Industrial Informatics, vol. 6, no. 4,pp. 652–663, 2010.

[11] A. Quagli, D. Fontanelli, D. Greco, L. Palopoli, and A. Bicchi, “Designof embedded controllers based on anytime computing,”IEEE Transac-tions on Industrial Informatics, vol. 6, no. 4, pp. 492–502, 2010.

[12] A. Chella, H. Dzindo, I. Infantino, and I. Macaluso, “Aposture sequencelearning system for an anthropomorphic robotic hand,”Robotics andAutonomous Systems, vol. 47, no. 2-3, pp. 143–152, 2004.

[13] M. Carrozza, G. Cappiello, S. Micera, B. Edin, L. Beccai, and C. Cipri-ani, “Design of a cybernetic hand for perception and action,” Biologicalcybernetics, vol. 95, no. 6, pp. 629–644, 2006.

[14] S. Calinon, F. Guenter, and A. Billard, “On learning, representing, andgeneralizing a task in a humanoid robot,”IEEE Transactions on Systems,Man and Cybernetics, Part B, vol. 37, no. 2, pp. 286–298, 2007.

[15] G. R. Bradberry, T.J. and J. Contreras-Vidal, “Reconstructing Three-Dimensional Hand Movements from Non-Invasive Electroencephalo-graphic Signals,”Journal of Neuroscience, in press.

[16] R. Gentner and J. Classen, “Development and evaluationof a low-costsensor glove for assessment of human finger movements in neurophys-iological settings,”Journal of Neuroscience Methods, vol. 178, no. 1,pp. 138–147, 2009.

[17] C. Metcalf, S. Notley, P. Chappell, J. Burridge, and V. Yule, “Validationand Application of a Computational Model for Wrist Movements Us-ing Surface Markers,”IEEE Transactions on BIomedical Engineering,vol. 55, no. 3, pp. 1199–1210, 2008.

[18] T. Moeslund and E. Granum, “A survey of computer vision-based humanmotion capture,”Computer Vision and Image Understanding, vol. 81,no. 3, pp. 231–268, 2001.

[19] X. Ji and H. Liu, “Advances in view-invariant human motion analysis:A review,” IEEE Transactions on Systems, Man and Cybernetics, PartC, vol. 40, no. 1, pp. 13–24, 2010.

[20] C. Metcalf and S. Notley, “Modified Kinematic Techniquefor MeasuringPathological Hyperextension and Hypermobility of the InterphalangealJoints,” IEEE Transactions on Biomedical Engineering, in press.

[21] C. Fleischer and G. Hommel, “Calibration of an EMG-Based BodyModel with six Muscles to control a Leg Exoskeleton,”Proc. Interna-tional Conference on Robotics and Automation, pp. 2514–2519, 2007.

[22] “Delsys inc.” http://www.delsys.com.[23] H. Liu, “A Fuzzy Qualitative Framework for Connecting Robot Qual-

itative and Quantitative Representations,”IEEE Transactions on FuzzySystems, vol. 16, no. 6, pp. 1522–1530, 2008.

[24] T. Yoshikawa, “Multifingered robot hands: Control for grasping andmanipulation,”Annual Reviews in Control, vol. 34, pp. 199–208, 2010.

[25] S. Mitra and T. Acharya, “Gesture Recognition: A Survey,” IEEEtransactions on Systems, Man and Cybernetics, Part C, vol. 37, no. 3,pp. 311–324, 2007.

[26] Z. Ju and H. Liu, “A unified fuzzy framework for human handmotionrecognition,” IEEE Transactions on Fuzzy Systems, in press.

[27] S. Fels and G. Hinton, “Glove-talkii - a neural-networkinterface whichmaps gestures to parallel format speech synthesizer controls,” IEEETransactions Neural Networks, vol. 9, no. 1, pp. 205–212, 1998.

[28] E. Stergiopoulou and N. Papamarkos, “Hand gesture recognition usinga neural network shape fitting technique,”Engineering Applications ofArtificial Intelligence, vol. 22, pp. 1141–1158, 2009.

[29] G. Heumer, H. Amor, and B. Jung, “Grasp recognition for uncalibrateddata gloves: A machine learning approach,”PRESENCE: Teleoperatorsand Virtual Environments, vol. 17, no. 2, pp. 121–142, 2008.

[30] R. Palm, B. Iliev, and B. Kadmiry, “Recognition of humangrasps bytime-clustering and fuzzy modeling,”Robotics and Autonomous Systems,vol. 57, no. 5, pp. 484–495, 2009.

[31] E. Sudderth, M. Mandel, W. Freeman, and A. Willsky, “Visual Hand Tracking Using Nonparametric Belief Propagation,” Proc. International Conference on Computer Vision and Pattern Recognition, p. 189, 2004.

[32] J. Lin, Y. Wu, and T. Huang, “Modeling the Constraints of Human Hand Motion,” Proc. Workshop on Human Motion, 2000.

[33] C. Chua, H. Guan, and Y. Ho, “Model-based 3D hand posture estimation from a single 2D image,” Image and Vision Computing, vol. 20, no. 3, pp. 191–202, 2002.

[34] B. Miners, O. Basir, and M. Kamel, “Understanding hand gestures using approximate graph matching,” IEEE Transactions on Systems, Man and Cybernetics, Part A, vol. 35, no. 2, pp. 239–248, 2005.

[35] C. Chan and H. Liu, “Fuzzy qualitative human motion analysis,” IEEE Transactions on Fuzzy Systems, vol. 17, no. 4, pp. 851–862, 2009.

[36] N. Binh, E. Shuichi, and T. Ejima, “Real-Time Hand Tracking and Gesture Recognition System,” Proc. International Conference on Graphics, Vision and Image Processing, pp. 362–368, 2005.

[37] J. Triesch and C. von der Malsburg, “Classification of hand postures against complex backgrounds using elastic graph matching,” Image and Vision Computing, vol. 20, no. 13-14, pp. 937–943, 2002.

[38] S. Ge, Y. Yang, and T. Lee, “Hand gesture recognition and tracking based on distributed locally linear embedding,” Image and Vision Computing, vol. 26, no. 12, pp. 1607–1620, 2008.

[39] B. Stenger, A. Thayananthan, P. Torr, and R. Cipolla, “Estimating 3D hand pose using hierarchical multi-label classification,” Image and Vision Computing, vol. 25, no. 12, pp. 1885–1894, 2007.

[40] A. Just and S. Marcel, “A comparative study of two state-of-the-art sequence processing techniques for hand gesture recognition,” Computer Vision and Image Understanding, vol. 113, no. 4, pp. 532–543, 2009.

[41] N. Binh and T. Ejima, “Real-Time Hand Gesture Recognition Using Pseudo 3-D Hidden Markov Model,” Proc. International Conference on Cognitive Informatics, vol. 2, pp. 1–8, 2006.

[42] M. Yeasin and S. Chaudhuri, “Visual understanding of dynamic hand gestures,” Pattern Recognition, vol. 33, no. 11, pp. 1805–1817, 2000.

[43] J. Lin, “Visual hand tracking and gesture analysis,” Ph.D. dissertation, University of Illinois at Urbana-Champaign, 2004.

[44] H. Kawasaki, T. Furukawa, S. Ueki, and T. Mouri, “Virtual Robot Teaching Based on Motion Analysis and Hand Manipulability for Multi-Fingered Robot,” Journal of Advanced Mechanical Design, Systems, and Manufacturing, vol. 3, no. 1, pp. 1–12, 2009.

[45] Z. Wang, J. Yuan, and M. Buss, “Modelling of human haptic skill: A framework and preliminary results,” Proc. 17th IFAC World Congress, pp. 1–8, 2008.

[46] M. Kondo, J. Ueda, and T. Ogasawara, “Recognition of in-hand manipulation using contact state transition for multifingered robot hand control,” Robotics and Autonomous Systems, vol. 56, no. 1, pp. 66–81, 2008.

[47] N. Gorges, S. Navarro, D. Goger, and H. Worn, “Haptic object recognition using passive joints and haptic key features,” Proc. IEEE International Conference on Robotics and Automation, pp. 2349–2355, 2010.

[48] K. Watanabe, K. Ohkubo, S. Ichikawa, and F. Hara, “Classification of Prism Object Shapes Utilizing Tactile Spatiotemporal Differential Information Obtained from Grasping by Single-Finger Robot Hand with Soft Tactile Sensor Array,” Journal of Robotics and Mechatronics, vol. 19, no. 1, pp. 85–96, 2007.

[49] K. Hosoda and T. Iwase, “Robust haptic recognition by anthropomorphic bionic hand through dynamic interaction,” Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1236–1241, 2010.

[50] K. Sato, K. Kamiyama, N. Kawakami, and S. Tachi, “Finger-Shaped GelForce: Sensor for Measuring Surface Traction Fields for Robotic Hand,” IEEE Transactions on Haptics, vol. 3, no. 1, pp. 37–47, 2010.

[51] S. Takamuku, A. Fukuda, and K. Hosoda, “Repetitive grasping with anthropomorphic skin-covered hand enables robust haptic recognition,” Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3212–3217, 2008.

[52] T. Farrell and R. Weir, “A comparison of the effects of electrode implantation and targeting on pattern classification accuracy for prosthesis control,” IEEE Transactions on Biomedical Engineering, vol. 55, no. 9, pp. 2198–2211, 2008.

[53] E. N. Kamavuako, D. Farina, K. Yoshida, and W. Jensen, “Relationship between grasping force and features of single-channel intramuscular EMG signals,” Journal of Neuroscience Methods, vol. 185, no. 1, pp. 143–150, 2009.

[54] K. Wheeler, M. Chang, and K. Knuth, “Gesture-based control and EMG decomposition,” IEEE Transactions on Systems, Man and Cybernetics, Part C, vol. 36, no. 4, pp. 503–514, 2006.

[55] F. Mobasser and K. Hashtrudi-Zaad, “A method for online estimation of human arm dynamics,” Proc. International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 1, pp. 2412–2416, 2006.

[56] S. Kawano, D. Okumura, H. Tamura, H. Tanaka, and K. Tanno, “Online learning method using support vector machine for surface-electromyogram recognition,” Artificial Life and Robotics, vol. 13, no. 2, pp. 483–487, 2009.

[57] C. Castellini and P. van der Smagt, “Surface EMG in advanced hand prosthetics,” Biological Cybernetics, vol. 100, no. 1, pp. 35–47, 2009.

[58] Y. Liu, H. Huang, and C. Weng, “Recognition of electromyographic signals using cascaded kernel learning machine,” IEEE/ASME Transactions on Mechatronics, vol. 12, no. 3, pp. 253–264, 2007.

[59] L. Mesin, R. Merletti, and A. Rainoldi, “Surface EMG: The issue of electrode location,” Journal of Electromyography and Kinesiology, vol. 19, no. 5, pp. 719–726, 2009.

[60] J. Feinberg, “EMG: Myths and Facts,” HSS Journal, vol. 2, no. 1, pp. 19–21, 2006.

[61] X. Zhu, H. Ding, and M. Wang, “A numerical test for the closure properties of 3-D grasps,” IEEE Transactions on Robotics and Automation, vol. 20, no. 3, pp. 543–549, 2004.

[62] X. Zhu and J. Wang, “Synthesis of force-closure grasps on 3-D objects based on the Q distance,” IEEE Transactions on Robotics and Automation, vol. 19, no. 4, pp. 669–679, 2003.

[63] L. Zollo, S. Roccella, E. Guglielmelli, M. Carrozza, and P. Dario, “Biomechatronic design and control of an anthropomorphic artificial hand for prosthetic and robotic applications,” IEEE/ASME Transactions on Mechatronics, vol. 12, no. 4, pp. 418–429, 2007.

[64] A. Bicchi, “Hands for dexterous manipulation and robust grasping: A difficult road toward simplicity,” IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 652–662, 2000.

[65] H. Liu, P. Meusel, N. Seitz, B. Willberg, G. Hirzinger, M. Jin, Y. Liu, R. Wei, and Z. Xie, “The modular multisensory DLR-HIT-Hand,” Mechanism and Machine Theory, vol. 42, no. 5, pp. 612–625, 2007.

[66] H. Kawasaki, T. Komatsu, and K. Uchiyama, “Dexterous anthropomorphic robot hand with distributed tactile sensor: Gifu hand II,” IEEE/ASME Transactions on Mechatronics, vol. 7, no. 3, pp. 296–303, 2002.

[67] S. Sun, C. Rosales, and R. Suarez, “Study of coordinated motions of the human hand for robotic applications,” Proc. IEEE International Conference on Information and Automation, pp. 776–781, 2010.

[68] T. Laliberte and C. Gosselin, “Simulation and design of underactuated mechanical hands,” Mechanism and Machine Theory, vol. 33, no. 1-2, pp. 39–57, 1998.

[69] C. Connolly, “Prosthetic hands from Touch Bionics,” Industrial Robot: An International Journal, vol. 35, no. 4, pp. 290–293, 2008.

[70] Bebionic, “Bebionic hand,” http://www.bebionic.com/, 2011.

[71] P. Kyberd, C. Wartenberg, L. Sandsoj, S. Jonsson, D. Gow, J. Frid, C. Almstrom, and L. Sperling, “Survey of upper-extremity prosthesis users in Sweden and the United Kingdom,” Journal of Prosthetics and Orthotics, vol. 19, no. 2, pp. 55–62, 2007.

[72] C. Cipriani, F. Zaccone, S. Micera, and M. Carrozza, “On the shared control of an EMG-controlled prosthetic hand: analysis of user–prosthesis interaction,” IEEE Transactions on Robotics, vol. 24, no. 1, pp. 170–184, 2008.

[73] C. Pylatiuk, A. Kargov, and S. Schulz, “Design and evaluation of a low-cost force feedback system for myoelectric prosthetic hands,” Journal of Prosthetics and Orthotics, vol. 18, no. 2, pp. 57–61, 2006.

[74] I. Gaiser, C. Pylatiuk, S. Schulz, A. Kargov, R. Oberle, and T. Werner, “The FLUIDHAND III: A Multifunctional Prosthetic Hand,” Journal of Prosthetics and Orthotics, vol. 21, no. 2, pp. 91–97, 2009.

[75] T. Rhee, U. Neumann, and J. Lewis, “Human Hand Modeling from Surface Anatomy,” Proc. Symposium on Interactive 3D Graphics and Games, pp. 1–6, 2006.

[76] N. Motoi, M. Ikebe, and K. Ohnishi, “Real-time gait planning for pushing motion of humanoid robot,” IEEE Transactions on Industrial Informatics, vol. 3, no. 2, pp. 154–163, 2007.

Honghai Liu (M’02-SM’06) received his Ph.D. degree in robotics from King’s College London, UK, in 2003. He joined the University of Portsmouth, UK, in September 2005. He previously held research appointments at the Universities of London and Aberdeen, and project leader appointments in the large-scale industrial control and system integration industry.

Dr. Liu has published over 250 peer-reviewed international journal and conference papers, including four best paper awards. He is interested in approximate computation, pattern recognition, intelligent video analytics, and cognitive robotics and their practical applications, with an emphasis on approaches that could contribute to the intelligent connection of perception to action using contextual information. He is an Associate Editor of the IEEE Transactions on Industrial Informatics, the IEEE Transactions on Systems, Man and Cybernetics, Part C, and the International Journal of Fuzzy Systems.