
The ROBOSKIN Project: Challenges and Results

Aude Billard1, Annalisa Bonfiglio2, Giorgio Cannata3, Piero Cosseddu2, Torbjorn Dahl4, Kerstin Dautenhahn5, Fulvio Mastrogiovanni3, Giorgio Metta6, Lorenzo Natale6, Ben Robins5, Lucia Seminara3, Maurizio Valle3

1 Ecole Polytechnique Federale de Lausanne, Switzerland
2 University of Cagliari, Italy
3 University of Genova, Italy

4 University of Wales, Newport, U.K.
5 University of Hertfordshire, U.K.

6 Italian Institute of Technology, Italy

Abstract The goal of the ROBOSKIN project1 is to develop and demonstrate a range of new robot capabilities based on the tactile feedback provided by a robotic skin located on large areas of the robot body. So far, a principled investigation of these issues has been limited by the lack of tactile sensing technology enabling large-scale experimental activities. Indeed, skin-based technology and embedded tactile sensors have mostly been demonstrated only at the prototype stage. The new capabilities are expected to improve the ability of robots to operate effectively and safely in unconstrained environments, as well as to communicate and co-operate with each other and with humans.

1 Introduction

Tactile sensing is strategic for the design and implementation of safe interaction processes involving robots, humans and objects present in the robot (possibly unstructured) workspace. Indeed, tactile sensing provides the most important and direct feedback for controlling contact phenomena, both in the case of voluntary and of involuntary interactions with the environment.

Beyond classical robot interaction tasks (e.g., the peg-in-hole problem), where the onset of interaction can be either expected or planned to occur at a specific location on the robot body (typically at the end-effector tip), more advanced applications require more complex forms of interaction (e.g., whole-hand or whole-arm grasping and manipulation, or gait stability control, to name but a few), where the location and the characteristics of the contact cannot be predicted or modelled in advance. Furthermore, the control of physical interaction can play a more complex perceptive role, as in active sensory perception control loops, for instance in robot programming by demonstration paradigms or in tactile-based social cognition tasks, where the physical modes of interaction at the same time arise from and originate the human-robot interaction process.

1 Please refer to the official ROBOSKIN webpage at www.roboskin.eu.

In order to tackle these new research issues at both the control and the perceptive levels, it is necessary to plan a research agenda taking a number of aspects into account, namely: i) novel sensory systems have to be developed with the aim of measuring and characterizing interaction phenomena that arise over (possibly) large contact areas; ii) these sensory systems have to be properly interfaced with motor control modules to guarantee a reactive and safe interaction between the robot and the environment; iii) appropriate robot perception and cognitive strategies, based on the underlying sensing structure, must be designed and implemented in order to enable meaningful human-robot interaction tasks.

According to these premises, the overall objective of ROBOSKIN is to improve the ability of robots to act efficiently and safely during tasks involving a large degree of human-robot interaction. To this aim the project focuses on:

• the study of sensing technologies and methodologies for the development of distributed and modular components for building robot skin,

• the study of control and perception mechanisms that are required to develop cognitive mechanisms exploiting tactile feedback to improve human-robot interaction capabilities.

So far, tactile sensing has typically been regarded as a basic perception strategy to be used when purposively manipulating objects, or to properly control the effects caused by robot motion when in contact with the environment. In contrast, ROBOSKIN aims at addressing cognitive aspects related to tactile sensing, tactile perception and motion control from a different perspective. On the one hand, we are interested in investigating problems related to self-awareness and cognitive development through mechanisms exploiting tactile sensing to allow the robot to construct a model of its own body. On the other hand, we aim at studying mechanisms of perception and safe reaction when objects or other physical agents (typically, humans) interact with the robot by touch, and at proving their effectiveness within application domains where touch can play a major role, such as robot programming by demonstration and skin-based social cognition.

The article is organized as follows. Section 2 introduces the main ROBOSKIN objectives and discusses how they have been met, with a specific emphasis on key achievements. Conclusions follow.

2 Results and Major Achievements

The research directions pursued in ROBOSKIN have been identified as a key framework for progressing research in physical HRI, as well as in sensing and control from a more general perspective. For this reason, ROBOSKIN aims to prove that:

1. Tactile sensing and perception play a fundamental role in the implementation of a class of application-relevant HRI tasks, which motivates the development of large-scale robot skin-like systems.

2. It is possible to implement procedures and methods for the development of large-scale robot skin systems, using state-of-the-art technology and accessible servicing facilities, which can be tailored to different robot platforms.

3. It is possible to develop a SW framework bridging the tactile HW with the perceptive and control SW modules, while also supporting the implementation of high-level skin-based cognitive robot interaction tasks.
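To make the third objective concrete, the following is a minimal sketch of what a HW-independent skin representation might look like. All class and field names here (`Taxel`, `SkinPatch`, `BodyPart`, `active_taxels`) are illustrative assumptions for this article, not the actual ROBOSKIN framework API.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Taxel:
    """A single tactile element with a position (metres) in the body-part frame."""
    taxel_id: int
    position: Tuple[float, float, float]

@dataclass
class SkinPatch:
    """A modular group of taxels: the unit in which skin is built and wired."""
    patch_id: int
    taxels: List[Taxel] = field(default_factory=list)

@dataclass
class BodyPart:
    """A logical unit (e.g. 'left_forearm') aggregating the patches mounted on it."""
    name: str
    patches: List[SkinPatch] = field(default_factory=list)

    def active_taxels(self, readings: Dict[int, float], threshold: float) -> List[int]:
        """Return ids of taxels whose reading exceeds the threshold, hiding
        which patch (and hence which HW bus) each taxel belongs to."""
        return [t.taxel_id
                for p in self.patches for t in p.taxels
                if readings.get(t.taxel_id, 0.0) > threshold]
```

A user-level application could then query a `BodyPart` for contacts without knowing anything about the underlying sensor layout, which is the abstraction barrier the objective describes.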

According to these objectives, during the past three years the ROBOSKIN consortium set out a research agenda pursuing technological, modelling and application results, which are briefly described below.

Figure 1. Left: networking design in skin patches. Right: a screenshot of the Skin Design Toolbox.


Result 1. Specification of procedures and technology for the construction of robot skin systems for laboratory-level robotic experiments. Skin design tools have been developed to aid robot skin design processes (Anghinolfi et al., 2012; Maiolino et al., 2011), see Figure 1. On the one hand, algorithms have been designed and developed for the optimal placement (according to a number of criteria) of skin patches over robot body parts, as well as for the optimal networking and routing of skin modules. On the other hand, SW tools have been developed to specifically tune geometric and mechanical parameters of the current skin prototypes. With the aim of tailoring robot skin to different robot platforms, it is possible to select different materials (according to their mechanical properties), to vary skin geometrical parameters, and to define trade-offs between these parameters and other functional requirements such as sensitivity and accuracy of the response.
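The wiring problem mentioned above can be framed as ordering modules into a chain that keeps total wire length low. The following is a deliberately simplified nearest-neighbour heuristic sketching that idea; it is not the published algorithm (Anghinolfi et al., 2012), which addresses a richer optimisation model, and the function name and 2-D positions are assumptions for illustration.

```python
import math

def greedy_wiring(modules):
    """Order skin modules into a single wiring chain with a nearest-neighbour
    heuristic.  `modules` maps module id -> (x, y) position on the body part."""
    remaining = dict(modules)
    # Start from an arbitrary fixed module (here: the lowest id).
    current = min(remaining)
    chain = [current]
    del remaining[current]
    while remaining:
        cx, cy = modules[current]
        # Extend the chain to the closest not-yet-wired module.
        current = min(remaining,
                      key=lambda m: math.hypot(remaining[m][0] - cx,
                                               remaining[m][1] - cy))
        chain.append(current)
        del remaining[current]
    return chain
```

On four collinear modules at x = 0, 1, 2, 3, the heuristic visits them in spatial order rather than id order, which is the behaviour a wiring optimiser is after.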

Figure 2. Robots covered with skin patches: iCub, Kaspar and NAO.

Result 2. Implementation of large-scale, multimodal, modular robot skin with embedded electronics on different robot systems. Two different sets of activities have been carried out. On the one hand, real-time networking solutions for connecting large-scale tactile systems have been developed (Baglini et al., 2010): both general-purpose and custom solutions have been investigated. On the other hand, specific procedures have been devised to cover robots of different shapes with large-scale tactile systems (Schmitz et al., 2011), see Figure 2.

Result 3. Implementation of middleware for tactile data integration and interpretation, including integrated sensorimotor strategies for reactive protective reflexes and for robot tactile self-exploration. The goal of the SW framework (Youssefi et al., 2011) is to provide an abstract, HW-independent representation of the skin, organized in logical units representing different body parts. Skin SW technologies include algorithms and data structures allowing tactile data to travel from the lowest level (i.e., the actual sensors) up to user-level applications. Both general-purpose and robot-specific tactile data processing architectures have been investigated and experimentally evaluated in terms of real-time performance, specifically taking into account bandwidth, jitter and reliability issues. Specific emphasis has been put on the so-called skin spatial calibration problem, i.e., the problem of self-estimating the location of the tactile elements mounted on a robot body part, see Figure 3. To this aim, different solutions have been investigated (Cannata et al., 2010a; Del Prete et al., 2011; McGregor et al., 2011).

Figure 3. Left: results of the skin spatial calibration process. Right: the actual body part being calibrated.
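One geometric building block behind force/torque-based spatial calibration (as in Del Prete et al., 2011) is that any contact point r producing a measured wrench satisfies torque = r x force, so r lies on a line parallel to the force; the minimum-norm point on that line is (force x torque) / |force|^2. The sketch below shows only this single step, not the full published calibration pipeline, and the function names are assumptions.

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def contact_line_point(force, torque):
    """Given the wrench (force, torque) measured by a force/torque sensor,
    return the minimum-norm point r0 on the line of candidate contact points.
    Every contact point satisfies torque = r x force, i.e. r = r0 + s * force;
    intersecting this line with the known body-part surface yields the taxel
    location."""
    f2 = sum(c * c for c in force)
    fxt = cross(force, torque)
    return tuple(c / f2 for c in fxt)
```

For example, a unit force along z applied at (1, 0, 0) produces torque (0, -1, 0), and the function recovers the point (1, 0, 0) exactly, since that point is orthogonal to the force.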

Result 4. Implementation of an architecture for touch-based social interaction, and development of classification algorithms for touch-based social interaction. Robot skin makes it possible to design and implement a number of robot behaviours at both the reflexive and the purposive level. On the one hand, skin-based protective reflexes have been developed, based on the reflex receptive fields reported in studies on human subjects (Pierris and Dahl, 2010). On the other hand, methods to obtain tactile-based robot motion behaviours have been investigated (Cannata et al., 2010b,c).
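To give a flavour of touch classification for social interaction, here is a toy rule-based classifier distinguishing a few gesture types from simple contact features. The categories, features and thresholds are illustrative assumptions only; the project's classifiers are learned from data, not hand-coded rules like these.

```python
def classify_touch(duration_s, centroid_travel_m, mean_pressure):
    """Toy classifier for social-touch gestures from three contact features:
    contact duration (s), distance travelled by the contact centroid over the
    skin (m), and mean pressure (arbitrary units, unused here but typically
    informative).  Thresholds are made up for illustration."""
    if centroid_travel_m > 0.03:   # contact point slid along the skin
        return "stroke"
    if duration_s < 0.3:           # brief, localised contact
        return "tap"
    return "press"                 # sustained, static contact
```

In a real system these features would be computed from the stream of active taxels over a contact episode, and the rule set would be replaced by a trained model.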

Result 5. Design of an autonomous robot capable of skin-based interaction, specifically for children with autism. Aspects related to cognitive and embodied learning have been investigated, with a specific emphasis on tactile social interaction tasks with children with autism. The teleoperated robot Kaspar (see Figure 4) has been used as a mediator to investigate general cognitive learning with very low-functioning children with autism, specifically exploring sad and happy expressions and cause-and-effect relationships, as well as developing coordination mechanisms (Wainer et al., 2010).

Figure 4. Children interacting with the robot Kaspar.

Figure 5. Tactile corrections are provided to the robot after the demonstration phase.

Result 6. Implementation of new means of teaching skills to a robot, based on force control, through touch-based kinesthetic training. The main idea is that tactile feedback naturally extends the idea of teaching robots the way humans teach other humans (see Figure 5). The somewhat classical paradigm of robot programming by demonstration has been extended with correction policies based on tactile feedback. In particular, after the off-line demonstration phase has taken place, on-line feedback is provided through the skin mounted on the relevant robot body parts (Sauser et al., 2012).
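The correction-policy idea can be sketched in one dimension: a push sensed by the skin at some point along a demonstrated trajectory locally deforms the stored trajectory around that point. This is a scalar toy model under assumed Gaussian weighting, not the grasp-adaptation method of Sauser et al. (2012), which operates on full force-controlled grasps.

```python
import math

def apply_tactile_correction(trajectory, t_idx, delta, width=2.0):
    """Locally deform a demonstrated 1-D trajectory (list of waypoints) in
    response to a tactile correction: a push of size `delta` felt at waypoint
    `t_idx` is spread over neighbouring waypoints with Gaussian weights, so
    the correction blends smoothly into the original demonstration."""
    return [y + delta * math.exp(-((i - t_idx) ** 2) / (2.0 * width ** 2))
            for i, y in enumerate(trajectory)]
```

Repeated corrections during replay would be accumulated this way, iteratively refining the demonstrated skill, which mirrors the iterative character of the published approach.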

3 Conclusions

In order to measure the success of ROBOSKIN and to assess the major results described in Section 2, a number of demonstrations have been set up that are strictly related to the main project objectives. The main result, however, is that four different robot platforms have been provided with a skin system based on the technology developed in the context of ROBOSKIN, namely iCub, NAO, Kaspar and a Schunk manipulator2. This demonstrates the ability of the developed HW and SW technology to adapt to different robot shapes, control architectures and tasks. The overall results of ROBOSKIN pave the way for further advances in tactile sensing and touch-based human-robot interaction processes.

4 Acknowledgements

The research leading to these results has received funding from the European Commission’s Seventh Framework Programme (FP7) under Grant Agreement no. 231500 (ROBOSKIN). The authors would like to acknowledge all the people involved in the project in the past three years.

Bibliography

D. Anghinolfi, G. Cannata, F. Mastrogiovanni, C. Nattero, and M. Paolucci. Heuristic approaches for the optimal wiring in large scale robotic skin design. Computers and Operations Research, 39(11):2715–2724, 2012.

E. Baglini, G. Cannata, and F. Mastrogiovanni. Design of an embedded networking infrastructure for whole-body tactile sensing in humanoid robots. In Proceedings of the 2010 IEEE-RAS International Conference on Humanoid Robotics (HUMANOIDS 2010), Nashville, USA, December 2010.

G. Cannata, S. Denei, and F. Mastrogiovanni. Towards automated self-calibration of robot skin. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA 2010), Anchorage, USA, May 2010a.

2 It is worth noting that the Schunk platform was not part of the project from the beginning; rather, it was added during the project through a revised proposal selection mechanism.

G. Cannata, S. Denei, and F. Mastrogiovanni. A framework for representing interaction tasks based on tactile data. In Proceedings of the 2010 IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2010), Viareggio, Italy, September 2010b.

G. Cannata, S. Denei, and F. Mastrogiovanni. Tactile sensing: Steps to artificial somatosensory maps. In Proceedings of the 2010 IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2010), Viareggio, Italy, September 2010c.

P. Maiolino, T-H-L. Lee, A. Schmitz, F. Mastrogiovanni, and G. Cannata. A toolbox for supporting the design of large-scale tactile systems. In Proceedings of the 2011 IEEE-RAS International Conference on Humanoid Robotics (HUMANOIDS 2011), Bled, Slovenia, November 2011.

S. McGregor, D. Polani, and K. Dautenhahn. Generation of tactile maps for artificial skin. PLoS ONE, 6(11), 2011.

G. Pierris and T. S. Dahl. Compressed sparse code hierarchical SOM on learning and reproducing gestures in humanoid robots. In Proceedings of the 2010 IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2010), Viareggio, Italy, September 2010.

A. Del Prete, S. Denei, L. Natale, F. Mastrogiovanni, F. Nori, G. Cannata, and G. Metta. Skin spatial calibration using force/torque measurements. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2011), San Francisco, CA, USA, October 2011.

E. Sauser, B. Argall, G. Metta, and A. Billard. Iterative learning of grasp adaptation through human corrections. Robotics and Autonomous Systems, 60(1):55–71, 2012.

A. Schmitz, P. Maiolino, M. Maggiali, L. Natale, G. Cannata, and G. Metta. Methods and technologies for the implementation of large-scale robot tactile sensors. IEEE Transactions on Robotics, 27(3):389–400, 2011.

J. Wainer, K. Dautenhahn, B. Robins, and F. Amirabdollahian. Collaborating with Kaspar: Using an autonomous humanoid robot to foster cooperative dyadic play among children with autism. In Proceedings of the 2010 IEEE-RAS International Conference on Humanoid Robotics (HUMANOIDS 2010), Nashville, USA, December 2010.

S. Youssefi, S. Denei, F. Mastrogiovanni, and G. Cannata. A middleware for whole-body skin-like tactile systems. In Proceedings of the 2011 IEEE-RAS International Conference on Humanoid Robotics (HUMANOIDS 2011), Bled, Slovenia, November 2011.
