Medical Robotics and Computer-Aided Therapy

By Giancarlo Ferrigno, Guido Baroni, Federico Casolo, Elena De Momi, Giuseppina Gini, Matteo Matteucci, and Alessandra Pedrocchi

Digital Object Identifier 10.1109/MPUL.2011.941523
Date of publication: 2 June 2011

Information and communication technology (ICT) and mechatronics play a fundamental role in medical robotics and computer-aided therapy. Over the last three decades, in fact, ICT has entered the health-care field decisively, bringing in new techniques to support therapy and rehabilitation. In this frame, medical robotics is an extension of service and professional robotics, as well as of other technologies such as surgical navigation, which has been introduced especially in minimally invasive surgery. Localization systems also deliver high-precision treatments in radiotherapy and radiosurgery. Virtual and augmented reality play a role both in surgical training and planning and in safe rehabilitation during the first stage of recovery from neurological diseases. In the chronic phase of motor diseases, robotics also helps through special assistive devices and prostheses. Although the actual need for, and advantage of, navigation, localization, and robotics in surgery and therapy was doubted in the past, today the availability of better hardware (e.g., microrobots) and more sophisticated algorithms (e.g., machine learning and other cognitive approaches) has greatly widened the field of application of these technologies. It is therefore likely that their presence will increase dramatically in the near future, driven by the generational change of end users and the increasing demand for quality in health-care delivery and management.

Computer-assisted surgery (CAS) refers to technologies including medical robotics, image-guided surgery, computer-integrated advanced orthopedics, stereotactic guidance, and computer-assisted medical interventions. Besides enhancing accuracy and repeatability, CAS systems improve the surgeon’s three-dimensional (3-D) perception of the surgical scenario and support the surgeon’s skill in performing highly demanding procedures [1]. All these features improve surgical performance and clinical outcomes [2]. CAS systems can also assist the surgeon in planning an operation. Typical applications include neurosurgery; ear, nose, and throat (ENT) surgery; and orthopedics.



The same technologies and methods are exploited for patient-positioning systems (PPSs) and patient-verification systems (PVSs) in radiotherapy and hadrontherapy. The technological challenge is to combine the most modern image-guidance techniques with a high-accuracy positioning device in an automated procedure for real-time setup control. Setup procedures must also be designed to exploit state-of-the-art technologies for accurate, automatic, and highly organized patient treatment, in which time-consuming preparation activities can be handled in dedicated rooms.

The challenges in the use of robots in surgery still concern their impact on operating room space, the learning curve, reliability, and the cost–benefit ratio in terms of economic investment, running costs, and time overheads. Robots in medicine can play either an active role, such as performing surgery directly, or a passive role, such as holding devices. Passive systems are an additional aid to the surgeon in handling heavy equipment or an alternative to manual jigs and fixtures; examples are Neuromate and ROSA, which replace the stereotactic frame in conventional neurosurgery.

Fully active systems contribute to the outcomes of the surgical procedure: the surgeon becomes a supervisor of a procedure that is autonomously completed by the robot. Semiactive systems rely on a synergy between the skill and judgment of the surgeon and the accuracy and repeatability of the robot, and they are thus favored by the medical robotics research community. Within this class of systems, it is worth recalling the NeuroArm project [3], under development at the University of Calgary for microsurgery and stereotactic brain procedures.

Communication between the surgeon and the robotic system requires an easy-to-use and reliable human–machine interface, which can itself partly drive the uptake of robots in the operating room. The interface plays a crucial role in the design of systems that interact with complex devices, such as those for assistance in surgery. The complex multistep loop (doctor to robot to machine to patient and back to the doctor) calls for new tactile and force feedback.

Assistive devices for disabled people can be based on both rehabilitation and social robots [4]. Examples of such devices include smart wheelchairs, artificial limbs, and exoskeletons.

Autonomy, in particular for mobility, is one of the cutting-edge research topics in robotics, especially when dealing with smart wheelchairs, which may be thought of as autonomous robots with more complex requirements.

Very few commercial active artificial upper limbs currently include an active shoulder. The reason lies in the challenges raised by the design of the hardware and control system. The common practice of driving the limb joint by joint hinders the smoothness of the movement. Research advances in this area have been boosted by recent events, such as the military campaigns in Afghanistan and Iraq, which pushed the U.S. Defense Advanced Research Projects Agency (DARPA) to start a large program called Revolutionizing Prosthetics in 2007. Among the interesting results is a bionic arm based on Dean Kamen’s idea, which has 22 degrees of freedom (DoF): 18 for the hand [5], in which three kinds of grasps are implemented, and four for the arm. Other important DARPA-funded research results came from the Johns Hopkins Applied Physics Laboratory [6], which produced a new active prosthesis based on targeted muscle reinnervation. However, up to now, these important research projects have not yet generated simplified prosthetic arms available on the market.

In recent years, multipurpose robotic workstations to assist the disabled have been developed. The question is whether the independence provided by the devices in this category is offset by their complexity and costs [7]. Manus and the intelligent ARM (iARM) are 6-DoF manipulators for feeding, opening cabinet doors, and simple dressing for patients with spinal cord injury (SCI) and neuromuscular diseases. The Korea Advanced Institute of Science and Technology (KAIST) Rehabilitation Engineering System (KARES) is a manipulator with several user interfaces: eye tracking, an electromyography (EMG) interface, a head-based system, and a shoulder interface [8]. The Personal Mobility and Manipulation Appliance (PerMMA) is a bimanual robot mounted on a wheelchair with a custom, flexible input controller. The ease of use of assistive robotics depends on a suitable interface; mechanical controls are very common, and voice-operated solutions have been developed. More complex control solutions based on eye- or head-tracking systems allow the control of a pointer to interface with an assistive robot’s screen [9]. In the past two decades, the brain–computer interface (BCI) has emerged as a new frontier in assistive technology, bypassing the motor step and communicating directly with the user’s central nervous system (CNS).

Research at Politecnico di Milano

Computer-Aided Robotic Surgery and Therapy

Human–Robot Interfaces in Surgery and Therapy

EMG control is the most widely used approach in today’s prosthetic devices because it is noninvasive compared with the other methods. The method developed at Politecnico di Milano for the prosthesis interface guarantees high accuracy in the classification of seven motion patterns, mostly involving finger motions. The EMG signal is recorded on the extensor carpi ulnaris, the extensor digitorum communis, and along the group of flexor muscles, with the reference electrode on the elbow. Each signal is rectified, enveloped, and subjected to a dynamic threshold (modified moving average). After segmentation, each single burst is analyzed by the continuous wavelet transform (CWT), which is then processed by singular value decomposition (SVD) to reduce its dimensionality. The result is combined with two other parameters, the integral EMG (iEMG) and the mean absolute value (MAV), to form a feature vector representing the burst. An artificial neural network (ANN) is then trained to associate each feature vector with the corresponding hand movement (Figure 1).

FIGURE 1 Architecture of an EMG classifier: the electrode signals are rectified, enveloped, and segmented by dynamic thresholding; features (wavelet/SVD, iEMG, and MAV) are extracted from each burst and fed to an ANN classifier.
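As a sketch of how such a pipeline can be assembled, the following Python fragment (using NumPy, SciPy, PyWavelets, and scikit-learn) mirrors the stages above. It is not the Politecnico di Milano implementation: the sampling rate, threshold gain, mother wavelet, number of singular values, and network size are all illustrative assumptions.

# Illustrative sketch of the described EMG pipeline: rectification,
# linear envelope, dynamic thresholding, CWT + SVD features, ANN classifier.
import numpy as np
import pywt                                  # PyWavelets, for the CWT
from scipy.signal import butter, filtfilt    # low-pass for the linear envelope
from sklearn.neural_network import MLPClassifier

FS = 1000.0  # assumed sampling rate (Hz)

def linear_envelope(emg, cutoff=5.0):
    """Full-wave rectification followed by a low-pass 'linear envelope'."""
    rectified = np.abs(emg)
    b, a = butter(2, cutoff / (FS / 2.0))
    return filtfilt(b, a, rectified)

def segment_bursts(envelope, win=100):
    """Dynamic (moving-average) threshold; assumes the recording starts
    and ends at rest. Returns (start, end) sample indices of each burst."""
    threshold = np.convolve(envelope, np.ones(win) / win, mode="same")
    active = envelope > 1.2 * threshold      # 1.2 is an illustrative gain
    edges = np.flatnonzero(np.diff(active.astype(int)))
    return list(zip(edges[::2], edges[1::2]))

def burst_features(burst, n_singular=4):
    """CWT of one burst, SVD to reduce dimensionality, plus iEMG and MAV.
    Bursts are assumed to be at least a few samples long."""
    coeffs, _ = pywt.cwt(burst, scales=np.arange(1, 33), wavelet="morl")
    s = np.linalg.svd(coeffs, compute_uv=False)[:n_singular]
    iemg = np.sum(np.abs(burst)) / FS        # integral of the rectified EMG
    mav = np.mean(np.abs(burst))             # mean absolute value
    return np.concatenate([s, [iemg, mav]])

def train(bursts, labels):
    """One feature vector per segmented burst, one movement label each."""
    X = np.array([burst_features(b) for b in bursts])
    ann = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000)
    return ann.fit(X, labels)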

When developing haptic devices for virtual reality and telecontrol, the main focus is on obtaining realistic sensations. At Politecnico di Milano, a sensorized glove [10] was manufactured to give the sense of touch, using electrostimulation and force feedback with McKibben actuators (Figure 2). The current injected into the skin causes the depolarization of receptors and the generation of action potentials that are interpreted by the CNS as touch sensations.

FIGURE 2 Haptic glove.

Specifically for laparoscopic surgery, a commercial haptic device was used in simulation and inserted into a telecontrol scheme based on force restitution. The constraints imposed by the trocar, as well as virtual walls preventing the instrument from accidentally entering critical areas, were modeled. The relevant design point is that the same interface is used both in simulation and in telecontrol.
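A minimal sketch of the force-restitution idea for one virtual wall is given below; the penalty gains and the planar wall model are assumptions for illustration, not the parameters of the actual system.

# Penalty-based restitution force from a planar virtual wall, of the kind
# used to keep a laparoscopic tool out of critical areas (illustrative).
import numpy as np

def virtual_wall_force(tip_pos, tip_vel, wall_point, wall_normal,
                       k=500.0, b=5.0):
    """tip_pos, tip_vel: 3-D tool-tip position (m) and velocity (m/s).
    wall_point, wall_normal: a point on the wall and the unit normal
    pointing toward free space. Returns the 3-D force (N) to render."""
    n = wall_normal / np.linalg.norm(wall_normal)
    penetration = np.dot(wall_point - tip_pos, n)  # > 0 once the wall is crossed
    if penetration <= 0.0:
        return np.zeros(3)                         # free motion: no force
    # Spring pushes the tip back out; damping opposes penetration velocity.
    # Clamped at zero so the wall never "pulls" the tool inward.
    f = k * penetration - b * np.dot(tip_vel, n)
    return max(0.0, f) * n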

Robot-Based Neurosurgery

Highly demanding surgical procedures have requirements that cannot be met using standard robotic devices. The Robots and sensors integration for Computer Aided Surgery and Therapy (ROBOCAST) project (FP7 ICT-2007-215190), coordinated by Politecnico di Milano, has attempted to overcome some of these limitations. It has developed a system that assists the surgeon in keyhole brain interventions. The mechatronic design consists of a modular robot that holds the surgical instrument and inserts it into the brain. The ROBOCAST system is a robotic chain of three robots with 13 DoFs [11]. The planned trajectory for the surgical probe is automatically defined by the intelligence of the ROBOCAST system and is approved by the surgeon, who is and remains responsible for the outcome.
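To illustrate what a robotic chain means computationally, the sketch below composes per-module poses as homogeneous transforms; the real ROBOCAST kinematics (three calibrated robots, 13 DoFs in series) is of course far richer, so this shows only the composition principle.

# Composing a serial chain of modules as 4x4 homogeneous transforms.
import numpy as np

def rot_z(q):
    """Transform for a revolute joint rotating about z by q radians."""
    c, s = np.cos(q), np.sin(q)
    return np.array([[c, -s, 0.0, 0.0],
                     [s,  c, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def trans(x, y, z):
    """Transform for a fixed link offset."""
    T = np.eye(4)
    T[:3, 3] = (x, y, z)
    return T

def chain_pose(transforms):
    """Pose of the last module: ordered product of all module transforms."""
    T = np.eye(4)
    for Ti in transforms:
        T = T @ Ti
    return T

# Toy two-joint chain: joint, 0.2-m link, joint, 0.1-m link.
tool_pose = chain_pose([rot_z(0.3), trans(0.2, 0, 0),
                        rot_z(-0.5), trans(0.1, 0, 0)])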

The ROBOCAST platform, however, cannot compensate for unexpected head movements during awake surgery procedures, which are performed to map the cortical areas and subcortical pathways involved in motor, sensory, language, and cognitive functions [12], [13]. The Active constraints for Ill-defined and Volatile Environments (ACTIVE) project, also coordinated by Politecnico di Milano (FP7 ICT-2007-270460), aims to push the boundaries of current surgical robotics by addressing many of the shortcomings of current systems: the lack of touch feedback, the need for rigid frames of reference, difficulties with soft-tissue manipulation, slow response times, and limitations in the control strategies developed to date. The main objective of the ACTIVE project is to design and build a complete surgical robotic platform, based on a new multirobot architecture, for supporting interventions on soft brain tissue (Figure 3).


FIGURE 3 The ACTIVE concept.



Computer-Assisted Radio- and Hadron-Therapy

The Italian National Center for Hadrontherapy [Centro Nazionale di Adroterapia Oncologica (CNAO)] is currently being commissioned in Pavia. The center will provide hadrontherapy treatments with scanned particle beams. Politecnico di Milano has been an institutional participant in CNAO since 2003.

The computer-aided positioning in hadrontherapy (CAPH) system combines the most modern image-guidance techniques with a high-accuracy positioning device in an automated procedure for real-time setup control.

Imaging technologies, an optical tracking system (OTS), and a high-precision robotic patient positioner were selected as the key components. A 6-DoF robotic PPS automatically drives the patient to the nominal position using feedback provided by the OTS, which is able to verify the positional repeatability of the marker configuration during treatment. A programmable laser projector allows sampling of the patient’s surface as a point cloud, which can be used to refine the initial registration obtained with the markers. Before radiation is delivered, the setup is then verified by means of image-based registration between images acquired in the treatment room through the PVS and the treatment-planning computed tomography (CT). Advanced optical tracking technologies for real-time patient monitoring during treatment are exploited to compensate for breathing motion and to update, at each session, the geometric relationship between internal structures and external fiducials.
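The marker-based part of such a setup correction is, at its core, a rigid point-set registration. The following sketch implements the standard SVD-based least-squares fit (the Kabsch/Horn solution); it is a generic building block under assumed inputs, not the CAPH code.

# Rigid registration of planned vs. measured marker positions (illustrative).
import numpy as np

def rigid_register(P, Q):
    """Find rotation R (3x3) and translation t (3,) minimizing
    sum_i || R @ P[i] + t - Q[i] ||^2.

    P: Nx3 planned marker positions (e.g., from the planning CT).
    Q: Nx3 measured positions (e.g., from the optical tracking system)."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# The residual of this fit is one way to check marker-configuration
# repeatability; the 6-DoF correction (R, t) is what a PPS would apply.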

Positioning-correction strategies based on fiducial markers and/or a combination of marker and surface topological information have been realized. Politecnico di Milano has been involved since the very early stages of the CNAO design and has given strong support in setting up the methods for calculating the shielding [14].

The cooperation between CNAO and Politecnico di Milano has also resulted in the development of both passive and active innovative neutron detectors. A dual-detector Roentgen equivalent in man (rem) counter has been designed [15] to be used as a neutron survey meter for monitoring the neutron dose throughout the plant.

Assistive Devices and Prostheses

Active Upper Limb Prostheses

During the last decade, the man–machine systems research group of the Politecnico designed and produced joint components suitable for equipping total prosthetic arms or prostheses for transhumeral amputation. The full arm system includes a 2-DoF shoulder, a 1-DoF elbow, and a commercial hand. Since the commercial hand component was already adequate to equip the artificial limbs, the research group focused its attention on the development of the other joints of the arm: the elbow and the shoulder. The main attributes of both components are lightness, efficiency, and affordable cost. To increase the transmission efficiency, which in some other prostheses was measured to be less than 10%, a linkage system was adopted: a four-bar linkage, in series with a linear guide based on a ball screw, constitutes the mechanism of both elbow joints. The most recent shoulder and elbow joints, designed by the same research group, are simplified by the adoption of harmonic drive transmissions mounted directly on the motor axes (Figure 4). The main features of the prosthetic system are lightness, reliability, ease of use, esthetics, quietness, and an adequate working volume for the hand. The new arm command strategies are based, as mentioned above, on asking the patient to focus only on the hand motion, while the movement of the rest of the arm chain is automatically managed by the software of the limb’s mechatronic system.

FIGURE 4 Total arm prostheses (without the cover): the new shoulder can be coupled with both linkage and harmonic drive elbows.

Thus, the most critical problem is the choice of the signal to be collected from the patient to extrapolate his/her desired hand motion. Up to now, the most effective and simple driving signals acquired from the patients seem to be those based on head motion, e.g., keeping the target object on an axis fixed to the head and perpendicular to the line connecting the eyes, while a lateral head movement sets the distance from the target to the face. The head motion is detected by a device, including a 3-D accelerometer and a 3-D gyroscope, that can be positioned at various locations on the head.
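As an illustration of how orientation can be extracted from such a sensor, the sketch below fuses gyroscope and accelerometer readings with a complementary filter; the axes, sample rate, and blending gain are assumptions, not the parameters of the actual device.

# One complementary-filter step for head pitch/roll estimation (illustrative).
import numpy as np

ALPHA = 0.98   # gyro/accelerometer blending gain (assumed)
DT = 0.01      # sample period in seconds (100 Hz assumed)

def update_orientation(pitch, roll, gyro, accel):
    """pitch, roll: current estimates (rad).
    gyro: angular rates (rad/s) about the x (roll) and y (pitch) axes.
    accel: 3-D specific force (m/s^2); gravity gives an absolute reference."""
    # Integrate the gyroscope (smooth but drifting)...
    pitch_g = pitch + gyro[1] * DT
    roll_g = roll + gyro[0] * DT
    # ...and correct the drift with the gravity direction from the accelerometer.
    pitch_a = np.arctan2(-accel[0], np.hypot(accel[1], accel[2]))
    roll_a = np.arctan2(accel[1], accel[2])
    return (ALPHA * pitch_g + (1 - ALPHA) * pitch_a,
            ALPHA * roll_g + (1 - ALPHA) * roll_a)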

Exoskeletons and Neuroprostheses

The multimodal neuroprosthesis for daily upper limb support (MUNDUS) project is intended to adapt to different scenarios of application, depending on the residual capabilities of the user, so as to maximize the naturalness of the interaction between the user and the system and, at the same time, exploit all the residual capabilities of the user.


The different scenarios will, however, be built on the same platform, and upgrades will be possible depending on the progression of the disease. The integration of additional support takes advantage of previous use of the system, shortening the training time and making transitions smoother. In defining the requirements of MUNDUS, three exemplary scenarios were specified to drive the design of the system.

On the control level (Figure 5), MUNDUS will exploit any voluntary command that the user is able to send. In the case of impaired neuromuscular functions, few commanding strategies can be exploited to detect the willingness to move and the capability to decide where to go: EMG signals, head motion, eye motion, and, when muscular activity is no longer available, brain signals. MUNDUS will pursue the modular implementation of all these possible strategies.

FIGURE 5 Possible examples of MUNDUS scenarios along with the corresponding sketch of the user condition. Red body districts are impaired, while the green ones are still working. (a) Scenario 1, (b) Scenario 2, and (c) Scenario 3. (Each panel combines the passive antigravity exoskeleton, arm and hand NMES, and bioimpedance sensing with the scenario’s residual capabilities: residual arm and hand EMG; residual head/gaze control with RFID-tagged objects; or residual brain control via an EEG BCI with RFID-tagged objects.)


It will be up to the MUNDUS user to decide which control to use, according to his/her own capabilities. On the execution level, MUNDUS will likewise allow the choice of actuators according to the available personal resources. When possible, the motion will be powered by the user’s own muscles, facilitated by an exoskeleton for gravity compensation. Alternatively, neuromuscular electrical stimulation (NMES) will be used to obtain the desired motion or, when this is not possible, a simple mechanism will support the wrist and hand motions. The design of the arm-supporting exoskeleton will be tailored to a truly light and noncumbersome solution usable in home and work environments.
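The modularity described here can be pictured as a simple selection over stand-alone input modules. The sketch below is purely conceptual, with assumed module names and preference order; it is not the MUNDUS software.

# Choosing the most natural input modality the user can still operate.
from typing import Callable, Dict

Command = str
ControlModule = Callable[[dict], Command]   # raw signals -> user command

# Assumed preference order: most natural residual signal first.
PREFERENCE = ("residual_emg", "head_gaze", "eye_tracking", "brain_control")

def select_control(available: Dict[str, ControlModule],
                   residual: Dict[str, bool]) -> ControlModule:
    """Return the first preferred module the user can actually drive."""
    for name in PREFERENCE:
        if residual.get(name, False) and name in available:
            return available[name]
    raise RuntimeError("no exploitable command strategy for this user")

# Example: a user who has lost arm EMG but retains head/gaze control.
modules = {"head_gaze": lambda signals: "reach",
           "brain_control": lambda signals: "grasp"}
control = select_control(modules, {"residual_emg": False, "head_gaze": True})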

At the hand level, it will be possible to use one’s own hand when, due to the spinal injury or disease progression, the proximal motor function is more impaired than the distal one and, consequently, the disabled person has insufficient shoulder and elbow control to exploit his/her residual hand mobility. Alternatively, an NMES-actuated grasping glove, possibly supported by a specifically designed small and lightweight mechanism, will be available to assist the grasp of collaborative functional objects identified by radio-frequency identification (RFID). MUNDUS aims at identifying a set of objects and related functions that have particular significance for the specific user and at supporting their use through the development of collaborative objects whose construction features facilitate manipulation by the MUNDUS hand.

MUNDUS’s innovation mainly lies in the modularity of the controls/actuators and of the arm/hand functions and in the continuous adaptation to the user’s residual ability as the pathology progresses. Each of the assistive functions included in MUNDUS (e.g., the exoskeleton, NMES, etc.) can in fact work as a stand-alone function, and the availability of sensors and actuators suitable for different stages of the progressive diseases (e.g., control based on residual EMG and control based on gaze tracking) ensures that the system can be used by the same users at different severity levels, taking advantage of devices they are already acquainted with.

Autonomous-Assistive Robots

To meet the variable requirements of disabled people, an autonomous wheelchair control system [let unleashed robots crawl the house (LURCH)] has been designed at Politecnico di Milano; it is a flexible system able to adapt to the needs of completely locked-in individuals. In LURCH, the user can choose among several autonomy levels, ranging from simple obstacle avoidance to full autonomy, and among different interfaces: a classical joystick, a touch screen, an EMG interface, and a BCI, i.e., a system that allows the user to convey his/her intention by analyzing his/her brain signals [16]. Figure 6 outlines the scheme of LURCH. The system is completely separated from the wheelchair; the only gateway between LURCH and the vehicle is an electronic board that intercepts the analog signals coming from the joystick potentiometers and generates new analog signals to simulate a real joystick and drive the joystick electronics.

In other words, the system is not integrated with the wheelchair at the digital control bus level; instead, it relies on the simulation of the joystick signals in the analog domain. Though this choice could seem awkward, its motivations are twofold: 1) it is often hard to obtain the proprietary communication protocols of the wheelchair controllers or to understand how they exchange data with the motors and interfaces, and 2) this solution improves portability, since it avoids direct interaction with the internal communication bus of the wheelchair.
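The gateway idea can be sketched as a mapping from a navigation command to the two analog voltages a joystick would produce; the voltage range and neutral point below are assumptions about a generic proportional joystick, not the actual LURCH board.

# Turning a (speed, turn) command into joystick-emulating voltages.
def command_to_joystick_volts(v, w, neutral=2.5, swing=1.5):
    """v: normalized forward speed in [-1, 1]; w: normalized turn rate.
    Returns the two potentiometer-like voltages fed to the DAC."""
    clamp = lambda x: max(-1.0, min(1.0, x))
    return neutral + swing * clamp(v), neutral + swing * clamp(w)

# e.g., half speed ahead, no turn -> (3.25 V, 2.5 V)
forward_v, turn_v = command_to_joystick_volts(0.5, 0.0)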

FIGURE 6 The LURCH architecture; the control system is independent of the wheelchair, and the gateway between LURCH and the vehicle is an electronic board that intercepts the analog signals from the joystick potentiometers and generates new analog signals to the wheelchair.




LURCH was designed by adopting the modular approach proposed by Bonarini et al. [17]. The localization algorithm uses a video camera and passive markers placed on the ceiling of the environment, which avoids occlusions and provides accurate and robust pose estimation; this, however, restricts LURCH to indoor environments.

Trajectory planning is performed by spike plans in known environments (SPIKE), a fast planner based on a geometrical representation of static and dynamic objects in an environment modeled as a 2-D space [17]. SPIKE exploits a multiresolution grid over the environment representation to build a proper path, using an adapted A* algorithm, from a starting position to the requested goal; this path is finally represented as a polyline that does not intersect obstacles. To implement trajectory following and obstacle avoidance, multilevel ruling brian reacts inferencing action (MrBRIAN), a fuzzy behavior management system, was used.
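For illustration, a plain uniform-grid A* search is sketched below. SPIKE’s adapted A* runs on a multiresolution grid over a geometric world model and post-processes the result into a polyline, so this shows only the core search.

# Uniform-grid A* with a Manhattan heuristic (illustrative, not SPIKE).
import heapq

def astar(grid, start, goal):
    """grid: 2-D list, 0 = free, 1 = obstacle; start/goal: (row, col)."""
    def h(a, b):                       # admissible Manhattan heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_set = [(h(start, goal), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue                   # stale entry: already expanded cheaper
        came_from[node] = parent
        if node == goal:               # rebuild the path by walking parents
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] == 0 \
                    and g + 1 < g_cost.get((nr, nc), float("inf")):
                g_cost[(nr, nc)] = g + 1
                heapq.heappush(open_set, (g + 1 + h((nr, nc), goal), g + 1,
                                          (nr, nc), node))
    return None                        # goal unreachable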

In LURCH, to deal with the different user capabilities, a multimodal interface has been implemented. Commands can be issued to the autonomous wheelchair at different levels of abstraction, from simple “turn right” or “go straight” to complex tasks such as “bring me to the kitchen.”

Conclusions

Although the field of research in medical robotics and computer-aided therapy is diverse, it is widely explored at Politecnico di Milano across several departments. Robotics, in particular, is exploited in both the therapeutic and assistive fields, showing its great potential as an effective personal aid. Given the saturation and marginal cost reduction under way in the automotive market, health care represents one of the most promising fields of investment in service and professional robotics, one that will boost research in mechatronics, augmented reality, and intelligence augmentation in the coming years.

Potential benefits, especially for the elderly, are straightforward in the field of robot-based assistive systems, which will allow for a better quality of life (daily life activities and mobility) even for severely disabled patients.

Giancarlo Ferrigno ([email protected]), Guido Baroni ([email protected]), Elena De Momi ([email protected]), and Alessandra Pedrocchi ([email protected]) are with the Dipartimento di Bioingegneria, Politecnico di Milano. Federico Casolo ([email protected]) is with the Dipartimento di Meccanica, Politecnico di Milano. Giuseppina Gini ([email protected]) and Matteo Matteucci ([email protected]) are with the Dipartimento di Elettronica e Informazione, Politecnico di Milano.

References

[1] S. Martelli, L. Nofrini, P. Vendruscolo, and A. Visani, “Criteria of interface evaluation for computer assisted surgery systems,” Int. J. Med. Inform., vol. 72, no. 3, pp. 35–45, 2003.
[2] S. L. Delp, S. D. Stulberg, B. Davies, F. Picard, and F. Leitner, “Computer assisted knee replacement,” Clin. Orthopaed., vol. 354, pp. 49–56, Sept. 1998.
[3] G. Sutherland, I. Latour, and A. Greer, “Integrating an image-guided robot with intraoperative MRI: A review of the design and construction of NeuroArm,” IEEE Eng. Med. Biol. Mag., vol. 27, no. 3, pp. 59–65, 2008.
[4] J. Broekens, M. Heerink, and H. Rosendal, “Assistive social robots in elderly care: A review,” Gerontechnology, vol. 8, no. 2, pp. 94–103, 2009.
[5] R. Weir, M. Mitchell, S. Clark, G. Puchhammer, K. Kelley, M. Haslinger, N. Kumar, R. Hofbauer, P. Kuschnigg, V. Cornelius, M. Eder, and R. Grausenburger, “New multifunctional prosthetic arm and hand systems,” in Proc. 29th Annu. Int. Conf. IEEE EMBS, Lyon, France, Aug. 23–26, 2007, pp. 23–26.
[6] J. Talan, “Reinnervated muscles produce EMG information for real-time control of artificial arms,” Neurol. Today, vol. 9, no. 5, pp. 1–19, Mar. 5, 2009.
[7] H. F. M. Van der Loos, J. J. Wagner, N. Smaby, K. Chang, O. Madrigal, L. J. Leifer, and O. Khatib, “ProVAR assistive robot system architecture,” in Proc. IEEE Int. Conf. Robotics and Automation, Detroit, MI, 1999, vol. 1, pp. 741–746.
[8] Z. Bien, D. J. Kim, M. J. Chung, D. S. Kwon, and P. H. Chang, “Development of a wheelchair-based rehabilitation robotic system with various human–robot interaction interfaces for the disabled,” in Proc. 2003 IEEE/ASME Int. Conf. Advanced Intelligent Mechatronics, Nashville, TN, 2003, vol. 2, pp. 902–907.
[9] Y. Kuno, N. Shimada, and Y. Shirai, “Look where you’re going,” IEEE Robot. Automat. Mag., vol. 10, no. 1, pp. 26–34, 2003.
[10] G. Gini, M. Folgheraiter, and D. Vercesi, “A multi modal haptic interface for virtual reality and robotics,” J. Intell. Robot. Syst., vol. 52, no. 4, pp. 465–488, 2008.
[11] E. De Momi and G. Ferrigno, “Robotic and artificial intelligence for keyhole neurosurgery: The ROBOCAST project, a multi-modal autonomous path planner,” Proc. Inst. Mech. Eng. H, J. Eng. Med., vol. 224, no. 5, pp. 715–727, 2010.
[12] O. Foerster and W. Penfield, “The structural basis of traumatic epilepsy and results of radical operations,” Brain, vol. 53, no. 2, pp. 99–119, 1930.
[13] A. Szelényi, L. Bello, H. Duffau, E. Fava, G. C. Feigl, M. Galanda, G. Neuloh, F. Signorelli, and F. Sala, “Intraoperative electrical stimulation in awake craniotomy: Methodological aspects of current practice,” Neurosurg. Focus, vol. 28, no. 2, p. E7, Feb. 2010.
[14] S. Agosteo, “Radiation protection at medical accelerators,” Radiat. Prot. Dosimetry, vol. 96, no. 4, pp. 393–406, 2001.
[15] S. Agosteo, M. Caresana, M. Ferrarini, and M. Silari, “A dual-detector extended range rem-counter,” Radiat. Meas., vol. 45, no. 10, pp. 1217–1219, 2010.
[16] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, “Brain–computer interfaces for communication and control,” Clin. Neurophysiol., vol. 113, no. 6, pp. 767–791, 2002.
[17] A. Bonarini, M. Matteucci, and M. Restelli, “MRT: Robotics off-the-shelf with the modular robotic toolkit,” in Software Engineering for Experimental Robotics (Springer Tracts in Advanced Robotics, vol. 30), D. Brugali, Ed. Berlin: Springer-Verlag, Apr. 2007, pp. 345–364.