General rights
Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners, and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.
• Users may download and print one copy of any publication from the public portal for the purpose of private study or research.
• You may not further distribute the material or use it for any profit-making activity or commercial gain.
• You may freely distribute the URL identifying the publication in the public portal.
If you believe that this document breaches copyright please contact us providing details, and we will remove access to the work immediately and investigate your claim.
Downloaded from orbit.dtu.dk on: Sep 13, 2022
Gaze-controlled Telepresence: Accessibility, Training and Evaluation
Zhang, Guangtao
Publication date: 2021
Document Version: Publisher's PDF, also known as Version of Record
Link back to DTU Orbit
Citation (APA): Zhang, G. (2021). Gaze-controlled Telepresence: Accessibility, Training and Evaluation.
Gaze-controlled Telepresence: Accessibility, Training and Evaluation
Guangtao Zhang
PhD Thesis
June 2021
DTU Management
Technical University of Denmark
Title: Gaze-controlled Telepresence:
Accessibility, Training and Evaluation
Type: PhD thesis
Date: 02.06.2021
Author: Guangtao Zhang
Supervisors: John Paulin Hansen, Jakob Eyvind Bardram
University: Technical University of Denmark
Department: DTU Management
Division: Innovation
Address: Akademivej Building 358
DK-2800 Kgs. Lyngby
www.man.dtu.dk
Summary
Recent advances in robotic technology and mobile devices make it possible to be present at a remote physical location via mobile video and audio transmission, so-called telepresence. Telepresence is now becoming mainstream in the form of telerobots. For people with motor disabilities in particular, telerobots may help them overcome physical barriers and give them access to places, events, work, and education, potentially improving their quality of life. However, telerobots are usually controlled by keyboards or joysticks. This Ph.D. thesis explores gaze-controlled telepresence, which makes telerobots accessible for people who are unable to use their hands due to motor disability. Gaze interaction is a common modality for augmentative and alternative communication, and it will soon be included in commodity hardware like head-mounted displays (HMDs).
The thesis starts by grounding the research as a contribution to the Sustainable Development Goals and their commitment to "Leaving no one behind", and by arguing that telerobots should be regarded as an important assistive technology. A systematic literature review was made to survey prior research concerning the accessibility of telerobots. The use of gaze interaction for mobility control has only been addressed in a few studies, and none of them evaluated the impact of gaze interaction on situational awareness or the effect of training.
An experimental telerobotic system, including a virtual-reality-based simulation of the robot and the task scenario, was applied in two empirical studies. Our first experiment confirmed that telerobots can be operated by gaze, but it also identified a set of challenges when compared to hand control. For this experiment we developed a novel maze-based evaluation method. In the second experiment, we further explored whether operator training improves gaze control. We compared the performance of participants trained in a real physical scenario with participants trained in the VR simulator. Results indicate that the two forms of training are equally effective.
Situational awareness (SA) is crucial for teleoperation tasks. A substantial contribution of this thesis was to develop and test a minimally intrusive procedure for measuring SA. We compared state-of-the-art methods (i.e., a SPAM-based pop-up and a post-experimental questionnaire (SART)) with our new method, which is based on analysis of saccadic eye movements that can be recorded by HMDs. The response latency of saccades towards a target stimulus proved to be a sensitive indicator of mental workload and correlated with a number of metrics from the state-of-the-art methods. Thus, it merits further investigation for continuous monitoring of SA, because it only interrupts the operator for a few seconds.
Finally, we conducted a field study in a care home. This confirmed gaze control of telerobots to be a viable option for our target user group. Our participants gave suggestions for how the technology could further be used to improve their quality of life, for instance as a means of telework and shopping.
In summary, our studies have (i) argued for the accessibility of gaze-controlled telepresence, (ii) shown VR-based operator training to be effective, (iii) developed and tested a new method for measuring SA, and (iv) confirmed, together with our target users, the feasibility and potential of this technology.
ii Gaze-controlled Telepresence: Accessibility, Training and Evaluation
Resumé (Danish)
Robotteknologi og mobile enheder kan nu bibringe en oplevelse af tilstedeværelse via video- og lydtransmission; på engelsk betegnet som telepresence. Telepresence forventes i de kommende år at få en stor udbredelse i form af telerobotter. For mennesker med motoriske handicap kan telerobotter gøre det muligt at overvinde fysiske barrierer og give adgang til steder, begivenheder, arbejde og uddannelse, og derved forbedre deres livskvalitet. Telerobotter styres dog normalt med tastaturer eller joystick. Denne ph.d.-afhandling undersøger blikstyret telepresence, som gør telerobotter tilgængelige for mennesker, der ikke er i stand til at bruge deres hænder på grund af et motorisk handicap. Blikstyring er allerede udbredt i kommunikationshjælpemidler og begynder at vinde indpas i standard elektronikenheder, eksempelvis i VR-briller.

Indledningsvis forankres forskningen som et bidrag til verdensmålene for bæredygtig udvikling og understøtter princippet om "Leaving no one behind", hvor der argumenteres for, at telerobotter skal betragtes som en vigtig hjælpemiddelteknologi. En systematisk litteraturgennemgang undersøger tidligere forskning i tilgængeligheden af telerobotter. Brugen af blikstyring til mobilitetskontrol er kun blevet behandlet i få studier, og ingen af disse foretog en evaluering af situationsnærværelsen (engelsk: situational awareness) eller af træningseffekter.

Et eksperimentelt telerobotsystem, som foruden den fysiske robot også indbefattede en VR-simulering af robotten og opgavescenariet, blev anvendt i to empiriske undersøgelser. Vores første eksperiment bekræftede, at telerobotter kan betjenes med blikket, men det identificerede også en række særlige udfordringer sammenlignet med håndkontrol. I det andet eksperiment undersøgte vi yderligere, om træning af operatøren forbedrer blikstyringen. Vi sammenlignede præstationen for deltagere, der er trænet i et virkeligt fysisk scenario, med deltagere, der er trænet i VR-simulatoren. Resultaterne viste, at de to former for træning er lige effektive.

Situationsnærværelse er afgørende for teleoperationsopgaver. Et af afhandlingens væsentligste bidrag var at udvikle og teste en metode til måling af situationsnærværelse, som er minimalt forstyrrende. Vi sammenlignede to anerkendte metoder, en såkaldt SPAM-baseret pop-up og et SART-spørgeskema, med vores nye metode, der er baseret på en analyse af saccadiske øjenbevægelser, som kan registreres i VR-briller. Reaktionstiden for en saccadisk øjenbevægelse mod en stimulus viste sig at være en god indikator for den aktuelle mentale arbejdsbyrde og korrelerede med målinger fra de anerkendte metoder. Den nyudviklede metode fortjener derfor uddybende undersøgelser, da den kun afbryder operatøren i et par sekunder.

Endelig gennemførte vi en feltundersøgelse på en institution med motorisk handicappede beboere. Denne undersøgelse bekræftede, at vores målgruppe kan anvende blikstyring af telerobotter. Deltagerne gav desuden forslag til, hvordan teknologien kunne bruges til at forbedre deres livskvalitet, fx som et middel til telearbejde og shopping.

Sammenfattende har afhandlingen (i) argumenteret for udvikling af tilgængeligt blikstyret telepresence, (ii) vist, at VR-baseret operatørtræning er effektiv, (iii) udviklet og testet en ny metode til måling af situationsnærværelse og (iv) bekræftet anvendeligheden og potentialerne i denne teknologi sammen med brugere fra målgruppen.
Preface
This thesis was accomplished at the Innovation Division, Department of Technology, Management and Economics at the Technical University of Denmark, in fulfillment of the requirements for acquiring the degree of Doctor of Philosophy (Ph.D.).
The Ph.D. project was part of the GazeIT project (Accessibility by Gaze Tracking). The thesis was written as a report of the Ph.D. project under the supervision of Professor John Paulin Hansen and Professor Jakob Eyvind Bardram, and includes eight published scientific articles and three scientific articles in submission.
Guangtao Zhang
Signature
Date
02.06.2021
Acknowledgements
This thesis could not have been accomplished without the support I received from my advisors, researchers, family, and friends.
First of all, I would like to deeply thank my main advisor, John Paulin Hansen. I feel so lucky that I had such a creative, inspiring, and considerate main advisor during my Ph.D. study. I am thankful not only for the supervision, but also for the training and support that he gave me in scientific research. I also much appreciate his care about students' lives beyond research work. I would like to thank my supervisor Jakob Eyvind Bardram for teaching me how to do Ph.D. work at the beginning, for his feedback on my research, and for providing me the opportunity to be a member of the Copenhagen Center for Health Technology, where I could learn from, and be inspired and encouraged by, researchers and peer Ph.D. students from our university and the University of Copenhagen, especially in healthcare-related topics. A special thanks to the IPM Group, the Design Group and the Human Factors Group. I would also like to thank all members of these groups for their feedback on my research, and for the nice time working together. I would like to thank the software developers and students affiliated with the project, Alexandre Alapetite, Zhongyu Wang, Martin Thomsen, Marcin Zajaczkowski, Antony Nestoridis, Nils David Rasamoel, and Jacopo de Araujo, for their development of the project, support throughout the studies, and for solving numerous technical problems. Their patience, kindness, and encouragement are greatly appreciated. I would like to thank Katsumi Minakata for his support of my research and my writing, especially in the first year. A special thanks to Henning Boje Andersen and Sadasivan Puthusserypady for their feedback on my experiments. A thanks to Christine Ipsen and Kathrin Kirchner for giving me tips on writing the thesis. A thanks to Per Dannemand Andersen, for inviting us to join the conference on our research and the SDGs at the UN City Copenhagen, which also motivated me in thinking about my research, technology and the social implications.
A thanks to Per Anker Jensen, Liisa Välikangas and Kasper Edwards for their comments on my research. A special thanks to Tanya Bafna, Marie Kirkegaard, Darius Adam Rohani, Pegah Hafiz, Amonpat Poonjan, Steven Tom Jeuris, Devender Kumar, Raju Maharjan, Milton Mariani, Lene Elkjær, Nelda Vendramin, Hemant Ghayvat, Agzam Idrissov and Andrea Bravo for their support and for reminding me that I was not alone in the Ph.D. research. A thanks to Sabrina Woltmann and Dorrit Givskov for their support, especially at the beginning of my Ph.D. I would like to thank Astrid Kofod Trudslev and Freddy Larsen for their support regarding the field study, as well as caregivers from the Jonstrupvang care home. I would like to express my gratitude to residents from the care home for participation in the interviews and field study. They motivated me a lot in my research. A thanks to Sebastian Hedegaard Hansen and Oliver Repholtz Behrens for doing the second experiment with me. A thanks to all participants in the pilot study and the experiments. I would like to thank researchers from other universities, Scott MacKenzie, Yiannis Demiris, Kenji Itoh, Xiuzhu Gu, and Hana Vrzakova, for their feedback on my research and encouragement. A special thanks to Wendy Anne Rogers for offering me an opportunity to have a research stay in her group, though it was unfortunately canceled due to the pandemic. A thanks to Kristina Davis for proofreading. I would like to thank the China Scholarship Council for the scholarship for doing my Ph.D. study abroad, and the Bevica Foundation for funding the research project. Last but not least, a sincere thanks to my family for their encouragement and support. I would like to deeply thank my parents for encouraging me, especially during my studies abroad for all of these years.
Contents
Summary
Resumé (Danish)
Preface
Acknowledgements
Contents
List of Tables
List of Figures
Nomenclature
1 Introduction
1.1 Context and Motivation
1.2 Problem Statement
1.3 Research questions
1.4 Research Methods
1.5 Research Contribution
1.6 Overview of the Scientific Publications
1.7 Overview of the Thesis
2 Background
2.1 Terminology and an Overview of Related Concepts
2.2 Assistive technology, SDG, and CRPD
2.3 Examples of Advanced Assistive Technology
2.4 Assistive Technology and Inclusion
2.5 Challenges and Barriers of Assistive Technologies
2.6 Consideration for Our Project
2.7 Eye-tracking and Gaze Interaction
2.8 Telerobots, Teleoperation and Telepresence
2.9 Human Factors and Telepresence Robots
2.10 Transfer of Learning from VR
3 Gaze-controlled Telepresence
3.1 Robotic System
3.2 Gaze-based User Interface
4 Methods, Data Collection and Analysis
4.1 Evaluation Methods
4.2 Measures
4.3 Data Analysis
5 Insights from the Systematic Literature Review
6 Insights from the Experiments
6.1 Accessibility and Evaluation
6.2 Training
6.3 SA Measure
7 Insights from the Field study
7.1 Presence and Experience
7.2 Envisioning Telepresence and its Possibilities
7.3 Beyond Wheeled Telerobots
7.4 Independence and Assistance
7.5 Target Users in Realistic Scenarios
7.6 Implications for Future Design
8 Discussion
9 Conclusion
A Appendix Journal Article
A.1 Telepresence Robots for People with Special Needs: a Systematic Review
A.2 Gaze-controlled Telepresence: An Experimental Study
A.3 Saccade Test as a New Tool for Estimating Operators' Situation Awareness in Teleoperation with an HMD
B Appendix Article in Proceedings
B.1 Head and Gaze Control of a Telepresence Robot with an HMD
B.2 Eye-gaze-controlled Telepresence Robots for People with Motor Disabilities
B.3 Hand- and Gaze-control of Telepresence Robots
B.4 Accessible Control of Telepresence Robots based on Eye Tracking
B.5 Enabling Real-time Measurement of Situation Awareness in Robot Teleoperation with a Head-mounted Display
B.6 A Virtual Reality Simulator for Training Gaze Control of Wheeled Telerobots
B.7 People with Motor Disabilities Using Gaze to Control Telerobots
B.8 Exploring Eye-Gaze Wheelchair Control
C Appendix Document
C.1 Experiment 2 (Training)
List of Tables
2.1 A comparison of SA techniques (based on a summary from [161])
6.1 Difference between gaze control and hand control
6.2 An overview of SA comparison based on our research practice
8.1 A comparison of possible solutions for gaze-controlled telepresence for target users as operators
List of Figures
1.1 Persona: Lucas wearing an HMD at a care home
1.2 Orihime robots
1.3 Overview of research
1.4 An overview of tasks performed at each level
1.5 Overview of the common research methods
2.1 Overview of terminology
2.2 Eye-tracking devices: built-in, head-mounted (mobile), and remote (fixed under the monitor)
2.3 A typical telepresence robot: Double Robot
2.4 Engagement with telepresence robots: 1) as a local operator (left); 2) as a remote participant co-located with the robot (right)
3.1 A target user using gaze to control a commercial telerobot modified with a 360° camera
3.2 The robotic system for gaze-controlled telepresence
3.3 An extended version of the robotic system with a VR simulator
3.4 The first version of the gaze-based UI
3.5 The updated version of the gaze-based UI
3.6 Gaze-based UI: continuous
3.7 Gaze-based UI: overlay
3.8 Gaze-based UI: waypoint
4.1 A participant drawing a maze sketch from memory (left), his final maze sketch (middle), and the actual maze layout used for the trial (right). The yellow stars show the location of where he had met a person in the room
4.2 Process of the SPAM-based pop-up
4.3 SPAM-based pop-up: a preliminary query
4.4 SPAM-based pop-up: a perception-related query
4.5 Saccade test
4.6 Process of the saccade test
4.7 Visualisation of gaze-controlled telerobots' paths (left) and hand-controlled telerobots' paths (right)
4.8 A path plot from a participant using gaze control. A collision can be observed in the upper right part
6.1 Overview of two experiments and research aims
6.2 Illustration of the experimental design of Experiment 1
6.3 Illustration of the experimental design of Experiment 2
6.4 Task completion time (training effects)
6.5 Task completion time of each training session between groups in Experiment 2
C.1 Number of collisions (training effects)
C.2 NASA-TLX: effort (training effects)
C.3 NASA-TLX: frustration (training effects)
C.4 SA: RS to the preliminary question (training effects)
C.5 Latency of the first correct saccade (training effects)
C.6 SAM pleasure (training effects)
C.7 SAM dominance (training effects)
C.8 Level of confidence (training effects)
Nomenclature
2D Two-dimensional
3D Three-dimensional
AI Artificial intelligence
ALS Amyotrophic lateral sclerosis
ANOVA Analysis of variance
AOI Area of interest
API Application programming interface
AR Augmented reality
BCI Brain–computer interface
CAVE Cave automatic virtual environment
CRPD The Convention on the Rights of Persons with Disabilities
DIY Do it yourself
DOI Digital object identifier
fMRI Functional magnetic resonance imaging
FOV Field of view
FPS Frames per second
GCTS Gaze-controlled telepresence system
GSR Galvanic skin response
HCI Human–computer interaction
HMD Head-mounted display
HRI Human–robot interaction
IDI Impairments, disabilities and inclusion
IPM Implementation and performance management
JSON JavaScript Object Notation
LCD Liquid-crystal display
MR Mixed reality
NASA-TLX NASA Task Load Index
PRISMA Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RMSD Root-mean-square deviation
ROS Robot Operating System
RQ Research question
RT Response time
SA Situational awareness
SAGAT Situation Awareness Global Assessment Technique
SAM Self-assessment Manikin
SART Situational Awareness Rating Technique
SD Standard deviation
SDG Sustainable Development Goal
SPAM Situation Present Assessment Method
SWORD Subjective Workload Dominance
UCD User-centered design
UI User interface
VR Virtual reality
WHO World Health Organization
1 Introduction
1.1 Context and Motivation
Telepresence is a concept where humans can be present at a remote physical place and achieve the feeling of being there [1]. As the term tele is derived from a Greek word which means distant [2], telepresence is generalized to imply presence in a distant environment with a barrier that prevents a person from physically reaching the remote environment.
Assistive technologies help people with disabilities solve problems related to their impairments, ensure their basic rights, further social inclusion, and even meet the Sustainable Development Goals [3], especially SDG 1, SDG 4, SDG 10, and SDG 11. Recent advances in robotic technology and mobile devices provide more possibilities to overcome communication barriers and achieve the goal of telepresence. If telepresence is provided to people with disabilities through assistive technology, this could bring many benefits to them and to society. Thus, one important step is to combine telepresence with state-of-the-art technologies in order to make them accessible. This is the overall motivation for this project.
In this research, we focus on people with motor disabilities, especially those with impairments of their hands. To better understand this use case, a persona [4] was created from studies of our target user groups through literature review and interviews:
Figure 1.1: Persona: Lucas wearing an HMD at a care home.
"Lucas, 22 years old, has motor impairments due to cerebral palsy (c.f. Figure 1.1). He lives alone at a care home, 20 km from his family in Copenhagen. As a consequence of his mobility problems, he cannot visit his family or friends very often. Nor is it convenient
for him to visit other places and events, like museums, concerts, and universities. He has impaired manual activity, leading to problems interacting with various daily electronic devices, including smartphones."
Telerobots can provide people like Lucas with a variety of possibilities for achieving the experience of being at another physical location, i.e. telepresence. Previous research has studied how these robots have been used at school, for meetings [5], family [6], outdoor activities [7], telemedicine [8] and telenursing [9]. These robots can be wheeled (e.g. Fig. 2.3) or even human-like (e.g. Fig. 1.2 1).
The research goal of this project was to explore gaze-controlled telepresence for people with motor disabilities, focusing on accessibility, training, and evaluation. The ultimate goal is to improve their quality of life with such robots.
Figure 1.2: Orihime robots
1.2 Problem Statement
As mentioned above, telepresence robots can bring more opportunities to those who have mobility problems due to motor impairment. However, the main problem is that the products are usually not designed with our target user groups in mind. Previous studies mainly explored telepresence robots without involvement of our target user groups. First, there is a lack of a systematic overview of the use of telerobots for people with special needs, especially those with motor disabilities. It is unclear what roles telerobots may have in
1 http://ces15.orylab.com/ [last accessed 28-05-2021]
addressing barriers due to disabilities, promoting social inclusion, and advancing long-term Sustainable Development Goals.
Secondly, the importance of designing and developing gaze-controlled telepresence robots needs to be explored, for instance by use of existing HMDs with built-in eye trackers. Gaze control has become a viable interaction method for many people with motor disabilities, but only a limited number of studies have focused on the potential of gaze interaction for telerobot operation [10, 11, 12]. It is thus advisable to explore gaze-controlled telepresence robots by focusing on their accessibility and the existing challenges to using them. Because most users are novices, use problems and training needs are to be expected. In addition to exploring the effectiveness of traditional training, VR-based simulation training may be an alternative solution that is safe and cost-effective. However, it is still unclear whether VR-based training is effective in the domain of telerobots.
The process of controlling a telepresence robot remotely is called teleoperation [13]. It is necessary to know what to evaluate, how to evaluate, and what challenges exist in measuring essential metrics within the telerobot domain. Situational awareness (SA) is one of the important metrics within human factors research. A basic understanding of the importance of SA is needed for our research. Moreover, the question of how to measure SA is also relevant in this case. Even though there are existing methods for measuring SA, it is still unclear which methods are most suitable within the new context of telerobot operation.
1.3 Research questions
Addressing the motivation and problems mentioned above, the research questions (c.f. Fig. 1.3) for the PhD project are:
• RQ 1: Is it advisable to introduce telepresence robots for people with motor disabilities?
• RQ 2: Is gaze interaction a viable method to control a telerobot?
• RQ 3: What are the difficulties when controlling a telerobot by gaze and how can they be measured?
• RQ 4: Can we train people in VR to drive a telerobot in real life?
• RQ 5: Is it important to measure situational awareness when driving a telerobot?
• RQ 6: How can we measure situational awareness when driving a robot with an HMD?
Figure 1.3: Overview of research
The rest of this section provides an overview of the tasks performed and their roles in seeking answers to the research questions.
In line with RQ 1 and RQ 2, a systematic literature review was conducted. In line with RQ 2 and RQ 3, Experiment 1 together with a pilot study were conducted. RQ 2 could be partly answered using the findings from the systematic literature review. Experiment 1 helped us to answer RQ 1 with empirical evidence of using a gaze-controlled telepresence robot with an HMD. In line with RQ 4, Experiment 2 was conducted.
The last two research questions focus on the SA measure. In line with RQ 5, literature focusing on SA and teleoperation was reviewed, which provided us with an overview. In line with RQ 6, throughout Experiment 1 and Experiment 2, different types of measurement techniques were compared.
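The saccade-based SA measure compared here rests on one quantity: the latency from stimulus onset to the first saccade that lands on the target. The sketch below shows one way such a latency could be computed offline from sampled gaze data; the function name, velocity threshold, and landing tolerance are illustrative assumptions, not the implementation actually used in the experiments.

```python
import numpy as np

def saccade_latency(t, x, target_x, onset_t, vel_thresh=30.0, tol=2.0):
    """Latency (s) of the first saccade that lands near the target.

    t: sample timestamps (s); x: horizontal gaze position (deg);
    target_x: target position (deg); onset_t: stimulus onset time (s).
    vel_thresh (deg/s) and tol (deg) are illustrative defaults.
    """
    v = np.gradient(x, t)  # instantaneous gaze velocity (deg/s)
    # candidate saccade onsets: fast samples after stimulus onset
    candidates = np.flatnonzero((t >= onset_t) & (np.abs(v) > vel_thresh))
    for i in candidates:
        j = i  # walk forward to where this saccade slows down again
        while j + 1 < len(x) and abs(v[j + 1]) > vel_thresh:
            j += 1
        if abs(x[j] - target_x) <= tol:  # did the saccade land on target?
            return t[i] - onset_t
    return None  # no correct saccade found
```

On real HMD eye-tracking traces, the signal would first need filtering and a proper event-detection algorithm (e.g. a velocity-based I-VT classifier) rather than this bare threshold.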
Figure 1.4: An overview of tasks performed at each level.
1.4 Research Methods
There are three common research methods for conducting research in HCI and other areas [14] (c.f. Figure 1.5).
All three methods have been used in this research:
• Observational method: This was the main method used in the field study with our target user group, with observation as its starting point. The method has also been used throughout all the research with human subjects, as it is essential to observe humans interacting with computer-embedded technology in HCI research [14].
• Experimental method: This was the main method used for the two controlled experiments conducted in laboratory settings to acquire new knowledge and to verify or extend existing knowledge. Typical approaches for experimental research were used in the experiments, such as research hypotheses and significance tests.
• Correlational method: This was the main method used for exploring the SA measure in the two experiments. With this method, we could look for relationships between data from different SA measurement techniques and compare data from each SA technique with data from other metrics.
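As a minimal illustration of this correlational approach (using hypothetical per-participant scores, not data from the experiments), the relationship between two SA measurement techniques can be quantified with a Pearson correlation coefficient:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-participant scores from two SA measurement techniques,
# e.g. a SPAM-based accuracy score and a SART questionnaire total.
spam_scores = [0.8, 0.6, 0.9, 0.5, 0.7, 0.85]
sart_scores = [21, 17, 24, 15, 19, 23]

r = pearson_r(spam_scores, sart_scores)  # close to +1: the two measures agree
```

A high positive coefficient suggests that the two techniques rank operators similarly, whereas a coefficient near zero would indicate that they capture different aspects of SA.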
Figure 1.5: Overview of the common research methods (observational, experimental, and correlational methods, mapped to Experiment 1, Experiment 2, and the Field Study)
Some typical laboratory and non-laboratory research methods have been selected for this research, including field studies, surveys, interviews, and focus groups [15]. In addition, some design methods were also used as research methods; for example, the user-centered design approach [16], experience prototyping [17], and personas for capturing assistive technology needs of people with disabilities [18, 4].
1.5 Research Contribution
The contributions of this research are theoretical, design-based, and empirical within the area of gaze-controlled telepresence.
1.5.1 Theoretical
During this research, theoretical contributions were made by extending the theory of SA within a new context of teleoperation, namely controlling a telepresence robot with an HMD. We used the theory of situational awareness (SA) proposed by Endsley [19], consisting of a three-level hierarchical model of SA, which was initially developed for aviation tasks. This theory has been widely used, and not only within its initial application area. In both experiments, we further validated the correlation of SA, workload, and task performance within a context of robotic teleoperation with a head-mounted display. Specifically, we addressed using gaze to navigate a telepresence robot. In Experiment 2, we further examined the role of training. The findings contributed to existing SA theory and can be conveyed to research within the context of teleoperation. In addition, based on existing literature and our research practice with our target users, a theoretical model was proposed (see Fig. 2.1) to clarify the relationship between impairments, disabilities, and inclusion.
1.5.2 Design
The design contributions were mainly insights and ideas based on the studies and literature review.
The original version of the gaze-controlled telepresence system, and an extended version with a VR simulator, have been applied in this research; they were implemented by students and software developers associated with the project. Throughout the research, the design contributions were made based on the following activities. Experiment 1 demonstrated the possibility of controlling a telepresence robot by gaze, although performance and subjective experience were significantly lower than with hand control. Based on these findings and the literature review, we came up with ideas for designing the VR-based simulator. In Experiment 2, results showed that the training process had significant and positive impacts on gaze control of teleoperation. Moreover, there was no significant difference between training in reality and training in the VR simulator. VR-based training could thus be an alternative training environment, offering a safe and inexpensive solution. This further confirmed the validity of our experimental design, which could be used for, e.g., prototype testing when developing a system using novel control in teleoperation.
Throughout the entire research, the gaze-based UI was designed using user-centered design [16]. The design process and outcome provided us with insights for future design. The first version of the gaze-based UI was based on previous design suggestions [12, 20]. A need for improvement was identified through observations in the pilot study. With these findings and the literature review, we came up with new design ideas for an improved version with an enhanced control mechanism and layout (see Fig. 3.5). The feasibility of the design was later confirmed in the two experiments with able-bodied participants and a field study with our target users.
With findings from the systematic review, the importance of the concept of universal design for telepresence robots was also emphasized.
1.5.3 Empirical
We completed studies of the systems, including experiments with able-bodied participants and a field study with target users. The studies contributed to the research by providing empirical evidence of:
• the possibilities of gaze-controlled telepresence, within the context of teleoperation using gaze with an HMD
• the challenges of using gaze control, compared to hand control
• training effects of gaze-controlled telepresence
• training in a VR simulator as an alternative to training within a real environment
• comparison of different SA techniques within the context of teleoperation with an HMD
• correlation of SA and other metrics (e.g., performance and workload) as further validation within the context of teleoperation
• findings based on observations of target users in a care home
In Fig. 1.4, the main activities and their main contribution areas (top: theoretical; middle: empirical; bottom: design-based) are presented.
1.6 Overview of the Scientific Publications
This section briefly outlines the publications that are included as part of the thesis. The papers related to the research are presented in Appendix A and Appendix B; a brief overview of each paper is given here.
The paper presented below is based on a systematic literature review (c.f. Appendix A.1). The studies included in the review focused on telepresence robots for people with special needs, including those with different types of disabilities, older adults, and homebound children. The systematic literature review serves as a basis for our research by providing an overview of where our research is situated. Findings presented in this article also helped us answer RQ 1 and RQ 2.
Journal Article 1
Zhang, G., & Hansen, J. P. Telepresence Robots for People with Special Needs: a Systematic Review
Submitted to International Journal of Human–Computer Interaction
A prototype system for gaze-controlled telepresence was implemented by students and software developers associated with the research group. The following paper [21] provides a description of the system and the concepts behind it. Experience prototyping [17] was used to understand the use case of controlling a telerobot from an HMD. This use case was also shown as a demo video at ACM ETRA 2018.
Article in Proceedings 1
Hansen, J. P., Alapetite, A., Thomsen, M., Wang, Z., Minakata, K., & Zhang, G. Head and Gaze Control of a Telepresence Robot with an HMD
In Proceedings of the ACM Symposium on Eye Tracking Research & Applications 2018
Based on the prototype project, a pilot study and Experiment 1 were conducted. The preliminary results from the first experiment gave us a perspective on our further research. In this paper [22], we outlined a two-phase approach towards further investigation of the system. The first phase focused on accessibility and challenges, while the second phase focused on training.
Article in Proceedings 2
Zhang, G., Hansen, J. P., Minakata, K., Alapetite, A., & Wang, Z. Eye-gaze-controlled Telepresence Robots for People with Motor Disabilities
In Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction 2019
One main goal of Experiment 1 was to evaluate the feasibility of gaze control and identify its existing challenges. The paper [23] presents an experimental comparison between gaze-controlled and hand-controlled telepresence robots with a head-mounted display.
Article in Proceedings 3
Zhang, G., Hansen, J. P., & Minakata, K. Hand- and Gaze-control of Telepresence Robots
In Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications 2019
Findings from Experiment 1 indicated that there were still serious challenges with regard to gaze-based driving. In this paper, potential improvements addressing the challenges were
discussed. The plan for future investigations of the potential impacts of gaze-control training in VR was also presented. This paper [24] was presented at a doctoral symposium session at ACM ETRA 2019 for feedback.
Article in Proceedings 4
Zhang, G., & Hansen, J. P. Accessible Control of Telepresence Robots based on Eye Tracking
In Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications 2019
For Experiment 1, we enabled real-time measurement of SA in robot teleoperation with an HMD, by adapting the so-called SPAM method. The work was presented at the Nordic Ergonomics and Human Factors Society Conference, where feedback was provided for our further work on it [25].
Article in Proceedings 5
Zhang, G., Minakata, K., & Hansen, J. P. Enabling Real-time Measurement of Situation Awareness in Robot Teleoperation with a Head-mounted Display
In Proceedings of the 50th Nordic Ergonomics and Human Factors Society Conference 2019
The goals of Experiment 1 were twofold: 1) to examine the possibility and challenges of gaze control by comparing it with hand control, and 2) to validate the SA technique within a telepresence context. With feedback from the conference, we completed data analysis focusing on 2). The following journal article (c.f. Appendix A.2) includes significant extensions and enhancements to the conference version, with detailed results for both 1) and 2).
Journal Article 2
Zhang, G., Hansen, J. P., & Minakata, K. Gaze-controlled Telepresence: An Experimental Study
Submitted to International Journal of Human-Computer Studies
Based on our design suggestions, system developers associated with the research group then implemented a VR-based simulator for training of gaze-controlled telepresence and added it to the robotic system. This paper [26] presents the VR-based simulator and preliminary test results focusing on training effects.
Article in Proceedings 6
Zhang, G., & Hansen, J. P. A Virtual Reality Simulator for Training Gaze Control of Wheeled Telerobots
In Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology 2019
A saccade test was implemented in the robotic system for Experiment 2. The paper (c.f. Appendix A.3) presents findings of a comparison between the SPAM-based pop-up and the saccade test. The paper further concludes that the saccade test could be an alternative method for assessing an operator's SA under certain circumstances.
Journal Article 3
Zhang, G., Hansen, S. H., Behrens, O. R., & Hansen, J. P. Saccade Test as a New Tool for Estimating Operators' Situation Awareness in Teleoperation with an HMD
Submitted to Applied Ergonomics
The paper [27] presents a field study conducted with target users at a care home, which confirmed gaze-controlled telepresence as a viable option for our target user group. Findings from observation and the users' suggestions regarding the technology were included.
Article in Proceedings 7
Zhang, G., & Hansen, J. P. People with Motor Disabilities Using Gaze to Control Telerobots
In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems
Our target group are wheelchair users. Gaze-based user interfaces could be used for steering both wheelchairs and wheeled telerobots. This paper presents results of an experimental study and a field study that compared the feasibility of different types of gaze-based user interfaces for wheelchair control. The final paper [28] reveals that able-bodied participants preferred waypoint control over continuous control (c.f. Figure 3.6) and overlay control (c.f. Figure 3.7).
Article in Proceedings 8
De Araujo, J., Zhang, G., Hansen, J. P., & Puthusserypady, S. Exploring Eye-Gaze Wheelchair Control in VR
In Proceedings of the 2020 ACM Symposium on Eye Tracking Research & Applications
1.7 Overview of the Thesis
The dissertation continues with Chapter 2, which presents background information about the research in two parts: it provides a general introduction to technology-driven possibilities for people with disabilities, related concepts, and social implications, and then presents the related work and theory of this research. Chapter 3 presents the design and implementation of the gaze-controlled telepresence system, including the original version, the extended version for operator training, and the gaze-based UI design for the systems. Chapter 4 presents the evaluation methods, measures, data collection, and data analysis used in this research. Chapter 5 presents the main findings from the systematic literature review and their relation to the studies. Chapter 6 presents insights based on the key findings from the two experiments: 1) regarding accessibility of gaze-controlled telepresence and its challenges for novice users; 2) VR-based operator training as an effective solution; and 3) development and testing of new methods for measuring SA. Chapter 7 presents insights based on observations from the field study with target users at a care home. Chapter 8 presents a discussion of the project and outlines some core limitations of the present research as well as suggestions for future work within this field. Chapter 9 concludes the thesis by explaining how each of the research questions was addressed through the main findings.
2 Background
This chapter consists of two parts. It starts with an introduction to the technology-driven possibilities and social implications of assistive technologies. Then we introduce background knowledge about the research, related work, and theories.
2.1 Terminology and an Overview of Related Concepts
Technologies can influence the lives of people with disabilities in various ways [29]. The topic of the project is closely related to assistive technologies and accessibility, so this research inevitably uses numerous terms related to disability. Certain words or phrases might intentionally or unintentionally reflect bias or negative, disparaging, or patronizing attitudes toward people with disabilities, or in fact toward any identifiable group of people [30]. We aim to use appropriate terminology throughout this thesis. It is therefore essential to have a proper understanding of disability and inclusion in order to clarify the relationships between the concepts of accessibility, assistive technology (AT), and universal design (UD).
Figure 2.1: Overview of terminology
People might have impairments for various reasons, and these impairments may involve motor, auditory, cognitive, and perceptual functions (sense, vision, hearing) at different levels, which may cause a mismatch between them and the technology or environment. For instance, people who lose their hand control cannot use standard personal computers or mobile phones. This mismatch results in a disability that prevents them from engaging optimally with major activities in the information society, and it might further lead to a social problem: their exclusion from social networks or society in general.
Impairments can be temporary or permanent. When a person lacks his or her hands, the impairment is permanent; a hand-control disability may also be due to injury or medical treatment, which limits hand ability only temporarily. In addition, under certain situations, persons without impairment can have such a problem. For example, parents carrying a child might experience the situational problem of not being able to use their hands to control a mobile phone. These conditions cause either a permanent, temporary, or situational disability. Therefore, we consider disability as a mismatch between human, technology, and environment. Based on this, we will use a definition that meets this understanding, provided by the United Nations' Convention on the Rights of Persons with Disabilities (CRPD):
“Disability is an evolving concept and that disability results from the interaction between persons with impairments and attitudinal and environmental barriers that hinders their full and effective participation in society on an equal basis with others.” [31]
As illustrated in Figure 2.1, the mismatch can be avoided by improving the accessibility of the environment and technology, and through the use of assistive technology (AT). Accessibility aims to ensure that people have equal access to the physical environment, to transportation, and to information and communications, including information and communications technologies and systems, and to other facilities and services open or provided to the public, both in urban and in rural areas [31]. Examples of assistive technology include hearing aids, wheelchairs, and gaze tracking.
In this process, AT, which refers to “assistive products and related systems and services developed for people to maintain or improve functioning and thereby promote well-being” [32], plays an important role. Accessibility and AT provide specific solutions for overcoming specific mismatches.
Another, similar concept, universal design (UD), posits requirements for the entire design at a general level, rather than for a specific mismatch. It is defined as:
“The design of products, environments, programmes and services to be usable by all people, to the greatest extent possible, without the need for adaptation or specialized design. ‘Universal design’ shall not exclude assistive devices for particular groups of people with disabilities where this is needed” [31].
Under certain circumstances, a mismatch problem can be solved through the above-mentioned means, and inclusion could further be achieved as an outcome. Inclusion is multifaceted, and defined as:
“The meaningful participation of people with disabilities in all their diversity, the promotion of their rights and the consideration of disability-related perspectives, in compliance with the CRPD” [31].
2.2 Assistive Technology, SDG, and CRPD
From a global perspective, disabilities and inclusion are indispensable issues in the SDGs and the 2030 Agenda [33]. Disability has been included in various targets [34]. For instance, the use of hearing aids and wheelchairs by people with hearing and motor impairments can reduce poverty (SDG 1.2) [35]. AT has enabled children with severe disabilities to communicate effectively with their teachers and classmates, thereby facilitating learning and participation (SDG 4) [36]. International collaboration towards developing the manufacturing capacity of high-quality, affordable assistive products is a prerequisite for the UD needed to meet the SDGs (SDG 17) [34].
The CRPD addresses different aspects of the lives of people with disabilities, from basic rights and daily mobility to various rights and interests, specifically cultural life, entertainment, leisure, and sports. The purpose of the convention is to promote, protect, and ensure the full and equal enjoyment of all human rights and fundamental freedoms by all persons with disabilities, and to promote respect for their inherent dignity [31].
With the CRPD, assistive technology has been further promoted internationally. The convention is a major step in focusing on existing barriers. Focusing on existing barriers to ensure these rights, and further fostering the social understanding of disabilities, promotes ATs as a medium to advance the inclusion of people with disabilities into different areas of society [29].
Technology plays an important role in facilitating these rights, and the possibilities driven by technologies and these rights are interrelated. For instance, personal mobility (Article 20 of the CRPD) is an important basic right. People with motor disabilities may benefit from using telerobots to overcome a lack of mobility, e.g. for telepresence. These technologies enable them to overcome geographical barriers via audiovisual communications and a controllable physical embodiment [37]. Through ability and independence in personal mobility, their rights in other important domains can be ensured, including education (Article 24), habilitation and rehabilitation (Article 26), work and employment (Article 27), and adequate standard of living and social protection (Article 28).
2.3 Examples of Advanced Assistive Technology
Advances in robotics, augmented and virtual reality (AR and VR), and multimodal interaction provide new options for people with disabilities. Some recent examples are:
• Multimodal control technology enables people to overcome challenges due to impairments, and helps them interact seamlessly with everyday devices. For example, they can control a device by voice [38], by eye or head movements [21], or even by using their minds with brain-computer interfaces (BCI) [39].
• Telerobots can help people overcome physical barriers by placing them virtually in a remote location for communication and service operations. For instance, in December 2018 OriHime robots [40] were used by people with disabilities to serve and communicate with visitors in a Japanese cafe (c.f. Fig. 1.2). Similarly, future agricultural telerobots may be operated by disabled people, which might provide inclusion in the workforce for some people who would not otherwise be able to make a living.
• Immersive virtual reality technology conveys a fuller experience of being at a different location. Combining a VR headset with a remote 360-degree camera provides the freedom to look in all directions by moving the head and body. Inaccessible areas in nature or at cultural scenes may hereby be contemplated in a true, dynamic perspective and in high detail.
2.4 Assistive Technology and Inclusion
Among the rights of people with disabilities declared in the CRPD, living, education, and employment are the most fundamental. Persons with disabilities have poorer health outcomes, lower educational achievements, and less economic participation than people without disabilities [41].
Specifically, children with disabilities tend to be in poorer health and have limited access to education [42], suffering even greater unequal treatment. In terms of employment, persons with disabilities face disproportionate unemployment [43]. Ensuring their equal rights in these respects is an important step towards achieving the goals declared in the CRPD. Regarding the above-mentioned basic rights and inclusion, ATs have a positive impact on the quality of life, health, and well-being of people with disabilities. From a social perspective, they can also have a positive impact on the entire society by reducing the direct costs of health and welfare.
2.5 Challenges and Barriers of Assistive Technologies
As argued previously, we can see the potential benefits of ATs for people with disabilities. However, challenges and barriers exist when designing and developing these technologies.
There are around one billion people in the world who need and can benefit from assistive devices due to disability. As most countries have a rising ageing population, and the prevalence of diseases is increasing, this number will be much larger by 2050. However, only 10% of disabled people worldwide have access to assistive devices [32]. Reasons include (1) the high cost of the product or technology, (2) lack of funds, (3) limited availability, (4) lack of knowledge, (5) lack of a skilled workforce, and (6) lack of policy and standards [41]. For example, 70 million people need a wheelchair, but only 5-15% can get one [44, 32].
Most research and development focuses on assistive devices for high-income people [32]. This can be seen in the case of a telerobot for persons with motor disabilities (Engineered Arts' RoboThespian [45]), which costs $59,000. The high cost of such equipment limits its widespread use. Moreover, there is usually a lack of assessment and prescription, fitting, user training, and mechanisms for follow-up, maintenance, and repair of AT products [32].
Other issues include the limitations of the technology itself. BCI systems can create many possibilities, but they currently cannot move beyond the laboratory and become effective in daily-life support [46]. Various smart products have potential benefits, but they are usually off-the-shelf, without possibilities for adaptation to improve accessibility. For instance, many speech-interactive devices will not be able to recognize people speaking slightly indistinctly.
As previously stated, employment is a basic right. We will address telework as a case later in this thesis. Telework is the “practice of substituting communications and/or computer technology for actual travel to work or a central office” [47]. It has the potential to facilitate employment for people with disabilities by removing barriers caused by traditional work environments. Reasons for employees to telework are varied [48]. Persons with disabilities can overcome physical barriers by using information and communication technology for telework [47].
It can also promote their employment by reducing disability-related bias and discrimination [49]. If ATs could be widely used by people with disabilities, they would provide access to the work environment. However, barriers exist in this
case. First, there are limited positions dedicated to telework [50]. The number may increase because of the pandemic in 2020, but a factor that cannot be ignored is that employers are rarely willing to provide telework roles for new employees [50]. Previous studies found that telework as a job accommodation may not in itself provide equivalent access to employment [50]. This is a very important issue when considering telework based on ATs for people with disabilities.
2.6 Considerations for Our Project
Based on the above issues, here are some considerations for our research project. Recent advances in robotics and emerging technologies have potential positive impacts on disability and inclusion. When adopting these technologies, we should not only focus on cutting-edge and high-end products. It is important to be aware of the importance of developing new solutions with existing basic, low-cost assistive products. Individual- and cost-level impacts should always be taken into consideration.
Consequently, we should pay attention to commodity products, which are available to everyone and everywhere. They are not only low-priced, but also flexible to use and easy to adapt for people with disabilities. For example, in one of our prototypes the Raspberry Pi ($75) is combined with a low-cost robot base to provide new interaction possibilities (c.f. Figure 3.1). Several apps have now been developed on tablets that support people with special communication needs, e.g. type-to-talk. Tablets have the advantage of being standard devices, which can be bought in ubiquitous stores, and not only through special providers of AT. In our research case, head-mounted displays (HMDs) with built-in gaze tracking have become more and more popular for VR games, and these immersive displays may also be used to improve the telepresence experience. Users can acquire them at different prices according to their financial capabilities, and researchers and developers can more easily study their potential as an AT. Specialized eye trackers may require several weeks or even months to get repaired, which is a problem for people who depend on them for their daily communication. With HMDs that have built-in eye tracking on the common market, it is much easier to get local service or a replacement. Moreover, if caregivers are already familiar with the product, they can assist the users without much training.
2.7 Eye-tracking and Gaze Interaction
Recent advances in eye-tracking technology provide a range of possibilities in research and daily life. Eye-tracking technology has been widely used by researchers and practitioners in several areas, including neuroscience and psychology, engineering and human factors, marketing, advertising, and human-computer interaction [51, 52].
Currently, there are two main types of eye-tracking devices: mobile eye trackers and screen-mounted eye trackers. The Tobii Eye Tracker 4C is a typical screen-mounted eye tracker, which needs to be fixed near the screen. Compared with screen-mounted eye trackers, mobile eye trackers have the advantage of flexibility and mobility; the Tobii Pro Glasses (https://www.tobiipro.com/, last accessed: 28-05-2021) are an example. Recent advances in eye-tracking technology make it possible to build eye trackers into emerging head-mounted displays, such as FOVE and HTC VIVE. Figure 2.2 shows a FOVE HMD with built-in eye trackers (left), the Tobii Pro Glasses, and the Tobii Eye Tracker 4C.
Figure 2.2: Eye-tracking devices: built-in, head-mounted (mobile), and remote (fixed under the monitor)
Besides position measures, eye movement data can provide measures of gaze direction, amplitude, duration, velocity, acceleration, shape, areas of interest (AOI), and scanpath [53]. Regarding saccadic eye movements, which we will focus on later in this thesis, the data collected typically includes saccadic direction, saccadic amplitude, saccadic duration, saccadic velocity, acceleration/deceleration, and saccadic curvature.
Eye-tracking technology captures various types of eye movements. In this research, for the accessible-control part, we focus on control (i.e. steering) based on gaze tracking, which includes data related to gaze position and fixation (e.g., duration, latency, and position). For the assessment part, we focus on saccadic eye movements, such as saccade latency, amplitude, duration, position, and direction.
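To make these measures concrete, a minimal sketch (with hypothetical gaze samples and a hypothetical sampling rate, not the thesis's actual processing pipeline) of computing saccadic amplitude and peak velocity from raw gaze positions could look like this:

```python
import math

def saccade_metrics(samples, hz):
    """Return (amplitude, peak velocity) of one saccade, given gaze samples
    as (x, y) positions in degrees of visual angle at a fixed sampling rate."""
    dt = 1.0 / hz
    (x0, y0), (x1, y1) = samples[0], samples[-1]
    amplitude = math.hypot(x1 - x0, y1 - y0)          # degrees
    peak_velocity = max(                              # degrees per second
        math.hypot(bx - ax, by - ay) / dt
        for (ax, ay), (bx, by) in zip(samples, samples[1:])
    )
    return amplitude, peak_velocity

# Hypothetical 120 Hz samples of a 10-degree horizontal saccade
gaze = [(0.0, 0.0), (1.0, 0.0), (3.0, 0.0), (6.0, 0.0),
        (8.5, 0.0), (9.8, 0.0), (10.0, 0.0)]
amp, vpeak = saccade_metrics(gaze, hz=120)  # amp = 10.0 deg, vpeak = 360.0 deg/s
```

In practice, saccade onset and offset would first be segmented from the continuous gaze stream (e.g. with a velocity threshold) before such per-saccade metrics are computed.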
2.7.1 Gaze Interaction
Gaze interaction is an interaction modality where information from a human's eye gaze behaviour can be exploited during human-technology interaction [54]. Interactive methods using eye movement and gaze control could result in faster and more efficient human-computer interfaces [55].
Besides typical gaze-based text entry [56, 57], gaze interaction has been extensively used for gaze-controlled wheelchairs [58], vehicle driving [12], drone flying [59], and robot operation [60].
People with motor disabilities, especially those with limited hand ability, can simply use eye gaze to interact with everyday devices. In teleoperation, prior work has applied gaze interaction to different types of telerobots, such as humanoid telerobots [45], robotic arms [61], and wheeled telerobots [10, 62, 63]. Navigation is a main task when using wheeled telerobots, and different types of gaze-based user interfaces have been designed for navigating them [10, 62, 63]. By using these gaze-based UIs, operators get hands-free teleoperation. These gaze UIs can be used by people with motor impairments or by users with occupied hands (e.g., machine operators or surgeons) [20, 12]. Using gaze for
robot control offers more natural orientation for the operator and a more fluid input method compared to speech control, according to [64].
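As one illustrative example of such a UI, a simple continuous-control mapping (sketched under our own assumptions, not the specific interfaces from [10, 62, 63]) can translate a normalized gaze point in the camera view directly into the robot's velocity commands, with a central dead zone so that a resting gaze does not move the robot:

```python
def gaze_to_velocity(gx, gy, max_linear=0.5, max_angular=1.0, deadzone=0.15):
    """Map a normalized gaze point (gx, gy in [-1, 1], origin at view center,
    y growing downwards) to (linear, angular) velocity commands.
    Gaze above center drives forward; gaze left/right of center turns."""
    if abs(gx) < deadzone and abs(gy) < deadzone:
        return 0.0, 0.0                    # resting gaze near center: stay still
    linear = max(0.0, -gy) * max_linear    # only gaze above center moves forward
    angular = -gx * max_angular            # gaze left of center turns left
    return linear, angular
```

Looking at the top of the view (gy = -1) would then command full forward speed, while a glance to the right edge (gx = 1) would command a right turn; a dwell- or switch-based clutch would typically be layered on top to suppress unintended activations.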
However, current challenges also exist in gaze interaction. One of the main drawbacks isthe Midas touch problem [65]: everything a user looks at may become activated, due tothe involuntary activation of events with gaze [66]. This problem was originally describedby Jacob in 1990:
“Before long, though, it becomes like the Midas Touch. Everywhere you look, another command is activated; you cannot look anywhere without issuing a command. The challenge in building a useful eye tracker interface is to avoid this Midas Touch problem. Ideally, the interface should act on the user's eye input when he wants it to and let him just look around when that's what he wants, but the two cases are impossible to distinguish in general.” [65]
Human eyes are primarily perceptual organs used for acquiring information from the outside world, not for manipulating it. Consequently, using the eyes for interaction activities (i.e. as a control organ), such as typing, clicking, and entering commands, may be rather confusing, at least for novices [67, 68].
Typically, a dwell-time technique has been used to overcome the Midas touch problem [69]. A solution suggested by [70] combined gaze-pointing with a soft-switch, which could be accessed with any limb. However, this requires that the target users can use one of their limbs. If this is not the case, BCI could be considered: in [71], gaze was used for target pointing, while motor imagery detected via EEG executed a click for target selection.
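The dwell-time principle can be sketched in a few lines: a target is activated only once gaze has rested on it for a fixed threshold, and only once per entry, so merely glancing at a target does not trigger it. The class and parameter names below are illustrative assumptions, not the implementation used in this project.

```python
class DwellSelector:
    """Activates a target only after gaze has rested on it for `dwell_s` seconds."""

    def __init__(self, dwell_s=0.8):
        self.dwell_s = dwell_s
        self.current = None      # target currently under gaze
        self.enter_time = None   # when gaze entered that target

    def update(self, target, now):
        """Feed the target under the gaze cursor (or None). Returns a target
        exactly once, at the moment its dwell threshold is crossed."""
        if target != self.current:
            self.current = target
            self.enter_time = now if target is not None else None
            return None
        if target is not None and now - self.enter_time >= self.dwell_s:
            self.enter_time = float("inf")  # fire only once per entry
            return target
        return None
```

Fixations shorter than `dwell_s` leave the interface inert, which is exactly how dwell activation mitigates the Midas touch problem, at the cost of a slower interaction pace.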
2.7.2 Assessment based on Eye-tracking
Eye movement data, including pupil dilation, can be used for the assessment of human states and activities, such as mental workload [72], situational awareness [73, 74], fatigue [75, 76, 77], cybersickness [78], and attention [79]. Eye-tracking data can also be used to accurately distinguish unique individuals for biometric identification [80]. In this research, we particularly focused on saccadic eye movements for the assessment of situational awareness.
Data on saccadic eye movements, like average saccade amplitudes, average saccade velocities, and average saccade peak velocities, can serve as important eye-movement-based biometric features and can be used to accurately distinguish unique individuals [80]. Saccade tests, including prosaccade and antisaccade tests, have been widely used in pathophysiology and psychology [81]. The prosaccade test requires a test subject to make a saccade as quickly as possible towards the location of a suddenly appearing target stimulus, while the antisaccade test requires the subject to make a saccade in the opposite direction, but of the same distance, when the target stimulus appears [82].
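Saccade metrics such as those listed above are typically extracted from raw gaze samples with a velocity-threshold (I-VT) procedure: consecutive samples whose point-to-point velocity exceeds a threshold are grouped into one saccade, from which onset, duration, amplitude and peak velocity follow. The sketch below illustrates the idea for a single gaze axis; the 30 deg/s threshold and all names are illustrative assumptions rather than the pipeline used in this thesis.

```python
def detect_saccades(t, x, threshold=30.0):
    """t: timestamps (s); x: gaze position along one axis (deg).
    Returns a list of (onset_time, duration, amplitude, peak_velocity)."""
    saccades = []
    onset = None   # index where the current saccade started
    peak = 0.0
    for i in range(1, len(t)):
        v = abs(x[i] - x[i - 1]) / (t[i] - t[i - 1])  # point-to-point velocity
        if v >= threshold:
            if onset is None:
                onset = i - 1
                peak = 0.0
            peak = max(peak, v)
        elif onset is not None:
            saccades.append((t[onset], t[i - 1] - t[onset],
                             abs(x[i - 1] - x[onset]), peak))
            onset = None
    return saccades
```

Given such tuples, main-sequence measures (amplitude vs. duration vs. peak velocity) and test metrics like prosaccade latency (onset time minus stimulus time) can be computed directly.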
Saccadic tests have been extensively used for diagnosing Parkinson's disease [83], depression [84], Alzheimer's disease [85], and schizophrenia [86]. For instance, previous research showed that Parkinson's disease patients had more express saccades in the immediate prosaccade task, and more direction errors in the immediate antisaccade task [87]. The patients also had difficulty in initiating voluntary saccades [83]. Significantly increased latency and antisaccade error rates were found in patients with Alzheimer's disease when comparing them with an age- and gender-matched group [85]. Patients with chronic schizophrenia also showed increased antisaccade errors [86].
Assessment based on saccadic eye movements has also been used in human factors research, such as detection of fatigue [76, 88, 89, 77], attention [79], mental workload
[72], and sleepiness [90]. Saccade tests for the detection of driving fatigue have been applied in transportation research [76], medical care (surgical residents) [88], aviation [89], and within the military [77]. A recent review paper [75] provides an overview of how different types of saccadic eye movement metrics have been applied in the detection of mental fatigue, including saccade mean velocity, magnitude, frequency, accuracy, latency, duration, and peak velocity.
Saccadic eye movement performance was found to be an indicator of driving ability in elderly drivers, with a strong correlation between antisaccade performance and driving performance [91]. Detection of workload based on pupil diameter is overly sensitive to brightness changes, whereas saccadic eye movements are unaffected by brightness; assessment of workload based on saccadic eye movements is therefore expected to be more robust and accurate in most environments [72]. The saccadic main sequence (amplitude, duration and peak velocity) has been used as a diagnostic measure of mental workload [92]. Previous findings indicated that among the amplitude, duration and peak velocity of saccades, the peak velocity is particularly sensitive to changes in mental fatigue [76], and saccadic peak velocity has been used as a key metric for measuring mental workload [92]. The dual-task paradigm (see Section 2.9.3) is commonly used when estimating mental workload from saccadic eye movements, for instance using the reaction time to the secondary task as a metric [92].
2.8 Telerobots, Teleoperation and Telepresence
2.8.1 Presence and telepresence
The term telepresence can be traced back to 1980. It first appeared in Marvin Minsky's pioneering paper [1], where he envisioned that people would experience a sense of togetherness while being far from each other. He emphasized the idea of remotely “being there” and “in person”. This proposal has essentially become a manifesto encouraging the development of the science and technology for remote control of devices for telepresence, such as telepresence robots [93].
“The biggest challenge to developing telepresence is achieving that sense of 'being there'. Can telepresence be a true substitute for the real thing? Will we be able to couple our artificial devices naturally and comfortably to work together with the sensory mechanisms of human organisms?”, Minsky asked in 1980 [1].
Sheridan later described telepresence as “feeling like you are actually 'there' at the remote site of operation” [94]. He further referred to presence elicited by a virtual environment as virtual presence.
The term telepresence consists of tele and presence; taken together, they mean presence at a distance [2]. Presence, the sense of “being there”, plays an important role in telepresence, and in some of the literature on teleoperation the terms presence and telepresence have been used interchangeably [95]. Sheridan referred to presence as “the effect felt when controlling real world objects remotely as well as the effect people feel when they interact with and immerse themselves in virtual environments” [94]. This definition clearly describes presence as an experience elicited by technology [96]. Presence in a simulated environment has become a central concept in the discussion of new technologies and mediated environments [97].
It is evident from empirical studies that presence not only provides the subjective feeling of “being there”, but also correlates with performance, responses, and emotions [98].
Regarding measures of presence, subjective measures with questionnaires have been
used by most scholars [99]. Additionally, some researchers have attempted to measure presence from behavioral and physiological data [98]. Physiological measurements like heart rate, pupil dilation, blink responses, and muscle tension have all been proposed as possible indicators of presence [100]. Previous experiments support the use of physiological responses as possible measurements; in particular, heart rate has been found to be more sensitive and hold more statistical power than Galvanic Skin Response (GSR) and skin temperature [101]. High-end fMRI equipment has been applied to study the correlations between regions of the brain and the feeling of presence, suggesting it to be associated with activity in the dorsolateral prefrontal cortex [102].
Presence has been argued to be “a subjective sensation, much like 'mental workload' and 'mental model'—it is a mental manifestation, not so amenable to objective physiological definition and measurement” [103]. Therefore, as with mental workload and mental models, subjective report is still the essential basic form of measurement [94]. Witmer and Singer's Presence Questionnaire [104] is the most prevalent method for collecting subjective data [99]. Accordingly, telepresence is usually measured using subjective questionnaires [105], and Witmer and Singer's Presence Questionnaire was therefore also used as a measuring tool in our research.
2.8.2 Teleoperation
The process of controlling a telerobot remotely is called teleoperation [106]. It is described as the “extension of an operator's sensing and manipulating capability by coupling through communication means to artificial sensors and actuators” [13].
In telepresence robots, HMDs have recently become a popular display for the human operator (cf. Lucas in Figure 1.1) [107, 108, 109]. Using an HMD in teleoperation has some advantages compared to a standard monitor: i) it improves the experience by displaying a fully immersive environment with stereo vision; ii) it offers more mobility than a typical screen-based display; iii) it has good possibilities for adding built-in sensors for gaze tracking, head motion or voice commands, and can thus provide hands-free teleoperation, supporting persons with disabilities who cannot use their upper limbs [110]; iv) it may improve the situational awareness of the operator [111] and the level of spatial presence [112, 111].
However, the deployment of such displays introduces some challenges when measuring situational awareness, because questionnaire methods require participants to leave and re-enter the remote environment displayed in the HMD [113]; this has been termed the “break in presence” (BIP) problem [114]. Consequently, participants may not be able to subjectively recall and express their experience accurately, particularly in post-trial assessment [95].
A recent study [113] enabled measurement of presence while users were wearing HMDs by displaying the presence questionnaires (i.e. Witmer and Singer's Presence Questionnaire [104]) in the HMD, allowing the users to complete the questionnaires directly in the virtual environment.
2.8.3 Telepresence robots
Telerobots can be used to overcome barriers that prevent operators from physically reaching or directly interacting with an environment. Distance is one factor that causes such barriers; barriers may also be imposed by hazardous environments or by very large or very small inaccessible environments [2]. Application areas addressed in [13] include space, undersea, nuclear power, assistance for the handicapped, surgery, terrestrial mining, construction and maintenance, warehousing, firefighting, policing and military operations. New barriers appear under certain new
circumstances, for instance during the Covid-19 pandemic, and accordingly telerobots have been successfully applied in the healthcare sector for patient monitoring during the pandemic [115].
Today, telerobots are extensively used for telepresence, far beyond being manipulators in sealed nuclear facilities as originally envisioned by Minsky [116]. Telepresence robots have become one main type of telerobot that operators can use to achieve telepresence. A variety of terms has been used to denote these second-generation telepresence robots: they have been called “remote presence systems” [117], “mobile robotic telepresence systems” [118, 119], “virtual presence robots”, and “remote presence robots” [120]. They have been described as video conferencing systems mounted on a mobile robotic base [118], like “Skype on wheels” [121], embodied video conferencing on wheels [116], or as an alternative to video-mediated communications [119].
The business potential of telerobots has led several companies to introduce commercial products like Double Robot, Beam, Giraff, Padbot, and VGO.
Since the definitions of telepresence robots vary among researchers, it is important to clarify how the term is used in this thesis. Telepresence robots are here defined as robotic devices that allow an operator to overcome physical distance in order to achieve telepresence.
Figure 2.3: A typical telepresence robot: Double Robot.
Fig. 2.3 shows a typical telepresence robot (i.e. the Double Robot²). The commercial products usually feature: i) two-way audio and video communication; ii) connection between local and remote parties; iii) a video screen where the operator's face image is shown; and iv) mobility controls for the path of motion.
Among these, the two-way video communication feature is considered essential [122]. A comparative study showed that the mobility feature significantly increased the remote
²https://www.doublerobotics.com (last accessed: 01-04-2021)
user's feelings of presence [119]. Presence achieved by using these robots, when combined with movement, enhances the perception of a social link for the operator [123]. Thus, when applied in social interactions, it can greatly support task collaboration [119].
Regarding video communication, state-of-the-art 360° video cameras have been used instead of traditional webcams with a limited field of view (FOV). A typical 360° video camera is an omnidirectional camera that can capture and transmit a sphere around the camera as 360° video. Depending on the features of the camera, mono or stereo 360° videos can be recorded. The 360° videos can be viewed in real time using HMDs and tablets [7] and provide an immersive omnidirectional view; viewers can freely change their field of view by looking around while these videos play [124]. It has been shown that using a 360° video view with telepresence robots in indoor settings increases task efficiency, but also increases the difficulty of use [125]. When using 360° video cameras for telepresence robots, some issues need to be considered. In 360° view, users in both remote and local environments found it more difficult to understand directions and orientations [126]. In addition, low transmission rates for streaming may negatively impact the quality of high-resolution 360° videos. In present research on telepresence robots, 360° video cameras are usually combined with HMDs as display devices, e.g. [127, 109].
Telepresence robots are suitable for a range of applications [118]. They have been applied for collaboration between geographically distributed teams [128], at academic conferences [5], in office settings [116], in relationships between long-distance couples [129], for long-distance shopping [130], by people with mobility impairments [131], for outdoor activities [7], and, as previously mentioned, in supporting healthcare personnel by providing remote patient communication, clinical assessment and diagnostic testing during the Covid-19 pandemic [115]. The areas where research literature has most frequently reported on the use of telerobots are office environments, health care, elderly aging in place, and school environments [118]. The robots used in these application domains include commercial products and homemade (DIY) robots.
2.8.4 Telepresence robots for persons with special needs
Among the application areas of telepresence robots mentioned above, we focus on a number of studies that explored the potential of telepresence robots in supporting people with special needs. These use cases include, for instance, distant communication for patients [132], at-home support of elderly with dementia [6], social connectedness for older adults with dementia [133], distant learning for homebound students [134], caring for children with cognitive disabilities [135], independent living for seniors [132], and enhancement of social relations for people with dementia [133].
Using telepresence robots can reduce loneliness among older adults with mobility impairments, support their aging-in-place while remaining socially connected to friends and family [136], and promote social inclusion for homebound children [137]. These use cases are mainly related to healthcare domains, and the users with special needs are often people with disabilities, older adults, and patients, who can increase their quality of life by using telepresence robots.
Mobility problems may lead to psychological problems, like feelings of emotional loss, reduced self-esteem, isolation, stress, and fear of abandonment [138]. Overcoming part of a mobility problem may provide new daily opportunities, reduce dependence on caregivers and family members, and promote feelings of self-reliance [139]. Telepresence robots thus have the potential to improve quality of life by supporting people in their social activities. For example, telepresence robots can reduce loneliness among older adults with mobility impairments, supporting their aging-in-place while remaining socially connected
to friends and family [136].
As mentioned above, mobility is an important feature of telepresence robots, and people with limited mobility may use telepresence robots as their avatars. For instance, persons with motor impairments may benefit from telepresence robots to overcome mobility problems, especially those with severe motor disabilities such as cerebral palsy [135] and ALS [140]. Overall, these cases support the assumption that the deployment of telepresence robots has a positive impact on persons with special needs.
Users may engage with telepresence robots either as the local operator (teleoperator) or as remote participants co-located with the robots (see Fig. 2.4). The use cases for persons with special needs typically correspond to the left case in Fig. 2.4. According to the goals mentioned in Chapter 1, the left case is also the focus of our research.
Figure 2.4: Engagement with telepresence robots: 1) as a local operator (left); 2) as a remote participant co-located with the robot (right).
As Fig. 2.3 shows, a typical commercial telepresence robot is generally equipped with wheels for mobility, a microphone, a speaker, a camera, and a screen display. However, different types of impairments may prevent people with disabilities from taking advantage of these essential features. In particular, visual and auditory sensing are essential to the experience of telepresence [122]. While telepresence robots have also been suggested as communication tools for people with cognitive impairments, such as Autism Spectrum Disorder [135] and dementia [6], their cognitive challenges may lead to difficulty in using the telepresence robots independently [38]. The most common control method for existing telepresence robots is hand control with a mouse, a keyboard, a touchscreen or a joystick. However, people with motor impairments may have limited gripping, fingering or holding ability. This last group of users is the target user group of our research.
2.9 Human Factors and Telepresence Robots
Recent advances in AI have made telerobots more intelligent. Despite the autonomous features, in most cases real-time operator control is needed. In common wheeled robots,
Type             | Feature               | Pros                                        | Cons
Observer ratings | subjective            | Ease of use; no interruption                | Limited knowledge of a person's concept
SART [157]       | subjective, post-test | Ease of use; no interruption                | Memory problem
SA-SWORD [158]   | subjective, post-test | Ease of use; no interruption                | Memory problem
SAGAT [159]      | objective, real-time  | Overcomes memory problem                    | Total interruption of main task
SPAM [160]       | objective, real-time  | Overcomes memory problem; embedded in tasks | Interruption of main task

Table 2.1: A comparison of SA techniques (based on a summary from [161])
the main task of the operator is navigation. Remote robots can avoid obstacles and automatically follow a set path, but navigation in unpredictable environments is a complex task, and this navigation task requires operator control [141]. Even in autonomous vehicles or robots, SA is very important, as an operator must maintain constant SA for effective control of the job performed by the machine in case it suddenly fails and needs supervision [142, 143].
2.9.1 Task performance and teleoperation
Performance when navigating can be measured with metrics like task completion time, deviation from the planned route, coverage of the necessary area, and error rate (i.e. collisions) [144].
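As a concrete illustration, the first two metrics can be computed from a timestamped position log of the robot; the function and field names below are illustrative assumptions, not the analysis code used in the studies.

```python
import math

def navigation_metrics(log, planned, collisions):
    """log: list of (t, x, y) position samples; planned: list of (x, y)
    waypoints on the planned route; collisions: recorded collision count."""
    completion_time = log[-1][0] - log[0][0]
    # mean distance from each logged position to the nearest planned waypoint
    deviation = sum(
        min(math.hypot(x - px, y - py) for px, py in planned)
        for _, x, y in log
    ) / len(log)
    return {
        "completion_time_s": completion_time,
        "mean_deviation_m": deviation,
        "collisions": collisions,
    }
```

A denser set of planned points (e.g. an interpolated route rather than sparse waypoints) would make the deviation estimate correspondingly tighter.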
Workload is another key human factor, and numerous previous studies have shown that human performance is highly dependent on workload, e.g. [145, 146, 142]: high workload can degrade both performance and situational awareness (SA) [146, 145, 142].
A large number of SA studies have appeared over the past 30 years. The terms situational awareness and situation awareness have both been used in prior research; we use the term situational awareness in this thesis, as it is currently the most frequently used term in books according to statistics³. SA is critical to effective decision-making, operator performance and workload in numerous dynamic control tasks [147, 148], and has become a core theme within human factors research [149, 150]. SA is typically defined as “the perception of elements in their environment within a volume of space and time, the comprehension of their meaning, and the projection of their status in the near future” [151].
SA is considered a primary basis for performance [19, 152, 153, 154]. Since there is a negative correlation between mental workload and SA [155], a combination of SA and workload can be used for predicting performance [154].
With regard to teleoperation tasks, previous work has pointed to the interdependence between attention, SA and telepresence [95, 156]. It was also found that SA and telepresence both significantly predicted task performance, which led the authors to suggest that SA and telepresence are positively correlated [154].
2.9.2 Measuring SA
Different methods have been proposed for measuring SA. As Table 2.1 shows, they can be categorized as subjective and objective measures, and the measurement can be real-time or post-trial.
³https://books.google.com/ngrams (last accessed: 28-05-2021)
The Situational Awareness Rating Technique (SART) [157] is a typical post-trial and subjective measure that relies on the ability of participants to assess their own level of sub-constructs such as supply, demand and understanding. These sub-constructs are combined in order to obtain a composite SA score. Even though SART is considered a psychometrically reliable and valid questionnaire [162], it is subject to the limitations of survey research, such as response bias and subjectivity. The Situation Awareness Global Assessment Technique (SAGAT) and the Situation Present Assessment Method (SPAM) [160] are two examples of real-time and objective measures; these methods rely on the correctness of responses as well as the reaction time (RT) to queries. SAGAT requires the implementation of a goal-directed task analysis with subject matter experts in order to determine the content of its questions. During a simulation run, an experimenter randomly freezes the simulation and administers a questionnaire regarding information pertinent to the simulation, such as a map of a sector and aircraft information. This has been criticized because the method seems to rely on the working memory of the participants; Endsley addressed this with a simulation experiment that tested whether SAGAT was in fact just measuring working memory capacity rather than SA [151]. SPAM is an online measure that consists of operationally relevant questions administered in real time. SPAM has been used to predict performance in a cognitively oriented air-traffic-management task [163, 160], and we will return to this method in more detail below.
As mentioned above, recent advances in HMDs and built-in sensors bring benefits to teleoperation. However, the deployment of such displays also leads to challenges in measuring SA while participants wear HMDs. Using a post-trial and subjective questionnaire like SART has its limitations due to the issues previously mentioned.
Besides these conventional methods, it is also feasible to assess SA from eye movements [164, 165, 166, 167]. Eye tracking represents a psychophysiologically based and quantifiable measurement technology which is non-invasive and effective [165], and it is applicable through real-time feedback technologies [164]. For example, eye movement features (dwell and fixation time) were found to be correlated with performance measures in [168].
2.9.3 Assessment based on Dual-task
A dual-task paradigm [169] is a common procedure in experimental psychology that requires an individual to perform two tasks simultaneously: a primary task and a secondary task. Methods based on this paradigm are used to measure workload [170, 171, 172, 92] and sleepiness [173]. The dual-task paradigm has also been used in previous research addressing SA [174].
Reaction time is a key metric in such methods. For instance, the reaction time to the secondary task has been used to estimate mental workload in driving [92]. Using the reaction time in a secondary task to determine the sleepiness of drivers has also been shown to be promising [173].
SPAM is basically a dual-task method where the secondary task is reacting to the operator's landline, which can be activated by the experimenter [175]. Before the experiment, participants are informed that phone calls will come over the landline, and that they need to answer the queries as quickly as possible via the landline. This acts as an additional task, which meets the requirement of self-pacing and provides two sets of data: response latency and errors [171]. The time taken to answer acts as an indicator of workload [149]. However, a previous study found that SPAM induces some dual-task performance decrements [176].
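The SPAM bookkeeping described above can be sketched as two latencies per probe: the time to pick up the "landline" (a workload indicator) and the time from pick-up to answering the query (an SA indicator), plus answer correctness. The class and field names below are illustrative assumptions, not an implementation from the SPAM literature.

```python
class SpamProbeLog:
    """Records per-probe latencies and correctness for SPAM-style analysis."""

    def __init__(self):
        self.records = []

    def record(self, query, t_probe, t_answered, t_response, correct):
        """t_probe: call placed; t_answered: call picked up;
        t_response: query answered; correct: bool."""
        self.records.append({
            "query": query,
            "answer_latency": t_answered - t_probe,       # workload indicator
            "response_latency": t_response - t_answered,  # SA indicator
            "correct": correct,
        })

    def summary(self):
        n = len(self.records)
        return {
            "mean_answer_latency": sum(r["answer_latency"] for r in self.records) / n,
            "mean_response_latency": sum(r["response_latency"] for r in self.records) / n,
            "accuracy": sum(r["correct"] for r in self.records) / n,
        }
```

Separating the two latencies is what lets SPAM speak to both workload (slow pick-up) and SA (slow or incorrect answers) from the same probes.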
2.9.4 Safety
Safety plays a crucial role in Human-Robot Interaction (HRI) [177], and safety constraints can seriously limit the use of robots [178]. In the context of telepresence robots, the aforementioned human factors are closely related to safety. For example, the number of collisions is an important indicator of performance: frequent collisions cause damage to the equipment, as well as potential dangers to the remote environment and people [136].
Obstacle detection and avoidance have been included in some current commercial products, as mentioned in the section on earlier research [38, 179, 180, 136, 181]. These may support operators in addressing some safety issues. However, current autonomous assistance cannot ensure that all safety constraints are met. When a telepresence robot is moving through a remote environment, the definition of an obstacle or a target is usually not absolute [182]: an object in the remote environment could be considered an obstacle to avoid, or it could be the target the user wants to get close to.
One of the most common mechanisms applied in robot control is the so-called dead man's switch [183, 184], sometimes named a live-man button [184]. Dead man's switches have been extensively used in safety-critical motion control devices, especially vehicles or machines [185]. For example, when a driver is incapacitated and his foot comes off the pedal, the dead man's switch acts as a safety device that immediately stops the locomotive's motion [185]. Similarly, when an operator is incapacitated and his hands leave the control device, a dead man's switch may stop a telerobot's motion or a self-driving car [185]. Dead man's switches have been implemented in electric wheelchairs and rehabilitation equipment as an essential feature when serving people with limited mobility [186, 187, 188, 189].
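For a gaze-controlled telerobot, the natural analogue is to treat the eye tracker's signal itself as the dead man's input: motion commands are forwarded only while fresh, valid gaze samples arrive, and the robot is stopped when gaze is lost for longer than a grace period (e.g. when the operator closes their eyes or removes the headset). The sketch below is an illustrative assumption of such a mechanism, not the safety logic of the thesis system.

```python
class GazeDeadMansSwitch:
    """Forwards motion commands only while valid gaze data is fresh."""

    def __init__(self, timeout_s=0.5):
        self.timeout_s = timeout_s
        self.last_gaze = None   # timestamp of the last valid gaze sample

    def on_gaze(self, now):
        """Call for every valid gaze sample received from the tracker."""
        self.last_gaze = now

    def filter_command(self, cmd, now):
        """Pass `cmd` (e.g. a (linear, angular) velocity pair) through while
        gaze is fresh; otherwise substitute a stop command."""
        if self.last_gaze is not None and now - self.last_gaze <= self.timeout_s:
            return cmd
        return (0.0, 0.0)  # fail-safe: stop the robot
```

Because the filter sits between the UI and the robot, a tracker dropout, blink storm, or doffed headset degrades to a stationary robot rather than an uncontrolled one.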
2.10 Transfer of Learning from VR
According to a definition provided by [190], the term transfer of learning denotes the process and the effective extent to which prior experiences affect learning and performance in a new situation. This process is also called transfer of training [191].
For novices, skills for a novel interactive system, like gaze-controlled telepresence, could first be acquired in a simulated environment and then transferred to a real environment.
Recent advances in VR technologies have created opportunities to support this process [192, 127]. Simulated environments based on VR enable training of skills in novel ways, which holds significant promise for maximizing the efficiency and effectiveness of skill learning in a variety of settings through immersive environments [193].
Simulated environments based on VR have been extensively explored for training in previous studies of teleoperation [178, 64]. There are two main reasons for using simulated environments. The first is safety considerations regarding equipment, environments and people, especially in remote environments: for obvious reasons, collisions during training could lead to critical and costly damage, and training with a real robot may be dangerous due to its high-speed movements [194]. The second is that learning teleoperation in real environments requires human resources (for instance an instructor), equipment and the occupation of space [178]. Previous studies often aimed to validate a VR training simulation by exploring the degree to which skills learned in the virtual environment could be applied to the real world [193]. Positive impacts of simulation environments for teleoperation training were demonstrated by [178, 64]. How to make this transfer process more efficient is one aim of this thesis.
A VR environment needs to be realistic enough for developing new skills that can be applied to real tasks [193]. Basically, it is necessary to establish a correspondence between key elements of the real and the virtual tasks for effective transfer of training [193]. However, it has been argued that simulation fidelity is an important but complex factor for the transfer of learning [195]; for instance, graphical realism has often been shown to be inconsequential in comparison to other key task elements [196, 193].
Regarding training of gaze-controlled telerobots, how to develop simulations that best support transfer of learning is still an open question that will be addressed in this thesis.
3 Gaze-controlled Telepresence
This chapter presents the robotic systems we have applied in the research, as well as the different versions of the gaze-based user interfaces, which were modified in response to the insights the studies provided.
3.1 Robotic System
We applied a gaze-controlled telepresence system, developed by students and software engineers associated with our project, which allows operators to use their gaze to steer a robot, change the field of view, select waypoints, and type. These tasks may be done through gaze only. Fig. 3.2 outlines the system architecture and its components.
Figure 3.1: A target user using gaze to control a commercial telerobot modified with a 360° camera.
Our gaze-controlled telepresence systems [21] were built around a robot platform, with eye tracking in an HMD, and a user interface developed in Unity (version 2018.1.6)¹. The platform builds upon the open-source Robot Operating System (ROS) and its ecosystem, and has been adapted to several types of robots, including an off-the-shelf robot (PadBot), developer-oriented robots (Parallax Arlo), and modified wheelchairs (cf. Fig. 3.2).
A Fove headset and a Ricoh Theta S² were used for the system. Compared to the typical combination of webcam and screen for telepresence, the use of a 360° camera and an HMD offers a wider FOV, allowing for a complete transmission of the surroundings of the telerobot's camera location. The Fove headset has a resolution of 2560 × 1440 pixels, rendering at a maximum of 70 frames per second (fps) with an FOV of 100°. In addition, the headset has built-in eye-tracking components. More details of the hardware can be found in [21, 23].
Two versions of the system were used in different stages of our research, namely the original version and the extended version with a VR simulator. The first version of the
[1] https://unity3d.com [last accessed 22-04-2021]
[2] https://theta360.com/ [last accessed 21-04-2021]
system with a Parallax Arlo was used for the pilot study. Afterwards, the first system was also used in Experiment 1. A Padbot was used in the following studies, as the Parallax Arlo was not running as stably as the Padbot. Moreover, the Parallax Arlo is much heavier than the Padbot, and thus it was not used during field tests for safety reasons.
With the first version of the system, the test subjects in Experiment 1 wore an HMD and used eye gaze or hands to navigate the telerobots. The HMD (FOVE) and a joystick (Microsoft Xbox 360 Controller) were connected with a computer to the Unity platform. The computer was connected with the telerobot via a wireless network. During driving, the live video stream was displayed in the HMD. Eye tracking was handled by the Fove's two built-in infrared eye trackers.
In a remote room, a telerobot carried the 360° camera, a microphone, and two sensors for indoor positioning. Five ultrasound sensors (GamesOnTrack)[3] were mounted on the wall and communicated with a transmitter placed on top of the telerobot. This tracking system allowed positioning in 3D with a precision of 1–2 centimeters.
A user interface was built in Unity, which features two modes: parking and driving. Parking mode allows the operator to use a panning tool in order to get a full 360° view of the current location. Driving mode displays a front camera view in low resolution in order to minimise delay in the video transmission.
[Diagram: a VR HMD with eye trackers sends gaze control commands to the telerobots; a 360° video camera returns a live video stream.]
Figure 3.2: The robotic system for gaze-controlled telepresence.
As Fig. 6.1 shows, the training of gaze control needed to be explored, based on the findings from Experiment 1. Besides the robotic platform used in reality and a controller for the operator, the extended version of the system (cf. Fig. 3.3) consists of a VR-based robot model and a simulator of the environment.
A user wearing the HMD navigates a robot or a virtual robot by gazing in the direction in which the robot should drive. The control panel was the same for controlling both the virtual robot model and the real robot. The shared controller module ensures that the control mechanism is the same for both the real and the virtual robot with regard to mapping eye movements to control commands. The virtual robots were modeled with the same essential features (e.g., velocity, shape and size) as the real robot. The
[3] http://www.gamesontrack.dk/ [last accessed 22-04-2021]
real robot carries a camera, transmitting a live video stream to the user from the remote reality. In the simulator, the synthetic live stream is generated from a pre-rendered 3D model of a room similar to the one the real robot was driving in.
In both experiments, we constructed mazes for our driving tests [23], marked by white room dividers on the floor. In the simulator, the same maze layouts had been included in the model.
We took pictures of the floor maze markers and the walls, and used them as textures in the model. Real sounds of collision and wheel rotation were recorded and used as sound effects in the simulation. A shaking effect from collision with the maze dividers or the wall was also included in the simulation, approximating how this would look in reality.
[Diagram: the operator wears a VR HMD with built-in eye trackers; eye-tracking data feed a shared control panel, which sends control commands via a Unity-ROS bridge (VirtualUnityController) either to the real robot with a Ricoh Theta S camera in the remote maze (remote reality, i.e. the real environment), or to a virtual robot with a virtual camera in a virtual maze (the VR simulator); a live stream or camera view is returned to the operator.]
Figure 3.3: An extended version of the robotic system with a VR simulator.
The extended version was also used in the field study. Figure 3.1 shows a wheelchair user from our target user group navigating a telerobot. He wears a headset equipped with eye tracking and controls the telerobot with his gaze.
3.2 Gaze-based User Interface

We applied different versions of gaze-based UIs. The control panel was basically the same, but with different modifications of the gaze-based UIs. The main difference could be found in the layouts and control mechanisms for steering in the driving mode.
The first version of the gaze-based UI was designed based on previous research on gaze-controlled driving [12]. A pink cursor on the screen indicates the operator's point of regard.
The user observes the streaming video and continuously adjusts the locomotion of the robot by moving the cursor. For instance, when the operator looks upwards in the upper part of the video stream, the robot will go forward. Turning can be done by looking at the left or right part of the video stream. As Fig. 3.4 shows, the x-axis modulates steering (from 100% left to 100% right, with 0% driving straight) and the y-axis modulates speed (from 0 to 100% of maximum velocity, where 0 is stop) [12]. The blue axes and numbers are invisible to the operators. However, a limitation was found in the pilot study we conducted. We observed that the user needed to look at the ceiling in order to obtain maximum speed, which is not intuitive at all for a driving task. Users' feedback in post-session interviews confirmed that changes to the UI were needed.
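To make the mapping concrete, the steering principle described above can be sketched in a few lines of Python. This is an illustrative sketch, not the thesis' Unity implementation; the function name, the coordinate convention, and the 1.2 m/s maximum velocity (borrowed from the later UI version) are assumptions for the example.

```python
def gaze_to_drive(gx, gy, v_max=1.2):
    """Map a gaze point in normalized video coordinates to (speed, steering).

    gx: horizontal gaze position, -1.0 (left edge) to 1.0 (right edge);
        modulates steering from 100% left to 100% right, 0 = straight.
    gy: vertical gaze position, 0.0 (bottom, stop) to 1.0 (top, full speed);
        modulates speed from 0 to 100% of the maximum velocity.
    Returns (linear velocity in m/s, steering in -1..1).
    """
    gx = max(-1.0, min(1.0, gx))   # clamp to the video-stream area
    gy = max(0.0, min(1.0, gy))
    return gy * v_max, gx          # looking higher drives faster
```

The sketch also makes the pilot-study problem visible: maximum speed (`gy = 1.0`) requires looking at the very top of the stream, i.e. towards the ceiling.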
[Diagram: y-axis (speed) labelled from 0 (stop) through 50 to 100; x-axis (steering) from 100 left through 0 to 100 right.]
Figure 3.4: The first version of the gaze-based UI.
Addressing the issues mentioned above, further UI design changes were made. Some of our design changes were partly inspired by prior work on gaze-based UIs, e.g., the comparison of continuous, gradient-based gaze UIs with discrete, constant-based ones suggested by [20]. In addition, contour plots of the distribution of the centre points of all fixations made in left and right bends during car steering tasks provided us with ideas for the design of the layout [197].
The updated, second version continued to use the point of regard as a pink cursor, and the UI was still an invisible control layout working on top of the live video stream. Gaze movements are mapped to the robot's movements via the UI. Consequently, the robot turns in the direction the user is looking. When the driver closes his/her eyes or looks outside the area of the live video stream, the robot stops. Fig. 3.5 shows a screenshot of the layout. The blue arrows were not part of the interface but only included in the figure to explain its features. The gaze point (pink cursor) is visible to the user inside the video stream, indicating the position from where continuous input is sent as commands to the telerobot. When looking in the left upper area, the robot turns left; in the middle area, it drives forward; in the right upper area, it turns right; in the left lower area, it turns left (spin turn); in the right lower area, it turns right (spin turn). The velocity is controlled by the distance between the gaze position and the center position of the live video stream (maximum linear velocity: 1.2 m/s).
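The area-based mapping of the second UI can likewise be sketched as follows. This is an illustrative Python sketch under stated assumptions, not the Unity implementation: the 0.33 area boundaries and the command names are invented for the example; the actual layout is the one shown in Fig. 3.5.

```python
import math

def gaze_to_command(gx, gy, v_max=1.2):
    """Map a gaze point (normalized, video-stream center = origin) to a command.

    Upper-left turns left, middle drives forward, upper-right turns right,
    lower-left/right are spin turns, and velocity grows with the distance
    from the center, capped at v_max (1.2 m/s in the thesis system).
    """
    speed = min(1.0, math.hypot(gx, gy)) * v_max   # distance from center sets velocity
    if gy >= 0:                                    # upper half: driving areas
        if gx < -0.33:
            action = "turn_left"
        elif gx > 0.33:
            action = "turn_right"
        else:
            action = "forward"
    else:                                          # lower half: spin turns in place
        action = "spin_left" if gx < 0 else "spin_right"
    return action, speed
```

Compared with the first version, the stop condition no longer depends on looking at the bottom of the stream: stopping is handled separately (eyes closed or gaze outside the stream), and speed grows radially from the center.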
Figure 3.5: The updated version of the gaze-based UI.
A comparative study of gaze UIs for controlling a wheelchair, conducted by one of our master's students [28], compared three types of gaze-based UIs, namely continuous (cf. Fig. 3.6), overlay (cf. Fig. 3.7), and waypoint (cf. Fig. 3.8). The continuous version applied a steering principle similar to the one in our first version (cf. Fig. 3.4).
Figure 3.6: Gaze-based UI: continuous.
Figure 3.7: Gaze-based UI: overlay.
Figure 3.8: Gaze-based UI: waypoint.
Results indicated that the waypoint method had the best performance and was preferred by the able-bodied users. Recently, a similar waypoint control has been implemented in the updated version of one of the most successful commercial telepresence robots (the Double 3), with semi-autonomous features for setting a waypoint and then moving towards it automatically. In the field study, we also enabled waypoint control in our new version of the system. However, the waypoint method was not preferred by our participants from the target user groups [27], because they found it submissive to be forced to look down in order to set a new waypoint. More details of the field study can be found in Appendix B.7 and will be discussed later in this thesis.
4 Methods, Data Collection and Analysis

This chapter presents the evaluation methods, measurements, data collection, and data analysis in the experiments and field study. Chapters 6 and 7 will later present the insights we got from them. As was pointed out in Section 1, for our studies of gaze-controlled telepresence with an HMD, we modified existing methods and developed a new maze-based evaluation method. The proposed evaluation method involves a range of common metrics from HRI. These measures are briefly presented in this chapter. Specifically, with regard to the SA measure, the process of our SPAM-based pop-up and saccade test is introduced. The data collection and data analysis are presented at the end.
4.1 Evaluation Methods

In both experiments, our maze-based test method was used. The method mainly adapted common metrics in HRI [144], such as operator performance (including task completion time and coverage of the necessary area), ratings of workload, number of collisions, situational awareness, etc. In addition, we proposed some novel measures of post-trial recollection (e.g. drawing a maze sketch) and estimation of task events (e.g. the duration of a trial and the number of collisions). The purpose of these was to measure the participants' ability to recall details from the scene and from their trial performance, expecting that this would reflect their SA to some degree.
Besides these measurements, observation was used in all the studies in our research, as observation is the foundation of empirical research [14]. Observations were supported by video recordings.
In the field study, a Wizard of Oz [198] method was used. The goal was to observe the use and effectiveness of various interfaces and control methods with our target users, rather than to compare a range of complete robotic systems. Therefore, we used the Wizard of Oz technique for some of our scenarios because of its advantages in early explorations, where complex system intelligence can be delivered by a human assistant instead of being fully implemented. Also, without an open API to the Double robots, it would be a demanding task to establish direct gaze control. The Wizard of Oz method was applied in the following way:
When a participant was using the robot, it would transmit live images to a screen with a Tobii eye tracker attached to it. The experimenter was standing behind the user and emulating his gaze movements with a joystick by observing where the user looked at the screen (indicated by a marker superimposed on the image by the Tobii system). The experimenter's joystick movements were then in fact controlling the robot without the user knowing it. The study was conducted according to the principles suggested by [199] for conducting accessibility research. For instance, the study plan and consent form were sent to the caregivers before the study.
4.2 Measures

The following sections present the measures and how the data collection was carried out. Both quantitative and qualitative measures were used in the research.
With regard to qualitative measures, in the two experiments and the field study, the measures included:
1. Video recorded in the remote environment for post-test analysis;
2. Video recorded of the Unity UI environment (including the gaze cursor) for post-test analysis;
3. Voice and text feedback provided by participants in the post-trial or the post-test interviews.
In the pilot study prior to the other studies, only 1 and 2 were recorded. In the field study, videos were not recorded at the care home (e.g. in the canteen, hall, and corridors), for privacy reasons.
With respect to quantitative measures, in the two experiments we mainly used common metrics in HRI recorded in log files [144], supplemented with our novel measures of recollection and estimation. The novel measures were collected during a post-trial questioning session, in which the participants were asked e.g. "Can you estimate how many minutes you have spent on driving the telerobot?", asked to draw a maze sketch, and asked to point at the locations where they had been talking to a person in the room during the maze driving. Figure 4.1 shows an example of a maze sketch drawn by a participant (left), and his mark of the location of the person he had met in the maze (middle). The actual maze layout used in this trial is shown to the right, with the three positions where a person in the room appeared marked by yellow stars. As can be seen from the figure, only a partly correct sketch was made, and the participant only remembered having met a person once.
In both experiments, the following quantitative data were collected:
1. Log data of the telerobot from the GamesOnTrack ultrasound sensors, including a timestamp, the telerobot's position (x, y), and velocity. In Experiment 1, log data (i.e., timestamp and the telerobot's position) from the telerobot's built-in encoder were also collected. However, the data quality was not as good as the data from the ultrasound sensors, in terms of accuracy and stability. Therefore, only the ultrasound sensors were used in the analysis and the following study. In the field study, the ultrasound sensors were not used, as they needed several signal receivers on the ceiling to transfer data, and it was impossible to do this due to the constraints of the environment in the care home.
2. Log data from the UI. The log data were written in JSON format. They included the timestamp of each trial (hour:minute:second), namely the timestamps of the starting point and end point of each trial. In addition, they also included data from the SPAM-based pop-up, recording response times to two types of on-screen pop-up queries (see Appendix A.2 for more details). In Experiment 2, data from the saccade tests were additionally recorded in the log data file. Details on the log data from the pop-up and the saccade test are provided in the section below.
3. A Task Load Index (NASA-TLX) [200] questionnaire was used to collect workload ratings after each trial (with six rating scales: mental demand, physical demand, temporal demand, performance, effort and frustration). Each rating scale had 21 gradations. The NASA-TLX questionnaire consistently exhibits high reliability, user acceptance and low inter-subject variability in numerous studies [201].
4. Self-Assessment Manikin (SAM) [202]. The SAM questionnaire was used for the participants to report their feelings of pleasure, arousal and dominance on a five-graded facial pictorial form.
5. Presence. In Experiment 1, we also measured presence. A Presence Questionnaire [203] revised by the UQO Cyberpsychology Lab was used to rate the feeling of presence on a seven-point scale after each trial. The questions covered six aspects: perceived realism, possibility to act, quality of interface, possibility to examine (the environment), self-evaluation of performance, and sounds. To avoid too many questionnaires for the participants in the field study, the measure of presence was replaced with one simple question, namely "Where are you now?".
As mentioned in the previous chapter, a questionnaire is the most frequently used method of measurement. After completing the first experiment, we confirmed that adding gaze control to telepresence robots did not have a significant impact on presence. Therefore, as the participants used the same gaze-controlled system, we did not measure presence in the second experiment.
6. The participants’ responded to posttrial questions in both experiments about estimation (task duration and number of collisions), recollection of the maze layout (c.f.Fig. 4.1), positions of the person who was talking with them, and the number oftimes they communicated with the person. In Experiment 2 and the field study, theparticipants’ level of confidence was measured before and after each trial with thequestion ”Overall how confident did you feel while driving the robot...”.
Figure 4.1: A participant drawing a maze sketch from memory (left), his final maze sketch (middle), and the actual maze layout used for the trial (right). The yellow stars show the locations where he had met a person in the room.
Situational awareness (SA) was measured in the study, as it is a main focus of the research (see Fig. 1.3). Different types of measurements were used for this purpose. In Experiment 1, we used two different SA measurements. We implemented a pop-up interface based on SPAM (see Fig. 4.2) [160]. For comparison, a traditional ex-post questionnaire (SART) was used as well.
During each trial, the experimenter observed the participant's operation via an operator console display. When the telerobot passed certain areas in the maze, or when a maneuver, e.g. a turn, had been completed, a query pop-up in the control display was prompted by the experimenter. This happened in two steps: First, a preliminary query appeared (cf. Figure 4.3). When the participant replied yes, the query was terminated and the response time was recorded. Then the query (cf. Figure 4.4) appeared. When the participant had given a full verbal response, the experimenter closed the query pop-up and the response time
was logged in the system. The pop-up-based data include both the response time and the accuracy of the participant's actual answer.
[Flow diagram: activated by the experimenter, a pop-up asks "Are you ready to answer a question?"; on "Yes" the timer stops and the response time is recorded, on "No" the timer stops as pre-set; the SA query then pops up, the test person starts to answer orally, the experimenter closes the query, and the response time and answers are recorded.]
Figure 4.2: Process of the SPAM-based pop-up.
Based on task analysis and SA theory [19], the queries included:
1. Perception-related queries (e.g., "What gender was the voice you just heard?");
2. Comprehension-related queries (e.g., "What kind of information did the person tell you?");
3. Projection-related queries (e.g., "Can you estimate when you will be finished with the task?").
The queries used are included in Appendix A.2, where more details of the SPAM-based pop-up can be found as well. Subsequently, after each trial, a SART questionnaire [157] was used for the participants to subjectively rate each dimension of the operator's SA on a seven-point rating scale.
Figure 4.3: SPAM-based pop-up: a preliminary query.
Figure 4.4: SPAM-based pop-up: a perception-related query.
In Experiment 2, we continued using the SPAM-based pop-up for measuring operators' SA. The process of the SPAM-based pop-up was the same as in Experiment 1. The pop-up was also used to collect data for comparison with the saccade test in Experiment 2.
Figure 4.5: Saccade test.
[Flow diagram: the experimenter presses a hotkey to begin the saccade test during the main task (navigating the telerobot with an HMD); a red dot appears five times; a correct pro-saccade made towards the dot is measured as a correct trial, while a wrong pro-saccade or no response is measured as attempted but wrong; the data are recorded and the main task resumes when the saccade test ends.]
Figure 4.6: Process of the saccade test.
A pro-saccade test based on the Saccade Machine developed by Diako Mardanbegi [81] was implemented in Experiment 2.
During the experiment, a hotkey could be pressed by the experimenter to initialize the saccade test. The participants had been instructed to then move their eyes to fixate on the red dot appearing on the screen. Fig. 4.6 presents the process of the saccade test in Experiment 2. While the test was running, all data were recorded on the computer with a dynamic file naming system.
In the saccade test, a number of saccadic eye movement data types were collected, such as position, latency, amplitude, and trial number. Results returned from the data analysis using the Saccade Machine [81] included:
1. Total number of successful trials (with correct or corrected saccades)
2. Total number of trials with corrected saccade
3. Total number of trials with the first saccade detected as correct
4. Total number of failed trials (wrong saccade or bad data quality)
5. Latency of the first saccade after target onset regardless of its direction
6. Latency of the first correct saccade after target onset
7. Latency of the corrected saccade after wrong saccades
8. Latency of the correct or corrected saccades
9. Latency of the first wrong saccade after target onset
10. Amplitude of the correct saccade
11. Amplitude of the corrected saccade
12. Amplitude of the wrong saccade
More details of the saccade test can be found in Appendix A.3.
4.3 Data Analysis

This section presents the data analysis in our experiments. Different methods of data analysis were applied depending on the research focus. As presented in Fig. 1.3, there were two research focuses: exploration of gaze-controlled telepresence, and SA.
4.3.1 Data Analysis for Gaze-controlled Telepresence

For data processing, some types of data were quantified or calculated as illustrated below. The maze sketch from Figure 4.1 could be quantified according to the number of essential features included in the sketch. In this particular example, the maze sketch was rated to have a correctness of 60%, having the starting point and the endpoint, plus half of the maze layout in it.
In both experiments, deviation from the optimal path is one of the key metrics of performance. First we selected the trial from each maze which had the best performance (shortest task completion time). The path from that trial was considered the optimal path. By comparing each path with the corresponding optimal path for the same maze layout, we calculated each path's root-mean-square deviation (RMSD) value using the following equation[1].
RMSD = \sqrt{\dfrac{\sum_{t=1}^{n} (P_t - \hat{P}_t)^2}{n}} \qquad (4.1)
More details about the RMSD in Experiment 1 can be found in [23]. In Experiment 2, we did not use this calculation, as there was too much path overlap.
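Equation 4.1 can be sketched in a few lines of Python. This is an illustrative reimplementation, not the thesis' analysis code; for each driven position, the nearest point on the optimal path is taken as its reference point.

```python
import math

def rmsd(path, optimal_path):
    """Root-mean-square deviation of a driven path from the optimal path.

    path, optimal_path: lists of (x, y) positions. For each position P_t on
    the driven path, the closest position on the optimal path is used as the
    reference point, following Eq. 4.1.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    squared_devs = [min(dist(p, q) for q in optimal_path) ** 2 for p in path]
    return math.sqrt(sum(squared_devs) / len(squared_devs))
```

An identical path yields an RMSD of 0, and larger values indicate larger overall deviation from the best-performing trial's path.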
For the ANOVAs, the data were processed in the following way:
• The Shapiro-Wilk test was used to check the normality of the data.
• Levene's test was used to check the homogeneity of variance.
When the measured dependent variable failed either the normality or the homogeneity of variance assumption, we applied a data transformation to the measured value and checked both assumptions again. If the transformed value did not fail any of the assumptions, it would then be used as the value of interest in the ANOVA. If the transformed data did
[1] P_t: a position on the path; \hat{P}_t: the position on the optimal path with the shortest distance to P_t.
not satisfy the assumption of normality or homogeneity of variance, the Mann–Whitney U test was used instead.
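The assumption checks and the non-parametric fallback described above can be sketched with SciPy. This is an illustrative example only: the variable names, the synthetic data, and the 0.05 threshold are assumptions, not the thesis' actual analysis script.

```python
import numpy as np
from scipy import stats

def assumptions_hold(groups, alpha=0.05):
    """True if every group passes Shapiro-Wilk normality and the groups
    jointly pass Levene's homogeneity-of-variance test at the given alpha."""
    normal = all(stats.shapiro(g).pvalue > alpha for g in groups)
    equal_var = stats.levene(*groups).pvalue > alpha
    return bool(normal and equal_var)

rng = np.random.default_rng(42)
a = rng.normal(10.0, 2.0, 30)   # e.g. completion times, condition A
b = rng.normal(12.0, 2.0, 30)   # e.g. completion times, condition B

if assumptions_hold([a, b]):
    stat, p = stats.f_oneway(a, b)       # parametric route (ANOVA)
else:
    stat, p = stats.mannwhitneyu(a, b)   # non-parametric fallback
```

In the thesis workflow, the parametric route would use the full (two- or three-way) ANOVA model rather than the one-way comparison shown here.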
The data transformation we applied throughout the data analysis was the Box-Cox transformation:
y(\lambda) = \begin{cases} \dfrac{y^{\lambda} - 1}{\lambda}, & \text{if } \lambda \neq 0 \\ \log y, & \text{if } \lambda = 0 \end{cases} \qquad (4.2)
The transformed value was then inserted into the linear model used for the ANOVA analysis. If the results showed no difference in significance compared to the untransformed ANOVA model, the results from the untransformed ANOVA analysis were reported.
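As a sketch, SciPy's `boxcox` estimates λ by maximum likelihood and applies Eq. 4.2; the data values below are invented for illustration (Box-Cox requires strictly positive data):

```python
import numpy as np
from scipy import stats

y = np.array([12.0, 8.5, 30.2, 15.1, 9.8, 22.4])   # e.g. task completion times (s)
y_transformed, lam = stats.boxcox(y)               # lambda fitted by maximum likelihood

# The transform matches Eq. 4.2 for the fitted lambda:
manual = np.log(y) if lam == 0 else (y ** lam - 1) / lam
assert np.allclose(y_transformed, manual)
```

The transformed values (not the raw ones) would then be fed into the ANOVA model, as described above.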
In Experiment 1, two independent variables were compared: trial order and control method. With two-way ANOVAs, we could find the main effects of each independent variable on the dependent variables, as well as the effect of the interaction between the two independent variables.
In Experiment 2, there were three independent variables: status of training (with two levels: before or after training), training type (with two levels: with a real robot, or with a virtual robot in a VR simulator), and maze layout (with two levels: the same as or different from the trials). With three-way ANOVAs, we could find the main effects of each independent variable on the dependent variables, as well as the effects of interactions of two or all three independent variables.
If the results from an ANOVA showed any significant impact (p < 0.05), a Bonferroni correction was conducted as a post-hoc test. All these results could help us answer the research questions of each experiment focused on exploring the gaze-controlled telepresence system.
In addition, descriptive analysis based on the log data was also used. In Experiment 1, through the visualization of each path based on the log data, the telepresence robot's coverage of the necessary area could be observed (cf. Figure 4.8). By visualizing the paths of each control method in the same maze layout, differences between the two control methods could also be observed, such as the overall distance between the path and the obstacles, and the overall direction of the path (cf. Figure 4.7). This also confirmed some findings from the preceding statistical analyses; for instance, when using gaze control, the participants reported a significantly lower feeling of dominance.
Figure 4.7: Visualisation of gaze-controlled telerobots' paths (left) and hand-controlled telerobots' paths (right).
Figure 4.8: A path plot from a participant using gaze control. A collision can be observedin the upper right part.
In Experiment 2, descriptive analysis was used to observe the learning curve from each participant's training process. This observation could provide information on whether the learning process of each participant followed a specific pattern, such as the learning curve [204].
4.3.2 Analysis for Comparison of SA Measures

One of the main purposes of the research was measuring SA (see Fig. 1.3). The Pearson correlation coefficient was used to explore how linear the relationship between two variables was. The Pearson correlation coefficient also represents the effect size (Pearson's r); it takes a value between −1 and 1. In Experiment 1, the implemented SPAM-based pop-up was compared with a post-trial questionnaire, SART. The focus was to see whether participants who performed better in response to the pop-up would report higher scores on the SART questionnaire as well. We compared them with various central metrics and found that the relationship between SART and the other metrics was very weak.
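For reference, Pearson's r between two such measures can be computed directly; the variable names and data below are invented for illustration and are not the thesis' results:

```python
import numpy as np

# Hypothetical per-trial values: SPAM pop-up response times and SART ratings.
spam_rt = np.array([2.1, 3.4, 2.8, 4.0, 3.1, 2.5])   # response times (s)
sart = np.array([5.0, 4.0, 4.5, 3.0, 4.0, 5.5])      # subjective SA ratings

r = np.corrcoef(spam_rt, sart)[0, 1]   # Pearson's r, always within [-1, 1]
```

Here slower pop-up responses go with lower SART ratings, so r is negative; a value near zero, as found for SART against the other metrics, indicates a very weak linear relationship.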
Besides the correlation, we aimed to see whether the saccade test could be used to differentiate performance in robot teleoperation between good and bad robot pilots. Thus, we grouped participants into two groups: the eight best and the eight worst pilots. The grouping was based on each participant's performance, for which task completion time and the average number of collisions were used as metrics. Since the comparison involved two metrics with two types of data, the data of each metric were normalized by applying min-max feature scaling (normalization), using the following equation:
Value_{normalized} = \dfrac{Value - \min(Value)}{\max(Value) - \min(Value)} \qquad (4.3)
The Mann–Whitney U test and one-way ANOVA were used for the analysis comparing the eight best and eight worst performers.
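Equation 4.3 amounts to a linear rescaling of each metric to [0, 1] so that the two metrics can be combined on equal terms. A sketch with invented data (the metric values and the summed score are illustrative assumptions, not the thesis' actual ranking procedure):

```python
import numpy as np

def min_max(values):
    """Min-max feature scaling (Eq. 4.3): rescale values linearly to [0, 1]."""
    v = np.asarray(values, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

completion_time = [120.0, 95.0, 150.0, 110.0]   # seconds, per pilot
collisions = [2.0, 0.0, 5.0, 1.0]               # average collisions, per pilot

# Normalise each metric separately, then combine into one performance score
# (lower is better for both metrics here).
score = min_max(completion_time) + min_max(collisions)
```

Without this rescaling, the metric with the larger numeric range (completion time, in seconds) would dominate the combined score.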
A random effects model was used for our analysis of the data collected from SPAM and SART. Since we had two independent variables, we used the following models (ϵ is the residual):

SA(SART) = \mu + \alpha(method) + \beta(trial) + \gamma(method, trial) + \epsilon \qquad (4.4)

SA(SPAM) = \mu + \alpha(method) + \beta(trial) + \gamma(method, trial) + \epsilon \qquad (4.5)
Based on this analysis, the comparison of different SA measures and potential SA measures could be made. In addition, the validity of each measure could be assessed from the analysis.
5 Insights from the Systematic Literature Review

This chapter outlines the main findings and insights gathered from the systematic literature review of telepresence robots for people with special needs.
Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were applied. We used Web of Science (WoS), the ACM Digital Library, IEEE Xplore, PubMed, and Scopus for searching. A hand examination of reference lists supplemented the results. More details can be found in Appendix A.1.
The articles were further characterized by the problems addressed, objectives, types of special needs considered, features of the devices, features of the solutions, and the evaluation methods applied. Future research directions were proposed based on the review, addressing issues like use cases, user conditions, universal accessibility, safety, privacy and security, independence and autonomy, evaluation methods, and user training programs.
The review evaluates, synthesizes, and presents the studies according to the different types of telepresence robots operated by people with special needs. A summary of common research directions was provided, together with a summary of issues which need to be considered in future research. The following sections of the thesis focus on the relationship between the review and our entire research, supporting us in answering RQ1.
Prior work has explored telepresence robots for different types of special needs, namely disability-related and age-related. Barriers and challenges of using telepresence robots due to specific disabilities were the most common problem statements (e.g. [62, 205, 206, 38]). By analyzing these special needs, especially the disability-related ones, we found motor disability to be the most common user condition addressed. Telepresence robots have obvious potential to assist in overcoming mobility problems. A typical telepresence robot is hand-controlled with a joystick, a touch screen, a keyboard, or a mouse, but many people with motor disabilities have limited manual abilities. Results from the review revealed that different types of control methods were proposed to address this problem, including BCI-based, eye-movement-based, speech-based, and head-movement-based methods. The control methods were mainly for navigation tasks or for changing the field of view. These findings provided us with an overview of prior work addressing motor disability issues and helped us decide where our research focus should be. With this information, the advantages and disadvantages of the different methods could be compared. Among them, BCI-based methods were the most common. However, due to the technology constraints in prior work, no BCI-based method had been explored outside a laboratory setting. As the second most common, eye-tracking-based methods had shown unique advantages regarding flexibility, cost, and ease of use. We found that gaze-controlled telepresence with screen-based eye trackers and a traditional LCD as a display for the operator had already been explored by [10], but a research gap was that gaze-controlled telepresence with an HMD had not yet been explored. This gave us the motivation to examine it in an experiment in order to address RQ2. Analysis of the systematic literature review revealed that many of the prior papers proposed new solutions by introducing novel control methods, e.g. [207, 63, 208, 209], or new sensing methods,
e.g. [206, 205, 210, 211]. Results of the studies showed that novices would often need adequate practice to learn how to use the novel methods [10]. These findings also motivated us to look for new ways of training, by investigating training impacts and by looking for an alternative, safe, and cost-effective method.
Additionally, the systematic review informed the planning of our field study. We found that many prior studies did not involve target users with special needs as test subjects. Studies with both the target user groups and non-target users showed some differences between them, specifically with regard to performance and preference. This confirmed that a field study with the target users in a care home should be an important part of our research. In addition, prior work reported difficulties in recruiting and conducting studies with target users with special needs [180]. These issues were taken into consideration in the planning of our field study. We tried to follow best practice by informing our contact persons in the care home about every detail of the field study. On the basis of their feedback, the first version of a long questionnaire was simplified into one or two brief questions in order to make participation less challenging and less time-consuming.
Overall, we became aware of the importance of universal design based on our systematic review of prior work. If universal design principles had been considered at the beginning of a design process, especially for commercial off-the-shelf robots, the necessity of making adaptations to them afterwards might have been avoided to some degree. Suppose the telepresence robots could easily be connected with other assistive devices via an open API; in that case, gaze trackers or special joysticks might work with them out of the box. This requires that "design for all" thinking, the basic principle of universal design, is fundamental to system development.
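As an illustration of this "design for all" idea, the sketch below shows what such an open API could look like: the robot only ever consumes a device-neutral drive command, and each assistive device (gaze tracker, special joystick, switch) supplies it through its own adapter. All class and method names here are hypothetical illustrations, not an existing robot API.

```python
from abc import ABC, abstractmethod


class DriveCommand:
    """Device-neutral drive command: forward speed and turn rate in [-1, 1]."""
    def __init__(self, linear: float, angular: float):
        self.linear = max(-1.0, min(1.0, linear))
        self.angular = max(-1.0, min(1.0, angular))


class InputAdapter(ABC):
    """Any assistive device implements this to drive any compliant robot."""
    @abstractmethod
    def read(self) -> DriveCommand: ...


class GazeAdapter(InputAdapter):
    """Maps a normalized gaze point (0..1, origin top-left) to a drive command."""
    def __init__(self, gaze_source):
        self.gaze_source = gaze_source  # callable returning (x, y)

    def read(self) -> DriveCommand:
        x, y = self.gaze_source()
        # Looking up drives forward; looking left/right turns.
        return DriveCommand(linear=(0.5 - y) * 2, angular=(x - 0.5) * 2)


class JoystickAdapter(InputAdapter):
    """Maps raw joystick axes (-1..1) to the same drive command."""
    def __init__(self, axis_source):
        self.axis_source = axis_source  # callable returning (x_axis, y_axis)

    def read(self) -> DriveCommand:
        x, y = self.axis_source()
        return DriveCommand(linear=-y, angular=x)


def drive_step(robot, adapter: InputAdapter):
    """The robot sees only DriveCommand, never the concrete device."""
    robot.apply(adapter.read())
```

With such an interface, swapping a joystick for a gaze tracker is a one-line change on the operator's side, which is exactly the out-of-the-box interoperability argued for above.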
6 Insights from the Experiments

This chapter outlines the findings and insights gathered from the two experiments conducted in the Ph.D. project. As Figure 6.1 shows, the first experiment focused on validating the accessibility of gaze control of telepresence robots and on identifying user challenges. The second experiment focused on understanding the impacts of training gaze control of telepresence robots. The details of the systems were presented in Chapter 3. Our proposed evaluation methods, based on common metrics in HRI, were presented in Chapter 4.
Figure 6.1: Overview of the two experiments and research aims. Experiment 1 (Accessibility and Evaluation; N=16) used the original robotic telepresence system; its aims were to examine the possibilities and challenges of gaze control of telepresence robots by comparing it with hand control, and to validate SA techniques within a telepresence context by comparing an ex post questionnaire (SART) with a real-time objective pop-up (SPAM). Experiment 2 (Training; N=32) used the extended version with a VR simulator; its aims were to examine the impacts of training gaze control of telepresence robots, and to validate the saccade test as an alternative SA technique within a robot teleoperation context by comparing it with the real-time objective pop-up (SPAM).
6.1 Accessibility and Evaluation

Based on our review, we identified a research gap: the use of gaze control for navigating telerobots with an HMD had not yet been explored.
Before the first experiment, we conducted a pilot study [212] to obtain a basic understanding of the task complexity, as it is a factor affecting telepresence and performance [213, 95]. Also, when testing telepresence robots, the task complexity should be similar to the real task scenario [214]. Based on observations from the pilot, we improved the gaze-based UI (cf. Section 3).
With regard to the research goal for Experiment 1, the following hypotheses were proposed.
When using an HMD to navigate a telepresence robot,

1. operators show no difference in key metrics (including performance, workload, and SA) between using a hand-held joystick and using gaze control;

2. operators show no difference in the key metrics between the first trial and the second trial with the same control method.
A total of 16 participants took part in the study. A within-subjects design was used. There were two independent variables in the experiment: 1) input method (gaze control or hand control); 2) order of trials with the same control method (the first trial or the second trial). Dependent variables included the participants' performance, workload, SA, post-trial recollection, estimation, self-assessment, and experience of navigating the telerobot. Figure 6.2 shows how participants were divided into groups based on the independent variables.
Figure 6.2: Illustration of the experimental design of Experiment 1 (16 participants; each completed two trials with hand control and two trials with gaze control, with the order of the methods varied across groups).
Details of the experiment were reported in [23] and Appendix A.2.
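The statistical comparison behind Table 6.1 can be illustrated with a small sketch. The thesis reports ANOVA p-values; for a within-subjects factor with two levels this is equivalent to a paired t-test (F = t²). The numbers below are invented for illustration, not the experiment's data.

```python
import math


def paired_t(a, b):
    """Paired t-statistic for two conditions measured on the same subjects.
    Compare the result against a t-distribution with n-1 degrees of freedom."""
    assert len(a) == len(b) and len(a) > 1
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                               # standard error of the mean diff
    return mean / se


# Hypothetical completion times (s) per subject: gaze vs. hand.
gaze = [95.0, 120.0, 80.0, 150.0, 60.0, 110.0]
hand = [70.0, 66.0, 58.0, 90.0, 50.0, 72.0]
t = paired_t(gaze, hand)  # a large positive t indicates gaze took longer
```

Because every subject tried both input methods, the paired form removes between-subject variability, which is the main reason a within-subjects design was chosen here.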
The main observation from this study was that telepresence robots can indeed be controlled by gaze when using an HMD. This suggests that gaze interaction may be a viable way to ensure the accessibility of telepresence for people with motor disabilities, especially those with limited hand function.
The "feeling of being there" [1] is vital in telepresence. Our findings confirmed that bringing this novel control method to telepresence robots had no negative impact on the experience of presence. However, when we looked at other metrics, significant differences were found. Table 6.1 presents an overview of the comparison between hand control and gaze control.
None of our participants had tried gaze control of telepresence robots before, and most of them had not even experienced gaze control; they were novices at this task, whereas hand control is something most people use every day. The challenges the novices faced with gaze control were reflected in the significantly lower performance associated with this input method: significant differences were found in task completion time, collisions, RMSD, SA, and workload. In another gaze-controlled telepresence system, with a 2D monitor and screen-based eye trackers, similar challenges of gaze control were observed [10].

                                           Gaze            Hand            ANOVA
                                           mean (SD)       mean (SD)       p
Performance
  Task completion time                     93.86 (61.51)   65.03 (31.43)   .023*
  RMSD                                     0.40 (0.90)     0.25 (0.11)     .000073**
  Number of collisions                     1.68 (1.73)     0.75 (1.67)     .033*
  Task completion (%)                      97.8            100
Situational Awareness
  RS to pre-query                          3.05 (1.61)     2.54 (0.75)     >.05
  RS (perception)                          7.43 (2.67)     6.00 (2.96)     .049*
  RS (comprehension)                       13.41 (5.28)    13.23 (7.94)    >.05
  RS (projection)                          21.89 (8.26)    22.41 (10.22)   >.05
  Accuracy of the queries (perception)     0.85 (0.25)     0.86 (0.20)     >.05
  Accuracy of the queries (comprehension)  0.47 (0.51)     0.47 (0.51)     >.05
  Accuracy of the queries (projection)     0.36 (0.29)     0.59 (0.23)     .00084**
NASA-TLX
  Mental demand                            10.75 (5.14)    7.97 (4.65)     .028*
  Physical demand                          6.88 (4.46)     5.22 (3.52)     .030*
  Effort                                   9.69 (4.73)     6.47 (4.33)     .00059**
  Frustration                              8.41 (3.99)     5.91 (3.75)     .016*
  Performance                              9.41 (4.83)     7.19 (4.88)     .019*
  Temporal demand                          7.18 (3.38)     6.5 (3.83)      >.05
Post-trial Estimation and Recollection
  Maze sketch (%)                          58.43 (37.60)   79.68 (30.85)   .018*
  Duration estimation (%)                  1.21 (0.99)     2.13 (2.03)     .019*
SAM
  Pleasure                                 0.06 (0.44)     0.06 (0.76)     >.05
  Dominance                                0.43 (1.11)     0.43 (0.80)     .00067**
  Arousal                                  0.25 (0.67)     0.16 (0.63)     >.05
Presence
  Realism                                  9.40 (4.83)     7.09 (4.47)     >.05
  Possibility to examine                   4.35 (0.77)     4.91 (2.21)     .017*
  Quality of interface                     3.72 (0.92)     3.46 (0.92)     >.05
  Possibility to act                       3.52 (1.00)     3.95 (1.12)     >.05
  Sound                                    3.45 (1.36)     3.59 (1.50)     >.05
  Self-evaluation of performance           4.44 (0.91)     4.80 (1.16)     >.05

Table 6.1: Differences between gaze control and hand control (mean (SD); ANOVA p-values).
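As an aside on the metrics: RMSD in Table 6.1 captures how far the driven path strayed from the ideal route. Assuming it denotes the root-mean-square perpendicular deviation of the robot's sampled positions from the ideal straight line between maze waypoints — an assumption for illustration, as the thesis defines the metric in the appended paper — it can be computed as:

```python
import math


def rmsd_from_line(path, p0, p1):
    """Root-mean-square perpendicular distance of sampled robot positions
    from the ideal line through p0 and p1 (all points as (x, y) tuples)."""
    (x0, y0), (x1, y1) = p0, p1
    length = math.hypot(x1 - x0, y1 - y0)

    def dist(p):
        # Perpendicular distance from point p to the infinite line p0 -> p1.
        return abs((x1 - x0) * (y0 - p[1]) - (x0 - p[0]) * (y1 - y0)) / length

    return math.sqrt(sum(dist(p) ** 2 for p in path) / len(path))
```

A perfectly driven segment yields 0; weaving around the line, as novice gaze drivers did, inflates the value.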
In the experiment, each user experienced gaze control twice, but there was no significant difference between the first trial and the second trial. More comprehensive training might have had positive impacts, but with just two trials it was unclear what the impact might be. We therefore saw a need for a more in-depth training study, which was one of the motivations for our next study.
In Experiment 1, we experienced some cases of discomfort. However, it is not clear whether this stems from wearing an HMD for a longer time or from using gaze interaction. A future study should examine whether there is a difference in discomfort between using an HMD and using a monitor.
Prior work indicated that telepresence correlates with task performance and with operators' SA [95]. In our research, we found significant differences between gaze and hand control in teleoperation performance, but no significant difference in telepresence as measured by the presence questionnaire (except for one subscale, possibility to examine, which was rated significantly higher for hand control). This is why we did not continue measuring telepresence with the presence questionnaire in Experiment 2.
Furthermore, Experiment 1 applied a maze-based evaluation method which was sensitive to the differences between hand and gaze control; the maze-based method was therefore also used in Experiment 2. In the systematic review, we found that the common metrics in HRI [144] had not been applied in most of the studies. The findings in Experiment 1 showed that these metrics can provide comprehensive information within the context of telepresence robots.
Regarding RQ 2, Experiment 1 confirmed that gaze interaction is a viable method to control a telerobot. Regarding RQ 3, major challenges of gaze control were identified by comparing it with hand control. Overall, Experiment 1 provided answers to RQ 2 and RQ 3 of the research project and laid the foundation for subsequent studies.
6.2 Training

This section introduces Experiment 2, which focused on training, and presents the main findings and insights from it.
Experiment 2 is a follow-up to Experiment 1. Results from Experiment 1 showed that navigating a gaze-controlled telepresence robot with an HMD is quite a challenge and that control exclusively by gaze has negative impacts on key metrics for novices. Therefore, novices need training in a safe environment where they can acquire the needed skills. The reasons we chose simulators were given in [26]: simulators may be used as a cost-effective and safe solution for acquiring the skills to operate a robot [178]. Training of eye-tracking-based robot steering with a VR-based simulator had not yet been explored, so it was still unclear how to design the simulator and what the learning effects might be. Consequently, one goal of Experiment 2 was to examine the impacts of training gaze control of telepresence robots. Section 3 introduced the extended version of the gaze-controlled telepresence system with a VR simulator, which was used in this study. The VR-based simulator enabled users to train gaze control of wheeled telerobots in a simulated environment (cf. Figure 3.3).
With regard to the research goal, the following hypotheses were proposed. When wearing an HMD to navigate a telepresence robot,

1. operators show no difference in key metrics (including performance, workload, and SA) before and after training;

2. operators show no difference in key metrics between training with a real telerobot and training with a virtual telerobot in VR;

3. operators show no difference in key metrics between training with the same maze layout (same task) and training with a different maze layout (different task).
The experiment used a between-group design. There were three independent variables: 1) training status (before or after training); 2) training type (with a real robot or with a virtual robot in a VR simulator); 3) maze layout (whether the maze layout used for training was the same as or different from the one used in the final test). Dependent variables included the participants' performance, workload, SA, post-trial recollection, estimation, self-assessment, and experience of navigating the telerobot.
A total of 32 participants took part in the study. Figure 6.3 shows how participants were divided into groups based on the independent variables. The tasks consisted of three parts: pre-trial, training session, and final trial. The idea of this design was based on findings from Experiment 1: since we could not find a significant difference between the two trials there, we added a training session between the two trials in order to learn how it would affect the difference between them. In the pre-trial, the participants navigated the telerobot through a maze in a room. Afterwards, as part of training, they navigated the telerobot five times; half of the participants trained in a VR environment, while the other half trained in reality. After training, they navigated the telerobot again in a real room as the final trial. More details of the experiment can be found in Appendix C.
Figure 6.3: Illustration of the experimental design of Experiment 2.
The main findings from this experiment were threefold: 1) the training sessions had significant performance effects; 2) there was no significant difference between training with a real robot and training with a virtual robot in a VR simulator; 3) it made no difference which maze layout was used for training (i.e. same or different). The results revealed that Hypothesis 1 was rejected, while Hypotheses 2 and 3 were accepted. Appendix C provides an overview of these findings. Task completion time is one of the key metrics for teleoperation, and Figure 6.4 shows the interactions between the training effect (Trial 1 and Trial 2), the training environment (real or VR), and the final test trial (same maze layout or different layout). The figure reveals that users improved their performance by reducing their task completion time significantly after training (DIFF: the maze layout for training differs from the one used for the real task; SAME: the maze layout for training is the same as the one used for the real task). Neither training environment nor maze layout had any significant effect on how much a test subject improved their performance. The difference between real and VR training in Trial 1 was expected to be random, as both groups were tested in a real environment and no encounter with the VR model had taken place at that time. More charts can be found in Appendix C.
Figure 6.4: Task completion time (in minutes) for Trial 1 and Trial 2, grouped by maze layout (DIFF vs. SAME) and training type (REAL vs. VR), showing the training effects.
Participants' performance, SA, and feelings of dominance and confidence were significantly improved after training, while their workload was significantly reduced. In Experiment 1, we did not find any significant difference between the first trial and the second trial; in this experiment, there were five training sessions between the two trials, and the results indicated that adequate operator training (five sessions in this case) has positive impacts on gaze control of telerobots. Moreover, there was no significant difference between training with a real robot and training with a virtual robot in VR; VR-based training could therefore be an alternative training environment, offering a safe and inexpensive solution. There was also no significant difference between training with the same maze layout (same task) and with a different maze layout (different task). In the design process, the simulation was made as close to reality as possible, including the environment, the robot, and the control mechanism. This finding suggests that the design of the training task does not need to be completely consistent with the real task, which is also supported by our finding that the actual maze layout was of insignificant importance. The theory of transfer of training (cf. Section 2) can be used to explain this: the skills acquired in the training sessions can be transferred to similar tasks, whether with the same maze layout or with a different one.
Besides training effects, we also analyzed the training process itself. Between the two trials, there were five training sessions. Figure 6.5 shows the task completion time of each training session under each condition. A learning curve can be observed; such a curve may be used as a reference when training novice users [56]. The curve becomes flat after Training 2, which made us wonder whether the pre-trial plus two training sessions would have been sufficient; we brought this question to our field study. It can also be observed that driving in the simulator was slower than in reality, for some unknown reason. Thus, driving speed may be another example of a training-scenario element, like graphical realism (cf. Section 2.10), that seems to be inconsequential to the simulator training outcome.
Figure 6.5: Task completion time of each training session between groups in Experiment 2.
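The shape of such a learning curve is often well described by a power law, T_n = a·n^(−b): completion time falls quickly at first and then flattens. The sketch below fits that model by least squares in log-log space; the session times are invented for illustration, not the data behind Figure 6.5.

```python
import math


def fit_power_law(times):
    """Fit T_n = a * n**(-b) to per-session completion times via
    least squares on log T = log a - b * log n (sessions numbered from 1)."""
    xs = [math.log(n + 1) for n in range(len(times))]
    ys = [math.log(t) for t in times]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - slope * mx)
    return a, -slope  # b > 0 means the operator is speeding up


# Hypothetical session times (min): fast improvement, then a plateau.
times = [6.0, 4.2, 3.5, 3.2, 3.0]
a, b = fit_power_law(times)
```

Once fitted, the curve lets a trainer predict roughly how many further sessions are worthwhile before improvement becomes negligible, which is the practical question raised above.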
The findings contribute to the theory of transfer of training by providing empirical evidence within a context of acquiring skills in a VR simulation and transferring them to gaze control of a real robot with an HMD.
We proposed a method of training with a VR simulator and tested it in the experiment. The findings suggest that it can be a feasible solution for the addressed issues and may provide insights to other researchers. For instance, when conducting a study on a novel control method for telerobots with novices, this kind of simulation could be used. In addition, similar test procedures and task types (with a line-maze layout) can be used in future research. The same control device and the same control mechanism can be transferred to the VR simulator, which provides a high level of simulation fidelity by ensuring identical hardware and control mechanisms.
The findings also had implications for our field study: based on them, training in a VR simulator was used as a tutorial session in the field study for the participants to acquire navigation skills before driving the real robot.
Overall, the findings from the two experiments suggest potential solutions for promoting the use of gaze-controlled telepresence as a viable and feasible assistive technology for our target users.
6.3 SA Measure

This section presents the main findings and insights regarding SA measurement from the two experiments. As outlined in Fig. 1.3, we aimed to explore how SA can be measured when driving a robot with an HMD.
Throughout the two experiments, two comparisons were made in order to find an SA measure suitable for our use case. In Experiment 1, the ex post questionnaire and the SPAM-based pop-up (cf. Fig. 4.3 and Fig. 4.4) were compared; details of the results can be found in Appendix A.2. In Experiment 2, the SPAM-based pop-up and the saccade test (cf. Fig. 4.5) were compared; details of the results can be found in Appendix A.3.
In both experiments, data from the above-mentioned measures were compared with other metrics, and the correlations between them were used to examine the validity and reliability of each technique. In Experiment 1, it was found that the SPAM-based pop-up provided more reliable data than the post-trial subjective measure (SART). Correlations of SA with other metrics further confirmed previous findings on teleoperation, namely a positive correlation between SA and human performance and a negative correlation between SA and workload. However, we also found that the SPAM-based pop-up has a limitation: responding to a pop-up can interrupt the main task. We therefore looked for an alternative method with less interruption, and a prosaccade test was implemented. In Experiment 2, we found that saccade test data had a strong significant correlation with data from the SPAM-based pop-up. In particular, the latency to a correct saccade (especially the first one) had a strong significant correlation (r = 0.75) with the preliminary question (i.e. "Are you ready?") of the SPAM-based pop-up. This suggests that the saccade test could be an alternative method for assessing an operator's SA. Correlations of saccade test data with other metrics further confirmed the previous findings on teleoperation, namely a positive correlation between SA and human performance and a negative correlation between SA and workload. More detailed results can be found in Appendices A.2 and A.3.
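To make the prosaccade measure concrete, the sketch below extracts a saccade latency from a horizontal gaze trace: the time from target onset until gaze first moves toward the target faster than a velocity threshold. The sampling rate and threshold are illustrative assumptions, not the values used in the thesis.

```python
def saccade_latency_ms(gaze_x, onset_idx, target_x,
                       sample_rate_hz=120, velocity_threshold=0.02):
    """Latency (ms) from target onset to the first sample where gaze moves
    toward target_x faster than the threshold, or None if no saccade occurs.
    gaze_x: horizontal gaze positions in normalized screen coordinates."""
    direction = 1 if target_x > gaze_x[onset_idx] else -1
    for i in range(onset_idx + 1, len(gaze_x)):
        velocity = gaze_x[i] - gaze_x[i - 1]  # per-sample displacement
        if velocity * direction > velocity_threshold:
            return (i - onset_idx) * 1000.0 / sample_rate_hz
    return None
```

Because this runs on the gaze stream the system already records for steering, the measure comes essentially for free, which is the ease-of-collection advantage noted above.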
An overview of the different methods, based on our findings from the two experiments, can be found in Table 6.2. Experiment 1 was the first time that SPAM was applied in a context where a telepresence robot was navigated with an HMD. This adaptation has unique advantages compared to a post-trial questionnaire, as it does not rely on memory; it is objective and done in real time. It could be a good candidate measure of SA in teleoperation with an HMD.
Our exploration of the saccade test (prosaccade test) in Experiment 2 suggested that it can be a new tool for measuring SA within this context. Like SPAM, it has the advantages of being a real-time, objective measure, and the strong correlation between the two methods suggests that the saccade test could be a potential alternative to SPAM. In addition, compared to the SPAM-based pop-up, its main advantage is less interruption of the main task, as it only requires the operator to briefly move the eyes. Moreover, it is much easier for an experimenter to collect data than with the SPAM-based pop-up, as the saccadic eye-tracking data can be recorded automatically with state-of-the-art technology [81]. However, a disadvantage of the saccade test is that it provides only a single kind of information. According to SA theory, SA consists of three hierarchical levels: perception, comprehension, and projection [159]. With SPAM, measures of each level can be embedded into different queries, and the two types of response time together with the accuracy of the answers provide multidimensional information. A saccade test relies on saccadic eye movements, especially their latency, and, as illustrated in our experiment, it cannot provide information on the three detailed levels as SPAM can. Therefore, the saccade test can be a suitable candidate SA measure in teleoperation (e.g. with a gaze tracker) when no details about the three SA levels are needed and when easy, continuous data collection is requested.

SART
  Pros: ease of use for experimenter and operator; no interruption of the main task.
  Cons: subjective; relies on memory; BIP problem.
  Suggestion: not reliable with a small sample.

SPAM-based pop-up
  Pros: objective; real-time; multiple information on the three levels of SA; reliable data.
  Cons: complexity of use for the experimenter; interruption of the main task by the secondary task.
  Suggestion: suitable for teleoperation where multiple information on SA is needed and the main task is not sensitive to interruption.

Saccade test
  Pros: ease of use for experimenter and operator; less interruption of the main task by the secondary task; latency as indicator; real-time; reliable data.
  Cons: single kind of information; no details regarding the three SA levels.
  Suggestion: as a new tool, suitable for teleoperation where a quick SA measure is needed, no details regarding the three SA levels are required, and the main task is sensitive to interruption.

Table 6.2: An overview of the SA measures compared, based on our research practice.
Our findings on SA measurement also have implications for the design of telepresence robots for our target user groups. Based on the findings from Experiment 2, the possibility of applying the saccade test to improve safety mechanisms in teleoperation was discussed; more details can be found in Appendix A.3. The typical dead man's switch is binary. The saccade test could be used as an enhanced version of the dead man's button in robot teleoperation with a gaze tracker: depending on the operator's reaction to the saccade test, three real-time status classifications could be provided, namely sleeping, awake with low awareness, or awake with high awareness. Depending on this classification, the robot may respond by, for example, stopping, warning, or driving normally.
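A minimal sketch of such an enhanced dead man's mechanism is given below; the latency threshold and action names are illustrative assumptions, not values from our experiments.

```python
def classify_awareness(latency_ms):
    """Classify the operator's state from the reaction to an embedded
    saccade test. latency_ms is None when no saccadic response occurred."""
    if latency_ms is None:
        return "sleeping"        # no response at all
    if latency_ms > 500:         # illustrative threshold, not a thesis value
        return "low_awareness"   # slow response
    return "high_awareness"      # prompt response


# Map each state to a robot-side safety action.
ROBOT_ACTION = {
    "sleeping": "stop",
    "low_awareness": "warn",
    "high_awareness": "drive",
}


def safety_step(latency_ms):
    """One tick of the enhanced dead man's switch."""
    return ROBOT_ACTION[classify_awareness(latency_ms)]
```

Unlike a binary switch, the intermediate "warn" state lets the robot slow down and alert the operator before forcing a full stop.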
7 Insights from the Field Study

This chapter introduces the field study and its main findings.
The field study was conducted at a care home with around 90 people who use wheelchairs for daily mobility. Participants included five people with different levels of motor impairment. Some of them used gaze interaction for communication, and some used speech commands for smart-home control. All five participants have impaired manual ability, including limited gripping, fingering, or holding ability, due to cerebellar disorders or cerebral palsy. None of them have cognitive impairments or literacy difficulties, but one of them has impaired speech functions. Three of them had experienced VR using an HMD, and one had experienced a telerobot.
Participants took part in the study one by one. After greetings and a brief introduction, we collected demographic information, including experience with gaze control, VR, telerobots, and wheelchair control. This information was provided by a caregiver for the one individual who is nonverbal. In particular, participants were informed that if they felt uncomfortable at any time, they should stop immediately. We then showed a demo video from one of our earlier telepresence robot projects [21], in which a person controls a telerobot by gaze while lying in bed. A Double robot (Double 2) and our gaze-controlled robot were standing next to the table we were sitting at.
Initially, we conducted an interview focusing on their expectations and visions of possible uses of telerobots in their daily activities. Then followed the two experience-prototyping sessions described below. We presented a range of options to them, including types of displays (HMD versus screen), independence levels (independent versus assisted by others), and control methods (gaze, speech, or hand).
In the first experience-prototyping session, we used our gaze-controlled robotic telepresence system (see Figure 3.3) with an HMD (FOVE) connected to a computer running Unity. We first gave the participants the possibility to train navigation of a virtual robot in VR over six trials, similar to the procedure used in Experiment 2 [26].
Then they got to try a real gaze-controlled telerobot. In the field study, the setup was almost the same as the version used in Experiment 2 (cf. Figure 3.3). In addition, for comparison of UIs, we implemented waypoint control in the system. We introduced two ways of driving the telerobots using two types of gaze-based UIs, namely the updated version of the "invisible control" (see Fig. 3.5) [23] and the waypoint control (see Fig. 3.8). Details of the two gaze-based UIs can be found in Appendix B.7 [27].
Participants were asked to use their gaze to navigate the telerobot, wearing a FOVE HMD with built-in eye trackers. They were instructed to drive around a large table placed in the middle of the room and say hello to a person they would meet while driving around it. They did this twice: once using the "invisible control" and once using the waypoint method (see Fig. 3.8). Afterwards, they answered questions about how confident they felt using gaze control and about their preference for one of the two gaze control methods.
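To illustrate the difference between the two UIs: continuous gaze control maps the momentary gaze point directly to velocities, while waypoint control turns a gaze-selected floor point into a target the robot steers toward on its own. The sketch below shows one plausible control step for the waypoint side; the gains and thresholds are invented, and this is not the implementation described in Appendix B.7.

```python
import math


def waypoint_steering(robot_pose, waypoint,
                      max_speed=0.5, gain=1.5, arrive_radius=0.1):
    """One control step toward a gaze-selected waypoint.
    robot_pose: (x, y, heading_rad); waypoint: (x, y) in the same frame.
    Returns (linear, angular) velocities; (0, 0) once the waypoint is reached."""
    x, y, heading = robot_pose
    wx, wy = waypoint
    dist = math.hypot(wx - x, wy - y)
    if dist < arrive_radius:
        return 0.0, 0.0
    bearing = math.atan2(wy - y, wx - x)
    # Heading error wrapped to (-pi, pi].
    error = math.atan2(math.sin(bearing - heading), math.cos(bearing - heading))
    # Turn toward the waypoint; slow down while the heading error is large.
    angular = gain * error
    linear = max_speed * max(0.0, math.cos(error))
    return linear, angular
```

The practical consequence for the operator is that with waypoint control the eyes are free between selections, whereas continuous control ties gaze to steering at every moment.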
In the second experience-prototyping session, we presented a Double robot running the Double web application in a Chrome browser on a laptop connected to the robot via a wireless network.
Two levels of assistance were experienced by the participants. First, they could ask a helper to navigate the robot from the corner of the room to their position. Then we gave the participants an open instruction; this was done to give them the possibility to imagine the level of intelligence they would expect for this type of interaction.
Then they were asked to use gaze control. An eye tracker (Tobii Eye Tracker 4C) was used for tracking the user's gaze point, which was shown on top of the live video stream from the Double robot. Steering of the Double was performed in a Wizard-of-Oz setup by an experimenter standing behind the participant, manually following the user's gaze point with a joystick (Xbox 360 controller). The Double robot was driven from the experiment room through a corridor to the canteen, passing the reception of the care home. Afterwards, the four participants who could speak also tried driving the telerobot around using simple speech commands.
After the sessions, we talked about their experiences and asked questions about potential uses of the telerobots and their preferences. More details about the field study can be found in Appendix B.7.
Insights based on observations from the field study are presented in the following sections.
7.1 Presence and Experience

As mentioned in Section 2, presence is a key metric of telepresence; it is important to know whether the participants achieved the feeling of being there through the telerobot. As explained in Section 4, the questionnaire used in the experiments was not used in the field study due to its constraints. Instead, we investigated the participants' feeling of presence by frequently asking them the question "Where are you now?" during the tasks. The participants' answers always referred to their remote position (e.g. in the corridor, reception, or canteen), not their actual physical location (i.e. the experiment room). This indicates a strong sense of telepresence.
We had a very strong impression that the participants, their fellow residents, and the staff in the care home were excited when communicating via the telerobots. A staff member even hugged the telerobot when she saw who was driving it. The participant who could not speak laughed and used hand gestures when communicating with friends she met on her way. Overall, the participants' assessments of their experiences with the telerobots were very positive, with typical statements like "super fun, easy to use" and even "proud of myself". These observations, to some degree, confirmed our answer to RQ 1: it is important to bring telepresence robots to our target user groups.
7.2 Envisioning Telepresence and its Possibilities

At the beginning of the field study, it was difficult for the participants to envision potential uses of telepresence. Without the telerobot placed in front of them, it would most likely have been impossible for them to form a vision, or even a rough understanding, of the possibilities that "telepresence" and telerobots could offer. However, after experiencing the telerobots for themselves, the participants got a much clearer vision of future use, mostly focusing on social interaction and possible inclusion in society. For example, one participant envisioned using the telerobots to go shopping and to talk with a caregiver or a shop assistant via the robot; when using the telerobot for shopping, this participant would like to complete tasks like looking at the price labels independently. Another participant envisioned that he could get a job in a warehouse, driving robot trucks remotely or cleaning floors with a floor-cleaning machine. One participant expressed a wish for the telerobot to have an arm that could do things.
According to the target users' descriptions, independent daily activities and working are their main ambitions. These kinds of use cases have been explored in previous work: for instance, the use of telepresence robots for shopping [129]. Use cases for social interaction have been studied by [128, 5, 129], telework was studied by [116], and telerobots for outdoor activities by [7]. However, none of these use cases have yet been explored with our target users.
7.3 Beyond Wheeled Telerobots

The telepresence achieved by using current telerobots is at a relatively early stage, which limits deployment in more advanced use cases.
In the future, new types of robots can be expected for telepresence. For instance, besides wheeled telerobots, telepresence robots could be humanoid or drone-like (i.e., flying), have arms, or feature other kinds of manipulators. Examples of humanoid robots for persons with disabilities can be found in [45]. Robots may also have more alien-like appearances (e.g., OriHime, cf. Figure 1.2 [40]); the OriHime table robot can raise its arms to make a greeting gesture. Regarding humanoid telepresence robots, there are open questions for future research. If the robots are human-like and our target users can control different body parts of the robots, the robot may become a full-body avatar for the users. For example, when users have impairments of their hands and legs but could control the avatar for walking, what implications would this have for them? It is also an open question what impact this might have on people in the remote environment who engage with the telerobot.
Today's wheeled telerobots are limited to movement in a 2D plane. Airborne drones may become merged with telepresence robots, for instance allowing them to fly over short distances. This dual function would solve the problems that wheeled robots face when going upstairs or crossing a doorstep. A gaze-controlled drone has been explored [59], and recently an exploratory study was conducted to explore drones for telepresence [215].
Robotic arms and fingers have the potential to become integrated and essential parts of future telepresence robots, in combination with both wheeled and humanoid telerobots [110].
The above-mentioned features beyond traditional wheeled telerobots may provide more possibilities for our target users. They are not only an extension of existing telepresence robots; they also hold promise beyond the existing experience of telepresence.
7.4 Independence and Assistance

Autonomous telepresence robots [216] might seem like a good solution for our target users. Prior work also argued that a higher degree of autonomy is required in telepresence robots [217]. However, fully autonomous systems may increase workload if users lack trust in the automated system, especially when the underlying mechanism of automation is not clear to them [218]. According to the interviews, no participant preferred to have the telerobot driven for them, either by other people or by a fully autonomous "intelligent" system. This is in line with previous observations [182] that our target users prefer to retain control authority.
In the field study, it was observed that the preferences of target users were often based on the independence provided. When participants were asked about their feelings of independence using the categories suggested by [219] (in our terms: being able to control robots on your own, being able to maintain personal mobility in a remote place, and being confident doing so), all of the participants stated that they had the same feeling of independence when using the different control methods (gaze, speech, and hand). No participant chose the fully autonomous solution.
In this field study, it was further confirmed that safety could be problematic. This is a particular concern when driving among other wheelchair users, because they could not avoid a collision with a telerobot by a quick manoeuvre. Collisions between telerobots and other residents might cause serious accidents.
Therefore, semi-autonomous robots seem to be a more viable solution, using intelligent systems to assist in problematic situations only, and sensors to warn when obstacles are detected. However, this is a very complex issue that requires more research addressing differences in user needs, quality and speed of data communication, differences in robot design, and differences in control principles, to name some of the factors that may impact performance and user experience.
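Such an assistance layer does not need to take control authority away from the operator. As a minimal sketch of the idea (the function, thresholds, and sensor reading are hypothetical illustrations, not part of our implementation), the system could simply scale down the operator's commanded speed as an obstacle gets closer and only stop the robot at very close range:

```python
def assist(forward_speed, obstacle_distance_m, warn_at=1.0, stop_at=0.4):
    """Return (speed, warning) given the operator's requested forward
    speed and the nearest obstacle distance in metres.

    The operator keeps control authority; the system only scales the
    speed down near obstacles and issues an emergency stop at close range.
    """
    if obstacle_distance_m <= stop_at:
        return 0.0, "stop"            # emergency stop
    if obstacle_distance_m <= warn_at:
        # linearly reduce speed between warn_at and stop_at
        scale = (obstacle_distance_m - stop_at) / (warn_at - stop_at)
        return forward_speed * scale, "warn"
    return forward_speed, None        # no intervention
```

A design choice here is that the operator's command is attenuated rather than overridden, matching the target users' stated preference for retaining control authority.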
7.5 Target Users in Realistic Scenarios

The systematic literature review (see Appendix A.1) reveals that studies on telepresence robots for people with special needs were usually conducted with able-bodied subjects in a laboratory setting.
Prior work showed that good performance by able-bodied participants does not necessarily mean that similarly good performance can be expected from people with disabilities [220]. The field study and both experiments revealed that, in terms of gaze-controlled telepresence, both groups had similar performance. However, differences between the two groups were also found, for example with regard to the target users' preferences for different hardware and control methods. Prior work with able-bodied individuals showed that most users liked fully autonomous features [216]. However, interviews from the field study revealed that some participants did not like that. The waypoint method had superior performance, and it was also the most preferred by the able-bodied users in the experiment by [28]. However, the field study revealed that our wheelchair users felt uncomfortable and excluded when they had to look down at the floor to steer a vehicle.
The field study with target users also provided us with a deeper understanding of people with disabilities and the need for future research. For example, in the interviews, some participants expressed their willingness to help others. This further confirmed previous findings that people with disabilities should not simply be regarded as passive receivers of care. Providing them with possibilities to become helpers might support their social interaction and integration [221].
The real scenario in a field study provided more information than the laboratory setting. It is important to test telepresence robots in the field [214]. Even though our laboratory setting was designed to be as close to the real scenario as possible, we observed some important differences. For instance, when the telerobot went through a lobby in the care home, there were a lot of wheelchair users gathering, posing potential dangers and safety problems. This never happened during the experiment. Other issues, like battery duration and recharge options, wireless network coverage, and privacy of remote users, were always ignored in our laboratory setting. Only in a realistic setting can these kinds of problems be identified, and they would then need to be solved for future deployment of telepresence robots.
7.6 Implications for Future Design

Based on the main observations and insights mentioned above, some issues for future design can be identified.
In the field study, we further confirmed that it is important to involve target user groups throughout the entire design process. This is evident in the difference between the target users' preferences and the able-bodied users' preferences. For instance, the target users' needs for "telepresence" have gone beyond what existing telepresence robots can provide, such as telework with a remote robotic arm. This information from target user groups may inform future design.
Before deployment of telepresence robots for special user groups, unexpected issues need to be identified by testing with target user groups in realistic scenarios. As mentioned above, one unexpected issue was found in the situation where the telerobot was together with many other people in wheelchairs in the remote environment. In addition to the care home setting, using telepresence robots in a home setting for social interaction may lead to more issues in navigation, as homes have many physical constraints, such as doorsteps and door frames [118].
8 Discussion

The discussion is structured in three parts. The first part discusses telepresence for people with disabilities and inclusion via telerobots. The second concerns limitations of the current technologies for telepresence. The last part concerns limitations of the studies.
In this research, gaze-controlled telepresence has been designed and implemented for our target user groups, of which the persona (cf. Chapter 1) is a representative. Within this research, theoretical, design, and empirical contributions were made throughout the entire research process. The research questions were answered through the main research activities: the literature review, the experiments, and the field study. Based on the literature review, we know the importance of telepresence robots for people with disabilities. Based on these findings, we aimed to design and implement gaze-controlled telepresence to enable accessible control of telerobots for the target user groups. In the research practice we explored the accessibility and its challenges. Addressing the challenges, we explored a VR-simulator-based solution for training and examined its effectiveness. In this process, our evaluation utilized existing methods as well as our adapted novel methods, mainly for measuring SA.
Gaze-controlled telepresence is technology-driven; it consists of eye-tracking, telerobotics, and other devices for telepresence. The research within the main focus areas of accessibility, training, and evaluation was carried out around these technologies.
The systematic literature review gave an overview of the telerobot technologies that have been created with the motivation to provide more possibilities for people with disabilities. The main problem addressed is mobility, which is also common among elderly people in general and homebound children, for instance those with prolonged diseases. Telepresence provides them with possibilities to take part in school teaching and to be among their friends. Our gaze-controlled telepresence is particularly meant for those who have motor disabilities, especially limited manual ability. Their main problem is the mismatch between current telepresence robots, especially the commercial ones, and their abilities. Our gaze-controlled telepresence solves this mismatch by providing a control method that we have shown to work well with several types of telerobots. In addition, we have offered a promising way to train for this control method through a VR-based simulator. With this combined solution for the mismatch, their impairment should not become a barrier. Through technical solutions, "an equal basis with others" [31] could be ensured, thereby supporting their basic rights declared in the CRPD.
Personal mobility (Article 20 of the CRPD) is a basic right. The basic rights to education (Article 24 of the CRPD) and to work and employment (Article 27 of the CRPD) could also be promoted by using gaze-controlled telepresence robots. For instance, homebound children from our target user groups might be able to attend school via a gaze-controlled telepresence robot, addressing the serious issue that some children with disabilities tend to have limited access to education [42]. The promotion of their rights plays an important role in their social inclusion, providing them with full and effective participation in society on an equal basis with others.
Based on a re-examination of the research throughout the entire research process, some limitations have been found. These could have implications for future work regarding research or design of similar systems.
One main concern is the hardware for gaze-controlled telepresence. The reasons we chose an HMD are twofold: for the operators, and for our research purposes. However, we need to rethink the advantages and disadvantages of HMDs to evaluate whether an HMD is the most suitable solution for our use case. Through our studies, especially the field study, we found that using HMDs has some serious disadvantages. Among our target user groups, participants did not like to use the HMDs, although they found them interesting to start with. They soon realized that they could not use them independently, but needed assistance to put them on, take them off, and adjust their position. Moreover, some users felt uncomfortable after using them. Negative health and safety effects from participating in VR-based training via HMDs might occur [222]. For instance, in Experiment 2, one of the participants had to stop due to severe cybersickness. Many people who did not have severe symptoms still had red marks on their faces even after short usage (10 minutes). Wearing an HMD for a long time might have potential impacts on the neck and head. This is a crucial issue to be considered for our target users and for users of HMDs at large. Potential solutions have been considered for our target user groups, such as a pillow-like HMD for users in bed [223]. Regarding gaze control, our field study suggested that waypoint control is problematic. This method features semi-autonomous driving. Although it was found to be the best one for the able-bodied participants in our comparative study [28], it was not preferred by the target users, as users needed to look upwards to set waypoints due to the limited FOV.
HMDs do not allow users to have face-to-face communication with other people around them, as the face is covered by the HMD. When wearing an HMD, one participant in the field study could not communicate with her caregiver who was sitting next to her. Users might become socially isolated by being forced to wear an HMD [21], especially when the telerobot is used in a remote place without people there. Moreover, persons in the remote place cannot have face-to-face communication with the operator, whose face is covered by the HMD. Thus, wearing an HMD might become a barrier to social interaction. Addressing these issues, an avatar face of the user displayed on the remote screen could be an option. Animation of the face, where the avatar's mouth moves synchronously with speech, has been developed by [224, 225]. In addition, the operator's eye movements and speech could be recorded for display with the avatar. These potential solutions and their implications need to be explored in future research. Also, technological improvements of HMDs are most welcome. They need to be more lightweight, and they need to be socially transparent in the sense that other people can see the face of the operator wearing them, just as normal glasses allow for.
Overall, our solution based on a combination of an HMD and built-in eye trackers has both advantages and drawbacks. A few potential alternatives can be found in Table 8.1. Cave automatic virtual environments (CAVEs) can be used for an immersive environment where users are completely or almost completely integrated into a virtual environment [226, 227]. The CAVE could be considered as a candidate for gaze-controlled telepresence, together with see-through eye-tracking glasses, such as Tobii Pro Glasses. This would require costly display equipment and a dedicated space to guarantee the level of immersion. Addressing the cost issue, a low-cost setup that could be considered is a computer monitor with eye trackers, such as screen-based Tobii Pro eye trackers. This combination for gaze-controlled telepresence has been explored prior to our research [10].
Solution: HMD + built-in eye trackers
  Pros: Immersive; flexible
  Cons: Target users cannot use it independently; problem of isolation; limited FOV; uncomfortable; no face-to-face communication
  Immersion: Very high. Cost: Medium. Flexibility: Very high.

Solution: 2D display + screen-based eye trackers
  Pros: No isolation; low cost; face-to-face communication
  Cons: Not flexible; not immersive
  Immersion: Low. Cost: Low. Flexibility: Very low.

Solution: CAVE + see-through eye-tracking glasses
  Pros: Immersive; face-to-face communication
  Cons: High cost; not flexible
  Immersion: High. Cost: Very high. Flexibility: Medium.

Table 8.1: A comparison of possible solutions for gaze-controlled telepresence with target users as operators
Regarding the control method, our gaze-based UI cannot completely overcome the Midas touch problem [65]. For our target users, BCIs have been unable to compete with simpler assistive technologies such as eye trackers for typing tasks [39], but they have their advantage in deciphering patterns of brain activity (e.g., intention). Gaze control combined with BCI may be a solution when the Midas touch problem is a key issue and high accuracy is needed. For instance, gaze control with motor-imagery selection [71] has been proposed. However, this method is slow in capturing users' intentions. In our systematic review, we found that earlier studies [208, 228] used wet electrodes in their explorations, but these devices need to be set up in a laboratory. Therefore, we concluded that current BCI-based technology might have some advantages, but could not be used in real-life situations. State-of-the-art devices, for instance NextMind¹, might open doors for BCI to move outside the lab. It is a real-time brain-sensing wearable device, currently only available to the developer community, that has dry electrodes and a remote Bluetooth connection to a central computer unit.
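The common mitigation for the Midas touch problem in gaze-based UIs is dwell-time activation, where a command only fires after the gaze has rested on a target for a set duration. The sketch below illustrates the principle only; the class name, timing value, and structure are illustrative, not our exact implementation:

```python
class DwellButton:
    """Trigger only after gaze dwells on the button, so that merely
    looking around does not issue commands (the Midas touch problem)."""

    def __init__(self, dwell_s=0.8):
        self.dwell_s = dwell_s
        self._enter_time = None

    def update(self, gazed_at, now_s):
        """Feed one gaze sample; return True when a selection fires."""
        if not gazed_at:
            self._enter_time = None      # gaze left: reset dwell timer
            return False
        if self._enter_time is None:
            self._enter_time = now_s     # gaze entered the button
        if now_s - self._enter_time >= self.dwell_s:
            self._enter_time = None      # fire once, then re-arm
            return True
        return False
```

The dwell threshold trades speed against false activations: a longer dwell makes accidental commands rarer but slows every intended selection.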
For transferring the live stream from the 360-degree camera, we used a limited FOV in the driving mode to guarantee the best possible video quality (cf. the control panel in Fig. 3.5). Recent adaptive solutions based on eye-tracking data could be considered in future research addressing this issue. For example, foveated rendering for gaze-tracked VR [229], which renders images at lower resolution outside the eye-fixation area, could provide significant speedups for wide-FOV displays like HMDs. This solution could also be used for the VR-simulator part of our gaze-controlled telepresence. Other solutions could be further explored. For instance, areas of users' most frequent interest (AOIs) could be detected; within such an area, a high-resolution video with more details could be transmitted via the video stream.
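To illustrate the gaze-contingent streaming idea (the window size, frame dimensions, and helper function are hypothetical, not part of our system), the sender could crop a high-resolution window around the current gaze point and transmit the rest of the 360-degree frame at lower quality:

```python
def gaze_crop(gaze_x, gaze_y, frame_w, frame_h, win_w=640, win_h=360):
    """Return the (left, top, right, bottom) pixel box of a
    high-resolution window centred on the gaze point, clamped to the
    frame; pixels outside the box can be encoded at lower quality."""
    left = min(max(gaze_x - win_w // 2, 0), frame_w - win_w)
    top = min(max(gaze_y - win_h // 2, 0), frame_h - win_h)
    return left, top, left + win_w, top + win_h
```

Clamping keeps the window inside the frame when the gaze approaches an edge, so the encoder always receives a full-size high-quality region.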
The motivation for bringing eye-gaze control to telepresence robots has been explained in previous chapters. Our findings suggest that it is viable and has the potential to be a promising method. However, our findings also suggest that it is not the most suitable method for all our target users. Some participants in our field study actually used their hands to control the joystick on their wheelchairs, even though they have motor impairments. They would not need eye-gaze control. In our first prototype [21], we explored head control. For some target users, head control could be a viable and intuitive way to control the FOV.

¹ https://www.nextmind.com/ [last accessed 17-05-2021]
When we look back at the goals of technology-driven possibilities and inclusion, a few issues and barriers regarding gaze-controlled telepresence should be addressed. It is an open question whether a telerobot is suitable for our target user groups to work from home. It was suggested by our target users, but several issues should be addressed. First of all, there is a serious risk of becoming socially isolated. This is a common challenge when working from home, as many have done through the COVID lockdown [230]. For people with disabilities using telerobots for working from home over a long time, similar potential problems need to be considered. Also, there have been several cases of so-called robot bullying [231], where people in urban areas or at workplaces treat robots aggressively. This might be a most unpleasant experience for the pilot, and special care should be taken to guard them and to support them in case of a bullying episode.
In the future, research should focus on the Midas touch problem [65], which has not been resolved in this research. In a broader sense, new UI principles are needed to support gaze interaction. It is still a challenge for some people to learn how to use gaze, and in order not to exclude them, more simple and intuitive designs should be developed.
Regarding the preference for gaze-based UIs in the field study, the fact that differences exist between trained and untrained participants was not taken into consideration. This factor should be considered in future research. For instance, prior work showed that position-point navigation worked better for untrained users, while waypoint navigation worked better for trained pilots [232].
A main limitation of our literature review is that the keywords were limited to a focus on telepresence. Some works have explored potential usages of telepresence but may not use the specific term telepresence and thus have not been included, for instance the work by [110], which addressed what they termed "monitoring and controlling a remote robotic arm".
In addition, presence is a key issue with regard to telepresence. Thus, we adopted a widely used questionnaire for the measurement of presence in Experiment 1. It was found that changing the control method and trial order had no significant effects on presence. We assumed that presence was affected by the display and other equipment, and not impacted by other factors, such as the control method and the operator. In the second experiment, we did not include presence as a dependent variable. However, research in this field has revealed that telepresence correlates with performance in teleoperation, and with the operator as well. These factors should have been explored in the research, for instance by measuring presence after sufficient operator training.
In both experiments, in order to make sure the popups consistently appeared in each trial, we set popups of the same type to appear in a fixed time period, or after a certain event. For instance, in Experiment 1, after a turn, a query about orientation was posed as a perception-related query. However, collision, an important safety issue, was not considered as an event: no popup appeared to measure the operators' status of SA. Without this information, we could not explore the real-time status of SA during a collision. In future work, a popup could be presented after each collision, or just before an automatic collision-avoidance sensor would be activated.
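Scheduling such probes could combine the fixed-interval queries we used with event-triggered ones, e.g. a query shortly after each collision. A minimal sketch (the class, query strings, and timings are illustrative, not our experiment software):

```python
import heapq

class ProbeScheduler:
    """Queue SA popup probes both at fixed intervals and in
    response to events such as collisions."""

    def __init__(self, interval_s=60.0):
        self._queue = []                   # heap of (due_time, query)
        self.interval_s = interval_s
        self._next_periodic = interval_s

    def on_event(self, now_s, query, delay_s=2.0):
        """Register an event-triggered query, due shortly after the event."""
        heapq.heappush(self._queue, (now_s + delay_s, query))

    def poll(self, now_s):
        """Return the list of queries that are due at time now_s."""
        due = []
        if now_s >= self._next_periodic:
            due.append("periodic SA query")
            self._next_periodic += self.interval_s
        while self._queue and self._queue[0][0] <= now_s:
            due.append(heapq.heappop(self._queue)[1])
        return due
```

Delaying the event-triggered probe by a couple of seconds would let the operator handle the collision first, so the query measures awareness rather than interrupting recovery.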
In both experiments, only one pre-test question ("Do you have VR sickness when wearing VR glasses?") was asked to acquire information regarding cybersickness. The answers were only "yes", "no", or "I don't know". In Experiment 2, the lack of detailed information and real-time detection resulted in an incident where a participant had to stop as a result of severe cybersickness. This suggests that real-time detection is needed, especially for those in our target group who might have difficulties expressing themselves verbally. For instance, a method of predicting cybersickness based on users' gaze behaviors in HMD-based VR [78] could be considered in future research. The saccade test and its potential usages for an enhanced safety mechanism have been discussed in Chapter 6. In addition, when such saccadic eye-movement data have been collected, more real-time assessments based on saccadic eye movements could be considered, for instance assessment of fatigue [77], attention [79], and sleepiness [90]. Besides the pro-saccade, the anti-saccade could also be further explored within the context of teleoperation, as prior work has shown a correlation between anti-saccade performance and primary tasks [91].
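Saccades can be detected in raw gaze samples with a simple velocity threshold (the I-VT approach); the sketch below is illustrative, with an assumed sampling rate and threshold rather than the parameters of our saccade test:

```python
def count_saccades(angles_deg, sample_rate_hz=120, threshold_dps=100.0):
    """Count saccades in a 1-D trace of horizontal gaze angle (degrees)
    using a velocity threshold: a run of consecutive supra-threshold
    samples counts as one saccade (a simplified I-VT detector)."""
    saccades, in_saccade = 0, False
    for a, b in zip(angles_deg, angles_deg[1:]):
        velocity = abs(b - a) * sample_rate_hz   # angular velocity, deg/s
        if velocity >= threshold_dps:
            if not in_saccade:                   # saccade onset
                saccades += 1
                in_saccade = True
        else:
            in_saccade = False                   # back to fixation
    return saccades
```

In a real-time safety mechanism, statistics of such detected saccades (rate, peak velocity) could feed the fatigue or sickness estimators cited above.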
Some other solutions proposed in previous studies could be considered in future research addressing the above-mentioned issues. For instance, prior work showed that the operator's workload of controlling the robot was reduced when the telepresence robot automatically followed a person using two web cameras [233].
The number of participants in our field study was limited to five, and we only conducted the study at one care home. In future studies, the number of people involved should be much higher, including elderly participants and people with a larger variety of disabilities than in our case. Telerobots may also be applied by people living independently, and studying how they would prefer to use telerobots would most likely provide many more insights.
Finally, the use of telepresence robots for social interaction has not been examined in itself in any of our present research, and we have not considered the wider social and organisational aspects of using telerobots at workplaces, schools, or care homes. Nor did we study the use of telerobots over a longer time period and "for real", where they would be fully used as, e.g., a platform for permanent distance work, not just for a single observation.
9 Conclusion

This research focused on gaze-controlled telepresence. It explored the accessibility of telepresence robots for people with motor disabilities using gaze, the potential effects of training, and its evaluation. We mainly made theoretical, design-based, and empirical contributions within this exploration by answering our research questions.
It is advisable to introduce telepresence robots for people with motor disabilities, cf. RQ 1. Gaze interaction is a viable method to control a telerobot, cf. RQ 2. We modified existing methods and developed a new evaluation method. We identified the difficulties of controlling a telerobot by gaze, mainly in terms of longer task completion time, higher RMSD, more collisions, higher effort and frustration, higher mental and physical demand, and a lower feeling of dominance, cf. RQ 3. We can train people in VR to drive a telerobot by gaze in real life; thus VR-based training could become an alternative to training with a real robot, cf. RQ 4. Throughout this research and in previous work, it has been shown to be important to measure situational awareness when driving a telerobot, cf. RQ 5. When driving a robot with an HMD, we can measure situational awareness with the SPAM-based popup and the saccade test. The saccade test was found to be less intrusive to the main task, cf. RQ 6.
Bibliography[1] Marvin Minsky. “Telepresence”. In: Omni 2.9 (1980), pp. 44–52.[2] Günter Niemeyer et al. “Telerobotics”. In: Springer handbook of robotics. Springer,
2016, pp. 1085–1108.[3] United Nations Assembly General. “Sustainable development goals”. In: SDGs
Transform Our World 2030 (2015).[4] John Pruitt and Tamara Adlin. The persona lifecycle: keeping people inmind through
out product design. Elsevier, 2010.[5] Fumihide Tanaka et al. “Telepresence robot helps children in communicating with
teachers who speak a different language”. In: Proceedings of the 2014 ACM/IEEEinternational conference on Humanrobot interaction. ACM. 2014, pp. 399–406.
[6] Wendy Moyle et al. “Connecting the person with dementia and family: a feasibilitystudy of a telepresence robot”. In: BMC geriatrics 14.1 (2014), pp. 1–11.
[7] Yasamin Heshmat et al. “Geocaching with a Beam: Shared Outdoor Activitiesthrough a Telepresence Robot with 360 Degree Viewing”. In: Proceedings of the2018 CHI Conference on Human Factors in Computing Systems. ACM. 2018,pp. 1–13.
[8] Natasa Koceska et al. “A telemedicine robot system for assisted and independentliving”. In: Sensors 19.4 (2019), p. 834.
[9] Eftychios G Christoforou et al. “The Upcoming Role for Nursing and AssistiveRobotics: Opportunities and Challenges Ahead”. In: Frontiers in Digital Health 2(2020), p. 39.
[10] C Carreto, D Gêgo, and L Figueiredo. “An Eyegaze Tracking System for Teleoperation of a Mobile Robot”. In: Journal of Information Systems Engineering &Management 3.2 (2018), p. 16.
[11] Hemin Omer Latif, Nasser Sherkat, and Ahmad Lotfi. “Teleoperation through eyegaze (TeleGaze): a multimodal approach”. In: 2009 IEEE International Conferenceon Robotics and Biomimetics (ROBIO). IEEE. 2009, pp. 711–716.
[12] Martin Tall et al. “Gazecontrolled driving”. In: CHI’09 Extended Abstracts on Human Factors in Computing Systems. ACM. 2009, pp. 4387–4392.
[13] Thomas B Sheridan. “Telerobotics”. In: Automatica 25.4 (1989), pp. 487–507.[14] I Scott MacKenzie. Humancomputer interaction: An empirical research perspec
tive. Newnes, 2012.[15] Jonathan Lazar, Jinjuan Heidi Feng, and Harry Hochheiser. Research methods in
humancomputer interaction. Morgan Kaufmann, 2017.[16] Chadia Abras, Diane MaloneyKrichmar, Jenny Preece, et al. “Usercentered de
sign”. In: Bainbridge, W. Encyclopedia of HumanComputer Interaction. ThousandOaks: Sage Publications 37.4 (2004), pp. 445–456.
[17] Marion Buchenau and Jane Fulton Suri. “Experience prototyping”. In: Proceedings of the 3rd conference on Designing interactive systems: processes, practices,methods, and techniques. ACM. 2000, pp. 424–433.
[18] Nicky Sulmon et al. “Using personas to capture assistive technology needs ofpeople with disabilities”. In: Persons with Disabilities Conference (CSUN). CSUN.2010.
[19] Mica R Endsley. “Measurement of situation awareness in dynamic systems”. In:Human factors 37.1 (1995), pp. 65–84.
66 Gazecontrolled Telepresence: Accessibility, Training and Evaluation
[20] Sophie Stellmach and Raimund Dachselt. “Designing gazebased user interfacesfor steering in virtual environments”. In: Proceedings of the Symposium on EyeTracking Research and Applications. ACM. 2012, pp. 131–138.
[21] John Paulin Hansen et al. “Head and gaze control of a telepresence robot with anHMD”. In: Proceedings of the 2018 ACM Symposium on Eye Tracking Research& Applications. 2018, pp. 1–3.
[22] Guangtao Zhang et al. “Eyegazecontrolled telepresence robots for people withmotor disabilities”. In: 2019 14th ACM/IEEE International Conference on HumanRobot Interaction (HRI). IEEE. 2019, pp. 574–575.
[23] Guangtao Zhang, John Paulin Hansen, and Katsumi Minakata. “Handand gazecontrol of telepresence robots”. In: Proceedings of the 11th ACM Symposium onEye Tracking Research & Applications. 2019, pp. 1–8.
[24] Guangtao Zhang and John Paulin Hansen. “Accessible control of telepresence robots based on eye tracking”. In: Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications. 2019, pp. 1–3.
[25] Guangtao Zhang, Katsumi Minakata, and John Paulin Hansen. “Enabling Real-Time measurement of situation awareness in robot teleoperation with a head-mounted display”. In: Nordic Human Factors Society Conference. 2019, p. 169.
[26] Guangtao Zhang and John Paulin Hansen. “A Virtual Reality Simulator for Training Gaze Control of Wheeled Tele-Robots”. In: 25th ACM Symposium on Virtual Reality Software and Technology. 2019, pp. 1–2.
[27] Guangtao Zhang and John Paulin Hansen. “People with Motor Disabilities Using Gaze to Control Telerobots”. In: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. 2020, pp. 1–9.
[28] Jacopo M Araujo et al. “Exploring Eye-Gaze Wheelchair Control”. In: ACM Symposium on Eye Tracking Research and Applications. 2020, pp. 1–8.
[29] Linda Nierling and Maria Maia. “Assistive Technologies: Social Barriers and Socio-Technical Pathways”. In: Societies 10.2 (2020), p. 41.
[30] Vicki L Hanson, Anna Cavender, and Shari Trewin. “Writing about accessibility”. In: Interactions 22.6 (2015), pp. 62–65.
[31] United Nations. Convention on the Rights of Persons with Disabilities. New York, 2007.
[32] World Health Organization et al. Improving access to assistive technology: report by the secretariat. Geneva: WHO, 2016.
[33] United Nations. Transforming our world: The 2030 agenda for sustainable development. New York, 2016.
[34] Emma Tebbutt et al. “Assistive products and the sustainable development goals (SDGs)”. In: Globalization and health 12.1 (2016), pp. 1–6.
[35] Johan Borg et al. “Assistive technology use is associated with reduced capability poverty: a cross-sectional study in Bangladesh”. In: Disability and Rehabilitation: Assistive Technology 7.2 (2012), pp. 112–121.
[36] Turki Alquraini and Dianne Gut. “Critical components of successful inclusion of students with severe disabilities: Literature review.” In: International journal of special education 27.1 (2012), pp. 42–59.
[37] Andriana Boudouraki et al. “’I can’t get round’: Recruiting Assistance in Mobile Robotic Telepresence”. In: Proceedings of the ACM on Human-Computer Interaction 4.CSCW3 (2021), pp. 1–21.
[38] Katherine M Tsui et al. “Towards designing telepresence robot navigation for people with disabilities”. In: International Journal of Intelligent Computing and Cybernetics 7.3 (2014), p. 307.
Gaze-controlled Telepresence: Accessibility, Training and Evaluation 67
[39] Pavithra Rajeswaran and Amy L Orsborn. “Neural interface translates thoughts into type”. In: Nature (2021).
[40] Stefan Vikkelsø et al. “The telepresence avatar robot OriHime as a communication tool for adults with acquired brain injury: an ethnographic case study”. In: Intelligent Service Robotics 13.4 (2020), pp. 521–537.
[41] WHO et al. World report on disability 2011. World Health Organization, 2011.
[42] Johan Borg et al. “Assistive technology for children with disabilities: creating opportunities for education, inclusion and participation – a discussion paper”. In: Geneva: WHO (2015).
[43] J Somavia. “Facts on Disability in the World of Work”. In: Geneva: International Labor Organization (2007).
[44] World Health Organization et al. Guidelines on the provision of manual wheelchairs in less resourced settings. World Health Organization, 2008.
[45] Sameer Kishore et al. “Comparison of SSVEP BCI and eye tracking for controlling a humanoid robot in a social environment”. In: Presence: Teleoperators and virtual environments 23.3 (2014), pp. 242–252.
[46] Jesus Minguillon, M Angel Lopez-Gordo, and Francisco Pelayo. “Trends in EEG-BCI for daily-life: Requirements for artifact removal”. In: Biomedical Signal Processing and Control 31 (2017), pp. 407–418.
[47] Bradford W Hesse. “Curb cuts in the virtual community: Telework and persons with disabilities”. In: Proceedings of the Twenty-Eighth Annual Hawaii International Conference on System Sciences. Vol. 4. IEEE. 1995, pp. 418–425.
[48] Nathan W Moon et al. “Telework rationale and implementation for people with disabilities: Considerations for employer policymaking”. In: Work 48.1 (2014), pp. 105–115.
[49] Jane Anderson, John C Bricout, and Michael D West. “Telecommuting: Meeting the needs of businesses and employees with disabilities”. In: Journal of Vocational Rehabilitation 16.2 (2001), pp. 97–104.
[50] Maureen Linden. “Telework research and practice: Impacts on people with disabilities”. In: Work 48.1 (2014), pp. 65–67.
[51] Andrew T Duchowski. Eye tracking methodology: Theory and practice. Springer, 2017.
[52] Päivi Majaranta and Andreas Bulling. “Eye tracking and eye-based human–computer interaction”. In: Advances in physiological computing. Springer, London, 2014, pp. 39–65.
[53] Kenneth Holmqvist et al. Eye tracking: A comprehensive guide to methods and measures. OUP Oxford, 2011.
[54] Päivi Majaranta and Mick Donegan. “Introduction to gaze interaction”. In: Gaze interaction and applications of eye tracking: Advances in assistive technologies. IGI Global, 2012, pp. 1–9.
[55] Päivi Majaranta. Gaze Interaction and Applications of Eye Tracking: Advances in Assistive Technologies. IGI Global, 2011.
[56] Hirotaka Aoki, John Paulin Hansen, and Kenji Itoh. “Learning to interact with a computer by gaze”. In: Behaviour & Information Technology 27.4 (2008), pp. 339–344.
[57] Päivi Majaranta and Kari-Jouko Räihä. “Text entry by gaze: Utilizing eye-tracking”. In: Text entry systems: Mobility, accessibility, universality (2007), pp. 175–187.
[58] Mohamad A Eid, Nikolas Giakoumidis, and Abdulmotaleb El Saddik. “A Novel Eye-Gaze-Controlled Wheelchair System for Navigating Unknown Environments: Case Study With a Person With ALS.” In: IEEE Access 4 (2016), pp. 558–573.
[59] John Paulin Hansen et al. “The use of gaze to control drones”. In: Proceedings of the Symposium on Eye Tracking Research and Applications. ACM. 2014, pp. 27–34.
[60] Gauthier Gras and Guang-Zhong Yang. “Intention recognition for gaze controlled robotic minimally invasive laser ablation”. In: Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE. 2016, pp. 2431–2437.
[61] Vinay Krishna Sharma et al. “Eye gaze controlled robotic arm for persons with severe speech and motor impairment”. In: ACM Symposium on Eye Tracking Research and Applications. 2020, pp. 1–9.
[62] Alexey Petrushin, Giacinto Barresi, and Leonardo S Mattos. “Gaze-controlled Laser Pointer Platform for People with Severe Motor Impairments: Preliminary Test in Telepresence”. In: 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE. 2018, pp. 1813–1816.
[63] Alexey Petrushin et al. “Effect of a Click-Like Feedback on Motor Imagery in EEG-BCI and Eye-Tracking Hybrid Control for Telepresence”. In: 2018 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM). IEEE. 2018, pp. 628–633.
[64] Ginger S Watson, Yiannis E Papelis, and Katheryn C Hicks. “Simulation-based environment for the eye-tracking control of teleoperated mobile robots”. In: Proceedings of the Modeling and Simulation of Complexity in Intelligent, Adaptive and Autonomous Systems 2016 (MSCIAAS 2016) and Space Simulation for Planetary Space Exploration (SPACE 2016). 2016, pp. 1–7.
[65] Robert JK Jacob. “What you look at is what you get: eye movement-based interaction techniques”. In: Proceedings of the SIGCHI conference on Human factors in computing systems. 1990, pp. 11–18.
[66] Robert JK Jacob. “Eye movement-based human-computer interaction techniques: Toward non-command interfaces”. In: Advances in human-computer interaction 4 (1993), pp. 151–190.
[67] John Paulin Hansen and Hirotaka Aoki. “Methods and measures: An introduction”. In: Gaze interaction and applications of eye tracking: Advances in assistive technologies. IGI Global, 2012, pp. 197–204.
[68] Shumin Zhai. “What’s in the Eyes for Attentive Input”. In: Communications of the ACM 46.3 (2003), pp. 34–39.
[69] Yogesh Kumar Meena et al. “Design and evaluation of a time adaptive multimodal virtual keyboard”. In: Journal on Multimodal User Interfaces 13.4 (2019), pp. 343–361.
[70] Yogesh Kumar Meena et al. “A multimodal interface to resolve the Midas-Touch problem in gaze controlled wheelchair”. In: 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE. 2017, pp. 905–908.
[71] Baosheng James Hou et al. “GIMIS: Gaze Input with Motor Imagery Selection”. In: ACM Symposium on Eye Tracking Research and Applications. 2020, pp. 1–10.
[72] Satoru Tokuda et al. “Estimation of mental workload using saccadic eye movements in a free-viewing task”. In: 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE. 2011, pp. 4523–4529.
[73] Sogand Hasanzadeh, Behzad Esmaeili, and Michael D Dodd. “Measuring construction workers’ real-time situation awareness using mobile eye-tracking”. In: Construction Research Congress 2016. 2016, pp. 2894–2904.
[74] Lucas Paletta et al. “Estimation of situation awareness score and performance using eye and head gaze for human-robot collaboration”. In: Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications. 2019, pp. 1–3.
[75] Tanya Bafna and John Paulin Hansen. “Mental fatigue measurement using eye metrics: A systematic literature review”. In: Psychophysiology (2021), e13828.
[76] Leandro L Di Stasi et al. “Towards a driver fatigue test based on the saccadic main sequence: A partial validation by subjective report data”. In: Transportation research part C: emerging technologies 21.1 (2012), pp. 122–133.
[77] Carolina Diaz-Piedra et al. “Fatigue in the military: towards a fatigue detection test based on the saccadic velocity”. In: Physiological measurement 37.9 (2016), N62.
[78] Eunhee Chang, Hyun Taek Kim, and Byounghyun Yoo. “Predicting cybersickness based on user’s gaze behaviors in HMD-based virtual reality”. In: Journal of Computational Design and Engineering (2021).
[79] James E Hoffman and Baskaran Subramaniam. “The role of visual attention in saccadic eye movements”. In: Perception & psychophysics 57.6 (1995), pp. 787–795.
[80] Corey Holland and Oleg V Komogortsev. “Biometric identification via eye movement scanpaths in reading”. In: 2011 International joint conference on biometrics (IJCB). IEEE. 2011, pp. 1–8.
[81] Diako Mardanbegi et al. “SaccadeMachine: software for analyzing saccade tests (anti-saccade and pro-saccade)”. In: Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications. 2019, pp. 1–8.
[82] Jay Pratt and Leo Trottier. “Pro-saccades and anti-saccades to onset and offset targets”. In: Vision Research 45.6 (2005), pp. 765–774.
[83] Yasuo Terao et al. “New perspectives on the pathophysiology of Parkinson’s disease as assessed by saccade performance: a clinical review”. In: Clinical neurophysiology 124.8 (2013), pp. 1491–1506.
[84] Clayton E Curtis et al. “Saccadic disinhibition in patients with acute and remitted schizophrenia and their first-degree biological relatives”. In: American Journal of Psychiatry 158.1 (2001), pp. 100–106.
[85] Ruxsana Shafiq-Antonacci et al. “Spectrum of saccade system function in Alzheimer disease”. In: Archives of neurology 60.9 (2003), pp. 1272–1278.
[86] Junko Fukushima et al. “Disturbances of voluntary control of saccadic eye movements in schizophrenic patients”. In: Biological psychiatry 23.7 (1988), pp. 670–677.
[87] Florence Chan et al. “Deficits in saccadic eye-movement control in Parkinson’s disease”. In: Neuropsychologia 43.5 (2005), pp. 784–796.
[88] Leandro L Di Stasi et al. “Saccadic eye movement metrics reflect surgical residents’ fatigue”. In: Annals of surgery 259.4 (2014), pp. 824–829.
[89] Leandro L Di Stasi et al. “Effects of long and short simulated flights on the saccadic eye movement velocity of aviators”. In: Physiology & behavior 153 (2016), pp. 91–96.
[90] Kati Hirvonen et al. “Improving the saccade peak velocity measurement for detecting fatigue”. In: Journal of neuroscience methods 187.2 (2010), pp. 199–206.
[91] Kai-Uwe Schmitt et al. “Saccadic eye movement performance as an indicator of driving ability in elderly drivers”. In: Swiss medical weekly 145 (2015), w14098.
[92] Leandro L Di Stasi et al. “Saccadic peak velocity sensitivity to variations in mental workload”. In: Aviation, space, and environmental medicine 81.4 (2010), pp. 413–417.
[93] Wijnand A IJsselsteijn. “History of telepresence”. In: O. Schreer, P. Kauff, & T. Sikora (Eds.) D 3 (2005), pp. 7–22.
[94] Thomas B Sheridan. “Musings on telepresence and virtual presence”. In: Presence: Teleoperators & Virtual Environments 1.1 (1992), pp. 120–126.
[95] Jennifer M Riley, David B Kaber, and John V Draper. “Situation awareness and attention allocation measures for quantifying telepresence experiences in teleoperation”. In: Human Factors and Ergonomics in Manufacturing & Service Industries 14.1 (2004), pp. 51–67.
[96] Giuseppe Riva. “Is presence a technology issue? Some insights from cognitive sciences”. In: Virtual reality 13.3 (2009), pp. 159–169.
[97] James J Cummings and Jeremy N Bailenson. “How immersive is enough? A meta-analysis of the effect of immersive technology on user presence”. In: Media Psychology 19.2 (2016), pp. 272–309.
[98] Martijn J Schuemie et al. “Research on presence in virtual reality: A survey”. In: CyberPsychology & Behavior 4.2 (2001), pp. 183–201.
[99] Simone Grassini and Karin Laumann. “Questionnaire measures and physiological correlates of presence: A systematic review”. In: Frontiers in psychology 11 (2020), p. 349.
[100] Michael Meehan et al. “An objective surrogate for presence: Physiological response”. In: 3rd International Workshop on Presence. Vol. 2. 2000.
[101] Michael Meehan et al. “Physiological measures of presence in stressful virtual environments”. In: ACM Transactions on Graphics (TOG) 21.3 (2002), pp. 645–652.
[102] Thomas Baumgartner et al. “Feeling present in arousing virtual reality worlds: prefrontal brain regions differentially orchestrate presence experience in adults and children”. In: Frontiers in human neuroscience 2 (2008), p. 8.
[103] Peter Van der Straaten and MJ Schuemie. “Interaction affecting the sense of presence in virtual reality”. PhD thesis. 2000.
[104] Bob G Witmer and Michael J Singer. Measuring immersion in virtual environments. Tech. rep. ARI Technical Report 1014. Alexandria, VA: US Army Research Institute for the Behavioral and Social Sciences, 1994.
[105] Eric B Nash et al. “A review of presence and performance in virtual environments”. In: International Journal of human-computer Interaction 12.1 (2000), pp. 1–41.
[106] Curtis W Nielsen, Michael A Goodrich, and Robert W Ricks. “Ecological interfaces for improving mobile robot teleoperation”. In: IEEE Transactions on Robotics 23.5 (2007), pp. 927–941.
[107] Salvatore Livatino, Giovanni Muscato, and Filippo Privitera. “Stereo viewing and virtual reality technologies in mobile robot teleguide”. In: IEEE Transactions on Robotics 25.6 (2009), pp. 1343–1355.
[108] Henrique Martins, Ian Oakley, and Rodrigo Ventura. “Design and evaluation of a head-mounted display for immersive 3D teleoperation of field robots.” In: Robotica 33.10 (2015), pp. 2166–2185.
[109] Jingxin Zhang et al. “Detection thresholds for rotation and translation gains in 360° video-based telepresence systems”. In: IEEE transactions on visualization and computer graphics 24.4 (2018), pp. 1671–1680.
[110] Anja Jackowski and Marion Gebhard. “Evaluation of hands-free human-robot interaction using a head gesture based interface”. In: Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. ACM. 2017, pp. 141–142.
[111] Amin Hosseini and Markus Lienkamp. “Enhancing telepresence during the teleoperation of road vehicles using HMD-based mixed reality”. In: 2016 IEEE Intelligent Vehicles Symposium (IV). IEEE. 2016, pp. 1366–1373.
[112] Jarosław Jankowski and Andrzej Grabowski. “Usability evaluation of VR interface for mobile robot teleoperation”. In: International Journal of Human-Computer Interaction 31.12 (2015), pp. 882–889.
[113] Valentin Schwind et al. “Using presence questionnaires in virtual reality”. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 2019, pp. 1–12.
[114] Jason Jerald. The VR book: Human-centered design for virtual reality. Morgan & Claypool, 2015.
[115] Mahdi Tavakoli, Jay Carriere, and Ali Torabi. “Robotics, smart wearable technologies, and autonomous intelligent systems for healthcare during the COVID-19 pandemic: An analysis of the state of the art and future vision”. In: Advanced Intelligent Systems (2020), p. 2000071.
[116] Katherine M Tsui et al. “Exploring use cases for telepresence robots”. In: Proceedings of the 6th international conference on Human-robot interaction. ACM. 2011, pp. 11–18.
[117] Leila Takayama et al. “Assisted driving of a mobile remote presence system: System design and controlled user evaluation”. In: 2011 IEEE international conference on robotics and automation. IEEE. 2011, pp. 1883–1889.
[118] Annica Kristoffersson, Silvia Coradeschi, and Amy Loutfi. “A review of mobile robotic telepresence”. In: Advances in Human-Computer Interaction 2013 (2013), p. 902316.
[119] Irene Rae, Bilge Mutlu, and Leila Takayama. “Bodies in motion: mobility, presence, and task awareness in telepresence”. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2014, pp. 2153–2162.
[120] EunJung Chang. “Experiments and Probabilities in Telepresence Robots”. In: Exploring Digital Technologies for Art-Based Special Education: Models and Methods for the Inclusive K-12 Classroom 40 (2019).
[121] Carlos Carrascosa et al. “From physical to virtual: widening the perspective on multi-agent environments”. In: Agent Environments for Multi-Agent Systems IV. Springer, 2015, pp. 133–146.
[122] Munjal Desai et al. “Essential features of telepresence robots”. In: Technologies for Practical Robot Applications (TePRA), 2011 IEEE Conference on. IEEE. 2011, pp. 15–20.
[123] Hideyuki Nakanishi, Yuki Murakami, and Kei Kato. “Movable cameras enhance social telepresence in media spaces”. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2009, pp. 433–442.
[124] Anthony Tang and Omid Fakourfar. “Watching 360° videos together”. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM. 2017, pp. 4501–4506.
[125] Steven Johnson et al. “Can you see me now?: How field of view affects collaboration in robotic telepresence”. In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM. 2015, pp. 2397–2406.
[126] Anthony Tang et al. Collaboration in 360° Videochat: Challenges and Opportunities. Tech. rep. University of Calgary, 2017.
[127] Dhanraj Jadhav, Parth Shah, and Henil Shah. “A Study to design VI classrooms using Virtual Reality aided Telepresence”. In: 2018 IEEE 18th International Conference on Advanced Learning Technologies (ICALT). IEEE. 2018, pp. 319–321.
[128] Min Kyung Lee and Leila Takayama. “Now, I have a body: Uses and social norms for mobile remote presence in the workplace”. In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM. 2011, pp. 33–42.
[129] Lillian Yang, Carman Neustaedter, and Thecla Schiphorst. “Communicating through a telepresence robot: A study of long distance relationships”. In: Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM. 2017, pp. 3027–3033.
[130] Lillian Yang et al. “Shopping Over Distance through a Telepresence Robot”. In: Proceedings of the ACM on Human-Computer Interaction 2.CSCW (2018).
[131] Rachel E Stuck et al. “Understanding attitudes of adults aging with mobility impairments toward telepresence robots”. In: Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. 2017, pp. 293–294.
[132] Laurel D Riek. “Healthcare robotics”. In: Communications of the ACM 60.11 (2017), pp. 68–78.
[133] Wendy Moyle et al. “Potential of telepresence robots to enhance social connectedness in older adults with dementia: an integrative review of feasibility”. In: International psychogeriatrics 29.12 (2017), pp. 1951–1964.
[134] Veronica Ahumada Newhart and Judith S Olson. “My student is a robot: How schools manage telepresence experiences for students”. In: Proceedings of the 2017 CHI conference on human factors in computing systems. 2017, pp. 342–347.
[135] Mark Tee Kit Tsun et al. “A Robotic Telepresence System for Full-Time Monitoring of Children with Cognitive Disabilities”. In: Proceedings of the international Convention on Rehabilitation Engineering & Assistive Technology. 2015, pp. 1–4.
[136] Xian Wu et al. “Telepresence heuristic evaluation for adults aging with mobility impairment”. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting. Vol. 61. 1. 2017, pp. 16–20.
[137] Veronica Ahumada Newhart. “Virtual inclusion via telepresence robots in the classroom”. In: CHI’14 Extended Abstracts on Human Factors in Computing Systems. 2014, pp. 951–956.
[138] Marcia Finlayson and Toni Van Denend. “Experiencing the loss of mobility: perspectives of older adults with MS”. In: Disability and rehabilitation 25.20 (2003), pp. 1168–1180.
[139] Richard C. Simpson, Edmund F. LoPresti, and Rory A. Cooper. “How many people would benefit from a smart wheelchair?” In: Journal of Rehabilitation Research and Development (2008). ISSN: 0748-7711. DOI: 10.1682/JRRD.2007.01.0015.
[140] Carlos Escolano et al. “A telepresence robotic system operated with a P300-based brain-computer interface: initial tests with ALS patients”. In: 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology. IEEE. 2010, pp. 4476–4480.
[141] Juan P Vasconez, George A Kantor, and Fernando A Auat Cheein. “Human–robot interaction in agriculture: A survey and current challenges”. In: Biosystems engineering 179 (2019), pp. 35–48.
[142] Matthew Gombolay et al. “Computational design of mixed-initiative human–robot teaming that considers human factors: situational awareness, workload, and workflow preferences”. In: The International journal of robotics research 36.5-7 (2017), pp. 597–617.
[143] David Sirkin et al. “Toward measurement of situation awareness in autonomous vehicles”. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 2017, pp. 405–415.
[144] Aaron Steinfeld et al. “Common metrics for human-robot interaction”. In: Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction. ACM. 2006, pp. 33–40.
[145] Robert W Proctor and Trisha Van Zandt. Human factors in simple and complex systems. CRC press, 2018.
[146] Pamela S Tsang and Michael A Vidulich. “Mental workload and situation awareness.” In: (2006).
[147] David B Kaber, Emrah Onal, and Mica R Endsley. “Design of automation for telerobots and the effect on performance, operator situation awareness, and subjective workload”. In: Human factors and ergonomics in manufacturing & service industries 10.4 (2000), pp. 409–430.
[148] Terrence Fong et al. “Common metrics for human-robot interaction”. In: IEEE 2004 International Conference on Intelligent Robots and Systems, Sendai, Japan. 2004.
[149] Paul Salmon et al. “Situation awareness measurement: A review of applicability for C4i environments”. In: Applied ergonomics 37.2 (2006), pp. 225–238.
[150] John M Flach. “Situation awareness: Proceed with caution”. In: Human factors 37.1 (1995), pp. 149–157.
[151] Mica R Endsley. “Direct Measurement of Situation Awareness: Validity and Use of SAGAT”. In: Situation Awareness Analysis and Measurement. Ed. by M. R. Endsley & D. J. Garland. Mahwah, NJ: Lawrence Erlbaum Associates, 2000, pp. 147–173.
[152] JM Riley, DB Kaber, and JV Draper. “Situation awareness and attention allocation measures for quantifying telepresence experiences in teleoperation”. In: Human Factors and Ergonomics in Manufacturing 14 (2004), pp. 51–67.
[153] HA Ruff, S Narayanan, and MH Draper. “Human interaction with levels of automation and decision-aid fidelity in the supervisory control of multiple simulated unmanned air vehicles”. In: Presence: Teleoperators and Virtual Environments 11 (2002), pp. 335–351.
[154] Yiannis Gatsoulis and Gurvinder Singh Virk. “Performance metrics for improving human-robot interaction”. In: Advances in Climbing and Walking Robots: Proceedings of 10th International Conference, CLAWAR 2007 (2007), pp. 716–725.
[155] Michael A Vidulich. “The relationship between mental workload and situation awareness”. In: Proceedings of the human factors and ergonomics society annual meeting. Vol. 44. 21. 2000, pp. 3–460.
[156] John V Draper, David B Kaber, and John M Usher. “Telepresence”. In: Human factors 40.3 (1998), pp. 354–375.
[157] R. M. Taylor. “Situational awareness rating technique (SART): The development of a tool for aircrew systems design”. In: Proceedings of the AGARD AMP Symposium on Situational Awareness in Aerospace Operations, CP478. Neuilly-sur-Seine: NATO AGARD, 1989.
[158] Debra G Jones. “Subjective measures of situation awareness”. In: Situation awareness analysis and measurement (2000), pp. 113–128.
[159] Mica R Endsley. “Situation awareness global assessment technique (SAGAT)”. In: Proceedings of the IEEE 1988 national aerospace and electronics conference. IEEE. 1988, pp. 789–795.
[160] Francis T Durso, M Kathryn Bleckley, and Andrew R Dattel. “Does situation awareness add to the validity of cognitive tests?” In: Human Factors 48.4 (2006), pp. 721–733.
[161] John D Lee, Alex Kirlik, and Marvin J Dainoff. The Oxford handbook of cognitive engineering. Oxford University Press, 2013.
[162] RM Taylor. “Situational awareness rating technique (SART): The development of a tool for aircrew systems design”. In: Situational Awareness. Routledge, 2017, pp. 111–128.
[163] Mica R Endsley. “A Systematic Review and Meta-Analysis of Direct Objective Measures of Situation Awareness: A Comparison of SAGAT and SPAM”. In: Human factors (2019), p. 0018720819875376.
[164] Joost CF de Winter et al. “Situation awareness based on eye movements in relation to the task environment”. In: Cognition, Technology & Work 21.1 (2019), pp. 99–111.
[165] Kristin Moore and Leo Gugerty. “Development of a novel measure of situation awareness: The case for eye movement analysis”. In: Proceedings of the human factors and ergonomics society annual meeting. Vol. 54. 19. SAGE Publications, Los Angeles, CA. 2010, pp. 1650–1654.
[166] MW Smolensky. “Toward the physiological measurement of situation awareness: The case for eye movement measurements”. In: Proceedings of the Human Factors and Ergonomics Society 37th annual meeting. Vol. 41. Human Factors and Ergonomics Society, Santa Monica. 1993.
[167] Laura H Ikuma et al. “A guide for assessing control room operator performance using speed and accuracy, perceived workload, situation awareness, and eye tracking”. In: Journal of loss prevention in the process industries 32 (2014), pp. 454–465.
[168] Gunnar Hauland. “Measuring individual and team situation awareness during planning tasks in training of en route air traffic control”. In: The International Journal of Aviation Psychology 18.3 (2008), pp. 290–304.
[169] Christopher D Wickens. “Processing resources and attention”. In: Multiple-task performance (1991), pp. 3–34.
[170] Luca Baldisserri et al. “Motorsport driver workload estimation in dual task scenario”. In: Sixth International Conference on Advanced Cognitive Technologies and Applications. Citeseer. 2014.
[171] ID Brown. “Dual task methods of assessing workload”. In: Ergonomics 21.3 (1978), pp. 221–224.
[172] George D Ogden, Jerrold M Levine, and Ellen J Eisner. “Measurement of workload by secondary tasks”. In: Human Factors 21.5 (1979), pp. 529–548.
[173] Stuart D Baulk, LA Reyner, and James A Horne. “Driver sleepiness—evaluation of reaction time measurement as a secondary task”. In: Sleep 24.6 (2001), pp. 695–698.
[174] Serkan Cak, Bilge Say, and Mine Misirlisoy. “Effects of working memory, attention, and expertise on pilots’ situation awareness”. In: Cognition, Technology & Work 22.1 (2020), pp. 85–94.
[175] Francis T Durso et al. “Situation awareness as a predictor of performance for en route air traffic controllers”. In: Air Traffic Control Quarterly 6.1 (1998), pp. 1–20.
[176] Russell S Pierce et al. “The relationship between SPAM, workload, and task performance on a simulated ATC task”. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting. Vol. 52. 1. Sage Publications, Los Angeles, CA. 2008, pp. 34–38.
[177] Milos Vasic and Aude Billard. “Safety issues in human-robot interactions”. In: 2013 IEEE International Conference on Robotics and Automation. IEEE. 2013, pp. 197–204.
[178] Luis Pérez et al. “Industrial robot control and operator training using virtual reality interfaces”. In: Computers in Industry 109 (2019), pp. 114–120.
[179] Wendy Moyle et al. “Social robots helping people with dementia: Assessing efficacy of social robots in the nursing home environment”. In: 2013 6th International Conference on Human System Interactions (HSI). IEEE. 2013, pp. 608–613.
[180] Miguel Kaouk Ng et al. “A cloud robotics system for telepresence enabling mobility impaired people to enjoy the whole museum experience”. In: 2015 10th International Conference on Design & Technology of Integrated Systems in Nanoscale Era (DTIS). IEEE. 2015, pp. 1–6.
[181] Gloria Beraldo et al. “Brain-Computer Interface meets ROS: A robotic approach to mentally drive telepresence robots”. In: 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE. 2018, pp. 1–6.
[182] Robert Leeb et al. “Towards independence: a BCI telepresence robot for people with severe motor disabilities”. In: Proceedings of the IEEE 103.6 (2015), pp. 969–982.
[183] Stanislav Ondas et al. “Service robot SCORPIO with robust speech interface”. In: International Journal of Advanced Robotic Systems 10.1 (2013), pp. 1–11.
[184] François Ferland and François Michaud. “Selective attention by perceptual filtering in a robot control architecture”. In: IEEE Transactions on Cognitive and Developmental Systems 8.4 (2016), pp. 256–270.
[185] Umar Ahsan et al. “Development of a virtual test bed for a robotic dead man’s switch in high speed driving”. In: 2012 15th International Multitopic Conference (INMIC). IEEE. 2012, pp. 97–104.
[186] Sven Linnman. “M3S: The local network for electric wheelchairs and rehabilitation equipment”. In: IEEE Transactions on Rehabilitation Engineering 4.3 (1996), pp. 188–192.
[187] Weoi-Luen Chen et al. “The M3S-based electric wheelchair for the people with disabilities in Taiwan”. In: Disability and rehabilitation 27.24 (2005), pp. 1471–1477.
[188] Chiun-fan Chen et al. “M3S system prototype - a comprehensive system with straightforward implementation”. In: Proceedings of the 2004 IEEE International Conference on Control Applications, 2004. Vol. 2. IEEE. 2004, pp. 1278–1283.
[189] RD Jackson. “Robotics and its role in helping disabled people”. In: Engineering Science & Education Journal 2.6 (1993), pp. 267–272.
[190] Henry C Ellis. The transfer of learning. Macmillan, 1965.
[191] Susan M Barnett and Stephen J Ceci. “When and where do we apply what we learn? A taxonomy for far transfer.” In: Psychological bulletin 128.4 (2002), pp. 612–637.
[192] Cyril Bossard et al. “Transfer of learning in virtual environments: a new challenge?” In: Virtual Reality 12.3 (2008), pp. 151–161.
[193] David J Harris et al. “A framework for the testing and validation of simulated environments in experimentation and training”. In: Frontiers in psychology 11 (2020), p. 605.
[194] John O Oyekan et al. “The effectiveness of virtual environments in developing collaborative strategies between industrial robots and humans”. In: Robotics and Computer-Integrated Manufacturing 55 (2019), pp. 41–54.
[195] Geoff Norman, Kelly Dore, and Lawrence Grierson. “The minimal relationship between simulation fidelity and transfer of learning”. In: Medical education 46.7 (2012), pp. 636–647.
[196] Nicklas Dahlstrom et al. “Fidelity and validity of simulator training”. In: Theoretical Issues in Ergonomics Science 10.4 (2009), pp. 305–314.
[197] Michael F Land and David N Lee. “Where we look when we steer”. In: Nature 369.6483 (1994), pp. 742–744.
[198] Laurel D Riek. “Wizard of Oz studies in HRI: a systematic review and new reporting guidelines”. In: Journal of Human-Robot Interaction 1.1 (2012), pp. 119–136.
[199] Dianne Rios et al. “Conducting accessible research: including people with disabilities in public health, epidemiological, and outcomes studies”. In: American journal of public health 106.12 (2016), pp. 2137–2144.
[200] Sandra G Hart and Lowell E Staveland. “Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research”. In: Advances in psychology. Vol. 52. Elsevier, 1988, pp. 139–183.
[201] Anjana Ramkumar et al. “Using GOMS and NASA-TLX to evaluate human–computer interaction process in interactive segmentation”. In: International Journal of Human–Computer Interaction 33.2 (2017), pp. 123–134.
[202] Margaret M Bradley and Peter J Lang. “Measuring emotion: the self-assessment manikin and the semantic differential”. In: Journal of behavior therapy and experimental psychiatry 25.1 (1994), pp. 49–59.
[203] Bob G Witmer and Michael J Singer. “Measuring presence in virtual environments: A presence questionnaire”. In: Presence 7.3 (1998), pp. 225–240.
[204] Louis E Yelle. “The learning curve: Historical review and comprehensive survey”. In: Decision sciences 10.2 (1979), pp. 302–328.
[205] Chung Hyuk Park and Ayanna M Howard. “Real-time haptic rendering and haptic telepresence robotic system for the visually impaired”. In: 2013 World Haptics Conference (WHC). IEEE. 2013, pp. 229–234.
[206] Chung Hyuk Park and Ayanna M Howard. “Real world haptic exploration for telepresence of the visually impaired”. In: Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction. 2012, pp. 65–72.
[207] Katherine M Tsui et al. “Designing speech-based interfaces for telepresence robots for people with disabilities”. In: 2013 IEEE 13th International Conference on Rehabilitation Robotics (ICORR). IEEE. 2013, pp. 1–8.
[208] Luca Tonin et al. “The role of shared-control in BCI-based telepresence”. In: 2010 IEEE International Conference on Systems, Man and Cybernetics. IEEE. 2010, pp. 1462–1466.
[209] Berdakh Abibullaev et al. “Design and Optimization of a BCI-Driven Telepresence Robot Through Programming by Demonstration”. In: IEEE Access 7 (2019), pp. 111625–111636.
[210] Chung Hyuk Park and Ayanna M Howard. “Robotics-based telepresence using multimodal interaction for individuals with visual impairments”. In: International Journal of Adaptive Control and Signal Processing 28.12 (2014), pp. 1514–1532.
[211] Chung Hyuk Park, Eun-Seok Ryu, and Ayanna M Howard. “Telerobotic haptic exploration in art galleries and museums for individuals with visual impairments”. In: IEEE transactions on Haptics 8.3 (2015), pp. 327–338.
[212] Guangtao Zhang et al. “Impact of task complexity on driving a gazecontrolledtelerobot”. In: Abstracts of the Scandinavian Workshop on Applied Eye Tracking(SWAET 2018). Ed. by Daniel Barratt, Raymond Bertram, and Marcus Nyström.
Gazecontrolled Telepresence: Accessibility, Training and Evaluation 77
Vol. 11. Journal of Eye Movement Research 5. Frederiksberg, Denmark, Aug.2018, p. 30.
[213] Thomas B Sheridan. “Musings on telepresence and virtual presence”. In: Presence: Teleoperators & Virtual Environments 1.1 (1992), pp. 120–126.
[214] Amedeo Cesta et al. “Evaluating telepresence robots in the field”. In: International Conference on Agents and Artificial Intelligence. Springer. 2012, pp. 433–448.
[215] Mehrnaz Sabet, Mania Orand, and David W. McDonald. “Designing Telepresence Drones to Support Synchronous, Mid-air Remote Collaboration: An Exploratory Study”. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 2021, pp. 1–17.
[216] Akansel Cosgun, Dinei A Florencio, and Henrik I Christensen. “Autonomous person following for telepresence robots”. In: 2013 IEEE International Conference on Robotics and Automation. IEEE. 2013, pp. 4335–4342.
[217] Lorenzo Riano, Christopher Burbridge, and TM McGinnity. “A study of enhanced robot autonomy in telepresence”. In: Proceedings of Artificial Intelligence and Cognitive Systems, AICS. AICS. 2011, pp. 271–283.
[218] Raja Parasuraman, Thomas B Sheridan, and Christopher D Wickens. “Situation awareness, mental workload, and trust in automation: Viable, empirically supported cognitive engineering constructs”. In: Journal of Cognitive Engineering and Decision Making 2.2 (2008), pp. 140–160.
[219] Parvaneh Rabiee. “Exploring the relationships between choice and independence: experiences of disabled and older people”. In: British Journal of Social Work 43.5 (2013), pp. 872–888.
[220] Luzheng Bi, Xin-An Fan, and Yili Liu. “EEG-based brain-controlled mobile robots: a survey”. In: IEEE Transactions on Human-Machine Systems 43.2 (2013), pp. 161–176.
[221] Peng Liu, Xianghua Ding, and Ning Gu. ““Helping Others Makes Me Happy”: Social Interaction and Integration of People with Disabilities”. In: Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing. 2016, pp. 1596–1608.
[222] Sue VG Cobb et al. “Virtual reality-induced symptoms and effects (VRISE)”. In: Presence: Teleoperators & Virtual Environments 8.2 (1999), pp. 169–186.
[223] Doil Kwon et al. “PillowVR: Virtual Reality in Bed”. In: 25th ACM Symposium on Virtual Reality Software and Technology. 2019, pp. 1–2.
[224] Yajie Zhao et al. “Mask-off: Synthesizing face images in the presence of head-mounted displays”. In: 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE. 2019, pp. 267–276.
[225] Kyle Olszewski et al. “High-fidelity facial and speech animation for VR HMDs”. In: ACM Transactions on Graphics (TOG) 35.6 (2016), pp. 1–14.
[226] Wolfram Schoor et al. “VR based visualization and exploration of plant biological data”. In: JVRB - Journal of Virtual Reality and Broadcasting 6.8 (2010).
[227] Pedro Monteiro et al. “Hands-free interaction in immersive virtual reality: A systematic review”. In: IEEE Transactions on Visualization & Computer Graphics 27.05 (2021), pp. 2702–2713.
[228] Luca Tonin et al. “Brain-controlled telepresence robot by motor-disabled people”. In: 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE. 2011, pp. 4227–4230.
[229] Anjul Patney et al. “Towards foveated rendering for gaze-tracked virtual reality”. In: ACM Transactions on Graphics (TOG) 35.6 (2016), pp. 1–12.
[230] Christine Ipsen et al. “Six key advantages and disadvantages of working from home in Europe during COVID-19”. In: International Journal of Environmental Research and Public Health 18.4 (2021), p. 1826.
[231] Pericle Salvini et al. “How safe are service robots in urban environments? Bullying a robot”. In: 19th International Symposium in Robot and Human Interactive Communication. IEEE. 2010, pp. 1–7.
[232] F Michaud et al. “Remote assistance in caregiving using telerobot”. In: Proceedings of the International Conference on Technology & Aging. 2007.
[233] Xianda Cheng et al. “Person-following for telepresence robots using web cameras”. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE. 2019, pp. 2096–2101.
A Appendix: Journal Article

A.1 Telepresence Robots for People with Special Needs: a Systematic Review

Authors: Guangtao Zhang and John Paulin Hansen
Submitted to: International Journal of Human–Computer Interaction
Telepresence Robots for People with Special Needs: a Systematic Review
Guangtao Zhang · John Paulin Hansen
Abstract Telepresence robots are increasingly used to support remote social interaction. Telerobots allow the user to move a camera and a microphone at a remote location in real time, often with a display of the user's face at the robot. These robots can increase the quality of life for people with special needs, who are, for instance, bed-bound. However, interface accessibility barriers have made them difficult to use for some people. Still, no state-of-the-art literature review has been made of research on telerobots for people with disabilities.
We used Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines for the review. Web of Science (WoS), ACM Digital Library, IEEE Xplore, PubMed, and Scopus were searched and supplemented by hand examination of reference lists. The search includes studies published between 2009 and 2019.
A total of 871 articles were identified in this review, 42 of which were eligible for the analysis. These articles were further characterized in terms of problems addressed, objectives, types of special needs considered, features of the devices, features of the solutions, and the evaluation methods applied. On the basis of the review, future research directions are proposed, addressing issues like: use-cases; user conditions; universal accessibility; safety; privacy and security; independence and autonomy; evaluation methods; and user training programs.

Guangtao Zhang
Technical University of Denmark
Kgs. Lyngby, Denmark
ORCID: 0000-0003-0794-3338
Corresponding author: [email protected]

John Paulin Hansen
Technical University of Denmark
Kgs. Lyngby, Denmark
ORCID: 0000-0001-5594-3645
The review provides an overview of existing research, a summary of common research directions, and a summary of issues which need to be considered in future research.
Keywords Telepresence · Human-robot interaction · Accessibility · Universal access · Systematic literature review
1 Introduction
The concept of telepresence [37] conceives the idea of providing a person the feeling of actually being present at a remote location. With the continuous development of robotic technology, robotic telepresence is now realised to some degree [33]. It enables the operator to be placed effectively "in-the-scene" by mapping the operator's visual, tactile, motor and cognitive functions to a remote robot [39]. Social robotic telepresence is a major field of application, providing social interaction at a distance [33].

A number of studies have put a special focus on the potential of telerobots supporting people with special needs, e.g. distant communication for patients [59], support of elderly with dementia [42,40], distant learning for home-bound students [44], caring for children with cognitive disabilities [80], and independent living for seniors [59]. User engagement with telepresence robots has been either as the local operator (teleoperator), or as a participant in social events including a telerobot. In this review, we only focus on the first role.
Fig. 1 Some commercial telepresence robots: Double [15], VGO [84], Beam [6], Giraff [23], Padbot [49].

Literature on general usages of telepresence robots has been reviewed by [33]. Telepresence robots to enhance social connectedness for older adults with dementia have been reviewed by [40]. A systematic review of research into how robotic technology can help elderly people was conducted by [63]. However, these reviews only considered telepresence robots for either typical users or for a very specific user group. Hence, there is a need for a systematic review of the recent literature on telepresence for people with special needs, including people with disabilities, seniors, and patients, who could potentially all increase their quality of life by using telepresence robots.
Consequently, the purpose of this review is to perform a systematic review and showcase the academic research work published in the telepresence robots domain for people with special needs, and to identify new areas of research.

The paper is organised as follows: Section 2 describes how telepresence robots have been used by people with special needs and states the research questions for our review. Section 3 presents the methods used for the systematic review. Section 4 describes results based on the review. Section 5 presents answers to the research questions, and suggests research directions and issues to be considered in future research. Section 6 identifies some limitations of our review. Finally, in Section 7, conclusions are drawn.
2 Telepresence robots and special needs
2.1 Telepresence robots
Telepresence robots are utilised, e.g., for collaboration between geographically distributed teams [34], at academic conferences [72], for relationships between long-distance couples [94], by people with mobility impairments [69], and for outdoor activities [27]. During the Covid-19 pandemic, telepresence robots have supported healthcare personnel by providing remote patient communication, clinical assessment and diagnostic testing [74].

Social robotic telepresence has formed the basis of several new companies introducing commercial products, like Double Robot, Beam, Giraff, Padbot, and VGO (see Fig. 1).
Different terms have been used for telerobots in the research literature, for instance: remote presence system [71], mobile robotic telepresence system [33], virtual presence robots and remote presence robots [11]. They are often explained as a video conferencing system mounted on a mobile robotic base [33], with a popular phrasing being "skype on wheels" [9].

Commercial products usually feature: i) two-way audio and video communication between remote parties; ii) a video screen where the operator's face image is shown; and iii) mobility controls for the path of motion, of which the two-way video communication feature is considered essential [14].
Communication via a telepresence robot can be defined as robot-mediated communication [73]. Using telepresence robots involves different types of interactions occurring simultaneously [33]. When using the robots for social interaction, besides the human-robot interaction mentioned above, human-robot interaction also occurs between the remote robot and the remote persons. The interaction between local users and remote persons is human-human interaction via the robot.
We define telepresence robots as robotic devices by which an operator can overcome physical distance for the purpose of telepresence. In this review we include commercial products, experimental (i.e. prototype) robots made for research, and commercial telerobots modified for special needs.
2.2 Accessibility and universal access
Accessibility describes the degree to which an environment, service, or product allows access by as many people as possible, in particular people with disabilities [91]. Accessibility and high quality of interaction for everyone, anywhere, and at any time are fundamental requirements for universal access [68]. Universal accessibility is typically considered for special populations including seniors and people with disabilities [79].
2.3 Telepresence robots and special needs
More than 190 million persons worldwide are estimated to live with disabilities [91]. Auditory disabilities, motor disabilities, and cognitive disabilities are the main types of disabilities, while 39 million persons are classified as legally blind [54]. In particular, people with motor disabilities may benefit from telepresence robots to overcome mobility problems, especially those with severe motor disabilities, for instance cerebral palsy [80] or Amyotrophic Lateral Sclerosis (ALS/MND) [18]. The most commonly used control method for telerobots is hand control. However, motor impairments may limit the use of hands, causing, for instance, limited gripping, fingering or holding ability. Visual and auditory sensing also play important roles in the experience of telepresence [14]. Impairments of sensing may have severe negative impacts on the experience of telepresence, or even make it impossible to experience the remote place.
Telepresence robots have been used by families and caregivers to support remote relationships with children who have cognitive challenges from Autism Spectrum Disorder or Cerebral Palsy [80], and have been used as communication tools for people with dementia [42]. However, due to their cognitive challenges, independent use of telepresence robots can be problematic [79]. Finally, elderly people and hospitalised or home-bound patients have been considered as beneficiaries of telerobots.
Telepresence robots may have particular impact on hospitalised or home-bound children, who experience not only poorer health but also limited opportunities for education [82]. Likewise, children with disabilities experience poorer health, limited opportunities for education, and greater inequalities than children without disabilities [82]. Mobility problems may lead to psychological problems, for instance feelings of emotional loss, reduced self-esteem, isolation, stress, and fear of abandonment [20]. Overcoming part of a mobility problem may provide new daily opportunities, reduce dependence on caregivers and family members, and promote feelings of self-reliance [64]. Telepresence robots thus have the potential to improve their quality of life, supporting social activities and building networks to others. For instance, telepresence technology can reduce loneliness among older adults with mobility impairments, supporting their ageing-in-place while remaining socially connected to friends and family [92].
2.4 Research Questions
A systematic literature review is performed to collect, comprehend, analyze, synthesize, and evaluate recent relevant literature. The seven research questions addressed are:

(1) Which relevant studies have been published within the last 10 years?
(2) What are the special user conditions considered for the design of telepresence robots?
(3) What are the use-cases addressed for people with special needs?
(4) Are current telepresence robot systems accessible for people with special needs?
(5) How have telepresence robots for people with special needs been evaluated?
(6) What are the potential impacts on quality of life for the proposed solutions?
(7) What should be addressed in future research?
3 Method
The methodology used for this systematic literature review is detailed in the following sections. We conducted and reported the review according to the Preferred Reporting Items for Systematic Reviews and Meta-analyses Statement (PRISMA) [38]. A flowchart of the process is shown in Fig. 2.
3.1 Search terms
Based on our research objectives, we defined the search terms to match the research questions. The topics of the terms chosen were the purpose (i.e. telepresence) and the possible fields of use (i.e. accessibility, inclusive design, universal design, special needs, disabilities, hospital, care home). Instead of the specific term "telepresence robots", a broader database-specific search term "telepresence" was used.
Fig. 2 PRISMA flow diagram illustrating the number of reviewed articles through the different phases.
- Identification: records identified through database searching (n = 866: Web of Science = 248, Scopus = 309, PubMed = 69, ACM DL = 124, IEEE Xplore = 116); additional records identified through other searching (n = 5).
- Screening: records after duplicates removed (n = 347); records screened (n = 270); records excluded (n = 77): not robotics, abstract only, other language.
- Eligibility: full-text articles assessed for eligibility (n = 83); articles excluded (n = 187), with reasons: review, not telepresence robots, non-human study, not as robot operators, not people with special needs, acceptance study, hypothetical study, robot used for other purposes than social connection or independent living; further articles excluded (n = 41) for the above mentioned reasons.
- Included: studies included in qualitative synthesis (n = 42).
The expression used in the search process was:

(accessib* OR assistive OR inclusion OR "inclusive design" OR "universal design" OR "special needs" OR disabilit* OR impair* OR deficit OR ill* OR hospital OR "care home") AND (telepresence OR tele*presence)
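The expression above can also be composed programmatically from its two term groups; the sketch below is illustrative only (the `build_query` helper and variable names are ours, and each database has its own field tags and syntax quirks):

```python
# Sketch: compose the review's boolean search expression from its two
# term groups. Term lists are copied from the text; the helper name
# and structure are illustrative, not part of the review's tooling.
USE_TERMS = [
    'accessib*', 'assistive', 'inclusion', '"inclusive design"',
    '"universal design"', '"special needs"', 'disabilit*',
    'impair*', 'deficit', 'ill*', 'hospital', '"care home"',
]
PURPOSE_TERMS = ['telepresence', 'tele*presence']

def build_query(group_a, group_b):
    """OR-join each group, then AND the two groups together."""
    return f"({' OR '.join(group_a)}) AND ({' OR '.join(group_b)})"

query = build_query(USE_TERMS, PURPOSE_TERMS)
print(query)
```

The same helper would serve any two-group query; per-database adaptations (e.g. topic-field prefixes) can be layered on top of the returned string.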
3.2 Identification of databases and search engines
The search engines and bibliographic databases selected should be prestigious [2] and cover both healthcare and technical-scientific literature. Therefore, IEEE Xplore, ACM Digital Library (ACM DL), PubMed, WoS, and Scopus were chosen. We searched the databases in December 2019.
3.3 Inclusion and Exclusion Criteria
As we focused the systematic review on the state of the art, we established the period from the last decade (2009 to 2019). The number of articles after removing duplicates was 342 (see Fig. 2).

The articles were screened for eligibility in two phases. In the first phase of filtering, we excluded results which had only an abstract. Since we used the broad term telepresence in our search, we also excluded the articles which did not include any robot or robot-like devices. The number of articles after filtering was 265. In the second phase, we filtered out 78 articles from the 265. This was done by one author going through titles, keywords, and abstracts. If eligible, the full text of the articles was retrieved and reviewed. The following exclusion criteria were met: (1) Not telepresence robots; (2) Non-human study; (3) Review; (4) Robots with other purposes; (5) Not for people with special needs; (6) Acceptance study; (7) Hypothetical study; (8) Robot used for other purposes than social connection or independent living.

The resulting list of 39 articles was further critically investigated by both authors separately. The results were then discussed and 2 articles were finally excluded by agreement of both authors, ending with a total of 37 articles. By snowballing from the reference lists of the papers selected, we found 5 more key papers for our review, which were considered as additional records. Finally, a total of 42 articles (see Table 1) were included.
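As a simple sanity check, the final inclusion count follows from the two sources of records described above (the variable names below are ours; only the arithmetic mirrors the text):

```python
# Bookkeeping sketch for the final inclusion step: database records kept
# after full-text review plus records found by reference-list snowballing.
database_included = 37  # kept after both authors agreed on exclusions
snowballed = 5          # additional records from reference lists
total_included = database_included + snowballed
print(total_included)   # → 42, matching Table 1
```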
Table 1 Papers selected for analysis.

Title | Types of Special Needs | Ref.
A cloud robotics system for telepresence enabling mobility impaired people to enjoy the whole museum experience | Motor disabilities | [46]
A Step towards a Robotic System With Smartphone Working As Its Brain: An Assistive Technology | Motor disabilities | [61]
A study to design VI classrooms using virtual reality aided telepresence | Homebound children with disabilities | [28]
A Telepresence Mobile Robot Controlled With a Noninvasive Brain-Computer Interface | Motor disabilities | [3]
A telepresence robotic system operated with a P300-based brain-computer interface: Initial tests with ALS patients | Motor disabilities | [18]
Accessible Control of Telepresence Robots based on Eye Tracking | Motor disabilities | [95]
Accessible Human-Robot Interaction for Telepresence Robots: A Case Study | Motor and cognitive disabilities | [77]
An Eye-gaze Tracking System for Teleoperation of a Mobile Robot | Motor disabilities | [10]
Assistant Personal Robot (APR): Conception and Application of a Tele-Operated Assisted Living Robot | Elderly | [12]
Brain-Computer Interface Meets ROS: A Robotic Approach to Mentally Drive Telepresence Robots | Motor disabilities | [7]
Brain-controlled telepresence robot by motor-disabled people | Motor disabilities | [75]
Comparison of SSVEP BCI and Eye Tracking for Controlling a Humanoid Robot in a Social Environment | Motor disabilities | [30]
Design and Optimization of a BCI-Driven Telepresence Robot Through Programming by Demonstration | Motor disabilities | [1]
Designing speech-based interfaces for telepresence robots for people with disabilities | Cognitive and/or motor disabilities | [78]
Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI | Motor disabilities | [66]
EEG-Based Mobile Robot Control Through an Adaptive Brain-Robot Interface | Motor disabilities | [22]
Effect of a Click-Like Feedback on Motor Imagery in EEG-BCI and Eye-Tracking Hybrid Control for Telepresence | Motor disabilities | [57]
Evaluation of an Assistive Telepresence Robot for Elderly Healthcare | Elderly | [31]
Eye-Gaze-Controlled Telepresence Robots for People with Motor Disabilities | Motor disabilities | [98]
Gaze-controlled Laser Pointer Platform for People with Severe Motor Impairments: Preliminary Test in Telepresence | Motor disabilities | [56]
Going to school on a robot: Robot and user interface design features that matter | Homebound children | [3]
Hand- and gaze-control of telepresence robots | Motor disabilities | [97]
Hands-free collaboration using telepresence robots for all ages | Elderly | [32]
Head and Gaze Control of a Telepresence Robot with an HMD | Motor disabilities | [24]
Human-robot cooperation through brain-computer interaction and emulated haptic supports | Motor disabilities | [48]
Measuring Benefits of Telepresence Robot for Individuals with Motor Impairments | Motor disabilities | [93]
Mobile Robotic Telepresence Solutions for the Education of Hospitalized Children | Hospitalised children | [65]
My Student is a Robot: How Schools Manage Telepresence Experiences for Students | Homebound children | [44]
Navigation of a Telepresence Robot via Covert Visuospatial Attention and Real-Time fMRI | Motor disabilities | [4]
Real World Haptic Exploration for Telepresence of the Visually Impaired | Visual disabilities | [51]
Real-time Haptic Rendering and Haptic Telepresence Robotic System for the Visually Impaired | Visual disabilities | [52]
Robotics-based telepresence using multi-modal interaction for individuals with visual impairments | Visual disabilities | [53]
Social robots helping people with dementia: Assessing efficacy of social robots in the nursing home environment | Elderly (with dementia) | [41]
Telepresence heuristic evaluation for adults aging with mobility impairment | Elderly | [92]
Telerobotic haptic exploration in art galleries and museums for individuals with visual impairments | Visual disabilities | [54]
The role of shared-control in BCI-based telepresence | Motor disabilities | [76]
Towards designing telepresence robot navigation for people with disabilities | Cognitive and/or motor disabilities | [79]
Towards Independence: A BCI Telepresence Robot for People With Severe Motor Disabilities | Motor disabilities | [36]
Transferring brain-computer interfaces beyond the laboratory: Successful application control for motor-disabled users | Motor disabilities | [35]
Using a Telepresence Robot to Improve Self-Efficacy of People with Developmental Disabilities | Cognitive disabilities | [21]
Virtual inclusion via telepresence robots in the classroom | Homebound children | [43]
Virtual inclusion via telepresence robots in the classroom: An exploratory case study | Homebound children | [45]
Fig. 3 Distribution of selected papers by their publication years.
3.4 Data Analysis
The articles on the final list were characterized in terms of problems addressed, research objectives, types of user conditions, features of devices, features of solutions proposed, and methodology of the study. Data were extracted from the articles into one separate table by the first author in a predetermined format validated by the second author.
4 Results
A small majority of the articles were conference proceedings (24 papers, 57%) compared to journal articles (18 papers, 43%). There is a slight increase in the number of papers per year over the time period investigated. No publications were found from 2009, and the year with the highest number of publications was 2018 (8 papers, 19%). The number of publications in 2019 until the last date of searching in December was five.
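The reported shares can be checked directly (counts from the text; variable names are ours, and percentages are rounded to whole numbers as in the paper):

```python
# Verify the publication-type proportions reported above.
conference_papers = 24
journal_articles = 18
total = conference_papers + journal_articles  # the 42 reviewed papers
conf_pct = round(100 * conference_papers / total)
jour_pct = round(100 * journal_articles / total)
print(conf_pct, jour_pct)  # → 57 43
```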
Barriers and challenges of using telepresence robots due to specific disabilities were the most common problem statements (e.g. [56,52,51,79]). Two goals were most commonly mentioned as research goals: i) to improve the quality of life of target users with special needs (e.g. [76,7,1,77]); and ii) to increase the independence of the target users (e.g. [36]).
Six of the identified articles have received 60 or more citations. Five of these focused on brain-computer interfaces (BCI) for people with motor disabilities, and one focused on elderly healthcare.
4.1 Consulted source
Regarding the source of the papers selected, 61% of papers were found from ACM DL and IEEE Xplore. PubMed had 11 papers (26%), including 5 papers that also appeared in IEEE Xplore. After exclusion of all the papers from ACM DL and IEEE Xplore, a total of 7 papers (17%) were found from Scopus and WoS. The reference lists of all these 37 papers were examined and we found an additional 5 papers (12%) using the same inclusion criteria (see Fig. 2).
All papers from IEEE Xplore focus on accessibility and disabilities, especially motor disabilities. Most of these papers proposed a novel control technique or new sensing methods. PubMed also covered 5 papers from IEEE which focused more on medical and health aspects, like hospitalized users or the use of special medical devices for control, for instance a functional magnetic resonance imaging (fMRI) device [4].
Regarding the source of journals and conferences, the selected papers were distributed in proceedings of 16 conferences and 18 journals. The conferences were mainly organised by ACM and IEEE (15 conferences). Among them, the vast majority of the ACM conferences were organised by the Special Interest Group on Computer–Human Interaction (SIGCHI) (8 papers) focusing on human-robot interaction, and one paper was presented in the Special Interest Group on Accessible Computing (SIGACCESS). The journal articles were distributed in a number of different journals, related to their specific topics.
4.2 Special needs addressed
Fig. 4 shows the distribution of types of special needs addressed. Disability was the most frequently considered user condition (31 papers, 74%). Motor disability (26 papers, 77%) was the most common user condition among the disability-related papers, while 4 papers focused on visual disabilities and 3 focused on cognitive disabilities. Most of them focused on just one type of disability, while two of them [78,79] addressed a combination of motor and cognitive disabilities.

Eleven papers target a particular age group, namely children (6 papers) and elderly (5 papers). Regarding the children group, one of the papers focuses on children with disabilities in general and the other five focus on home-bound or hospitalised children. For the elderly group, all papers are motivated by common problems faced by seniors aging-in-place; one paper focuses specifically on dementia.
Fig. 4 Distribution of selected papers by types of special needs addressed: motor disabilities (24), visual disabilities (4), cognitive disabilities (1), motor and cognitive disabilities (2), children (homebound) (4), children (homebound with disabilities) (1), children (hospitalised) (1), elderly (4), elderly (with dementia) (1).
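The category counts in Fig. 4 can be tallied to confirm they cover all 42 papers (labels transcribed from the figure; the `Counter` is just a convenient tally, not part of the review's tooling):

```python
from collections import Counter

# Counts read off Fig. 4.
special_needs = Counter({
    "motor disabilities": 24,
    "visual disabilities": 4,
    "cognitive disabilities": 1,
    "motor and cognitive disabilities": 2,
    "children (homebound)": 4,
    "children (homebound with disabilities)": 1,
    "children (hospitalised)": 1,
    "elderly": 4,
    "elderly (with dementia)": 1,
})
print(sum(special_needs.values()))   # → 42
print(special_needs.most_common(1))  # largest group: motor disabilities
```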
Figure 5 indicates a slight increase in the number of papers addressing disabilities, especially motor disabilities.
The application areas were mainly social interaction, social communication, and social engagement in general, mostly in the three main types of scenarios mentioned above. The application area of all selected papers about children was using the robots for education. One of the authors defined virtual inclusion in this use case as an educational practice that allows a student to attend school through a mobile robotic telepresence system in such a way that the student is able to interact with classmates, teachers, and other school personnel as if the student were physically present [45].
4.3 Types of Research
Applying the classification scheme suggested by [87], the papers selected can be characterized as: i) evaluation research (14 papers); ii) proposal of a solution (6 papers); or iii) validation research (22 papers). Some of the selected papers (14 papers) only focused on the evaluation of existing products (mainly commercial products). The other papers selected (28 papers) propose novel control methods and perform evaluation of the methods. If the evaluation was conducted in a lab environment, they are considered to be validation research (22 papers); the other papers (6 papers) performed the evaluation in a real environment and are considered to be a proposal of a solution. Validation research (53%) is thus found to be larger than the other two types.
4.4 Devices and Hardware
The robots used in the selected papers may be classified into three categories according to their features: i) experimental (i.e. prototype) robots; ii) commercial robots; and iii) adapted commercial robots. Robots mentioned in the studies included commercial telepresence robots (like VGO [78], Double [3], Padbot [97]) and other types of robots for adaptation (like LEGO Mindstorms NXT [48], NAO [1], Pepper [7], Robotino by FESTO [4]).
All of the research papers addressing children applied commercial robots without adaptation, while most of the robots used in studies addressing disabilities were modified commercial robots or experimental robots. All three types of robots were used in studies regarding elderly people.

Except for a stationary robot used in [32], all of the robots featured mobility. Most of them are wheeled (38 papers), while 3 studies examined walking humanoid robots.
Camera features were not reported in a majority of the selected papers, or they just mentioned that a webcam or a notebook-integrated webcam was used (e.g. [78]). A few papers reported that they used a 360-degree camera [24,28,95,98], an HD camera [56], a pan/tilt camera [46], or a stereo camera [51]. For studies of people with visual disabilities, the camera was replaced by an RGB-D sensor (e.g. Kinect) [51,52,53,54].
Details about microphones or loudspeakers were not given in any of the selected papers. Commercial telerobots usually show the operator's face on an LCD display carried by the robot. Some of the experimental robots (e.g. in [41]) also provide a display, while a few papers state that the robot did not have a display. Information about the possibility to mute the displays or microphones was not given in any of the selected papers. Most of the commercial robots featured obstacle detection and avoidance (e.g. Double, VGo and Padbot).
A camera in front of the pilot is needed to display the operator's face image in the remote environment. Usually a webcam or a notebook-integrated webcam was used for this purpose (e.g. [36]). An LCD display was applied in the majority of studies in order to show the live video stream transferred from the remote space. Head-mounted displays were used in a few of the selected papers [28,95,98,97], usually connected with a 360-degree camera and sometimes used in combination with other built-in sensors for gaze or head interaction. When the pilot uses an HMD it becomes impossible to show a live face image on the telerobot, because the HMD covers the face. This had a negative impact
8 Guangtao Zhang, John Paulin Hansen
Fig. 5 Distribution of selected papers by years (2009–2019) and types of special needs addressed (motor, visual, cognitive, elderly, children).
on the user experience reported by [96]. In all papers addressing visual disabilities [51,52,53,54], the display was replaced with a tactile device to sense the remote environment by hand.
Control devices varied depending on the control methods and robots used (see Fig. 6). Hand control was the most common one, performed with mouse and keyboard or via a touch screen [43,3,92]. Other control devices included BCI electrodes, fMRI, VR headsets (with built-in eye trackers), screen-based eye trackers, robot-integrated microphones, and smartphones with accelerometers. Haptic devices were used for people with visual disabilities [51,52,53,54].
BCI was the most commonly used method in the selected studies addressing motor disabilities (14 papers). It does not rely on the brain's normal output channels of peripheral nerves and muscles. Therefore, it may become a valuable communication channel for people with motor disabilities, especially at severe levels such as ALS, brain-stem stroke, cerebral palsy, and spinal cord injury.
Eye-tracking-based methods were used in 31% of the studies focused on motor disabilities. Existing eye trackers are not cumbersome, and the increased accuracy of eye-tracking equipment makes it feasible to utilize this technology conveniently for users with motor disabilities. Besides screen-based eye trackers, some commercial head-mounted display (HMD) models with built-in eye trackers were used (see Fig. 6). A solution combining eye-tracking with BCI [57] has the potential to solve a common problem of how to get user intention
Fig. 6 Distribution of selected papers by devices used for controlling: electrodes for BCI (13); fMRI for BCI (1); screen-based eye tracker (3); VR headset with built-in eye trackers (5); integrated microphone (3); haptic device (4); touch screen/mouse and keyboard (16); smartphone accelerometer (1).
correctly when using eye-tracking-based user interfaces (UI) only. Eye-tracking was combined with head detection in [24]. Many partly paralyzed patients have preserved head movement, although they lack control over the rest of the body [30].
4.5 User Interface
None of the papers used fully automatic telepresence robots; hence, they all required some degree of human piloting. Consequently, the target users were required to be the pilot. Some robots did not provide assistance
Control method                                    Papers   %
For motor disabilities
  BCI                                             13       31%
  eye tracking                                    6        14%
  eye tracking and BCI                            1        2%
  eye tracking and head detection                 1        2%
  head detection                                  1        2%
  speech-based                                    3        7%
  hand                                            2        5%
For visual disabilities
  hand (via a haptic module)                      4        10%
For cognitive disabilities
  hand                                            1        2%
  speech (for motor and cognitive disabilities)   2        5%
For children
  hand                                            5        12%
  hand and head detection                         1        2%
For elderly
  hand                                            4        10%
  head detection                                  1        2%

Table 2 Distribution of selected papers by control methods used.
for navigation [51,97,98], while most of the commercial products (e.g. [78,92]) and the robots used in [18,17,41,12] offered features such as obstacle detection, obstacle avoidance or semi-autonomous driving (e.g. [46]).
As listed in Tab. 2, control methods varied depending on the types of special needs. For the groups of elderly users [92,41,12,31] and children [43,3,65,45], hand control was used. Two of these studies [28,32] applied head-movement detection to change the field of view.
Motor disability was the user condition that most of the alternative control methods were proposed for, mainly to support navigation tasks. In total, 26 papers addressed motor disabilities, of which 2 papers focused on both cognitive and motor disabilities. In addition to the hand control method, a number of other methods were proposed, cf. Tab. 2. As previously mentioned, BCI was the most common alternative method (14 papers). In addition, eye-tracking was considered in 8 papers. Speech-based control methods were used in 3 papers, of which two [78,79] addressed both cognitive and motor disabilities. Hand control via a haptic module was used in the four papers focusing on visual disabilities [52,53,30,54].
A large number of the selected papers focused on motor disabilities (see Fig. 4) and used different types of body movements and physiological signals as input; their UIs belong to the natural user interface category. All the solutions proposed for visual disabilities utilised haptic devices for sensing and control. Thus, their UIs belonged to the category of tangible user interfaces (4 papers).
The control methods were mainly for navigation tasks or for changing the field of view. Additional control tasks, for instance adjusting the height of robots, muting a microphone, or adjusting the volume of the loudspeaker, were not mentioned in any of the papers.

Target users as participants      number   %
None (other participants only)    18       43%
Both target users and others      5        12%
Target users only                 17       40%
Not mentioned                     2        5%

Table 3 Distribution of types of participants.

Test environment      number   %
Lab                   22       52%
Realistic scenario    19       45%
Not mentioned         1        2%

Table 4 Distribution of test environments.
4.6 Design and evaluation methods
The most common interaction design approach was user-centered design (UCD), taken by e.g. [22]. Participatory design was also reported by [78,79]. The evaluation research papers which focused on usability issues for hospitalised children (e.g. [65]) or elderly (e.g. [31]) considered specific use cases, for instance education [44] and social interaction [41]. Papers conducting a validation or proposing a solution mainly focused on exploring a novel approach for a specific type of disability. Their research questions usually focused on challenges or uncertainties when using the new method.
Table 3 reveals that 18 of the selected papers did not involve target users. In the studies that did include participants from the target group, three of them [35,28,3] had more than 5 participants, namely 17, 41 and 22, respectively.
Table 4 shows that the number of studies conducted in a lab setting was slightly larger than those in a realistic scenario. The realistic test environments were education scenarios (e.g. classrooms [43]); cultural sites (e.g. a museum [46]); and health care environments (e.g. care homes [41] and hospitals [65]). Papers focusing on special needs addressed independent use in general and did not explore specific application areas, except for a few papers addressing an art gallery scenario [77], museum and archaeological sites [46], or virtually inclusive classrooms [28].
The majority of evaluation techniques used in the selected papers were experiments (74%). Case studies (i.e. [77,16,19,45]), heuristic evaluation [92], and interviews (e.g. [44]) were also occasionally used.
The main tasks for evaluation were navigation tasks (e.g. in [61,51,97]). Task completion time was commonly used as a metric for evaluation [36,54,61,97,53]. NASA
TLX [25] was applied in a few studies [36,97,21]. The number of collisions was counted by [31,76,97,98]. Descriptive analyses, like mapping of robot movements, were presented by [77,35,97], and heat maps of the robots' movement trajectories were made by [77]. The Technology Acceptance Model (TAM) questionnaire was also used in a study with senior participants [70].
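Heat maps of movement trajectories of this kind can be approximated by binning logged robot positions into a two-dimensional occupancy grid. The sketch below is a minimal illustration of the idea; the room size, grid resolution, and function name are assumptions and not details taken from the reviewed papers.

```python
def trajectory_heatmap(positions, room_size=(10.0, 10.0), bins=20):
    """Bin logged (x, y) robot positions into a bins x bins grid.

    positions: iterable of (x, y) samples in metres.
    room_size: (width, height) of the remote space (assumed values).
    Larger counts mean the robot spent more samples in that cell.
    """
    width, height = room_size
    grid = [[0] * bins for _ in range(bins)]
    for x, y in positions:
        # Clamp so points on the far edge land in the last cell.
        col = min(int(x / width * bins), bins - 1)
        row = min(int(y / height * bins), bins - 1)
        grid[row][col] += 1
    return grid

# Example: a robot driving diagonally across a 10 m x 10 m room.
path = [(t * 0.1, t * 0.1) for t in range(100)]
grid = trajectory_heatmap(path)
```

Cells with high counts mark places where the robot lingered, which is the kind of descriptive analysis the reviewed studies reported.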
Social telepresence interaction occurs between a local target user and the remote site [33]. However, all of the studies focused only on the local part in their evaluation.
Visual and auditory experiences play essential roles in the experience of telepresence, but this was ignored in the evaluations, and information about the devices used for social interaction (e.g. cameras, microphones, and loudspeakers) was missing in the selected papers.
Three main areas for future work were suggested by some of the papers: i) to involve target users in the research [17,54,97,56]; ii) to test the proposed application in a more challenging real-case scenario [46]; and iii) to improve the system [51,46,61].
5 Discussion
In this section, research directions and research considerations are discussed.
5.1 Goals and use-cases
Most of the selected papers on children had a common goal of virtual inclusion [43] via telepresence robots. This goal could also be addressed by future research for other groups of people with special needs.
Most studies proposing new solutions for people with disabilities did not focus on a specific application area, and there was a lack of evaluations with our target users outside laboratory environments. To address the needs and problems faced by the target users, more application possibilities could be explored in future research, like the use of telepresence robots for social inclusion in education and working environments, shopping, family communication, and various indoor and outdoor leisure activities. This is evident in a study on the elderly [77], which showed that users wanted to apply telepresence robots to attend concerts or sporting events, and to visit museums or theatres.
5.2 User conditions
Two-way audio and video communication is considered an essential feature [14], and visual and auditory experience is vital to the telepresence experience. However, people with hearing and speech disabilities have not yet been addressed. Except for [28,78,79], all the selected papers focused on only one type of condition. Research restricted to only one condition of special needs may lead to the problem that a group of users with multiple special needs gets ignored.
5.3 Methods and Solutions
Most often, research proposing new solutions for disabilities has been restricted to limited comparisons of the different proposed methods. The user condition of motor disabilities is a good illustration of this problem. While different types of solutions have been proposed (see Tab. 2), only one study [30] provided a comparison of a BCI-based method and an eye-tracking-based method, apart from the comparisons of a proposed method to hand control as a baseline found in [10,97]. Therefore, more comprehensive studies comparing methods are needed in future research.
As mentioned previously, user conditions with multiple special needs have not been addressed, and more solutions need to be explored for people with multiple special needs. The novel solutions presented in the selected papers were usually based on mapping body movements or physiological signals to corresponding control inputs to the robotic system, or on converting visual signals into tactile signals. However, many people with disabilities already have their own preferred assistive devices (e.g. eye trackers or speech recognition devices) that they use for other purposes. Future research could explore how to integrate existing assistive devices with telepresence robots seamlessly and easily. Moreover, multimodal interaction [81] may be explored by combining different input methods.
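To illustrate what a mapping from body signals to control inputs can look like, the sketch below converts a gaze point on the pilot's video feed into linear and angular velocity commands. It is a hypothetical minimal example; the parameter values, dead-zone size, and function name are assumptions rather than the design of any reviewed system.

```python
def gaze_to_velocity(gaze_x, gaze_y, screen_w, screen_h,
                     max_linear=0.5, max_angular=1.0, dead_zone=0.1):
    """Map a gaze point on the video feed to robot velocity commands.

    Looking above the screen centre drives forward, below drives
    backward; looking left or right turns the robot. All parameter
    values are illustrative assumptions.
    """
    # Normalise to [-1, 1] with (0, 0) at the screen centre.
    nx = 2.0 * gaze_x / screen_w - 1.0
    ny = 1.0 - 2.0 * gaze_y / screen_h  # screen y grows downward

    if abs(nx) < dead_zone and abs(ny) < dead_zone:
        return 0.0, 0.0  # gaze near the centre: stand still

    linear = max_linear * ny     # m/s, forward/backward
    angular = -max_angular * nx  # rad/s, positive = turn left
    return linear, angular

# Example: gaze near the top centre of a 1920x1080 feed drives forward.
v, w = gaze_to_velocity(960, 108, 1920, 1080)
```

A dead zone around the screen centre lets the user inspect the video stream without moving the robot, addressing the well-known "Midas touch" problem of gaze interaction.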
5.4 Devices
The costs of typical commercial robots used in the selected studies are relatively high. This is evident in the case of using VGo ($6,000) and Double ($2,499). A study estimated that the deployment of a home-to-school mobile robot telepresence solution came at a cost of $3,100 to $3,300 [65]. A few robots featured a humanoid appearance, and their price was significantly higher. An example of this is the Engineered Arts RoboThespian ($59,000) used by [30]. It is notable that the inexpensive Raspberry Pi ($35 to $75) was widely used in experimental telerobots [56,57,24,97]. Cost of devices should be considered in future research, and it is highly relevant to develop low-cost telerobots for people with
special needs who cannot afford the high-end solutions offered today.
Beam Pro, VGo and Padbot do not allow for adjustment of the face display height. The possibility to adjust height is an important feature for our target users and their social connections [58] in order to communicate at the same eye level, and may be considered a fundamental requirement when designing telepresence robots with a concern for the dignity of the user. Moreover, the impact of robot height may be explored in field studies of how remote participants relate to the user of a telerobot.
Most of the robots in the selected papers featured wheels for driving. However, this may limit their mobility range to flat surfaces. Walking humanoid robots have been presented in a few studies, and drone-like telepresence robots [85] may be further explored for people with special needs.
5.5 Universal Access
Universal access [68] needs to be addressed in future research and in the design of telepresence robots. It is important to explore how to combine telepresence robots easily with other assistive technologies, for instance gaze or head input. From a system development perspective, application programming interfaces (APIs) would be needed to provide this. One of the selected papers [46] presents a platform with an API manager included. Future work on APIs for telerobots may pave the way for an easy adaptation to different user needs. From a hardware perspective, possibilities of connecting the robots with existing assistive devices need to be considered in the design process. It should be possible for the target users to use their own input device when piloting a telepresence robot.
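One way such an API could support adaptation is a small adapter interface that every assistive input device implements, so the robot-side code never needs to know which device is attached. This is a hypothetical sketch of the idea, not an API from any of the reviewed platforms; all names and behaviours are assumptions.

```python
from abc import ABC, abstractmethod

class InputAdapter(ABC):
    """Any assistive input device (eye tracker, switch, speech, BCI)
    implements this interface and can then pilot any telerobot that
    consumes (linear, angular) velocity commands."""

    @abstractmethod
    def read_command(self):
        """Return (linear_velocity, angular_velocity) in m/s and rad/s."""

class SwitchAdapter(InputAdapter):
    """Single-switch input: the switch toggles slow forward driving
    on and off (illustrative behaviour only)."""

    def __init__(self):
        self.driving = False

    def toggle(self):
        self.driving = not self.driving

    def read_command(self):
        return (0.2, 0.0) if self.driving else (0.0, 0.0)

# The telerobot control loop only sees the interface, not the device:
device = SwitchAdapter()
device.toggle()
linear, angular = device.read_command()
```

New devices (an eye tracker, a speech recognizer, a BCI decoder) would then plug in by implementing read_command(), with no changes on the robot side.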
5.6 Safety
In human-robot interaction, safety is crucial [83]. How to safely operate the device without doing damage to other people or to the environment is important [92]. The most common accident cause is a collision in the remote environment [31,76,97].
Some of the telerobots weigh around 10 kg (e.g. [46]), which can be dangerous in case of a collision with humans. It is still an open issue how a telepresence robot should balance the user's movement commands against safety in an environment crowded with people [77]. Also, damaging interiors in the remote environment can be costly.
A heuristic evaluation by [92] suggested that the base of the system should be stable, sturdy, and have some free distance from the floor. An unstable or lightweight base made it difficult to drive over normal thresholds like a doorstep, and when driving on slightly uneven surfaces the lightweight robot wobbled and in some cases toppled over.
Obstacle detection and avoidance are important for safety reasons and may be essential for some target users, for instance people with cognitive disabilities. However, only a few studies mentioned this [79,41,46,92,7].
When navigating a telepresence robot, the definition of an obstacle or a target is not absolute [36]. An object in the remote environment could be considered an obstacle to avoid, or it could be the target the user wants to get close to. How to efficiently provide the user with the information from collision-avoidance sensors needs to be studied as well. Too much information from the sensors can overwhelm the operator and be counterproductive [14].
New safety mechanisms may be considered in the future, for instance adaptable speed [31] and auto-stop in the case of loss of network coverage. In a home setting, functionalities like stair detection should also be considered in future research [92].
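An auto-stop on loss of network coverage can be realized as a simple heartbeat watchdog between the pilot's client and the robot. The sketch below is an illustrative assumption about how such a mechanism could work; the timeout value, class name, and interface are not taken from any reviewed system.

```python
import time

class ConnectionWatchdog:
    """Auto-stop safety mechanism: if no network heartbeat arrives
    within `timeout` seconds, velocity commands are replaced by a
    full stop. Timeout value is an illustrative assumption."""

    def __init__(self, timeout=0.5):
        self.timeout = timeout
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Call whenever a packet from the pilot arrives."""
        self.last_heartbeat = time.monotonic()

    def gate(self, linear, angular):
        """Pass commands through only while the link is alive."""
        if time.monotonic() - self.last_heartbeat > self.timeout:
            return 0.0, 0.0  # link lost: stop the robot
        return linear, angular

wd = ConnectionWatchdog(timeout=0.5)
wd.heartbeat()
cmd = wd.gate(0.3, 0.0)  # link alive, so the command passes through
```

Because the robot's control loop calls gate() on every command, a dropped connection degrades to a standstill rather than the robot continuing on its last command.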
5.7 Simulated environments
Simulated environments can be applied in future research for economic and safety reasons. The potential of using such simulation environments for teleoperation training was demonstrated by [55,86]. A simulated environment has also been used to help children with disabilities learn how to use a powered wheelchair [26]. Moreover, these simulation environments can be used for user studies [5,89].
There are particular challenges when conducting evaluation studies in VR wearing HMDs. Requiring participants to leave and re-enter the virtual environment displayed by the HMDs costs time and can cause disorientation [62]. Moreover, "break in presence" (BIP) problems [29] happen when users have to remove the HMDs and complete questionnaires. New solutions have been proposed to address this problem. An example of this is the study carried out by [62], which enabled measurement of presence while users were wearing HMDs: the presence questionnaires (e.g. [90]) were presented in the virtual environment, and the users could complete them directly there. Another possibility is to measure situation awareness while users are driving with the VR headset on [97]. Measurement of important metrics (e.g. situation awareness, presence) with specific devices like HMDs needs to be considered in future studies with the target users.
5.8 Privacy and Security
Privacy and security are important issues in the deployment of telepresence robots in realistic scenarios [47,44]. The privacy and security of both local users and remote users need to be taken into consideration. Some problems regarding these issues have been reported in the selected papers. For example, one system provided the pilot user with a screenshot feature of the remote environment or participants, but no notification was given to the remote participants, nor any possibility for them to turn off this functionality [92]. Privacy for home-bound students was also mentioned by [44], who recommended placing the operator's computer at a location that would not violate the household's privacy. Our target users typically have caregivers and/or families to assist with the robots. They also need to fulfill all the requirements related to the remote site they are visiting, for instance following the school guidelines for a parent volunteer in the classroom.
Most of the commercial products ensure privacy and security through log-in passwords, encrypted links, and by preventing video or audio recording. However, the privacy of the local environment and the remote environment can potentially still be violated [44]. The commercial telepresence robots are usually off-the-shelf robots with a connecting service provided by the company. Hence, privacy can never be totally guaranteed, and the systems may be hacked.
5.9 Independence and autonomy
Features of autonomy can assist a person with special needs in operating telepresence robots, as they can free the user from the details of robot navigation; the user can then focus on activities in the remote environment (e.g. social interaction) and on the remote environment itself [79]. For example, Double 3 supports semi-autonomous driving to a waypoint that the user only needs to select once.
Autonomous telepresence robots [13] have been explored for other users and might be a viable solution for our target users. However, some people with disabilities prefer to keep as much control as possible [76]. Moreover, fully autonomous systems may increase mental workload if users lack trust in the automated system, especially when the underlying mechanism of automation is not clear to them [50]. Some previous studies [36,96] with our target users suggested that they do prefer to retain control authority. Therefore, the balance between independence and levels of autonomy should be further explored in future research.
At an operational level, some of the solutions that current research is focusing on do not support independent use at their present stage. BCI control, for instance, requires users to put on an electrode cap and to ensure contact between the head and the electrodes. Hence, use of BCI devices is not yet possible outside the laboratory. Similarly, people with motor disabilities might not be able to put on a head-mounted display themselves, even though it provides an attractive complete field of view and built-in gaze tracking [10,97].
5.10 Evaluation
In future research, we suggest the following issues be considered when conducting a thorough system evaluation:
5.10.1 Target users as participants
Our review found that only a limited number of the papers included target users in their evaluation. Good performance of a newly proposed solution for healthy participants does not necessarily mean good performance for people with disabilities, which is evident in the case of using BCI, for instance [8].
Besides supporting independent use for a more independent life, another common goal of the selected papers was to improve the quality of life of the target users. Due to a lack of evaluation focusing on this part, it is still unclear how the research would reach these goals.
Researchers should be aware of some potential difficulties when recruiting target users for their studies. A notable example of these difficulties can be seen in a recruitment process [46], where 65 patients had been approached. However, after considerations by their guardians, due to their health status, and due to technical problems in the hospital, only 1 patient was able to participate in the system evaluation.
5.10.2 Training
In future research, the learning effects from training when using newly introduced control methods need to be studied. It is notable that some of the evaluations showed that it was rather challenging for novices to use the new methods (e.g. [10,97]). However, among these studies, no training or tutorial sessions were included.
The role of such training on performance is still unclear, and the learning effects need to be explored. For instance, studies showed that adequate training in using gaze could improve operation skills with gaze control [86]. A pre-trial session providing adequate training for novices needs to be considered in future evaluations. This is also important for safety reasons, as stated above, since the most common cause of accidents is collision.
5.10.3 Adapting general metrics for HRI
General metrics (e.g. deviation from optimal path, collisions, situational awareness) for human-robot interaction (HRI) [67] have been proposed and adopted in evaluations within other domains. However, these common metrics have not been applied in most of the selected studies with target users. The main advantage of using general metrics is that they allow for comparisons of findings across studies; they should therefore also be recommended in future research with our target users.
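As a concrete example of one such general metric, deviation from the optimal path can be computed from a logged trajectory and a reference path. The sketch below uses a simple point-to-waypoint distance for clarity; real HRI evaluations often use point-to-segment distance instead, and the function name and data are illustrative assumptions.

```python
import math

def path_deviation(actual, optimal):
    """Mean distance from each logged robot position to the nearest
    waypoint of the optimal path; both are lists of (x, y) points
    in metres."""
    def nearest(p):
        return min(math.dist(p, q) for q in optimal)
    return sum(nearest(p) for p in actual) / len(actual)

# Example: the robot drifts 1 m sideways from a straight corridor path.
optimal = [(x, 0.0) for x in range(11)]
actual = [(x, 1.0) for x in range(11)]
dev = path_deviation(actual, optimal)  # -> 1.0
```

Reporting such a number alongside collision counts and completion time would make results directly comparable across studies, which is the point of adopting general HRI metrics.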
Existing studies lacked evaluation of communication quality. [33] has proposed some standard metrics that can also be adopted for evaluations of the quality of communication.
5.11 Ethics and study guidelines
Ethics and study guidelines for research with people with special needs need to be considered [60]. Target users should be informed about the details of the study. This can be seen in the case of a study [4], which was carried out with written informed consent from all subjects in accordance with the Declaration of Helsinki [88]. Contact with parents or guardians also needs to be considered. This is evident in the case of a study [65] where one patient could not consent to participation because the parents/guardians were not available during the hospitalization.
6 Limitations
The adequacy of a systematic review depends on the time period, keywords, and databases used. We used only five main databases related to our topics when searching for results. Moreover, we restricted our search to papers published within the last decade to ensure that the data was contemporary and relevant. By snowballing from the reference lists of the papers selected, we found more key papers for our review. Also, our search was performed in November 2019, and new relevant papers may have been published since then.
Due to this time limitation, we are aware that not all articles related to telepresence robots for people with special needs may have been identified.
7 Conclusion
This systematic literature review intended to evaluate, synthesize, and present the studies of different telepresence robots operated by people with special needs. The main contributions of the review are: (1) an overview of existing research on telepresence robots for people with special needs; (2) a summary of common research directions based on existing research; and (3) a summary of issues which need to be considered in future research on telepresence robots for people with special needs.
Within the last decade, 42 papers have been published on telepresence robots with people with special needs as operators. The special needs in the literature were disability-related (motor, visual, and cognitive) and age-related (children and elderly). Alternative solutions have been proposed for people with disabilities (motor, visual, and cognitive), and use cases in healthcare and education settings have been explored.
The currently developed telepresence robots are not accessible for all. There are still barriers for people with auditory or verbal disabilities, and for most people with multiple special needs. Almost half of the systems have been evaluated only in lab experiments, and only a few studies had more than 5 target users. Most of the studies focused only on the local user, ignoring the remote persons.
Most of the papers have pointed to a potential impact on quality of life. However, due to the shortcomings of their evaluation methods, the actual impact is still unclear.
Acknowledgements The research has been supported by the China Scholarship Council and the Bevica Foundation.
References
1. Abibullaev, B., Zollanvari, A., Saduanov, B., Alizadeh, T.: Design and optimization of a BCI-driven telepresence robot through programming by demonstration. IEEE Access 7, 111625–111636 (2019)
2. Aguado-Delgado, J., Gutierrez-Martinez, J.M., Hilera, J.R., de Marcos, L., Oton, S.: Accessibility in video games: a systematic review. Universal Access in the Information Society pp. 1–25 (2020)
3. Ahumada-Newhart, V., Olson, J.S.: Going to school on a robot: Robot and user interface design features that matter. ACM Transactions on Computer-Human Interaction (TOCHI) 26(4), 1–28 (2019)
4. Andersson, P., Pluim, J.P., Viergever, M.A., Ramsey, N.F.: Navigation of a telepresence robot via covert visuospatial attention and real-time fMRI. Brain Topography 26(1), 177–185 (2013)
5. Araujo, J.M., Zhang, G., Hansen, J.P.P., Puthusserypady, S.: Exploring eye-gaze wheelchair control. In: ACM Symposium on Eye Tracking Research and Applications, pp. 1–8 (2020)
6. Beam: Picture. URL https://suitabletech.com/
7. Beraldo, G., Antonello, M., Cimolato, A., Menegatti, E., Tonin, L.: Brain-computer interface meets ROS: A robotic approach to mentally drive telepresence robots. In: 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1–6. IEEE (2018)
8. Bi, L., Fan, X.A., Liu, Y.: EEG-based brain-controlled mobile robots: a survey. IEEE Transactions on Human-Machine Systems 43(2), 161–176 (2013)
9. Carrascosa, C., Klugl, F., Ricci, A.: From physical to virtual: widening the perspective on multi-agent environments. Springer (2015). URL https://link.springer.com/chapter/10.1007/978-3-319-23850-0_9
10. Carreto, C., Gego, D., Figueiredo, L.: An eye-gaze tracking system for teleoperation of a mobile robot. Journal of Information Systems Engineering & Management 3(2), 16 (2018)
11. Chang, E.: Experiments and probabilities in telepresence robots. In: Exploring Digital Technologies for Art-Based Special Education: Models and Methods for the Inclusive K-12 Classroom, p. 40 (2019)
12. Clotet, E., Martínez, D., Moreno, J., Tresanchez, M., Palacín, J.: Assistant personal robot (APR): Conception and application of a tele-operated assisted living robot. Sensors 16(5), 610 (2016)
13. Cosgun, A., Florencio, D.A., Christensen, H.I.: Autonomous person following for telepresence robots. In: 2013 IEEE International Conference on Robotics and Automation, pp. 4335–4342. IEEE (2013)
14. Desai, M., Tsui, K.M., Yanco, H.A., Uhlik, C.: Essential features of telepresence robots. In: Technologies for Practical Robot Applications (TePRA), 2011 IEEE Conference on, pp. 15–20. IEEE (2011)
15. Double: Picture. URL https://www.doublerobotics.com/
16. Eid, M.A., Giakoumidis, N., El-Saddik, A.: A novel eye-gaze-controlled wheelchair system for navigating unknown environments: Case study with a person with ALS. IEEE Access 4, 558–573 (2016)
17. Escolano, C., Antelis, J.M., Minguez, J.: A telepresence mobile robot controlled with a noninvasive brain-computer interface. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 42(3), 793–804 (2011)
18. Escolano, C., Murguialday, A.R., Matuz, T., Birbaumer, N., Minguez, J.: A telepresence robotic system operated with a P300-based brain-computer interface: initial tests with ALS patients. In: 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, pp. 4476–4480. IEEE (2010)
19. Fels, D.I., Weiss, P.L.T.: Video-mediated communication in the classroom to support sick children: a case study. International Journal of Industrial Ergonomics 28(5), 251–263 (2001)
20. Finlayson, M., Van Denend, T.: Experiencing the loss of mobility: perspectives of older adults with MS. Disability and Rehabilitation 25(20), 1168–1180 (2003)
21. Friedman, N., Cabral, A.: Using a telepresence robot to improve self-efficacy of people with developmental disabilities. In: Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 489–491 (2018)
22. Gandhi, V., Prasad, G., Coyle, D., Behera, L., McGinnity, T.M.: EEG-based mobile robot control through an adaptive brain-robot interface. IEEE Transactions on Systems, Man, and Cybernetics: Systems 44(9), 1278–1285 (2014)
23. Giraff: Picture. URL https://telepresencerobots.com/robots/giraff-telepresence
24. Hansen, J.P., Alapetite, A., Thomsen, M., Wang, Z., Minakata, K., Zhang, G.: Head and gaze control of a telepresence robot with an HMD. In: Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications, pp. 1–3 (2018)
25. Hart, S.G., Staveland, L.E.: Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In: Advances in Psychology, vol. 52, pp. 139–183. Elsevier (1988)
26. Hasdai, A., Jessel, A.S., Weiss, P.L.: Use of a computer simulator for training children with disabilities in the operation of a powered wheelchair. American Journal of Occupational Therapy 52(3), 215–220 (1998)
27. Heshmat, Y., Jones, B., Xiong, X., Neustaedter, C., Tang, A., Riecke, B.E., Yang, L.: Geocaching with a beam: Shared outdoor activities through a telepresence robot with 360 degree viewing. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, p. 359. ACM (2018)
28. Jadhav, D., Shah, P., Shah, H.: A study to design vi class-rooms using virtual reality aided telepresence. In: 2018IEEE 18th International Conference on Advanced Learn-ing Technologies (ICALT), pp. 319–321. IEEE (2018)
29. Jerald, J.: The VR book: Human-centered design for vir-tual reality. Morgan & Claypool (2015)
30. Kishore, S., Gonzalez-Franco, M., Hintemuller, C.,Kapeller, C., Guger, C., Slater, M., Blom, K.J.: Compar-ison of ssvep bci and eye tracking for controlling a hu-manoid robot in a social environment. Presence: Teleop-erators and virtual environments 23(3), 242–252 (2014)
31. Koceski, S., Koceska, N.: Evaluation of an assistive telep-resence robot for elderly healthcare. Journal of medicalsystems 40(5), 121 (2016)
32. Kosugi, A., Kobayashi, M., Fukuda, K.: Hands-free col-laboration using telepresence robots for all ages. In: Pro-ceedings of the 19th ACM Conference on Computer Sup-ported Cooperative Work and Social Computing Com-panion, pp. 313–316 (2016)
33. Kristoffersson, A., Coradeschi, S., Loutfi, A.: A reviewof mobile robotic telepresence. Advances in Human-Computer Interaction 2013, 3 (2013)
34. Lee, M.K., Takayama, L.: Now, i have a body: Uses andsocial norms for mobile remote presence in the workplace.In: Proceedings of the SIGCHI conference on human fac-tors in computing systems, pp. 33–42. ACM (2011)
35. Leeb, R., Perdikis, S., Tonin, L., Biasiucci, A., Tavella,M., Creatura, M., Molina, A., Al-Khodairy, A., Carlson,T., dR Millan, J.: Transferring brain–computer interfacesbeyond the laboratory: successful application control formotor-disabled users. Artificial intelligence in medicine59(2), 121–132 (2013)
36. Leeb, R., Tonin, L., Rohm, M., Desideri, L., Carlson, T.,Millan, J.d.R.: Towards independence: a bci telepresencerobot for people with severe motor disabilities. Proceed-ings of the IEEE 103(6), 969–982 (2015)
Gazecontrolled Telepresence: Accessibility, Training and Evaluation 95
Telepresence Robots for People with Special needs: a Systematic Review 15
37. Minsky, M.: Telepresence. Omni 2(9), 44–52 (1980)
38. Moher, D., Liberati, A., Tetzlaff, J., Altman, D.G., Group, P., et al.: Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Medicine 6(7), e1000097 (2009)
39. Mohr, G.C.: Robotic telepresence. In: Proceedings of the Annual Reliability and Maintainability Symposium (1987)
40. Moyle, W., Arnautovska, U., Ownsworth, T., Jones, C.: Potential of telepresence robots to enhance social connectedness in older adults with dementia: an integrative review of feasibility. International Psychogeriatrics 29(12), 1951–1964 (2017)
41. Moyle, W., Jones, C., Cooke, M., O'Dwyer, S., Sung, B., Drummond, S.: Social robots helping people with dementia: Assessing efficacy of social robots in the nursing home environment. In: 2013 6th International Conference on Human System Interactions (HSI), pp. 608–613. IEEE (2013)
42. Moyle, W., Jones, C., Cooke, M., O'Dwyer, S., Sung, B., Drummond, S.: Connecting the person with dementia and family: a feasibility study of a telepresence robot. BMC Geriatrics 14(1), 7 (2014)
43. Newhart, V.A.: Virtual inclusion via telepresence robots in the classroom. In: CHI'14 Extended Abstracts on Human Factors in Computing Systems, pp. 951–956 (2014)
44. Newhart, V.A., Olson, J.S.: My student is a robot: How schools manage telepresence experiences for students. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 342–347 (2017)
45. Newhart, V.A., Warschauer, M., Sender, L.: Virtual inclusion via telepresence robots in the classroom: An exploratory case study. The International Journal of Technologies in Learning 23(4), 9–25 (2016)
46. Ng, M.K., Primatesta, S., Giuliano, L., Lupetti, M.L., Russo, L.O., Farulla, G.A., Indaco, M., Rosa, S., Germak, C., Bona, B.: A cloud robotics system for telepresence enabling mobility impaired people to enjoy the whole museum experience. In: 2015 10th International Conference on Design & Technology of Integrated Systems in Nanoscale Era (DTIS), pp. 1–6. IEEE (2015)
47. Niemelä, M., van Aerschot, L., Tammela, A., Aaltonen, I.: A telepresence robot in residential care: Family increasingly present, personnel worried about privacy. In: International Conference on Social Robotics, pp. 85–94. Springer (2017)
48. Pacaux-Lemoine, M.P., Habib, L., Carlson, T.: Human-robot cooperation through brain-computer interaction and emulated haptic supports. In: 2018 IEEE International Conference on Industrial Technology (ICIT), pp. 1973–1978. IEEE (2018)
49. Padbot: Picture. URL www.padbot.com/
50. Parasuraman, R., Sheridan, T.B., Wickens, C.D.: Situation awareness, mental workload, and trust in automation: Viable, empirically supported cognitive engineering constructs. Journal of Cognitive Engineering and Decision Making 2(2), 140–160 (2008)
51. Park, C.H., Howard, A.M.: Real world haptic exploration for telepresence of the visually impaired. In: Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction, pp. 65–72 (2012)
52. Park, C.H., Howard, A.M.: Real-time haptic rendering and haptic telepresence robotic system for the visually impaired. In: 2013 World Haptics Conference (WHC), pp. 229–234. IEEE (2013)
53. Park, C.H., Howard, A.M.: Robotics-based telepresence using multi-modal interaction for individuals with visual impairments. International Journal of Adaptive Control and Signal Processing 28(12), 1514–1532 (2014)
54. Park, C.H., Ryu, E.S., Howard, A.M.: Telerobotic haptic exploration in art galleries and museums for individuals with visual impairments. IEEE Transactions on Haptics 8(3), 327–338 (2015)
55. Pérez, L., Diez, E., Usamentiaga, R., García, D.F.: Industrial robot control and operator training using virtual reality interfaces. Computers in Industry 109, 114–120 (2019)
56. Petrushin, A., Barresi, G., Mattos, L.S.: Gaze-controlled laser pointer platform for people with severe motor impairments: Preliminary test in telepresence. In: 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 1813–1816. IEEE (2018)
57. Petrushin, A., Tessadori, J., Barresi, G., Mattos, L.S.: Effect of a click-like feedback on motor imagery in EEG-BCI and eye-tracking hybrid control for telepresence. In: 2018 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), pp. 628–633. IEEE (2018)
58. Rae, I., Takayama, L., Mutlu, B.: The influence of height in robot-mediated communication. In: Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction, pp. 1–8. IEEE Press (2013)
59. Riek, L.D.: Healthcare robotics. Communications of the ACM 60(11), 68–78 (2017)
60. Rios, D., Magasi, S., Novak, C., Harniss, M.: Conducting accessible research: including people with disabilities in public health, epidemiological, and outcomes studies. American Journal of Public Health 106(12), 2137–2144 (2016)
61. Sankhe, P., Kuriakose, S., Lahiri, U.: A step towards a robotic system with smartphone working as its brain: An assistive technology. In: 2013 International Conference on Control, Automation, Robotics and Embedded Systems (CARE), pp. 1–6. IEEE (2013)
62. Schwind, V., Knierim, P., Haas, N., Henze, N.: Using presence questionnaires in virtual reality. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–12 (2019)
63. Shishehgar, M., Kerr, D., Blake, J.: A systematic review of research into how robotic technology can help older people. Smart Health 7, 1–18 (2018)
64. Simpson, R.C., LoPresti, E.F., Cooper, R.A.: How many people would benefit from a smart wheelchair? Journal of Rehabilitation Research and Development (2008). DOI 10.1682/JRRD.2007.01.0015
65. Soares, N., Kay, J.C., Craven, G.: Mobile robotic telepresence solutions for the education of hospitalized children. Perspectives in Health Information Management 14(Fall) (2017)
66. Stawicki, P., Gembler, F., Volosyak, I.: Driving a semiautonomous mobile robotic car controlled by an SSVEP-based BCI. Computational Intelligence and Neuroscience 2016 (2016)
67. Steinfeld, A., Fong, T., Kaber, D., Lewis, M., Scholtz, J., Schultz, A., Goodrich, M.: Common metrics for human-robot interaction. In: Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, pp. 33–40. ACM (2006)
68. Stephanidis, C., Savidis, A.: Universal access in the information society: Methods, tools, and interaction technologies. Universal Access in the Information Society 1(1), 40–55 (2001). DOI 10.1007/s102090100008
16 Guangtao Zhang, John Paulin Hansen
69. Stuck, R.E., Hartley, J.Q., Mitzner, T.L., Beer, J.M., Rogers, W.A.: Understanding attitudes of adults aging with mobility impairments toward telepresence robots. In: Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, pp. 293–294. ACM, New York, NY, USA (2017). DOI 10.1145/3029798.3038351
70. Szajna, B.: Empirical evaluation of the revised technology acceptance model. Management Science 42(1), 85–92 (1996)
71. Takayama, L., Marder-Eppstein, E., Harris, H., Beer, J.M.: Assisted driving of a mobile remote presence system: System design and controlled user evaluation. In: Proceedings - IEEE International Conference on Robotics and Automation (2011). DOI 10.1109/ICRA.2011.5979637
72. Tanaka, F., Takahashi, T., Matsuzoe, S., Tazawa, N., Morita, M.: Telepresence robot helps children in communicating with teachers who speak a different language. In: Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, pp. 399–406. ACM (2014)
73. Tanaka, K., Nakanishi, H., Ishiguro, H.: Comparing video, avatar, and robot mediated communication: pros and cons of embodiment. In: International Conference on Collaboration Technologies, pp. 96–110. Springer (2014)
74. Tavakoli, M., Carriere, J., Torabi, A.: Robotics, smart wearable technologies, and autonomous intelligent systems for healthcare during the COVID-19 pandemic: An analysis of the state of the art and future vision. Advanced Intelligent Systems p. 2000071 (2020)
75. Tonin, L., Carlson, T., Leeb, R., Millán, J.d.R.: Brain-controlled telepresence robot by motor-disabled people. In: 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4227–4230. IEEE (2011)
76. Tonin, L., Leeb, R., Tavella, M., Perdikis, S., Millán, J.d.R.: The role of shared-control in BCI-based telepresence. In: 2010 IEEE International Conference on Systems, Man and Cybernetics, pp. 1462–1466. IEEE (2010)
77. Tsui, K.M., Dalphond, J.M., Brooks, D.J., Medvedev, M.S., McCann, E., Allspaw, J., Kontak, D., Yanco, H.A.: Accessible human-robot interaction for telepresence robots: A case study. Paladyn, Journal of Behavioral Robotics 1(open-issue) (2015)
78. Tsui, K.M., Flynn, K., McHugh, A., Yanco, H.A., Kontak, D.: Designing speech-based interfaces for telepresence robots for people with disabilities. In: 2013 IEEE 13th International Conference on Rehabilitation Robotics (ICORR), pp. 1–8. IEEE (2013)
79. Tsui, K.M., McCann, E., McHugh, A., Medvedev, M., Yanco, H.A., Kontak, D., Drury, J.L.: Towards designing telepresence robot navigation for people with disabilities. International Journal of Intelligent Computing and Cybernetics (2014)
80. Tsun, M.T.K., Theng, L.B., Jo, H.S., Lau, S.L.: A robotic telepresence system for full-time monitoring of children with cognitive disabilities. In: Proceedings of the International Convention on Rehabilitation Engineering & Assistive Technology, pp. 1–4 (2015)
81. Turk, M.: Multimodal interaction: A review. Pattern Recognition Letters 36, 189–195 (2014)
82. UNICEF and WHO: Assistive technology for children with disabilities: creating opportunities for education, inclusion and participation: a discussion paper. Geneva: WHO (2015)
83. Vasic, M., Billard, A.: Safety issues in human-robot interactions. In: 2013 IEEE International Conference on Robotics and Automation, pp. 197–204. IEEE (2013)
84. VGO: Picture. URL http://www.vgocom.com/
85. Wang, K.J., You, K., Chen, F., Thakur, P., Urich, M., Vhasure, S., Mao, Z.H.: Development of seamless telepresence robot control methods to interact with the environment using physiological signals. In: Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, pp. 44–44 (2018)
86. Watson, G.S., Papelis, Y.E., Hicks, K.C.: Simulation-based environment for the eye-tracking control of tele-operated mobile robots. In: Proceedings of the Modeling and Simulation of Complexity in Intelligent, Adaptive and Autonomous Systems 2016 (MSCIAAS 2016) and Space Simulation for Planetary Space Exploration (SPACE 2016), pp. 1–7 (2016)
87. Wieringa, R., Maiden, N., Mead, N., Rolland, C.: Requirements engineering paper classification and evaluation criteria: a proposal and a discussion. Requirements Engineering 11(1), 102–107 (2006)
88. Williams, J.R.: The Declaration of Helsinki and public health. Bulletin of the World Health Organization 86, 650–652 (2008)
89. Williams, T., Hirshfield, L., Tran, N., Grant, T., Woodward, N.: Using augmented reality to better study human-robot interaction. In: HCII Conference on Virtual, Augmented, and Mixed Reality (2020)
90. Witmer, B.G., Singer, M.J.: Measuring immersion in virtual environments. Tech. rep., ARI Technical Report 1014. Alexandria, VA: US Army Research Institute for the Behavioral and Social Sciences (1994)
91. World Health Organization: World report on disability. Geneva: WHO (2011)
92. Wu, X., Thomas, R.C., Drobina, E.C., Mitzner, T.L., Beer, J.M.: Telepresence heuristic evaluation for adults aging with mobility impairment. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 61, pp. 16–20. SAGE Publications, Los Angeles, CA (2017)
93. Yamaguchi, J., Parone, C., Di, D.F., Beomonte, P.Z., Felzani, G.: Measuring benefits of telepresence robot for individuals with motor impairments. Studies in Health Technology and Informatics 217, 703–709 (2015)
94. Yang, L., Neustaedter, C., Schiphorst, T.: Communicating through a telepresence robot: A study of long distance relationships. In: Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, pp. 3027–3033. ACM (2017)
95. Zhang, G., Hansen, J.P.: Accessible control of telepresence robots based on eye tracking. In: Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, pp. 1–3 (2019)
96. Zhang, G., Hansen, J.P.: People with motor disabilities using gaze to control telerobots. In: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1–9 (2020)
97. Zhang, G., Hansen, J.P., Minakata, K.: Hand- and gaze-control of telepresence robots. In: Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, pp. 1–8 (2019)
98. Zhang, G., Hansen, J.P., Minakata, K., Alapetite, A., Wang, Z.: Eye-gaze-controlled telepresence robots for people with motor disabilities. In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 574–575. IEEE (2019)
A.2 Gaze-controlled Telepresence: An Experimental Study

Authors: Guangtao Zhang, John Paulin Hansen, and Katsumi Minakata
Submitted to International Journal of Human Computer Studies
Gaze-controlled Telepresence Robots: An Experimental Study

Guangtao Zhang∗, John Paulin Hansen and Katsumi Minakata

Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
ARTICLE INFO

Keywords: Human-robot interaction, Teleoperation, Situation awareness, Evaluation method, Accessibility, HMD, Eye-tracking, Gaze interaction
ABSTRACT

People with motor disabilities have the opportunity to use eye-gaze to control telerobots. Emerging head-mounted displays (HMDs) with built-in eye-tracking sensors, connected with a remote 360° camera, enable hands-free control and provide an immersive telepresence experience. However, the use of gaze control for navigating telerobots with an HMD remains to be explored, as do evaluation methods for gaze-controlled telepresence robots. In particular, situation awareness (SA) plays an important role in teleoperation tasks, but its measurement is challenging in this setting. We adapted an existing real-time SA measurement technique and used it in an experiment with two goals: 1) to examine the possibilities and challenges of gaze control by comparing it with hand control; 2) to validate the SA technique within a telepresence context. Results showed that participants (n = 16) had a similar experience of presence and self-assessment with both input methods, but gaze control was 31% slower than hand control. Gaze-controlled robots had more collisions and higher deviations from optimal paths. Participants reported higher workload and a reduced feeling of dominance, and their SA was significantly degraded with gaze control: the accuracy of their post-trial reproduction of the maze layout was lower, and trial durations were significantly longer. Regarding the validation of SA, our proposed SA measurement technique provided more reliable data than a post-trial subjective measure (SART). Correlations of SA with other metrics further confirmed previous findings on teleoperation, namely a positive correlation between SA and human performance, and a negative correlation between SA and workload.
1. Introduction

Telepresence robots are increasingly used to promote social interaction between geographically dispersed people [27]. Eye-tracking technology has matured and become inexpensive; gaze-tracking sensors can be built into computers, head-mounted displays (HMDs) and mobile devices. Our target users are people with motor disabilities. By controlling telerobots with gaze, they may get a feeling of "being there" in a remote space, and gain access to education, events, and places. In order to evaluate the effectiveness and the challenges of gaze-based human-robot interaction, we compare it to a well-known input method, namely hand-controlled joysticks. Joysticks are not an option for our target group, but using them as a baseline lets us identify metrics and test methods that can measure significant differences in the design of control principles, user interfaces and telerobot form factors.

One main motivation for our research is to explore the use of gaze control when interacting with telerobots, because gaze has been shown to be an effective control method for people with profound motor deficits. Another motivation is to further validate the measurement of SA, which is important among the other evaluation metrics.
The paper is organised as follows: Section 2 outlines previous research in related areas and the research problems. Section 3 describes our prototype of a robotic system for gaze-controlled telepresence. Section 4 presents the process of a real-time objective SA measurement technique for teleoperation with HMDs. Section 5 presents an experiment with 16 subjects. Results comparing the use of gaze and hand control are presented in Section 6. A comparison of the proposed SA technique (based on SPAM [8]) and another SA measure (i.e., SART [47]), and their correlations with other metrics, is presented in Section 7. Finally, we discuss our findings, potential challenges for the target users of gaze-controlled telepresence, and possible improvements of our current system. Evaluation methods and SA measures are also discussed.

∗Corresponding author. [email protected] (G. Zhang); [email protected] (J.P. Hansen); [email protected] (K. Minakata)
ORCID(s): 000-0003-0794-3338 (G. Zhang); 0000-0001-5594-3645 (J.P. Hansen); 0000-0001-6104-8284 (K. Minakata)
2. Related work and Problem Statement

2.1. Gaze-controlled Telepresence robots with HMDs

Telepresence robots are gaining increased attention as a means for remote communication [50]. The systems combine video-conferencing capabilities with a robot vehicle that can maneuver in remote locations [41]. These systems are becoming increasingly popular within certain application domains, for example distributed collaboration for geographically distributed teams [28], shopping over distance [60], academic conferences [40], child-teacher communication [50], communication between long-distance couples [61], and outdoor activities [22].

Previous studies have explored the accessibility of telepresence robots for people with disabilities, such as visual [35, 36], motor [29, 53, 3], and cognitive disabilities [34, 55]. People with motor disabilities may benefit from telepresence robots to overcome mobility problems, especially those with severe motor disabilities, for instance cerebral palsy [56] and amyotrophic lateral sclerosis (ALS/MND) [16]. While telerobots are typically controlled with hand input through the use of a joystick, mouse, or keyboard, hands-free human-telerobot interaction for people with motor disabilities has been demonstrated with speech [54, 55], brain activity [29, 53, 3], eye movements [49, 6], and head gestures [24].

Table 1: A comparison of SA techniques.

Technique         | Type       | Feature   | Pros                                        | Cons
Observer ratings  | subjective | -         | Ease of use; no interruption                | Limited knowledge of a person's concept
SART [47]         | subjective | post-test | Ease of use; no interruption                | Memory problem
SA-SWORD [26]     | subjective | post-test | Ease of use; no interruption                | Memory problem
SAGAT [12]        | objective  | real time | Overcomes memory problem                    | Total interruption of main task
SPAM [9]          | objective  | real time | Overcomes memory problem; embedded in tasks | Interruption of main task

Gaze has proven to be a good alternative input modality for people with limited hand mobility [2]. Previous studies have empirically evaluated eye-gaze-controlled typing [31], video games [23], driving [49], flying a drone [19], wheelchair steering [11], and robot control [6].
Emerging HMDs have built-in eye-tracking sensors. Using an HMD connected with a remote 360° video camera may increase the sensation of presence and the user's spatial perception compared to a narrow 2D view [62]. Moreover, eye-tracking in HMDs for robot teleoperation has additional advantages compared to traditional screen-based eye-trackers: it offers more flexibility in where it can be applied, including from bed or at outdoor locations, and robustness towards uncontrolled head movements.
However, it has not yet been explored how situation awareness, presence, performance, workload, and subjective experience may be influenced by gaze control of telerobots with HMDs. Early research [7] argued that exclusive control of aircraft cockpits by gaze only is very unnatural and results in higher workload. Moreover, there are difficulties in evaluation with specific devices like HMDs. Requiring participants to leave and re-enter the virtual environment displayed by the HMD costs time and can cause disorientation [46], and "break in presence" (BIP) problems [25] occur when users have to remove the HMD to complete questionnaires.
2.2. Situation Awareness

The process of remotely controlling a telepresence robot is teleoperation. Situation awareness (SA) plays an important role in teleoperation and in understanding the environment the telerobot is navigating through [14]. The often-cited definition of SA is "the perception of elements in their environment within a volume of space and time, the comprehension of their meaning, and the projection of their status in the near future" [14].

SA is also considered a primary basis for performance [13, 42, 43], and there is a correlation between mental workload and SA [58].
A number of methods have been proposed, which can be categorized as subjective and objective measures (see Tab. 1). Subjective measures such as the situation awareness rating technique (SART) rely on the ability of participants to assess their own level of subconstructs such as supply, demand and understanding; these subconstructs are combined in order to obtain a composite SA score. Even though SART is considered a psychometrically reliable and valid questionnaire [52], it is subject to the limitations of survey research, such as response bias and subjectivity. Objective measures include the Situation Awareness Global Assessment Technique (SAGAT) and the Situation Present Assessment Method (SPAM) [8], which rely on the correctness of responses as well as the reaction time (RT) to queries. The SAGAT requires a goal-directed task analysis with subject matter experts in order to determine the content of the questions used for its administration. During a simulation run, an experimenter randomly freezes the simulation and administers a questionnaire on pertinent information, such as a map of a sector and aircraft information. This method has been criticized because it seems to rely on the working memory of the participants, which Endsley addressed with a simulation experiment that tested whether SAGAT was measuring working memory in combination with SA [14]. The SPAM is an on-line measure consisting of operationally-relevant questions administered in real time. The SPAM can be used to predict performance [15], for example in a cognitively-oriented air-traffic-management task [8].
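To make the SPAM procedure concrete, it can be sketched as a small bookkeeping routine that records, for each real-time query, whether the operator answered correctly and how long the response took; accuracy and mean correct-response latency then serve as SA scores. This is a minimal illustrative sketch, not the instrument used in this study; the class name, question wording and scoring are our assumptions:

```python
import time


class SpamProbe:
    """Illustrative SPAM-style probe: queries are posed while the task
    continues, and SA is scored from answer correctness and reaction time."""

    def __init__(self):
        self.records = []

    def ask(self, question, correct_answer, answer_fn):
        # Unlike SAGAT, the simulation is not frozen: the operator answers
        # while operating, and we log correctness plus reaction time.
        t0 = time.perf_counter()
        answer = answer_fn(question)
        rt = time.perf_counter() - t0
        self.records.append({"question": question,
                             "correct": answer == correct_answer,
                             "rt_s": rt})

    def summary(self):
        # Accuracy over all probes, and mean latency of correct answers.
        correct = [r for r in self.records if r["correct"]]
        return {"accuracy": len(correct) / len(self.records),
                "mean_correct_rt_s": sum(r["rt_s"] for r in correct)
                                     / max(len(correct), 1)}
```

In an HMD study, such queries would be spoken or shown inside the headset so the participant never leaves the virtual environment, which avoids the break-in-presence problem noted above.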
Teleoperation research has focused more on the construction of technologies to improve SA. However, these systems have not been tested with reliable and valid measures of situation awareness. The following studies have actually used an objective (i.e., SAGAT or SPAM) or subjective measure (i.e., SART) of situation awareness. Aleotti et al. [1] conducted a study on the teleoperation of a UAV used to detect and localize nuclear radiation sources. Their interface was developed to relieve mental workload and improve performance and situation awareness. A combination of force feedback and a visualization of the radiation levels collected by a spectroscopic gamma-ray detector was implemented, and the teleoperator was given an overview of the site showing the spatial location of the UAV. Of relevance was the implementation of the SPAM and the NASA Task Load Index, which measure situation awareness and mental workload, respectively. Their results led to the conclusion that the system increased performance, situation awareness and mental workload. Another group of researchers sought to improve the spatial knowledge transmitted to an operator of a telerobot by using two different viewpoints [51]: a tethered view and a bird's-eye view. Unfortunately, tethered views are susceptible to objects that occlude the view of the telerobot. As a result, Tara and Teng [51] created and tested an interface that augmented an edited tethered view: the occluded object was removed from the viewpoint and its outline was then superimposed on the display. This display was accompanied by an exocentric viewpoint intended to enhance SA and facilitate the creation of a spatial map. The subjective ratings from the SART suggested that the display indeed enhanced both performance and SA.
Gatsoulis and Virk [18] used SA, telepresence and workload measures in order to validate, via linear regression analysis, performance metrics related to teleoperation task performance. A subjective rating version of the SAGAT was used, and the subjective SA, telepresence and workload measures were used to predict task performance. It was found that SA and telepresence both significantly predicted task performance, which led the authors to suggest that SA and telepresence are positively correlated. Gatsoulis and Virk [18] also used the SAGAT, the subjective confidence of participants' own responses (Quantitative Analog SAGAT), a telepresence questionnaire, and a subjective workload measure. The task was similar to that of [18] and, of interest, their results led to similar conclusions: SA contributes significantly to task performance but telepresence does not. Specifically, the SA component of the QASAGAT positively predicted task performance. These two studies contributed to the literature by demonstrating that SA and telepresence are similar, but not the exact same, psychological constructs.
Not all teleoperation studies have led to positive results, however. For example, Van Erp et al. [57] utilized the SAGAT, the presence questionnaire and the Misery Scale (a measure of motion sickness) when testing their teleoperation system. They found, however, that their system did not significantly improve SA or telepresence relative to the control condition of using a wheelchair to control the remotely operated vehicle. Nevertheless, one can conclude from this previous research that an objective and a subjective measure of SA should be incorporated in order to validate future teleoperation systems.
Challenges arise when using the SPAM technique, as employed by Durso et al. [8], for assessing situation awareness in teleoperation tasks with HMDs. Current methods for measuring SA in teleoperation contexts have been adopted from aviation (e.g., SPAM [8]), and they need to be explored in our case of robot teleoperation with HMDs; it is unclear whether these methods remain reliable there, and challenges exist when adapting a real-time, objective method such as SPAM to the HMD setting. For the evaluation of controlling a telepresence robot, common metrics have been proposed [48]; for instance, task completion time [29, 37, 45, 36] and collision time [29, 17] have been widely used as performance metrics. Given the correlation between SA and other metrics (e.g., performance), we can further validate the reliability of the SA measurement methods by comparing them with other metrics [39].
[44] provides a review of applicability for command, control, communication, computers and intelligence systems.
3. A gaze interactive telerobot system

Our gaze-controlled telepresence systems [20] were built around a robot platform, with eye tracking in an HMD, and a user interface developed in Unity (version: 2018.1.6)¹. The platform leverages the open source robot operating system (ROS) and its ecosystem, which supports various client platforms and reuse. Several types of robots have been applied in our system, including an off-the-shelf robot (PadBot), developer-oriented robots (Parallax Arlo), and modified wheelchairs. In this study, a PadBot could be moved around with gaze control or a joystick. The robot carries a 360° video camera and a microphone. A FOVE² HMD displays the live video stream from the 360° video camera.
Figure 1: Our gaze interactive interface can serve a range of telerobots, including a modified wheelchair, a build-yourself model (Parallax Arlo) and third-party models (PadBot).
The user interface (UI) is connected with the ROS system. Two modes (parking and driving) can be selected by using the virtual control panel [20]. Parking mode allows the pilot user to get a full 360° view of the local environment via a panning tool. Driving mode displays a fixed front camera view in low resolution to minimize delay of the live video stream. In this study we only used the driving mode. The UI is an invisible control layout working on top of the live video stream. Gaze movements are mapped to the robot's movements via the UI. The robot turns in the direction the user is looking. When the driver closes his/her eyes or looks outside the area of the live video stream, the robot stops moving.
¹ https://unity3d.com [last accessed 22-09-2020]
² https://www.getfove.com [last accessed 02-09-2020]
G Zhang et al.: Preprint submitted to Elsevier Page 3 of 15
Gazecontrolled Telepresence: Accessibility, Training and Evaluation 101
Gaze-controlled Telepresence Robots: An Experimental Study
Figure 2: Image of the gaze UI that overlays the live video stream. The pink circle (on the floor) shows the gaze cursor and is visible to the user. The red separator lines and direction arrows are not shown to the user. Mode shifts and overlay interfaces are done by gazing at icons at the bottom.
Figure 2 shows a screenshot of the layout. The gaze point (pink cursor) is visible to the user inside the video stream, indicating the position from which continuous input is sent as a command to the telerobot. When looking in the left upper area, the robot turns left; in the middle area, it drives forward; in the right upper area, it turns right; in the left lower area, it turns left (spin turn); in the right lower area, it turns right (spin turn). The velocity is controlled by the distance between the gaze position and the center of the live video stream (maximum linear velocity: 1.2 m/s).
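The mapping just described can be sketched in a few lines. The region boundaries, the spin-turn behaviour of the lower half, and the angular-velocity cap below are assumptions for illustration; only the 1.2 m/s linear-velocity cap and the stop-on-closed-eyes behaviour come from the description above.

```python
import math

MAX_LINEAR = 1.2   # m/s, the cap stated in the paper
MAX_ANGULAR = 1.0  # rad/s, an assumed cap for illustration

def gaze_to_command(x, y, eyes_closed=False):
    """Map a gaze point to a (linear, angular) velocity command.

    (x, y) is the gaze point in stream coordinates normalised to
    [-1, 1], with (0, 0) at the centre of the live video stream.
    The robot stops when the eyes are closed or the gaze leaves
    the video stream.
    """
    if eyes_closed or abs(x) > 1 or abs(y) > 1:
        return 0.0, 0.0                       # stop the robot
    # Speed grows with distance from the stream centre, capped at 1.
    speed = min(math.hypot(x, y), 1.0)
    if y < 0:                                 # lower areas: spin turn in place
        return 0.0, -MAX_ANGULAR * (1.0 if x > 0 else -1.0) * speed
    # Upper areas: drive forward while turning toward the gaze direction.
    return MAX_LINEAR * speed, -MAX_ANGULAR * x * speed

print(gaze_to_command(0.0, 0.8))                     # drive straight ahead
print(gaze_to_command(0.0, 0.0, eyes_closed=True))   # stopped
```

The sign convention for the angular component is arbitrary here; in a ROS-based system such a tuple would typically be published as a velocity message at a fixed rate.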
4. An SA measurement method (SPAM)

SPAM was originally developed for use in the assessment of air traffic controllers' SA. Our implementation of this method was adapted for use in the assessment of teleoperation with an HMD.

In SPAM for air traffic control, the question was presented while all relevant information was available to be referenced by the user. The controller answered the question, and the response was recorded [9]. The SPAM question sequence began by activating the controller's landline. In our version, a pop-up appeared and showed "Are you ready to answer the question?" (see Fig. 7). In the original, the time taken to answer the telephone is recorded, acting as an indicator of workload [44]. In our method, the latency between the pre-pop-up (see Fig. 7) and the answer is recorded instead. In the original, after the participant answered the landline, the experimenter read the question from a computer screen and initiated the timer; when the participant responded, the timer was stopped and the experimenter recorded the response. In our method, after the participant answered the pre-pop-up, the SA query (see Fig. 8) was activated by the experimenter.
Table 2
Pop-up queries

Perception:
- What gender was the voice you just heard?
- Did the person who talked to you also smile at you?
- Is the person still in the room now?
- Do you know where he is now?
- Can you tell me which direction you are facing now?

Comprehension:
- What kind of information did the person tell you?
- What was the graph you just saw?
- Can you estimate the percentage of your current progress level (according to the information)?
- Is the description from the person in accordance with the information on the wall?

Projection:
- Can you estimate when you will be finished with the task?
When the participant responded, the pop-up sequence was terminated. The response time is recorded by the system, and the response is recorded by the experimenter.

Correctness of answers and the query response time are taken as indicators of SA [39]. In the original SPAM, when the answer is correct, the response time is taken as an indicator of SA. The original study showed that most of the answers were correct [8]. In our study, the accuracy of answers was also calculated as an indicator, because participants could not always answer correctly.
During the trials, the experimenter observed the participants' operation via an LCD display. When the telerobot passed certain areas in the maze, or when a maneuver, e.g. a turn, had been completed, a query pop-up in the control display was prompted by the experimenter. When the participants had given a verbal response, the experimenter closed the query pop-up and recorded their answers.
Based on task analysis and SA theory [13], the queries included perception-related, comprehension-related, and projection-related queries.
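The two-stage pop-up protocol above can be expressed as a small timing sketch. All names are hypothetical and the callbacks stand in for the experimenter's pop-ups and the participant's oral answer; this is not the study's actual implementation.

```python
import time

class SpamPopup:
    """Minimal sketch of the two-stage SPAM timing protocol:
    pre-query latency as a workload proxy, query response time
    and answer correctness as SA indicators."""

    def __init__(self):
        self.records = []

    def run_query(self, query, ready_cb, answer_cb):
        # Stage 1: pre-query pop-up ("Are you ready to answer a question?").
        # The latency until the participant confirms indicates workload.
        t0 = time.monotonic()
        ready_cb()
        pre_latency = time.monotonic() - t0
        # Stage 2: the SA query itself; response time and the recorded
        # answer are the SA indicators.
        t1 = time.monotonic()
        answer = answer_cb(query)         # participant answers orally
        query_rt = time.monotonic() - t1
        self.records.append({"query": query,
                             "pre_latency": pre_latency,
                             "response_time": query_rt,
                             "answer": answer})
        return answer

popup = SpamPopup()
popup.run_query("Which direction are you facing now?",
                ready_cb=lambda: None,
                answer_cb=lambda q: "north")
print(popup.records[0]["answer"])  # prints "north"
```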
5. Experiments

5.1. Participants

A total of 16 able-bodied participants (6 females, 10 males, mean age = 29 years) participated in this study. Eleven participants had experience with VR glasses, 8 had experience with gaze interaction, and 2 had experience with telepresence robots.
5.2. Apparatus

The test subjects were sitting in a remote control room (cf. Figure 3). The HMD (FOVE) and a joystick (Microsoft Xbox 360 Controller) were connected to a computer running Unity. The computer was connected to the telerobot via a wireless network. The FOVE HMD weighs 520 grams, has a resolution of 2560 x 1440 px, renders at a maximum of 70 fps, and has an FOV of 100°. During driving, the live video stream is displayed in the HMD via the Unity software. Gaze tracking is handled by the two built-in infrared eye trackers of the FOVE. Tracking precision is reported to be less than 1°
at a 120 Hz sampling rate. It also offers IMU-based sensing of head orientation, and optical tracking of head position.

Figure 3: Two conditions: gaze control (top image) or hand control (bottom image).
In the driving room (cf. Figure 4), a telerobot carried a 360° camera (Ricoh Theta S)³, a microphone, and two sensors for indoor positioning. The camera was 1.3 m above the floor. Five ultrasound sensors (GamesOnTrack)⁴ were mounted on the wall, and with a transmitter placed on top of the telerobot, this tracking system allowed positioning in 3D with a precision of 1-2 centimeters. Plastic sticks on the floor were used to mark up the maze tracks, covering an area of 20 square meters (length: 5 m, width: 4 m, cf. Figure 5). Three sheets of A4 paper with pie-charts were hung on the wall at different locations to show how far the robot had driven at that position.
Figure 4: The telerobot driving through the maze.
³ https://theta360.com [last accessed 24-01-2019]
⁴ http://www.gamesontrack.dk/ [last accessed 24-01-2019]
5.3. Measures

Quantitative and qualitative measures were used to cover performance metrics [48] in our study. They included:

1. Log data of the telerobot from both the GamesOnTrack ultrasound sensors and the telerobot's encoders, including a timestamp, the telerobot's position (x, y), and velocity.

2. Log data from the UI, including response time to two types of on-screen pop-up queries (see 3 below). Each question appeared only once, in total 4 times for each trial (except an orientation-related query, "Can you tell me in which direction you are facing now"), in order to reduce learning effects.

3. Responses to the SPAM pop-up queries.

4. A Task Load Index (NASA-TLX) [21] questionnaire was used to collect workload ratings after each trial (with 6 rating scales, including mental demand, physical demand, temporal demand, performance, effort and frustration). Each rating scale had 21 gradations.

5. A Presence Questionnaire [59] revised by the UQO Cyberpsychology Lab was also used to rate the feeling of presence on a 7-point scale after each trial.

6. A SART questionnaire [47] was used to rate SA.

7. The Self-Assessment Manikin (SAM) [4] was used for the participants to report their feelings of pleasure, arousal and dominance on a 5-graded facial pictorial form.

8. The participants' responses to post-trial questions about estimated task duration and recollection of the maze layout, positions of the person who was talking with them, and the number of times they communicated with the person. All responses were quantified as percentage of accuracy.

Qualitative measures included: (1) video recorded in the remote environment for post-trial analysis; (2) video recorded of the Unity UI environment (including the gaze point) for post-trial analysis; (3) feedback provided in post-trial interviews.
5.4. Experimental Design

A within-subjects design was used in this experiment. There were two groups of independent variables: input method (gaze control & hand control), and order of trials with the same method (Trial 1 & Trial 2). Dependent variables included the participants' SA, presence, workload, performance, post-trial estimation and recollection, self-assessment, and experience of navigating the telerobot with the control methods.

Each participant used each method to navigate through two mazes, i.e. four different mazes in total, with layouts of similar length and complexity (cf. Figure 5). The orders in which participants were exposed to each maze were counterbalanced by a Latin square. Half of the participants started with gaze control, and half of them started with hand control. In total, 64 trials with data collection and observation were made (i.e. 16 participants x 4 trials; two trials with joystick and two trials with gaze control).
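The maze-order counterbalancing can be illustrated with a simple cyclic Latin square; the paper does not specify which Latin square construction was used, so this is only a sketch of the idea.

```python
def latin_square(items):
    """Cyclic Latin square: row i is `items` rotated by i, so every
    item appears exactly once in every row and every column."""
    n = len(items)
    return [[items[(i + j) % n] for j in range(n)] for i in range(n)]

mazes = ["A", "B", "C", "D"]          # the four maze layouts
orders = latin_square(mazes)
for group, order in enumerate(orders):
    print(f"participant group {group}: {order}")
# Each column also contains each maze exactly once, so every maze
# occurs equally often in every trial position across groups.
```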
Figure 5: Maze layouts (green: starting point, red: end point).
5.5. Procedure

The main task was to navigate the telerobot through a maze. Half of the participants did this first by driving two trials with gaze control, and half of them started with two trials using joystick control. Each participant had four trials in total, two for each of the control conditions. The maze layouts were similar in terms of length and difficulty but different in path structure, cf. Figure 5. The orders of the maze layouts were balanced in a Latin square design across participants and control conditions. Before each gaze trial, the standard gaze calibration procedure for the FOVE headset was conducted.

During the trials, the experimenter observed the participants' operation via an LCD display. When the telerobot passed certain areas in the maze, or when a maneuver, e.g. a turn, had been completed, a query pop-up in the control display was prompted by the experimenter. When the participants had given a verbal response, the experimenter closed the query pop-up and recorded their answers. Their response times were logged in the system.

In the remote driving room, a person was standing at three different positions during the trials. When the telerobot passed by, this person faced the camera and talked to the participants via the telerobot, providing information related to the remote environment. For instance, when the telerobot reached the middle of the maze, the person would say, "Hello, I would like to inform you that the status of your progress is 50 percent."

After each trial, the participant took off the HMD and answered the set of questionnaires described above (SAM, TLX, Presence, estimation, and recollection). At the very end of the test, the participants were interviewed briefly with the following questions: "How did you feel about wearing the HMD? Did you experience any sickness, headache, or gaze control problems?" Each session lasted approximately 65 minutes in total.
5.5.1. Pop-up process

6. Results: Comparison of Control Methods

6.1. Performance
1. Task completion time:
With two-way ANOVA and follow-up pairwise comparisons with a Bonferroni correction, we found significant main effects of input method, F(1,64) = 5.40, p = .023, η² = 0.083, on the task completion time (driving task only). Neither the main effect of trial order nor the interaction between input method and trial order was statistically significant.
The mean task completion time for gaze control was 93.86 seconds (SD = 61.51). Hand control was 30.72% faster than gaze control, with a mean task completion time of 65.03 seconds (SD = 31.43).
In addition, we analysed the time between when the participants had finished answering a pop-up query and when they started driving again. With two-way ANOVA, we found no significant main effects of the input method, the trial order, or the interaction between input method and trial order on the length of this time slot.

Figure 6: SA SPAM pop-up process. The pop-up ("Are you ready to answer a question?") is activated by the experimenter; on "Yes" the timer stops and the response time (RS) is recorded, otherwise the timer stops at a pre-set value; the SA query pop-up then appears, the test person answers orally, the RS and answers are recorded, and the pop-up is closed by the experimenter.

Figure 7: SA SPAM pop-up.
Figure 8: SA SPAM query pop-up.

2. Deviation from optimal path:
By comparing each path with its corresponding optimal path for the same maze layout, we calculated each path's root-mean-square deviation (RMSD) value using the following equation (P_t: a position on the path; \hat{P}_t: the position on the optimal path with the shortest distance to P_t):
\mathrm{RMSD} = \sqrt{\frac{\sum_{t=1}^{n} \left(P_t - \hat{P}_t\right)^2}{n}} \quad (1)
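Equation (1) can be computed directly from the logged positions. The nearest-point pairing below is an assumption about how \hat{P}_t is chosen for each sample; all names are illustrative.

```python
import math

def rmsd(path, optimal):
    """Root-mean-square deviation of a driven path from the optimal path.

    For each sampled position P_t on the driven path, the closest
    point on the (densely sampled) optimal path is used, matching Eq. (1).
    """
    def nearest_dist(p):
        return min(math.dist(p, q) for q in optimal)
    n = len(path)
    return math.sqrt(sum(nearest_dist(p) ** 2 for p in path) / n)

# Toy example: a straight optimal path and a slightly wobbly drive.
optimal = [(x / 10, 0.0) for x in range(51)]
driven = [(x / 10, 0.1 if x % 2 else -0.1) for x in range(51)]
print(round(rmsd(driven, optimal), 3))  # -> 0.1
```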
With two-way ANOVA and follow-up pairwise comparisons with a Bonferroni correction, we found significant main effects of input method, F(1,64) = 20.05, p = .000073, η² = 0.35, on the deviation from the optimal path. Neither the main effect of trial order nor the interaction between input method and trial order was statistically significant.
The mean RMSD for gaze control was 0.40 (SD = 0.90). Hand control had a 58.75% smaller deviation than gaze control, with a mean RMSD of 0.25 (SD = 0.11).
Figure 9: Mean and standard deviation (SD) of task completion time and RMSD with eye control and hand control.
3. Number of collisions:
There were two types of collisions: room wall hits and maze divider hits. With two-way ANOVA and follow-up pairwise comparisons with a Bonferroni
Figure 10: A path plot from a participant using gaze control. One collision (a maze divider hit) is seen in the top right corner.
Table 3
Mean percentage of accuracy (%) [Mean (SD)].

       Perception    Comprehension   Projection
Hand   0.86 (0.20)   0.47 (0.51)     0.59 (0.23)
Gaze   0.85 (0.25)   0.47 (0.51)     0.36 (0.29)
Table 4
Mean response time (s) to the pop-up queries [Mean (SD)].

       Perception    Comprehension   Projection
Hand   6.00 (2.96)   13.23 (7.94)    22.41 (10.22)
Gaze   7.43 (2.67)   13.41 (5.28)    21.89 (8.26)
correction, we found significant main effects of the input method, F(1,64) = 4.75, p = .033, η² = 0.073, on the number of collisions. Neither the main effect of trial order nor the interaction between input method and trial order was statistically significant.
The mean number of collisions for gaze control was 1.68 (SD = 1.73), while hand control had a mean number of collisions of 0.75 (SD = 1.67).

4. Task completion:
Analysis of the path plots of all 64 trials (cf. Fig. 10) and the video recordings showed that only 1 trial was not completed (a gaze control trial in the first trial).
6.2. Situation Awareness

The results from the SPAM measures included the participants' percentage of accuracy to the SA-related queries, and two types of response time.

A two-way ANOVA was conducted on the percentage of accuracy to the projection-related queries and followed up with pairwise comparisons with a Bonferroni correction. We found that input method had significant main effects, F(1,64) = 12.36, p = .00084, η² = 0.17, on the percentage of accuracy to the projection-related queries.
With two-way ANOVAs, we found no significant effects of input method on the percentage of accuracy to perception- or comprehension-related queries. Moreover, we found no significant effect of trial order on any of the abovementioned aspects, and no interaction of input method and trial order.

The gaze control (M = 0.36, SD = 0.29) yielded a lower
percentage of accuracy in response to the projection-related SA queries than the hand control (M = 0.59, SD = 0.23).
A two-way ANOVA was conducted on the response time (s) to the perception-related queries and followed up with pairwise comparisons with a Bonferroni correction. We found significant main effects of input method, F(1,64) = 4.01, p = .049, η² = 0.062, on the response time to the perception-related queries.

Specifically, the gaze control (M = 7.43, SD = 2.67) yielded longer response times to the perception-related queries than the hand control (M = 6.00, SD = 2.96).

However, with two-way ANOVAs, we found no significant effects of input method on response time to the comprehension- or projection-related queries. We found no significant effect of trial order on response time to any of the three types of queries, and no interaction of input method and trial order.

Moreover, response time to the pre-query pop-up was analyzed. Specifically, we focused on the participants' mean response time in the first trial of each method. In the first trial, the gaze control (M = 3.53, SD = 2.02) yielded a longer response time to the pre-query than the hand control (M = 2.85, SD = 0.76).
6.3. Workload

Based on the participants' responses to NASA-TLX, six two-way ANOVAs were conducted on the 6 measures and followed up with pairwise comparisons with a Bonferroni correction. We found that input method had significant main effects on Mental Demand, F(1,64) = 5.04, p = .028, η² = 0.076; on Physical Demand, F(1,64) = 2.67, p = .030, η² = 0.042; on Effort, F(1,64) = 8.14, p = .00059, η² = 0.12; on Frustration, F(1,64) = 6.60, p = .016, η² = 0.098; and on Performance, F(1,64) = 2.25, p = .019, η² = 0.035.

However, we found no significant effect of input method on Temporal Demand. The gaze control (M = 10.75, SD = 5.14) yielded more Mental Demand than the hand control (M = 7.97, SD = 4.65). The gaze control (M = 6.88, SD = 4.46) yielded more Physical Demand than the hand control (M = 5.22, SD = 3.52). The gaze control (M = 9.69, SD = 4.73) yielded more Effort than the hand control (M = 6.47, SD = 4.33). The gaze control (M = 8.41, SD = 3.99) yielded more Frustration than the hand control (M = 5.91, SD = 3.75). The gaze control (M = 9.41, SD = 4.83) yielded higher Performance ratings than the hand control (M = 7.19, SD = 4.88).
Neither the main effect of trial order nor the interactionbetween input method and trial order were statistically sig-nificant.
6.4. Post-trial Estimation and Recollection

With a two-way ANOVA and follow-up pairwise comparisons with a Bonferroni correction, we found that input method had significant main effects, F(1,64) = 5.93, p = .018, η² = .090, on the participants' ability to draw a correct sketch of the maze they had just driven through.
Figure 11: Mean and SD of ratings of workload for eye control and hand control.
Gaze control yielded a lower percentage of accuracy in the maze sketch (M = 58.43, SD = 37.60) than hand control (M = 79.68, SD = 30.85).

With two-way ANOVAs, we found no significant effect of input method on correct memory of the positions at which a person had shown up in the maze.

The participants' duration estimation was quantified as an error score (score = |participant's duration estimate − actual duration|). With two-way ANOVAs and follow-up pairwise comparisons with a Bonferroni correction, we found that input method had significant main effects, F(1,64) = 5.80, p = .019, η² = 0.09, on their estimation of driving duration. Moreover, input method had significant main effects, F(1,64) = 7.55, p = .0079, η² = 0.11, on their estimation of the number of collisions.

Gaze control yielded a smaller duration estimation error score (M = 1.21, SD = 0.99) than hand control (M = 2.13, SD = 2.03).
6.5. Presence

Six two-way mixed ANOVAs were conducted on the 6 measures included in the Presence Questionnaire. We found no significant effect of trial order, no significant effect of input method, and no interaction of input method and trial order for scores on the following aspects: realism, quality of interface, possibility to act, self-evaluation of performance, and sounds.

With one of the ANOVAs and follow-up pairwise comparisons with a Bonferroni correction, we found that input method had significant main effects, F(1,64) = 6.00, p = .017, η² = 0.091, on scores of one aspect, namely possibility to examine. However, we found no significant effects
of trial order on scores of this aspect, and no interaction of input method and trial order.

Specifically, the gaze control (M = 4.35, SD = 0.77) yielded a lower mean score for possibility to examine than the hand control (M = 4.91, SD = 2.21).
6.6. Self-Assessment

The participants' pre- and post-trial self-assessments of pleasure, arousal and dominance were analyzed. The post-trial value minus the pre-trial value indicated the change during a trial.

With two-way ANOVAs and follow-up pairwise comparisons with a Bonferroni correction, we found significant main effects of input method, F(1,64) = 13.10, p = .00067, η² = 0.182, on the changes in the dominance aspect. Specifically, the gaze control (M = -0.43, SD = 1.11) yielded a feeling of reduced dominance, while the hand control yielded a feeling of increased dominance (M = 0.43, SD = 0.80). We found no significant effects of input method on the changes in the pleasure or arousal aspects. Neither the main effect of trial order nor the interaction between input method and trial order was statistically significant for any of these aspects.
6.7. Path Analysis

The gaze-controlled telerobot's paths and the hand-controlled telerobot's paths were compared (cf. Figure 12). When turning the gaze-controlled telerobot at a corner, users usually kept a larger distance to the maze markers than when using the hand-controlled telerobot. This phenomenon was most obvious in 180° turns.
Figure 12: Visualisation of gaze-controlled telerobots’ paths(left) and hand-controlled telerobots’ paths (right).
6.8. User comments

At the end of the experiment, each participant was asked to share their observations. There were a few common themes appearing in the comments.

Four participants specifically mentioned that they preferred the gaze control. One typical piece of feedback was "fun but hard to control". Eight participants mentioned that they preferred the hand control. Here are some typical examples of what they said about the control methods:

"I .... prefer the gaze but it is harder to control when it moves forward. The controller is more natural ...."

"Joystick is much easier, very responsive. I had never tried VR before, so ... first time sickness was a distracting factor for me."

"With the joystick there is the possibility of stopping the movement with a button, for example to look around and making better choices for the next movements. In gazing mode, I couldn't do that and every time I was looking somewhere to gain information about the environment, the robot was moving and I was losing control. I feel the necessity, also in gazing mode, for a move and stop switch."

"... the VR-glasses remain heavy and they have to sit low on my head to be able to calibrate. A bit too low in terms of comfort."

"The VR headset is heavy and uncomfortable to wear for more than 5 minutes..."
7. Results: Comparison of SA measurement methods

7.1. Correlation with other metrics

Pearson's correlation coefficient was calculated to check for correlations between scores. The data may be seen in Table 5.
In terms of SPAM, there are three types of data recorded for each pop-up: 1) response time to the "Are you ready" question (PRS); 2) response time to the pop-up query (QRS); 3) accuracy percentage of answers.

We found that PRS was significantly correlated with task completion time (Pearson's r(62) = 0.25, p < 0.05), Mental Demand (r(62) = 0.25, p < 0.05), and Temporal Demand (r(62) = 0.33, p < 0.05). We found that QRS was significantly correlated with Mental Demand (Pearson's r(62) = 0.31, p < 0.05). Specifically, QRS to perception-related queries had a significant correlation with task completion time (Pearson's r(62) = 0.28, p < 0.05) and collision times (Pearson's r(62) = 0.38, p < 0.01). QRS to projection-related queries was significantly correlated with Mental Demand (Pearson's r(62) = 0.43, p < 0.01) and Temporal Demand (Pearson's r(62) = 0.33, p < 0.01).

Percentage of accuracy was significantly correlated with subjective ratings of Frustration (Pearson's r(62) = 0.30, p < 0.05). Particularly, accuracy of answers to projection-related queries was significantly correlated with task completion time (Pearson's r(62) = -0.29, p < 0.05), Mental Demand (Pearson's r(62) = 0.25, p < 0.05), and Frustration (Pearson's r(62) = 0.30, p < 0.05).
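The reported coefficients are plain Pearson correlations over the per-trial scores; a minimal stdlib sketch (the data below are hypothetical, not the study's measurements):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-trial values: pre-query response time (s) vs.
# task completion time (s).
prs = [2.1, 3.4, 2.8, 5.0, 4.2, 3.9]
completion = [60, 95, 80, 140, 120, 110]
print(round(pearson_r(prs, completion), 2))
```

In practice the p-values reported alongside r(62) would come from a t-test on the coefficient with n − 2 degrees of freedom.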
These statistically significant correlations validate previous findings [10] on the positive correlation between performance and SA, the negative correlation between performance and
workload, and the negative correlation between workload and SA. Figures 13-16 show these correlations.

However, using the data collected from subjective ratings of the SART questionnaire, we could not validate the previous findings (see Table 5).
Figure 13: Correlation of PRS and task completion time (R = 0.25, p = 0.048). Each black dot represents results from one trial. The blue line is the regression line and the shaded region is the CI. PRS: response time to the "Are you ready" question.
Figure 14: Correlation of response time (perception queries) and collision times (R = 0.38, p = 0.0018).
Figure 15: Correlation of response time (perception queries) and task completion time (R = 0.28, p = 0.023).
7.2. Consistency with other Metrics

One goal of the experiment was to identify challenges by comparing gaze control, as a novel control method, with hand control, which is widely used and served as a baseline
Figure 16: Correlation of accuracy of answers (projection queries) and task completion time (R = −0.46, p = 0.00025).
condition. Table 6 shows an overview of significant differences between the two methods. Consistency of significant differences can be found in most measures of performance and workload. In terms of SA, data from SPAM showed consistency with the other metrics. These significant differences can be found in the percentage of accuracy to the projection-related queries, and in the response time to the perception-related queries. However, regarding data measured with SART, differences between the gaze and hand conditions were non-significant, which is not consistent with the other metrics.
7.3. Random Effects Model

A random effects model was used for our analysis of data collected from SPAM and SART. Since we had two independent variables, we used the following random effects models:
SA_{\mathrm{SART}} = \mu + \alpha(\mathit{method}) + \beta(\mathit{trial}) + \gamma(\mathit{method}, \mathit{trial}) + \epsilon \quad (2)

SA_{\mathrm{SPAM}} = \mu + \alpha(\mathit{method}) + \beta(\mathit{trial}) + \gamma(\mathit{method}, \mathit{trial}) + \epsilon \quad (3)
The residual for the data from SPAM (0.069) was considerably smaller than the residual for the data from SART (1.33). The results indicated that SPAM provided more reliable data in this test with a small sample size.
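As a rough illustration of the residual comparison, the sketch below removes method × trial cell means and computes the variance of what remains. This is a simplified fixed-effects analogue of the random effects model above, not the analysis actually used, and the example data are hypothetical.

```python
from statistics import mean, pvariance

def residual_variance(scores):
    """Residual variance after removing method x trial cell means.

    `scores` maps (method, trial) -> list of SA scores. The cell mean
    absorbs the method, trial and interaction terms; what remains is
    the residual.
    """
    residuals = []
    for cell, values in scores.items():
        m = mean(values)
        residuals.extend(v - m for v in values)
    return pvariance(residuals)

# Hypothetical example data (not the study's actual ratings):
sart = {("gaze", 1): [4.0, 6.5, 3.0], ("hand", 1): [5.0, 7.5, 4.5]}
spam = {("gaze", 1): [0.40, 0.35, 0.42], ("hand", 1): [0.60, 0.57, 0.62]}
print(residual_variance(sart) > residual_variance(spam))  # noisier SART scores
```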
Table 5
Pearson's correlation coefficients between SA data and other metrics (*p < 0.05; **p < 0.01).

          PRS     SPAM.all Perc    Com     Pro     Q.all   Q.Per   Q.Com   Q.Pro   Time    Col     TlxMen  TlxTem  TlxFru  TlxPer
SART      -0.07   0.00     -0.13   0.20    0.10    -0.20   -0.08   -0.11   -0.21   0.21    0.28    0.01    0.06    0.06    0.21
PRS               -0.38**  -0.22   -0.31*  -0.28*  0.38**  0.29*   0.28*   0.27*   0.25*   0.17    0.25*   0.33**  0.21    0.078
SPAM.all                   0.85**  0.46**  0.46**  -0.22   -0.39** -0.17   0.02    -0.24   -0.06   -0.02   -0.09   -0.30*  -0.20
Perc                               0.14    0.07    -0.21   -0.44** -0.10   0.05    -0.15   -0.03   0.08    -0.10   -0.20   -0.20
Com                                        0.02    0.05    0.05    -0.16   0.09    -0.02   -0.05   0.06    0.02    -0.08   -0.10
Pro                                                -0.18   -0.17   -0.10   -0.12   -0.29*  -0.05   -0.25   -0.05   -0.30*  -0.03
Q.all                                                      0.71**  0.61**  0.79**  0.15    0.08    0.31*   0.19    0.12    0.01
Q.Per                                                              0.41**  0.17    0.28*   0.38**  0.05    0.048   0.14    0.09
Q.Com                                                                      0.26*   0.18    0.05    0.05    -0.22   -0.07   -0.13
Q.Pro                                                                              -0.05   -0.19   0.43**  0.33**  0.10    -0.00
Time                                                                                       0.48**  -0.16   -0.09   0.02    0.10
Col                                                                                                -0.11   0.017   0.00    0.12
TlxMen                                                                                                     0.48**  0.51**  0.38**
TlxTem                                                                                                             0.46**  0.53**
TlxFru                                                                                                                     0.55**

Perc - Perception; Com - Comprehension; Pro - Projection; Time - Task completion time; Col - number of collisions; Men - Mental Demand; Tem - Temporal Demand; Eff - Effort; Perf - Performance; Fru - Frustration
Table 6
Significance consistency: comparison of hand and gaze control (x: not significant). Columns: SART; SPAM (PRS, Perc, Com, Pro, Q.Per, Q.Com, Q.Pro); Performance (Time, Col, RMSD); Workload (TlxMen-TlxFru).

     SART   PRS    Perc   Com    Pro    Q.Per  Q.Com  Q.Pro  Time   Col    RMSD   TlxMen TlxPhy TlxTem TlxEff TlxPerf TlxFru
p    >0.05  >0.05  >0.05  >0.05  <0.01  <0.05  >0.05  >0.05  <0.05  <0.05  <0.01  <0.05  <0.05  >0.05  <0.01  <0.05   <0.05
F    x      x      x      x      12.36  4.01   x      x      5.40   4.75   20.05  5.04   2.67   x      8.14   2.25    6.60

Perc - Perception; Com - Comprehension; Pro - Projection; Time - Task completion time; Col - number of collisions; Men - Mental Demand; Phy - Physical Demand; Tem - Temporal Demand; Eff - Effort; Perf - Performance; Fru - Frustration
8. Discussion
8.1. Control methods (gaze control and hand control)
A main observation was that telepresence robots could actually be controlled by gaze. All the participants were able to use gaze control to finish the trials, except for one. Achieving the feeling of presence is a key issue and the biggest challenge for telepresence [33]. Adding gaze control to a telepresence system had a significant impact on one aspect of presence, namely the feeling of the "possibility to examine".
There are still serious challenges for the use of gaze driving in teleoperation when it comes to performance and workload, both of which are of great importance for human-robot interaction [48]. Gaze control of robots was clearly more difficult than hand control in all the aspects we measured. Novice users obviously need more practice with gaze control of telerobots, because they are unfamiliar with both gaze interaction and telerobots. Consequently, our future research plan is to study the possible effects of practice on performance. Moreover, sensors for collision avoidance and an interface map view may be considered. In particular, we will improve the interface for gaze control to allow freer examination of the environment without moving the telerobot at the same time.
The challenges of gaze control with respect to performance were also reflected in the difference in the participants' level of situational awareness between the two input methods. The differences in their responses to our projection-related queries between the two control methods were obvious, indicating higher SA when using hand control. We also observed a difference in mean response time to the projection-related queries between the two control methods. For example, when the query "Can you estimate when you will be finished with the task" appeared, participants using gaze control needed more time to think about their answer.
With gaze-controlled telerobots, the self-reported workload was higher on all six measures, and the SPAM method also found a higher response time to the pre-question presented before each query (i.e., "Are you ready to answer a question?"). The participants needed more time to switch from the navigation task to a mental task.
The self-assessments indicated that feelings of dominance were lower when using gaze control, but interestingly, the feelings of pleasure and arousal increased. This was consistent with comments like "fun but hard" about the experience with gaze control. When using gaze control, the participants usually kept larger distances from obstacles to avoid collisions. This was consistent with the lower self-reported feeling of dominance.
The interviews also suggested severe problems with wearing an HMD for a longer time. In a future study, we will examine if there is a difference between using an HMD and a monitor.
8.2. SA measure
Previous studies on SA have not dealt with using SPAM in teleoperation with an HMD. Using data from SPAM in this study, we could validate previous findings on the correlation of SA, workload, and performance. In our study, there were 8 pop-ups in each trial, and three types of data were collected with SPAM. Moreover, the queries in the pop-ups of each trial covered all three levels of SA, which allowed us to discover more details at each level from the data. In the original SPAM instructions for air traffic control, PRS is the response latency to the landline, which acts as an indicator of workload [44]. In our study, we further validated its correlation with performance and workload (mental demand and temporal demand). As we had data for all three SA levels, we found that accuracy on the projection-related queries had a significant correlation with both performance and workload. The projection-related queries also showed a significant difference between the two control methods. These results indicated that projection-related queries were more accurate in reflecting the SA level, as projection is the highest level of SA.
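For concreteness, the three data types collected per SPAM pop-up (accuracy, PRS, and query response time, separated by SA level) could be organised as in the following sketch; the field names are hypothetical and this is not the authors' implementation:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class SpamProbe:
    level: str      # "perception", "comprehension", or "projection"
    prs: float      # latency (s) to accept the "ready" pre-question
    qrs: float      # latency (s) to answer the query itself
    correct: bool   # whether the answer was correct

def summarize(probes, level):
    """Mean PRS, mean QRS and accuracy for one SA level within a trial."""
    sel = [p for p in probes if p.level == level]
    acc = sum(p.correct for p in sel) / len(sel)
    return mean(p.prs for p in sel), mean(p.qrs for p in sel), acc

trial = [SpamProbe("projection", 1.4, 3.2, True),
         SpamProbe("projection", 1.1, 2.8, False)]
print(summarize(trial, "projection"))  # ≈ (1.25, 3.0, 0.5)
```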
However, with SART we could neither validate the correlations nor find a significant difference between gaze control and hand control. For our study with a small sample size, the SPAM pop-up provided more reliable data than SART. One reason is that the SART questionnaire is based on the subjects' subjective ratings. It may also be caused by a common problem of post-trial questionnaires, which usually rely on memory and recollection. In our case, users had to remove the HMD and leave the immersive virtual world before completing the questionnaire in the real world after the trial. In doing so, the person has to re-orientate in the real world, which causes a "break-in-presence" (BIP) problem [25] that can distort the phenomenon the questionnaires measure [46]. BIP could be one potential reason why we could not find significant impacts of the control method on the subjective ratings in the presence questionnaire. It is still unclear whether the feeling of presence was unaffected by the control method, or whether the sensitivity of the measure decreased due to the BIP problem. Recently, the use of presence questionnaires in HMDs has been studied [46]. For teleoperation with an HMD, this needs to be explored in future studies.
SPAM provided more reliable data in our study. However, we also observed that there were still limitations in our adaptation of SPAM, even though SPAM is not as intrusive to the main task as SAGAT (see Table 1).
A previous study on air traffic control [9, 8] examined the issue of intrusiveness and showed that there was no effect on performance or workload. However, some criticize SPAM as being intrusive to the operator (e.g., [38]), and a negative effect of SPAM on performance has been shown in [30]. In each trial of our study, there were in total 8 pop-ups covering the three levels of SA. Each pop-up consisted of answering a "ready" question and then answering the query. This kind of "ready" prompt for probe administration only reduces performance decrements to some degree [39].
Simplification of this process or a reduction in the number of pop-ups could be considered. SPAM requires dual-tasking to answer queries while performing a task, potentially interfering with performance and creating a secondary-task workload
measure [15]; indeed, it can itself be considered a secondary workload task [15].
Dual-task methods are used for the measurement of workload [5]. In our case, PRS is the response latency to "ready", which acts as an indicator of workload. This task acts as an additional task, which meets the requirement of self-pacing and provides two sets of data: response latency and errors [5]. Future improvements need to explore alternative methods that minimize intrusiveness. Eye-tracking-based SA measures have the advantage of objectivity [15] and are relatively non-intrusive to primary task performance [44]. Pro-saccade and anti-saccade tests are widely used in pathophysiology and psychology research [32]. These saccade tests can be seamlessly implemented for teleoperation with an HMD. The task in a saccade test meets the requirements of an additional task, as it provides data on response latency and errors [5].
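To illustrate why such a test fits the dual-task requirements, a saccade probe yields both a latency and an error signal from raw gaze samples. The sketch below uses a simple velocity-threshold rule and assumed sample formats; it is not the implementation used in this work:

```python
def saccade_latency(timestamps, angles, onset_time, threshold=30.0):
    """Time (s) from target onset until gaze angular velocity (deg/s)
    first exceeds the threshold, i.e. a crude saccade-onset estimate.
    Returns None if no saccade is detected after onset."""
    for i in range(1, len(angles)):
        dt = timestamps[i] - timestamps[i - 1]
        if dt <= 0:
            continue
        velocity = abs(angles[i] - angles[i - 1]) / dt
        if timestamps[i] >= onset_time and velocity > threshold:
            return timestamps[i] - onset_time
    return None

# 120 Hz gaze trace: steady fixation, then a rapid shift after target onset at t = 0.10 s
ts = [i / 120 for i in range(24)]
gaze = [0.0] * 18 + [2.0, 5.0, 8.0, 10.0, 10.0, 10.0]
print(saccade_latency(ts, gaze, onset_time=0.10))  # ≈ 0.05 s
```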
The measurement of RS needs to be improved in future research to increase the sensitivity of the measure. The response times (both PRS and QRS) we measured were the subject's response time together with the experimenter's response time. In our pilot study, the subjects found it challenging and spent more time finding and pressing the correct key for answering while wearing an HMD. To avoid this problem, the experimenter pressed the key for them upon hearing the verbal answer. This problem needs to be solved in future studies. For example, the confirmation process could be embedded into the virtual world displayed by the HMD, so that subjects could confirm by using eye or head movements.
8.3. Limitations
There were several limitations in this study. First and foremost, the participants only had a limited number of trials. Considering that gaze interaction was completely new to most of them, the large difference we found between hand and gaze control may be reduced with more training. Due to space constraints, the size of the maze was rather small. If participants had driven for a longer time, made more turns, and tested several different layouts beyond the four we used, a training effect might also have become clearer. In fact, we saw no training effect in our present experiment (i.e., no differences between the first and second trials). The duration of the navigation task was relatively short, and the participants were frequently interrupted by pop-up queries. In our future studies, we intend to reduce the number of queries but still maintain some, because some of them seemed to be quite sensitive when measuring SA. Improvements of the SPAM pop-up addressing the problems mentioned above will also be explored.
9. Conclusion
This study demonstrated the possibility of controlling a telepresence robot with gaze. Compared to hand control, performance and subjective experience were significantly lower. We have presented a set of measures and developed a maze-based test method that may be considered a common ground for future research in alternative control principles for telerobot control. Regarding the validation of SA, this is the first time SPAM has been used in robot teleoperation with HMDs. We found more evidence for SPAM as a more reliable SA technique when compared to a post-trial subjective one (SART). Limitations of the SPAM pop-up were also found, which could be addressed in our future research.
CRediT authorship contribution statement
Guangtao Zhang: First author. John Paulin Hansen: Second author. Katsumi Minakata: Third author.
References
[1] Aleotti, J., Micconi, G., Caselli, S., Benassi, G., Zambelli, N., Bettelli, M., Zappettini, A., 2017. Detection of nuclear sources by UAV teleoperation using a visuo-haptic augmented reality interface. Sensors 17, 2234. doi:10.3390/s17102234.
[2] Bates, R., Donegan, M., Istance, H.O., Hansen, J.P., Räihä, K.J., 2007. Introducing COGAIN: communication by gaze interaction. Universal Access in the Information Society 6, 159–166.
[3] Beraldo, G., Antonello, M., Cimolato, A., Menegatti, E., Tonin, L., 2018. Brain-computer interface meets ROS: A robotic approach to mentally drive telepresence robots, in: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE. pp. 1–6.
[4] Bradley, M.M., Lang, P.J., 1994. Measuring emotion: the self-assessment manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry 25, 49–59.
[5] Brown, I., 1978. Dual task methods of assessing work-load. Ergonomics 21, 221–224.
[6] Carreto, C., Gêgo, D., Figueiredo, L., 2018. An eye-gaze tracking system for teleoperation of a mobile robot. Journal of Information Systems Engineering & Management 3, 16.
[7] Ciger, J., Herbelin, B., Thalmann, D., 2004. Evaluation of gaze tracking technology for social interaction in virtual environments, in: Proc. of the 2nd Workshop on Modeling and Motion Capture Techniques for Virtual Environments (CAPTECH'04), pp. 1–6.
[8] Durso, F.T., Bleckley, M.K., Dattel, A.R., 2006. Does situation awareness add to the validity of cognitive tests? Human Factors 48, 721–733.
[9] Durso, F.T., Hackworth, C.A., Truitt, T.R., Crutchfield, J., Nikolic, D., Manning, C.A., 1998a. Situation awareness as a predictor of performance for en route air traffic controllers. Air Traffic Control Quarterly 6, 1–20.
[10] Durso, F.T., Truitt, T.R., Hackworth, C.A., Crutchfield, J.M., Manning, C.A., 1998b. En route operational errors and situational awareness. The International Journal of Aviation Psychology 8, 177–194.
[11] Eid, M.A., Giakoumidis, N., El-Saddik, A., 2016. A novel eye-gaze-controlled wheelchair system for navigating unknown environments: Case study with a person with ALS. IEEE Access 4, 558–573.
[12] Endsley, M.R., 1988. Situation awareness global assessment technique (SAGAT), in: Proceedings of the IEEE 1988 National Aerospace and Electronics Conference, IEEE. pp. 789–795.
[13] Endsley, M.R., 1995. Measurement of situation awareness in dynamic systems. Human Factors 37, 65–84.
[14] Endsley, M.R., 2000. Direct measurement of situation awareness: Validity and use of SAGAT, in: Endsley, M.R., Garland, D.J. (Eds.), Situation Awareness Analysis and Measurement. Lawrence Erlbaum Associates, Mahwah, NJ, pp. 147–173.
[15] Endsley, M.R., 2019. A systematic review and meta-analysis of direct objective measures of situation awareness: A comparison of SAGAT and SPAM. Human Factors, 0018720819875376.
[16] Escolano, C., Murguialday, A.R., Matuz, T., Birbaumer, N., Minguez, J., 2010. A telepresence robotic system operated with a P300-based brain-computer interface: initial tests with ALS patients, in: 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, IEEE. pp. 4476–4480.
[17] Friedman, N., Cabral, A., 2018. Using a telepresence robot to improve self-efficacy of people with developmental disabilities, in: Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 489–491.
[18] Gatsoulis, Y., Virk, G.S., 2007. Performance metrics for improving human-robot interaction. Advances in Climbing and Walking Robots - Proceedings of the 10th International Conference, CLAWAR 2007, 716–725.
[19] Hansen, J.P., Alapetite, A., MacKenzie, I.S., Møllenbach, E., 2014. The use of gaze to control drones, in: Proceedings of the Symposium on Eye Tracking Research and Applications, ACM. pp. 27–34.
[20] Hansen, J.P., Alapetite, A., Thomsen, M., Wang, Z., Minakata, K., Zhang, G., 2018. Head and gaze control of a telepresence robot with an HMD, in: Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications, ACM.
[21] Hart, S.G., Staveland, L.E., 1988. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research, in: Advances in Psychology. Elsevier. volume 52, pp. 139–183.
[22] Heshmat, Y., Jones, B., Xiong, X., Neustaedter, C., Tang, A., Riecke, B.E., Yang, L., 2018. Geocaching with a beam: Shared outdoor activities through a telepresence robot with 360 degree viewing, in: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, ACM. p. 359.
[23] Isokoski, P., Joos, M., Spakov, O., Martin, B., 2009. Gaze controlled games. Universal Access in the Information Society 8, 323.
[24] Jackowski, A., Gebhard, M., 2017. Evaluation of hands-free human-robot interaction using a head gesture based interface, in: Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, ACM. pp. 141–142.
[25] Jerald, J., 2015. The VR Book: Human-Centered Design for Virtual Reality. Morgan & Claypool.
[26] Jones, D.G., 2000. Subjective measures of situation awareness. Situation Awareness Analysis and Measurement, 113–128.
[27] Kristoffersson, A., Coradeschi, S., Loutfi, A., 2013. A review of mobile robotic telepresence. Advances in Human-Computer Interaction 2013, 3.
[28] Lee, M.K., Takayama, L., 2011. "Now, I have a body": Uses and social norms for mobile remote presence in the workplace, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM. pp. 33–42.
[29] Leeb, R., Tonin, L., Rohm, M., Desideri, L., Carlson, T., Millán, J.d.R., 2015. Towards independence: a BCI telepresence robot for people with severe motor disabilities. Proceedings of the IEEE 103, 969–982.
[30] Loft, S., Morrell, D.B., Ponton, K., Braithwaite, J., Bowden, V., Huf, S., 2016. The impact of uncertain contact location on situation awareness and performance in simulated submarine track management. Human Factors 58, 1052–1068.
[31] Majaranta, P., Ahola, U.K., Špakov, O., 2009. Fast gaze typing with an adjustable dwell time, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM. pp. 357–360.
[32] Mardanbegi, D., Wilcockson, T., Sawyer, P., Gellersen, H., Crawford, T., 2019. SaccadeMachine: software for analyzing saccade tests (anti-saccade and pro-saccade), in: Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, pp. 1–8.
[33] Minsky, M., 1980. Telepresence. Omni 2, 44–52.
[34] Moyle, W., Jones, C., Cooke, M., O'Dwyer, S., Sung, B., Drummond, S., 2013. Social robots helping people with dementia: Assessing efficacy of social robots in the nursing home environment, in: 2013 6th International Conference on Human System Interactions (HSI), IEEE. pp. 608–613.
[35] Park, C.H., Howard, A.M., 2012. Real world haptic exploration for telepresence of the visually impaired, in: Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction, pp. 65–72.
[36] Park, C.H., Howard, A.M., 2014. Robotics-based telepresence using multi-modal interaction for individuals with visual impairments. International Journal of Adaptive Control and Signal Processing 28, 1514–1532.
[37] Park, C.H., Ryu, E.S., Howard, A.M., 2015. Telerobotic haptic exploration in art galleries and museums for individuals with visual impairments. IEEE Transactions on Haptics 8, 327–338.
[38] Pierce, R.S., 2012. The effect of SPAM administration during a dynamic simulation. Human Factors 54, 838–848.
[39] Pierce, R.S., Vu, K.P.L., Nguyen, J., Strybel, T.Z., 2008. The relationship between SPAM, workload, and task performance on a simulated ATC task, in: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Sage Publications, Los Angeles, CA. pp. 34–38.
[40] Rae, I., Neustaedter, C., 2017. Robotic telepresence at scale, in: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, ACM. pp. 313–324.
[41] Rae, I., Takayama, L., Mutlu, B., 2013. The influence of height in robot-mediated communication, in: Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction, IEEE Press. pp. 1–8.
[42] Riley, J., Kaber, D., Draper, J., 2004. Situation awareness and attention allocation measures for quantifying telepresence experiences in teleoperation. Human Factors and Ergonomics in Manufacturing 14, 51–67. doi:10.1002/hfm.10050.
[43] Ruff, H., Narayanan, S., Draper, M., 2002. Human interaction with levels of automation and decision-aid fidelity in the supervisory control of multiple simulated unmanned air vehicles. Presence: Teleoperators and Virtual Environments 11, 335–351. doi:10.1162/105474602760204264.
[44] Salmon, P., Stanton, N., Walker, G., Green, D., 2006. Situation awareness measurement: A review of applicability for C4i environments. Applied Ergonomics 37, 225–238.
[45] Sankhe, P., Kuriakose, S., Lahiri, U., 2013. A step towards a robotic system with smartphone working as its brain: An assistive technology, in: 2013 International Conference on Control, Automation, Robotics and Embedded Systems (CARE), IEEE. pp. 1–6.
[46] Schwind, V., Knierim, P., Haas, N., Henze, N., 2019. Using presence questionnaires in virtual reality, in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–12.
[47] Selcon, S.J., Taylor, R., 1990. Evaluation of the situational awareness rating technique (SART) as a tool for aircrew systems design. AGARD, Situational Awareness in Aerospace Operations.
[48] Steinfeld, A., Fong, T., Kaber, D., Lewis, M., Scholtz, J., Schultz, A., Goodrich, M., 2006. Common metrics for human-robot interaction, in: Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, ACM. pp. 33–40.
[49] Tall, M., Alapetite, A., San Agustin, J., Skovsgaard, H.H., Hansen, J.P., Hansen, D.W., Møllenbach, E., 2009. Gaze-controlled driving, in: CHI'09 Extended Abstracts on Human Factors in Computing Systems, ACM. pp. 4387–4392.
[50] Tanaka, F., Takahashi, T., Matsuzoe, S., Tazawa, N., Morita, M., 2014. Telepresence robot helps children in communicating with teachers who speak a different language, in: Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, ACM. pp. 399–406.
[51] Tara, R.Y., Teng, W.C., 2017. Improving the visual momentum of tethered viewpoint displays using spatial cue augmentation. Intelligent Service Robotics 10, 313–322. doi:10.1007/s11370-017-0231-z.
[52] Taylor, R.M., 1989. Situational awareness rating technique (SART): The development of a tool for aircrew systems design, in: Proceedings of the AGARD AMP Symposium on Situational Awareness in Aerospace Operations, CP478. NATO AGARD, Neuilly-sur-Seine.
[53] Tonin, L., Carlson, T., Leeb, R., Millán, J.d.R., 2011. Brain-controlled telepresence robot by motor-disabled people, in: 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE. pp. 4227–4230.
[54] Tsui, K.M., Flynn, K., McHugh, A., Yanco, H.A., Kontak, D., 2013. Designing speech-based interfaces for telepresence robots for people with disabilities, in: Rehabilitation Robotics (ICORR), 2013 IEEE
International Conference on, IEEE. pp. 1–8.
[55] Tsui, K.M., McCann, E., McHugh, A., Medvedev, M., Yanco, H.A., Kontak, D., Drury, J.L., 2014. Towards designing telepresence robot navigation for people with disabilities. International Journal of Intelligent Computing and Cybernetics.
[56] Tsun, M.T.K., Theng, L.B., Jo, H.S., Lau, S.L., 2015. A robotic telepresence system for full-time monitoring of children with cognitive disabilities, in: Proceedings of the International Convention on Rehabilitation Engineering & Assistive Technology, pp. 1–4.
[57] Van Erp, J.B.F., Duistermaat, M., Jansen, C., Groen, E., Hoedemaeker, M., 2016. Tele-presence: Bringing the operator back in the loop. Virtual Media for Military Applications, 9-1–9-18.
[58] Vidulich, M.A., 2000. The relationship between mental workload and situation awareness, in: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, SAGE Publications, Los Angeles, CA. pp. 3-460.
[59] Witmer, B.G., Singer, M.J., 1998. Measuring presence in virtual environments: A presence questionnaire. Presence 7, 225–240.
[60] Yang, L., Jones, B., Neustaedter, C., Singhal, S., 2018. Shopping over distance through a telepresence robot. Proceedings of the ACM on Human-Computer Interaction 2.
[61] Yang, L., Neustaedter, C., 2018. Our house: Living long distance with a telepresence robot. Proceedings of the ACM on Human-Computer Interaction 2.
[62] Zhang, J., Langbehn, E., Krupke, D., Katzakis, N., Steinicke, F., 2018. Detection thresholds for rotation and translation gains in 360° video-based telepresence systems. IEEE Transactions on Visualization and Computer Graphics 24, 1671–1680.
A.3 Saccade Test as a New Tool for Estimating Operators' Situation Awareness in Teleoperation with an HMD
Authors: Guangtao Zhang, Sebastian Hedegaard Hansen, Oliver Repholtz Behrens, and John Paulin Hansen
Submitted to Applied Ergonomics
Saccade Test as a New Tool for Estimating Operators' Situation Awareness in Teleoperation with an HMD
Guangtao Zhang∗, Sebastian Hedegaard Hansen, Oliver Repholtz Behrens and John Paulin Hansen
Technical University of Denmark, 2800 Kgs. Lyngby, Denmark

ARTICLE INFO
Keywords: Teleoperation; Human-robot interaction; Situation awareness; Evaluation method; Head-mounted display; Eye-tracking; Dual-task; Saccade test

ABSTRACT
Recent advances in mixed, augmented and virtual reality (MR/AR/VR) provide new options for the experience and applications of robotic teleoperation. Emerging head-mounted displays (HMDs) have been used in robot teleoperation, providing an immersive telepresence experience and more possibilities for multimodal interaction. Situation awareness (SA), performance, and workload are important metrics in robot teleoperation and are correlated with each other. Compared to post-trial questionnaires, real-time pop-up techniques (e.g., SPAM) have unique advantages, but they have the disadvantage of interrupting the primary task. The saccade test has been widely used in human factors research, especially in driving, aviation, and teleoperation. The test, based on saccadic eye movements and the dual-task paradigm, has the potential to be used as a tool to estimate operators' SA. However, previous work has not yet explored using the saccade test in robot teleoperation with an HMD. Its relationship with similar SA techniques (e.g., SPAM) based on the dual-task paradigm, and its effectiveness, are still unclear. In this experiment, we utilized an existing real-time measurement technique of SA and a method based on saccadic eye movements. The goal of the experiment was to explore the use and validity of the saccade test as an alternative SA technique within the context of robot teleoperation with an HMD. We implemented a pro-saccade test and a SPAM-based pop-up in our robotic system by using the HMD's built-in eye-tracking component. In our research practice with gaze-controlled telerobots with an HMD, data were collected for comparison. We found that saccade test data had strongly significant correlations with data from the SPAM-based pop-up. In particular, the latency to a correct saccade (especially the first one) had a strongly significant correlation (r = 0.75) with the response time to the preliminary question of the SPAM pop-up (PRS). This suggests that the saccade test could be an alternative method for the assessment of operators' SA. Correlations of saccade test data and other metrics further confirmed previous findings on teleoperation, namely a positive correlation between SA and human performance, and a negative correlation between SA and workload. Based on these findings, we envision potential usages in future applications, and further discuss the possibility of applying the saccade test to improve the safety mechanism, as an enhanced version of the dead-man's button, in robot teleoperation with an HMD.
1. Introduction
Teleoperation has been widely used in a variety of application areas, such as telepresence [37, 30], tele-existence [55], search and rescue [58], agriculture [12], military [43], and teleguide, tele-exploration and tele-intervention in medical care [46].
VR/AR/MR have been widely used in robotic teleoperation [39]. Emerging head-mounted displays (HMDs) have unique advantages in providing operators better experience and accessibility, but they also bring challenges to evaluation, especially for key metrics like situation awareness (SA). Based on a previous study [74] addressing these challenges, a new tool for estimating operators' situation awareness with less interruption of the primary task is needed. Saccadic eye movements have been used for assessment in related areas, and eye-tracking sensors are built into several recent HMD models (e.g., FOVE). In this work, we aimed to explore this new tool by looking at the implementation of a saccade test in this setting, and to further explore its validity as a new assessment technique.
∗Corresponding author
[email protected] (G. Zhang); [email protected] (S.H. Hansen); [email protected] (O.R. Behrens); [email protected] (J.P. Hansen)
ORCID(s): 000-0003-0794-3338 (G. Zhang); 0000-0001-5594-3645 (J.P. Hansen)
This work elaborates on the exploration of using the saccade test in the context of teleoperation with an HMD.
The paper is organised as follows: In Section 2, we outline previous research in related areas and the research problems. Section 3 describes our robotic system for teleoperation and the two SA assessment methods implemented in the system. Section 4 presents an experiment with 32 subjects for data collection. Section 5 presents a comparison of the saccade-test-based technique and a pop-up-based technique (based on SPAM [19]), and their correlations with other metrics. In Section 6, we discuss our findings and potential usages of the technique in future applications and research.
2. Related work and Problem Statement
2.1. Robotic Teleoperation with an HMD
Teleoperation has been widely applied to a whole range of situations where a machine is operated by an operator from a distance [37, 30, 55, 58, 12, 43, 46]. Combined with existing robot technology, these tasks have even broader application areas and intelligent features. Despite the development of autonomous functionality in robots, robotic teleoperation by an operator is still a means of performing
G Zhang et al.: Preprint Page 1 of 13
delicate tasks in daily life and industry, especially in safety-critical contexts [37].
The development of virtual reality (VR) has also resulted in new generations of human-machine interfaces for teleoperation, and significant progress in robotic teleoperation has been made with the development of VR techniques [39]. For instance, an AR system allows users to teleoperate a robot end-effector with their hands in real time [52]. VR can provide an immersive environment for teleoperation of a mobile wheeled robot [39, 51] or a humanoid robot [2], as seen in the case of immersive teleoperation for agricultural field robots [12]. One example is a telerobot arm that enables the user to see and feel what the robot sees and feels from a first-person point of view [10]. This is also evident in space science and military applications [39].
Head-mounted displays (HMDs) are the most popular devices for virtual reality and telexistence/telepresence humanoid operation [55]. Operators tend to gain more adequate 3D situation awareness and task awareness using HMDs with stereo viewing [2], in comparison with viewing on a flat 2D screen. Compared with other VR equipment, HMDs are cost-effective and portable [43]. Previous studies have examined using an HMD for teleoperation, such as robot-assisted surgery [46], mining [59], and telepresence [44]. Some report better performance with an HMD for navigation [3], manipulation [49], or perception and situation awareness [48]. An ecological interface with a 3-D display makes it easier, safer, and faster to guide the telerobot, compared with a 2-D one [51].
HMDs with emerging built-in sensors have the potential to improve user experience and accessibility in teleoperation, and to provide biometric data for researchers. For example, eye-tracking sensors are built into several recent HMD models. The increased accuracy of eye-tracking equipment makes it feasible to utilize this technology for the assessment of operators' cognitive status, and for the implementation of accessible control of teleoperation for people with motor disabilities [30]. However, HMDs for robot teleoperation also bring specific problems, such as cybersickness and challenges for evaluation studies. A previous study has used data from the built-in eye-tracking component for the prediction of cybersickness [8]. Using traditional ex post questionnaires has limitations in measurement, for example of presence [64] and SA [74]. Requiring participants to leave and re-enter the virtual environment displayed by the HMD costs time and can cause disorientation [64] and "break in presence" (BIP) problems [40]. A previous study explored the measurement of SA [74], but a technique with less interruption from the secondary task is still needed.
2.2. Situation Awareness
The process of remote control of a telerobot is teleoperation [51]. Situation awareness (SA) plays an important role in teleoperation and in understanding the environment the telerobot is navigating through [23]. Though SA has its critics [26], a large number of SA studies have appeared in the past 30 years. It is critical to effective decision-making, operator performance and workload in numerous dynamic control tasks [41, 27]. SA has become a core theme within human factors research [62]. The often-cited definition of SA is "the perception of elements in their environment within a volume of space and time, the comprehension of their meaning, and the projection of their status in the near future" [23]. SA is considered the primary basis for performance [22, 60, 61]. There is a correlation between mental workload and SA [70].
A number of methods have been proposed, which can be categorized as subjective and objective measures. Typical subjective measures, such as the ex post questionnaire Situation Awareness Rating Technique (SART) [65], rely on the ability of participants to assess their own level of subconstructs such as supply, demand and understanding. Typical objective measures, such as the Situation Present Assessment Method (SPAM) [20], rely on the correctness of responses as well as the reaction time to queries. In the context of teleoperation with an HMD, objective measures have unique advantages over subjective ex post questionnaires [75]. However, some studies reported negative effects of SPAM on operator performance [24]. In an adapted pop-up version for a teleoperation task with an HMD, the problem of interruption has also been found.
In addition to these typical methods, it is also feasible to assess SA from eye movements [72, 50, 67, 36]. Eye tracking is a psycho-physiologically based and quantifiable measurement technology which is non-invasive and effective [50], and it is applicable through real-time feedback technologies [72]. For example, eye-movement features (dwell and fixation time) were found to correlate with performance measures [32].
2.3. Dual-task Paradigm
The dual-task paradigm [71] denotes a procedure in experimental psychology that requires an individual to perform two tasks simultaneously: a primary task and a secondary task. Methods based on this paradigm include measurement of workload [4, 6, 53, 17] and sleepiness [5]. Reaction time is a key metric in such methods. For instance, reaction time to the secondary task has been used to estimate mental workload in driving [17]. Using reaction time to determine sleepiness in drivers is also of increasing interest [5].
The dual-task paradigm has been used in previous SA research [7]. The typical method SPAM [20] is based on this paradigm. The secondary task in SPAM is reacting to a landline call. This task meets the requirement of self-pacing and provides two sets of data, from response latency and from errors [6]. The time taken to answer acts as an indicator of workload [62]. However, a previous study found that SPAM induces some dual-task performance decrements [57].
2.4. Saccade Test
Saccadic eye movements have been extensively applied in psychology, oculomotor and cognitive research [47, 29], for instance in prosaccade tests and antisaccade tests. Data
G Zhang et al.: Preprint Page 2 of 13
116 Gazecontrolled Telepresence: Accessibility, Training and Evaluation
Saccade Test as a New Tool for Estimating Operators’ Situation Awareness in Teleoperation with an HMD
collected from a saccade test can be used to accurately and precisely distinguish unique individuals [35]. Saccade tests can also serve other purposes, such as detection of mental workload [69, 17], attention [34], sleepiness [33], and fatigue [16, 14, 15, 18, 33], and have been used across research domains. For detection of fatigue, they have been used in transportation (drivers) [16], medical care (surgical residents) [14], aviation (aviators) [15], and the military [18]. Different types of saccadic eye-movement metrics are commonly applied in detection of fatigue, including saccade mean velocity, magnitude, frequency, accuracy, latency, duration, and peak velocity [16, 14, 15, 18]. Saccadic eye-movement performance can also act as an indicator of driving ability in elderly drivers [63]; previous results showed a strong correlation between anti-saccade performance and driving behavior [63]. Saccadic eye movements can be used for detection of mental workload in driving. An algorithm to detect saccadic intrusions based on fixational eye movements and regular saccades was developed for estimation of mental workload [69]. Typical workload detection based on pupil diameter is overly sensitive to brightness changes, whereas detection based on saccadic eye movements is independent of brightness changes, robust in most environments, and more accurate [69]. The saccadic main sequence (amplitude, duration and peak velocity) was used as a diagnostic measure of mental workload [17]. Previous findings indicated that, among amplitude and duration of saccades, the peak velocity of saccadic eye movements is particularly sensitive to changes in mental fatigue [16]. Saccadic peak velocity was used as a key metric for measuring mental workload [17]. The dual-task paradigm is commonly used in estimation of mental workload based on saccadic eye movements, for example using reaction time to the secondary task as a metric [17].
In addition, saccadic eye movements have been extensively used in healthcare [68, 66, 13, 28].
2.5. Dead Man's Switch
Dead man's switches have been widely deployed in safety-critical motion control devices, especially in the control of vehicles or machines [1]. For example, when a driver is incapacitated with his foot off the pedal, the dead man's switch may act as a safety device that immediately stops the locomotive's motion [1]. It has also been applied to emerging fields like robots and teleoperation [54, 25], where it is also called a live-man button [25]. With the development of smart technology, this safety mechanism has been enhanced and has become more intelligent, as exemplified by the robotic dead man's switch in driving [1]. In assistive-technology research, the mechanism has unique importance; it has been used as an essential feature in electric wheelchairs and rehabilitation equipment [45], e.g., to enhance safety for people with limited mobility [11, 9, 38].
2.6. Problem Statement
Previous work has not yet explored using a saccade test in robot teleoperation with an HMD. Moreover, its relationship with similar dual-task-based SA techniques (e.g. SPAM) and its effectiveness are still unclear. We utilized an existing real-time SA measurement technique and a method based on saccadic eye movements in the experiment. The goal of the experiment was to explore the use and validity of the saccade test as an alternative SA technique within the context of robot teleoperation with an HMD.
3. System
3.1. Robotic Teleoperation System
The robotic teleoperation system is based on a project developed in our lab [73]. The system consists of three parts: (1) a controller for the operator, (2) a robotic platform for use in reality, and (3) a VR-based robot and environment simulator.
The system was built around a robot platform, which leverages the open-source Robot Operating System (ROS) and its ecosystem, and supports various client platforms and reuse. In our previous study and this study, we used a Padbot, which carries a 360-degree camera (Ricoh Theta) and a microphone. The control panel was built in Unity1, and featured a live video stream integrated into a virtual 3D-modeling panel. The robot controller module processes movement instructions based on the operator's gaze location, and sends gaze steering commands to the virtual or real robot. The telerobot carries a camera, transmitting a live video stream to the user from the remote reality. In the simulator, the live stream is generated from a pre-rendered 3D model of the same room as the one the real robot drives in. The telerobots were modeled with the same essential features (e.g. velocity, shape and size) as the real robot for the training session. An operator wearing an HMD can navigate a wheeled telerobot or a virtual robot by gazing in the direction in which the robot should drive [73].
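The gaze-steering principle described above can be sketched as a small mapping from gaze location to velocity commands. This is only an illustrative sketch; the function name, dead-zone size, and velocity limits are our own assumptions, not taken from the lab's actual Unity implementation.

```python
def gaze_to_velocity(gaze_x, gaze_y, dead_zone=0.15,
                     max_linear=0.5, max_angular=1.0):
    """Map a normalized gaze point (x, y in [-1, 1], origin at the
    centre of the control panel) to (linear, angular) velocity
    commands for a wheeled telerobot.

    Gazing above the centre drives the robot forward; horizontal
    gaze offset steers it. A central dead zone keeps the robot
    still while the operator merely inspects the video stream.
    """
    if abs(gaze_x) < dead_zone and abs(gaze_y) < dead_zone:
        return 0.0, 0.0  # resting gaze inside the dead zone: no motion
    linear = max(gaze_y, 0.0) * max_linear   # forward speed only
    angular = -gaze_x * max_angular          # turn toward the gazed side
    return linear, angular
```

A dead zone of this kind is a common design choice in gaze interaction, since it separates "looking at" the video from "steering toward" it.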
3.2. SPAM Pop-up
Figure 1: The preliminary pop-up.
SPAM was originally developed for assessing air traffic controllers' SA. Our implementation of this method was adapted for assessment of teleoperation with an HMD. In SPAM for air traffic control, the question was presented while all relevant information was available to be referenced by the user. The controller answered the question, and the response was recorded [20].
1https://unity.com [last accessed - August 2020]
[Figure 2 flowchart: the experimenter activates a preliminary pop-up ("Are you ready to answer a question?"); on "Yes" the timer stops and RS is recorded, on "No" the timer stops as pre-set; the SA query pop-up then appears, the test person answers orally, the experimenter closes the pop-up, and RS and answers are recorded.]
Figure 2: SA SPAM pop-up process.
The SPAM question sequence began by activating the controller's landline. In our version, a pop-up appeared showing "Are you ready to answer the question?" (see Fig. 1). In the original method, the time taken to answer the telephone was recorded, acting as an indicator of workload [62]. In our method, the latency between the preliminary pop-up (see Fig. 1) and the participant's answer was recorded. In the original method, after the participant answered the landline, the experimenter read the question from a computer screen and initiated the timer; when the participant responded, the timer was stopped and the experimenter recorded the response. In our method, after the participant answered the preliminary question, the SA query (see Fig. 3) was activated by the experimenter. When the participant responded, the pop-up sequence was terminated. The response time was recorded by the system, and the response was recorded by the experimenter.
Correctness of answers and query response time were taken as indicators of SA [57]. In the original SPAM, when the answer was correct, the response time was taken as an indicator of SA. The original study showed that most answers were correct [19]. In our study, the accuracy of answers was also calculated as an indicator, because participants could not always answer correctly.
During the trials, the experimenter observed the operation via an LCD display. When the telerobot passed certain areas in the maze or when a maneuver, e.g. a turn, had been completed, a query pop-up in the control display was prompted by the experimenter. When the participants had given a verbal response, the experimenter closed the query pop-up and recorded their answers.
Based on task analysis and SA theory [22], the queries included perception-related queries, comprehension-related queries, and projection-related queries.
Figure 3: A pop-up of perception-related query.
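The two latencies produced by the SPAM pop-up sequence (PRS, from the preliminary "Are you ready?" query to the reply, and RS, from the SA query to the oral answer) can be sketched as a small state machine. Class and method names here are illustrative assumptions, not the study's actual Unity code.

```python
import time

class SpamPopup:
    """Record the two SPAM latencies: PRS (time from the preliminary
    "Are you ready?" pop-up to the participant's reply) and RS (time
    from the SA query to the oral answer)."""

    def __init__(self, clock=time.monotonic):
        self.clock = clock   # injectable clock, eases testing
        self.prs = None      # preliminary response time (s)
        self.rs = None       # SA-query response time (s)

    def show_preliminary(self):   # experimenter activates the pop-up
        self._t0 = self.clock()

    def ready_answered(self):     # participant confirms readiness
        self.prs = self.clock() - self._t0

    def show_query(self):         # experimenter activates the SA query
        self._t1 = self.clock()

    def query_answered(self):     # participant answers orally
        self.rs = self.clock() - self._t1
```

Injecting the clock keeps the timing logic testable without real delays.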
3.3. Saccade Test
A pro-saccade test based on the SaccadeMachine [47] was implemented in our teleoperation system (cf. Fig. 4). The saccade test was implemented in Unity. We created a virtual screen with a transparent glass background and a solid red dot. During the experiment, the experiment conductor could press a hot key on the computer keyboard to initialize the test. While the test was running, all data were collected in the background and stored locally on the computer with a dynamic file-naming system. Fig. 5 shows an overview of the saccade test process.
Figure 4: Saccade test: red dot and the gaze cursor (pink).
A simple saccade-detection algorithm was used to determine whether the test subject was performing a saccade. For every frame, it examined the previous and current gaze coordinates and evaluated the absolute difference. If the difference was bigger than a predefined threshold, a saccade was detected, and the value of IN_SACCADE [47] would change from null to 1. Fig. 6 provides an overview of the algorithm. The SaccadeMachine [47] was used to analyze the data from the pro-saccade test in the experiment.
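The frame-by-frame detector described above can be sketched as follows. The threshold value and function name are illustrative; the study's implementation ran inside Unity.

```python
import math

def detect_saccades(gaze_points, threshold=2.0):
    """Per-frame saccade flags from a sequence of (x, y) gaze points.

    For each frame, the absolute displacement from the previous
    frame is compared against a threshold; if it is larger, the
    frame is flagged 1 (in saccade), otherwise 0 (fixating),
    mirroring the IN_SACCADE flag. The first frame has no
    predecessor and is flagged 0.
    """
    flags = [0]
    for (x0, y0), (x1, y1) in zip(gaze_points, gaze_points[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        flags.append(1 if dist > threshold else 0)
    return flags
```

For example, a jump of 9.5 units between two consecutive samples is flagged as a saccade, while small fixational drift stays below the threshold.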
4. Experiment
The data for our exploration were collected from an experiment on teleoperation of a telepresence robot with an
[Figure 5 flowchart: the experimenter activates the saccade test by pressing a hot key; the red dot appears and data are recorded; a correct pro-saccade made towards the dot marks the trial as correct, while no response or a wrong pro-saccade marks the trial as attempted but wrong; this repeats 5 times, after which the saccade test ends and the main task (navigating the telerobot with an HMD) resumes.]
Figure 5: Saccade test process.
Figure 6: Saccade algorithm: determines whether the user is performing a saccade by checking if the current point of gaze is within a threshold of the previous point of gaze.
HMD using gaze [73]. In the experiment, participants completed a first trial of teleoperation, a training session for learning teleoperation skills, and a final trial. This experiment allowed us to collect a wide distribution of data under different conditions; it ensured different complexity levels of the task, providing multi-dimensional data. Data from both trials were collected for our research exploring the saccade test as a new tool for estimating situation awareness. Data on performance in teleoperation, namely task completion time and number of collisions, were used to divide participants into groups.
4.1. Hypotheses
We hypothesized that, when wearing a VR HMD to navigate a robot:

1. There is no difference in response to the saccade test between the two groups of pilots (the best and the worst performers).

2. There is no difference in response to the SPAM pop-up between the two groups of pilots (the best and the worst performers).
4.2. Participants
A total of 32 able-bodied participants (15 females, 17 males) took part in the experiment. The mean age of the participants was 28.19 (SD = 7.31). Data for one participant proved invalid due to severe cybersickness during the test. Each participant was compensated with a gift card valued at 100 DKK. 22 participants had experience with VR glasses, mostly for entertainment; 6 participants had experience with mobile telepresence robots; and 6 participants experienced VR sickness to some degree. Fig. 8 shows a participant sitting in front of the computer. During the experiment, she could use eye gaze to navigate a telerobot in the remote environment (see Fig. 7).
Figure 7: Test setup in remote environment
Figure 8: Test setup in local environment
4.3. Experimental Design
A between-group design was used in this experiment. The independent variable was group, based on performance. We divided the participants into groups using two performance metrics, namely task completion time and number of collisions. The 8 participants (25%) with the best performance were assigned to the best group, while the 8 participants with the worst performance were assigned to the worst group.
Dependent variables included the participants' workload, response to the saccade test, and response to the SPAM pop-up.
4.4. Apparatus
The experiment was conducted in a lab. The test subjects sat outside the room where the telerobot was located. The FOVE headset was connected to a computer running Unity. The computer was connected to the telerobot via a wireless network. A laptop running LimeSurvey on a local server was connected to GamesOnTrack (GOT) sensors.
In the driving room (inside the lab), three GOT sensors were mounted on the wall. The telerobot carried a 360-degree camera (Ricoh Theta S), a microphone, and two indoor-positioning sensors connected to the GOT sensors. White plastic sticks on the floor served as maze separators.
4.5. Task
The entire test for each participant consisted of two trials with a training session in between. There were two types of tasks in each trial, namely a primary task and secondary tasks. The primary task in each trial was to navigate the telerobot through a maze and interact briefly with a person there. The secondary tasks in each trial included responding to the saccade test and the SPAM pop-up.
4.6. Procedure
Participants performed the experiment in a single session lasting around 60 minutes. Each participant signed a consent form at the beginning, and their demographics were collected with a questionnaire. First, the participants navigated the telerobot through a maze in a room. Afterwards, they navigated it five times for training purposes. Before each trial and the training session, the standard eye-calibration procedure for the HMD was conducted. In each trial, the participants were asked to reach a person sitting in the room (see Fig. 7). When the robot reached the person, the participants needed to switch the robot to parking mode. The person introduced himself to the participants and informed them of their next task: to continue navigating the telerobot.
Before starting the pre-trial and the final trial, participants were informed that they would be asked to provide information needed for answering situation awareness queries. During each trial, a saccade test and two SPAM pop-ups were activated manually by the experimenter.
In the saccade test, the participants needed to follow the red dot; control of the robot was suspended. The process of the saccade test can be seen in Fig. 5. When a SPAM pop-up was activated, it started with a preliminary query (see Fig. 1), after which a query related to a level of SA was activated. During both types of secondary tasks, the robot stopped moving until the participants had finished the secondary task.
4.7. Measures
Regarding the primary task, the following quantitative and qualitative measures were used:

1. Log data of the telerobot from the GamesOnTrack ultrasound sensors, including a timestamp and the telerobot's position (x, y).

2. Log data from the user interface in Unity.

3. A NASA Task Load Index (NASA-TLX) [31] questionnaire, used to collect workload ratings after each trial. Each rating scale had 21 gradations.
Qualitative measures included video recorded in the remote environment and the Unity UI environment, as well as feedback provided in post-trial interviews.
Data from the saccade test were collected for analysis with the SaccadeMachine [47]. The data included: gaze position, saccade status (whether the test subject was performing a saccade), index of the fixation, fixation status (whether the test subject was in a fixation), target on/offset and fixation on/offset, timestamp, number of trials, screen resolution, blinking, and position of the target.
Data from the SPAM pop-up were also collected, namely participants' response times and accuracy of answers to the pop-up queries. There were two types of response time: response time to the preliminary query "Are you ready?" (PRS), and response time to the SA query. Based on task analysis and SA theory [22], the queries included perception-related queries (e.g. "Can you tell me which direction you are facing now?"), comprehension-related queries (e.g. "What kind of information did the person tell you?"), and projection-related queries (e.g. "Can you estimate when you will be finished with the task?"). The accuracy of the answers to each SA query was also recorded.
5. Results
The analysis consisted of two parts. First, Pearson's correlation coefficients were calculated to check for correlations between scores of all the participants. Table 1 shows an overview of the correlations. Then, regarding the hypotheses, analyses of the difference between the best group and the worst group were conducted.
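The correlation step can be illustrated with a direct implementation of Pearson's r; this is only a sketch of the statistic itself (in practice a statistics package would also supply the p-values reported below).

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length
    samples, computed directly from its definition:
    covariance divided by the product of the standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

The reported degrees of freedom r(62) correspond to n − 2 with n = 64 paired observations.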
5.1. Correlation of Saccade Test and Pop-up
In the original SPAM instructions for air traffic control, PRS acted as an indicator of workload [62]. In our study, we further validated its correlation with performance and SA. We found that PRS significantly correlated with performance metrics, including task completion time (Pearson's r(62) = 0.47, p < 0.01) and number of collisions (r(62) = 0.39, p < 0.01). Moreover, PRS also correlated significantly with response time to projection-related queries (r(62) = -0.27, p < 0.05) and self-reported Effort (r(62) = 0.31, p < 0.05). These statistically significant correlations corroborate earlier findings [21] on the positive correlation between performance and SA, and the negative correlation between workload and SA.
Data collected from the saccade test showed significant correlations with PRS, including number of correct saccades (r(62) = -0.34, p < 0.05), number of successful trials (r(62) = -0.37, p < 0.01), and latency of the first saccade (r(62) = 0.28, p < 0.01). The most striking results to emerge were the strongly significant correlations of PRS with latency of any correct saccade (r(62) = 0.73, p < 0.01) and latency of the first correct saccade (r(62) = 0.75, p < 0.01).
Number of correct saccades significantly correlated only with number of collisions (r(62) = -0.36, p < 0.01). Number of successful trials had significant correlations with task
Figure 9: Correlation between PRS and latency of any correct saccade (scatter plot; R = 0.73, p = 4.2e−10).
Figure 10: Correlation between PRS and latency of the first correct saccade (scatter plot; R = 0.75, p = 4.9e−11).
completion time (r(62) = -0.26, p < 0.05), number of collisions (r(62) = -0.31, p < 0.05), response time to perception-related queries (r(62) = -0.27, p < 0.05), and response time to comprehension-related queries (r(62) = -0.28, p < 0.05).
5.2. Correlation of Saccade Test and Other Metrics
We also found that data collected from the saccade test had significant correlations with other important metrics of the primary task. Specifically, latency of the first correct saccade had significant correlations with task completion time (r(62) = 0.39, p < 0.01), number of collisions (r(62) = 0.33, p < 0.01), response time to perception-related queries (r(62) = 0.33, p < 0.05), and self-reported Physical demand (r(62) = 0.25, p < 0.05). Latency of the first saccade had correlations with task completion time (r(62) = 0.29, p < 0.05), self-reported Frustration (r(62) = 0.31, p < 0.05), and self-reported Performance (r(62) = 0.46, p < 0.01). Latency of any correct saccade had significant correlations with task completion time (r(62) = 0.31, p < 0.05), number of collisions (r(62) = 0.44, p < 0.01), response time to perception-related queries (r(62) = 0.33, p < 0.05), and self-reported Physical demand (r(62) = 0.28, p < 0.05).
Amplitude is an important metric in saccade tests [47].
Figure 11: Correlation between task completion time and latency of the first correct saccade (scatter plot; R = 0.39, p = 0.0015).
Figure 12: Correlation between collision times and latency of the first correct saccade (scatter plot; R = 0.33, p = 0.0085).
Figure 13: Correlation between response time to the pop-up query (perception) and latency of the first correct saccade (scatter plot; R = 0.33, p = 0.015).
Surprisingly, the results show that amplitude data (both amplitude of the first wrong saccade and amplitude of correct saccades) had no significant correlation with any other metric (p > 0.05). Moreover, we found no significant correlation between the first trial of the first saccade test and the first/second pop-up query ("Are you ready?"), and no significant correlation between the first trial of the second saccade test and the second/third pop-up query ("Are you ready?").
Overall, these results indicate that data from the saccade test correlated with data from the SPAM-based pop-up measures.
5.3. Eight Best and Eight Worst Performers
Besides the correlations, we aimed to see whether the results of the saccade test could be used to create this division in robot teleoperation, and how the results varied between good and bad performers. Thus, we grouped participants into two groups: the eight best and the eight worst performers. The grouping was based on each participant's performance, for which task completion time and the average number of collisions were used as metrics. Since the comparison involved two metrics with two types of data, the data of each metric were normalized using min-max feature scaling.
The following equation was used:

Value_normalized = (Value − MIN(Value)) / (MAX(Value) − MIN(Value))    (1)
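Equation (1) can be expressed directly in code; this is a generic sketch of min-max scaling, with the function name our own.

```python
def min_max_normalize(values):
    """Rescale a list of raw scores to [0, 1] via Eq. (1), so that
    metrics with different units (seconds vs. collision counts)
    can be combined into a single performance ranking."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]
```

After normalization, the two metrics can be summed per participant to rank performance on a common scale.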
Results regarding the saccade test are as follows.
Latency of Correct Saccade: A data transformation was attempted, since the residuals of the latency of any correct saccade in both the pretrial and the final trial were not normally distributed. The lambda values for the pretrial and the final trial were so different (-0.9 and -2.4) on the same measured parameter that a non-parametric Mann-Whitney U-test was used instead.
With Mann-Whitney U-tests, we found a significant main effect of Group (the best group or the worst group) on the latency of any correct saccade for both the pre- and final trials. In the pretrial, the mean ranks for the best and worst performers were 5 and 12, respectively (U = 4, Z = −2.7775, p < 0.01, r = 0.6943); the medians of the best and worst performers were 273.84 ms and 346 ms, respectively. In the final trial, the mean ranks for the best and worst performers were 6 and 11, respectively (U = 12, Z = −2.1004, p < 0.05, r = 0.5251); the medians for the best and worst performers were 266.85 ms and 323.83 ms, respectively. These findings indicate that the best performers had shorter reaction times on the saccade test throughout the experiment.
Lastly, we found a significant main effect of Group on the latency of the first correct saccade in the pretrial. The mean ranks for the best and worst performers were 4.125 and 12.18, respectively (U = 2.5, Z = −2.95, p < 0.01, r = 0.73). The medians for the best and worst performers were 273.8 ms and 346.8 ms, respectively. Therefore, our null hypothesis can be rejected.
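The statistics above can be sketched as follows. The U convention and function names are our own; the effect size r = |Z| / √N reproduces the reported values (e.g. Z = −2.7775 with N = 16 participants gives r ≈ 0.694).

```python
import math

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample `a` against sample `b`:
    the count of pairs (x from a, y from b) with x < y, with ties
    counting one half. (The complementary convention counts x > y.)
    """
    u = 0.0
    for x in a:
        for y in b:
            if x < y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

def effect_size_r(z, n_total):
    """Effect size r = |Z| / sqrt(N) for a Mann-Whitney comparison,
    as used for the group comparisons reported above."""
    return abs(z) / math.sqrt(n_total)
```

In practice, Z is obtained from the normal approximation of U (with tie correction), which a statistics package would supply.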
Table 1: Pearson correlation coefficients (r) between pop-up data and other metrics.

        S.All   S.Perc  S.Com   S.Proj  RS.Proj Time    Col     TlxMen  TlxEff  TlxFru  TlxPer
PRS     -0.18   0.14    -0.24   -0.20   -0.27*  0.47**  0.39**  0.23    0.31*   0.11    -0.07
S.All           0.56**  0.56**  0.69**  0.07    0.00    -0.30*  -0.18   -0.12   -0.28*  -0.09
S.Perc                  -0.07   0.08    0.05    0.20    0.08    0.03    0.14    -0.11   -0.03
S.Com                           0.08    0.02    -0.06   -0.19   -0.26*  0.00    -0.25   -0.01
S.Proj                                  0.03    -0.08   -0.35** -0.05   -0.25   -0.13   -0.05
RS.Proj                                         0.24    0.10    -0.08   -0.03   0.06    0.19
Time                                                    0.47**  0.28*   0.40**  0.23    0.31*
Col                                                             0.29*   0.31*   0.35**  0.19
TlxMen                                                                  0.51**  0.39**  0.09
TlxEff                                                                          0.52**  0.38**
TlxFru                                                                                  0.47**

*p < 0.05; **p < 0.01. Perc - Perception; Com - Comprehension; Proj - Projection; Time - task completion time; Col - number of collisions; Men - Mental Demand; Eff - Effort; Perf - Performance; Fru - Frustration.
Table 2: Pearson correlation coefficients between saccade test data and other metrics.

                            PRS     RS      S.All   S.Com   S.Pro   RS.Per  RS.Com  Time    Col     TlxMen  TlxPhy  TlxFru  TlxPer
NUM (correct saccades)      -0.34*  -0.24   0.07    -0.03   -0.10   -0.27*  -0.21   -0.23   -0.36** -0.12   -0.09   -0.09   0.04
NUM (successful trials)     -0.37** -0.23   -0.04   0.02    -0.29*  -0.27*  -0.28*  -0.26*  -0.31*  -0.08   -0.09   -0.02   0.02
LAT (first saccade)         0.29*   0.24    -0.19   -0.32*  0.04    0.22    0.08    0.29*   0.10    0.15    0.12    0.31*   0.46**
LAT (any correct saccade)   0.73**  0.25    -0.30*  -0.20   -0.25   0.34*   0.01    0.31*   0.45**  0.21    0.28*   0.14    0.04
LAT (first correct saccade) 0.75**  0.23    -0.16   -0.16   -0.08   0.33*   0.03    0.39**  0.33**  0.21    0.25*   0.14    0.16

*p < 0.05; **p < 0.01. Perc - Perception; Com - Comprehension; Pro - Projection; Time - task completion time; Col - number of collisions; Men - Mental Demand; Phy - Physical Demand; Fru - Frustration; Per - Performance; LAT - latency.
SPAM: With one-way ANOVAs followed up by pairwise comparisons with Bonferroni corrections, we found a significant effect of group on test subjects' reaction time to the second projection-related query, F(1, 9) = 6.589, p < 0.05, η² = 0.351 (means for the best and worst groups, respectively: 7.57 s (SD = 2.28 s) and 12.05 s (SD = 5.42 s)); and a significant effect of group on test subjects' reaction to the preliminary question for the first projection-related query, F(1, 9) = 6.163, p < 0.05, η² = 0.406 (means for the best and worst groups, respectively: 1.83 s (SD = 0.45 s) and 9.84 s (SD = 15.65 s)).
NASA-TLX: With one-way ANOVAs followed up by pairwise comparisons with Bonferroni corrections, we found a significant effect of group on perceived effort after the pretrial, F(1, 14) = 5.141, p < 0.05, η² = 0.2686; and a significant effect of group on perceived physical demand after the training session and after the final trial, respectively, F(1, 14) = 6.13, p < 0.05, η² = 0.304, and F(1, 14) = 11.66, p < 0.01, η² = 0.454.
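For a one-way ANOVA, eta squared can be recovered from the F value and its degrees of freedom via the standard identity below; a small helper (with our own naming) reproduces, for example, the NASA-TLX values reported above.

```python
def eta_squared(f_value, df_between, df_within):
    """Recover eta squared from a one-way ANOVA's F value and its
    degrees of freedom, using the identity
        eta^2 = (F * df_between) / (F * df_between + df_within),
    which follows from F = (SS_b / df_b) / (SS_w / df_w) and
    eta^2 = SS_b / (SS_b + SS_w)."""
    return (f_value * df_between) / (f_value * df_between + df_within)
```

For instance, F(1, 14) = 5.141 yields η² ≈ 0.269 and F(1, 14) = 11.66 yields η² ≈ 0.454, matching the values reported.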
6. Discussion
6.1. Saccade Test as a New Tool
The aim of implementing the test was to examine what valuable information the results of a pro-saccade test can provide when the test subject wears an HMD with built-in eye trackers to control a robot, and whether it could be an alternative to the SPAM-based pop-up method.
Using Pearson's r, significant correlations between saccade test data and SPAM-based pop-up data can be found in Table 2. The strongly significant correlations between latency of correct saccades (especially the first one) and PRS indicate that this latency can serve as an alternative metric to PRS. Participants who had longer latencies in the saccade test also had significantly longer response times to the preliminary pop-up query "Are you ready?" (PRS). The comparison of Pearson's r values shows that latency to correct saccades, especially to the first correct saccade, can be used like PRS as an indicator of workload, reflecting the operator's SA level. In addition, the correlation of the saccade test with other important metrics (e.g. performance and workload) further supports the argument that the saccade test could be an alternative to the SPAM pop-up.
In addition, the comparisons of the 8 best vs. the 8 worst performers also support the argument that the saccade test could be an alternative to the SPAM-based pop-up. The best performers had shorter latency on all of their correct saccades in both trials, indicating that they had higher alertness to a surprising stimulus such as a pro-saccade test. The first correct saccade was also significantly quicker for the best performers. This is another piece of evidence that the test could provide the same measure of workload as the SPAM preliminary question. The results again suggest that the best performers experienced a lower workload than the worst performers. The significant difference in PRS can be explained much like the latency of the saccades: the best performers had an excess of mental capacity to register an object appearing in front of them, and were therefore faster at responding. This is backed up by the SPAM method [62], where PRS acts as an indicator of workload. The test showed that the best performers had a faster reaction time on the projection-related pop-up query, indicating that they obtained a higher level of SA. Furthermore, the significantly higher perceived physical demand and effort for the worst group are consistent with the abovementioned and previous findings. In addition, through descriptive analysis based on the plots provided by the SaccadeMachine, we could see a clear tendency for the worst performers to have more flickering eye movements during the saccade test.
Besides the analysis results, the saccade test meets the requirement of self-pacing and provides two sets of data, one from response latency and one from errors [6]. The test has similarities with other assessments based on the dual-task paradigm (e.g., SPAM).
It is concluded that the saccade test can be used as an alternative measurement method to the pop-up, in this case while teleoperating a robot with an HMD, thereby reducing intrusiveness. Under certain circumstances, for instance when interruption of the main task is a key issue, the saccade test has unique advantages as a new tool. Regarding intrusiveness, SPAM appears to create significant interference with performance and increases operator workload [42]. The secondary task may interrupt the main task; this is a typical shortcoming of assessments based on the dual-task paradigm. The SPAM pop-up, as a secondary task, interfered with the primary task of navigation. The saccade test, however, causes less interruption: the operators simply need to move their eyes. People make eye movements almost continuously, rarely being conscious of devoting much mental activity to the goal of moving the eyes per se. Thus, a secondary task involving only eye movements is not perceived by most people as a dual-task situation [56]. Moreover, it still supports accurate assessment. In addition, compared to the SPAM pop-up, it is much easier to use. When using SPAM, the experimenter needs to do more work collecting different types of data, for instance pressing a hot key and writing down the answer when an operator responds orally to the pop-up. The saccade test allowed all the data to be recorded much more conveniently.
However, the saccade test as a new tool also has its disadvantages. SPAM utilizes the three hierarchical levels of SA described in SA theory [22], i.e., perception, comprehension, and projection. SPAM requires the development of domain-specific queries [24]. Though this makes the process more complex, the entire SPAM process measures all three levels by using queries for each of them. The saccade test, by contrast, cannot provide detailed information on each level. It relies on saccade latency as its key metric, which is similar to the PRS.
Overall, a trade-off between interruption and detailed information on the different levels of SA needs to be considered when choosing between the saccade test and the SPAM-based pop-up for teleoperation with an HMD. When interruption is a key issue and detailed information on the different levels of SA is not needed, the saccade test is a good candidate. Future research needs to explore whether saccadic eye movements can provide information on the three hierarchical levels of SA.

G Zhang et al.: Preprint Page 10 of 13

124 Gazecontrolled Telepresence: Accessibility, Training and Evaluation

Saccade Test as a New Tool for Estimating Operators' Situation Awareness in Teleoperation with an HMD
6.2. Validation of Previous Findings
Another contribution of this research is to further validate the relationship between SA, performance, and workload in the context of teleoperation with an HMD. Prior work in other domains has validated this relationship, as mentioned above. In this work, the relationship between performance metrics (time and collisions) and self-reported workload further validates the relationship stated in the literature. If we take the data of the saccade test as an indicator of SA, we can also find significant correlations with these metrics and thereby further verify these relationships. In addition, when we looked at the accuracy data, the projection query (the highest level of SA) had a significant negative correlation with the number of collisions. This also validates the correlation between SA and performance. Moreover, it is evidence of how important SA is for teleoperation and for safety in the remote environment, as it is correlated with the number of collisions.
6.3. Further Application
Safety is an important issue in teleoperation. Based on the findings, a further application of the saccade test could be to enhance the safety mechanism. As mentioned above, the dead man's switch has been widely used as a safety mechanism in related domains, including assistive devices. A typical dead man's switch is binary, activated or deactivated, based on two statuses of the operator, for instance loss of consciousness or being bodily away from the controls. Together with data from the saccade test, an enhanced version of the dead man's switch could be used in robot teleoperation with an HMD. Three statuses of the operator could be derived from the saccade test, namely unresponsive (no response to the saccade test), low awareness (poor performance in the test), or high awareness (good performance in the test). Three corresponding actions could be provided by the enhanced dead man's switch: 1) stop the telerobot immediately; 2) provide more assistance (e.g., reduce speed, keep a larger distance to obstacles); 3) no action needed. This enhanced version of the dead man's switch based on the saccade test could serve as an essential safety mechanism.
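The three-status mapping described above can be sketched as a small decision function. All names and thresholds below (the 400 ms latency cutoff, the 2000 ms no-response timeout) are hypothetical assumptions for illustration, not values from the study; real cutoffs would need calibration per operator and task.

```python
from enum import Enum

class OperatorStatus(Enum):
    UNRESPONSIVE = "no response to the saccade test"
    LOW_AWARENESS = "poor test performance"
    HIGH_AWARENESS = "good test performance"

# Hypothetical thresholds (ms); real values would need calibration.
NO_RESPONSE_TIMEOUT = 2000
LATENCY_THRESHOLD = 400

def classify_operator(first_correct_latency_ms):
    """Map saccade-test latency to one of three operator statuses."""
    if first_correct_latency_ms is None or first_correct_latency_ms > NO_RESPONSE_TIMEOUT:
        return OperatorStatus.UNRESPONSIVE
    if first_correct_latency_ms > LATENCY_THRESHOLD:
        return OperatorStatus.LOW_AWARENESS
    return OperatorStatus.HIGH_AWARENESS

def switch_action(status):
    """Enhanced dead man's switch: one action per status."""
    return {
        OperatorStatus.UNRESPONSIVE: "stop the telerobot immediately",
        OperatorStatus.LOW_AWARENESS: "assist: reduce speed, keep larger obstacle distance",
        OperatorStatus.HIGH_AWARENESS: "no action needed",
    }[status]

print(switch_action(classify_operator(None)))  # prints "stop the telerobot immediately"
print(switch_action(classify_operator(650)))   # slow saccade -> assistance
print(switch_action(classify_operator(220)))   # fast saccade -> no action needed
```

The point of the sketch is the graded response: unlike a binary dead man's switch, the middle status allows the system to assist rather than halt.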
7. Conclusion
This is the first time the saccade test has been used in robot teleoperation with HMDs. We observed that the saccade test could be an alternative to existing SA measures under certain circumstances, especially when interruption of the main task is a main issue. We conclude that latency to the first correct saccade could be a useful diagnostic index for assessing operators' mental workload in the context of teleoperation. We found more evidence for the correlation between SA, workload, and performance in a context of robot teleoperation.
CRediT authorship contribution statement
Guangtao Zhang: First author. Sebastian Hedegaard Hansen: Co-author. Oliver Repholtz Behrens: Co-author. John Paulin Hansen: Co-author.
References
[1] Ahsan, U., Fuzail, M., Raza, Q., Muhammad, A., 2012. Development of a virtual test bed for a robotic dead man's switch in high speed driving, in: 2012 15th International Multitopic Conference (INMIC), IEEE. pp. 97–104.
[2] Allspaw, J., Heinold, L., Yanco, H.A., 2019. Design of virtual reality for humanoid robots with inspiration from video games, in: International Conference on Human-Computer Interaction, Springer. pp. 3–18.
[3] Almeida, L., Menezes, P., Dias, J., 2017. Improving robot teleoperation experience via immersive interfaces, in: 2017 4th Experiment@International Conference (exp.at'17), IEEE. pp. 87–92.
[4] Baldisserri, L., Bonetti, R., Pon, F., Guidotti, L., Losi, M.G., Montanari, R., Tesauri, F., Collina, S., 2014. Motorsport driver workload estimation in dual task scenario, in: Sixth International Conference on Advanced Cognitive Technologies and Applications, Citeseer.
[5] Baulk, S.D., Reyner, L., Horne, J.A., 2001. Driver sleepiness—evaluation of reaction time measurement as a secondary task. Sleep 24, 695–698.
[6] Brown, I., 1978. Dual task methods of assessing work-load. Ergonomics 21, 221–224.
[7] Cak, S., Say, B., Misirlisoy, M., 2020. Effects of working memory, attention, and expertise on pilots' situation awareness. Cognition, Technology & Work 22, 85–94.
[8] Chang, E., Kim, H.T., Yoo, B., 2021. Predicting cybersickness based on user's gaze behaviors in hmd-based virtual reality. Journal of Computational Design and Engineering.
[9] Chen, C.f., Chang, Y.c., Luh, J.j., Chong, F.c., 2004. M3s system prototype: a comprehensive system with straightforward implementation, in: Proceedings of the 2004 IEEE International Conference on Control Applications, IEEE. pp. 1278–1283.
[10] Chen, J., Glover, M., Yang, C., Li, C., Li, Z., Cangelosi, A., 2017. Development of an immersive interface for robot teleoperation, in: Annual Conference Towards Autonomous Robotic Systems, Springer. pp. 1–15.
[11] Chen, W.L., Chen, S.C., Chen, Y.L., Chen, S.H., Hsieh, J.C., Lai, J.S., Kuo, T.S., 2005. The m3s-based electric wheelchair for the people with disabilities in Taiwan. Disability and Rehabilitation 27, 1471–1477.
[12] Chen, Y., Zhang, B., Zhou, J., Wang, K., 2020. Real-time 3d unstructured environment reconstruction utilizing vr and kinect-based immersive teleoperation for agricultural field robots. Computers and Electronics in Agriculture 175, 105579.
[13] Curtis, C.E., Calkins, M.E., Grove, W.M., Feil, K.J., Iacono, W.G., 2001. Saccadic disinhibition in patients with acute and remitted schizophrenia and their first-degree biological relatives. American Journal of Psychiatry 158, 100–106.
[14] Di Stasi, L.L., McCamy, M.B., Macknik, S.L., Mankin, J.A., Hooft, N., Catena, A., Martinez-Conde, S., 2014. Saccadic eye movement metrics reflect surgical residents' fatigue. Annals of Surgery 259, 824–829.
[15] Di Stasi, L.L., McCamy, M.B., Martinez-Conde, S., Gayles, E., Hoare, C., Foster, M., Catena, A., Macknik, S.L., 2016. Effects of long and short simulated flights on the saccadic eye movement velocity of aviators. Physiology & Behavior 153, 91–96.
[16] Di Stasi, L.L., Renner, R., Catena, A., Cañas, J.J., Velichkovsky, B.M., Pannasch, S., 2012. Towards a driver fatigue test based on the saccadic main sequence: A partial validation by subjective report data. Transportation Research Part C: Emerging Technologies 21, 122–133.
[17] Di Stasi, L.L., Renner, R., Staehr, P., Helmert, J.R., Velichkovsky, B.M., Cañas, J.J., Catena, A., Pannasch, S., 2010. Saccadic peak velocity sensitivity to variations in mental workload. Aviation, Space, and Environmental Medicine 81, 413–417.
[18] Diaz-Piedra, C., Rieiro, H., Suárez, J., Rios-Tejada, F., Catena, A., Di Stasi, L.L., 2016. Fatigue in the military: towards a fatigue detection test based on the saccadic velocity. Physiological Measurement 37, N62.
[19] Durso, F.T., Bleckley, M.K., Dattel, A.R., 2006. Does situation awareness add to the validity of cognitive tests? Human Factors 48, 721–733.
[20] Durso, F.T., Hackworth, C.A., Truitt, T.R., Crutchfield, J., Nikolic, D., Manning, C.A., 1998a. Situation awareness as a predictor of performance for en route air traffic controllers. Air Traffic Control Quarterly 6, 1–20.
[21] Durso, F.T., Truitt, T.R., Hackworth, C.A., Crutchfield, J.M., Manning, C.A., 1998b. En route operational errors and situational awareness. The International Journal of Aviation Psychology 8, 177–194.
[22] Endsley, M.R., 1995. Measurement of situation awareness in dynamic systems. Human Factors 37, 65–84.
[23] Endsley, M.R., 2000. Direct measurement of situation awareness: Validity and use of sagat, in: Endsley, M.R., Garland, D.J. (Eds.), Situation Awareness Analysis and Measurement. Lawrence Erlbaum Associates, Mahwah, NJ, pp. 147–173.
[24] Endsley, M.R., 2019. A systematic review and meta-analysis of direct objective measures of situation awareness: A comparison of sagat and spam. Human Factors, 0018720819875376.
[25] Ferland, F., Michaud, F., 2016. Selective attention by perceptual filtering in a robot control architecture. IEEE Transactions on Cognitive and Developmental Systems 8, 256–270.
[26] Flach, J.M., 1995. Situation awareness: Proceed with caution. Human Factors 37, 149–157.
[27] Fong, T., Kaber, D., Lewis, M., Scholtz, J., Schultz, A., Steinfeld, A., 2004. Common metrics for human-robot interaction, in: IEEE 2004 International Conference on Intelligent Robots and Systems, Sendai, Japan.
[28] Fukushima, J., Fukushima, K., Chiba, T., Tanaka, S., Yamashita, I., Kato, M., 1988. Disturbances of voluntary control of saccadic eye movements in schizophrenic patients. Biological Psychiatry 23, 670–677.
[29] Guidetti, G., Guidetti, R., Manfredi, M., Manfredi, M., Lucchetta, A., Livio, S., 2019. Saccades and driving. Acta Otorhinolaryngologica Italica 39, 186.
[30] Hansen, J.P., Alapetite, A., Thomsen, M., Wang, Z., Minakata, K., Zhang, G., 2018. Head and gaze control of a telepresence robot with an hmd, in: Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications, ACM.
[31] Hart, S.G., Staveland, L.E., 1988. Development of nasa-tlx (task load index): Results of empirical and theoretical research, in: Advances in Psychology. Elsevier. volume 52, pp. 139–183.
[32] Hauland, G., 2008. Measuring individual and team situation awareness during planning tasks in training of en route air traffic control. The International Journal of Aviation Psychology 18, 290–304.
[33] Hirvonen, K., Puttonen, S., Gould, K., Korpela, J., Koefoed, V.F., Müller, K., 2010. Improving the saccade peak velocity measurement for detecting fatigue. Journal of Neuroscience Methods 187, 199–206.
[34] Hoffman, J.E., Subramaniam, B., 1995. The role of visual attention in saccadic eye movements. Perception & Psychophysics 57, 787–795.
[35] Holland, C., Komogortsev, O.V., 2011. Biometric identification via eye movement scanpaths in reading, in: 2011 International Joint Conference on Biometrics (IJCB), IEEE. pp. 1–8.
[36] Ikuma, L.H., Harvey, C., Taylor, C.F., Handal, C., 2014. A guide for assessing control room operator performance using speed and accuracy, perceived workload, situation awareness, and eye tracking. Journal of Loss Prevention in the Process Industries 32, 454–465.
[37] Illing, B., Westhoven, M., Gaspers, B., Smets, N., Brüggemann, B., Mathew, T., 2020. Evaluation of immersive teleoperation systems using standardized tasks and measurements, in: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), IEEE. pp. 278–285.
[38] Jackson, R., 1993. Robotics and its role in helping disabled people. Engineering Science & Education Journal 2, 267–272.
[39] Jankowski, J., Grabowski, A., 2015. Usability evaluation of vr interface for mobile robot teleoperation. International Journal of Human-Computer Interaction 31, 882–889.
[40] Jerald, J., 2015. The VR Book: Human-centered Design for Virtual Reality. Morgan & Claypool.
[41] Kaber, D.B., Onal, E., Endsley, M.R., 2000. Design of automation for telerobots and the effect on performance, operator situation awareness, and subjective workload. Human Factors and Ergonomics in Manufacturing & Service Industries 10, 409–430.
[42] Keeler, J., Battiste, H., Hallett, E.C., Roberts, Z., Winter, A., Sanchez, K., Strybel, T.Z., Vu, K.P.L., 2015. May i interrupt? the effect of spam probe questions on air traffic controller performance. Procedia Manufacturing 3, 2998–3004.
[43] Kot, T., Novák, P., 2018. Application of virtual reality in teleoperation of the military mobile robotic system taros. International Journal of Advanced Robotic Systems 15, 1729881417751545.
[44] Kratz, S., Vaughan, J., Mizutani, R., Kimber, D., 2015. Evaluating stereoscopic video with head tracking for immersive teleoperation of mobile telepresence robots, in: Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts, pp. 43–44.
[45] Linnman, S., 1996. M3s: The local network for electric wheelchairs and rehabilitation equipment. IEEE Transactions on Rehabilitation Engineering 4, 188–192.
[46] Livatino, S., De Paolis, L.T., D'Agostino, M., Zocco, A., Agrimi, A., De Santis, A., Bruno, L.V., Lapresa, M., 2014. Stereoscopic visualization and 3-d technologies in medical endoscopic teleoperation. IEEE Transactions on Industrial Electronics 62, 525–535.
[47] Mardanbegi, D., Wilcockson, T., Sawyer, P., Gellersen, H., Crawford, T., 2019. Saccademachine: software for analyzing saccade tests (anti-saccade and pro-saccade), in: Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, pp. 1–8.
[48] Martins, H., Oakley, I., Ventura, R., 2015. Design and evaluation of a head-mounted display for immersive 3d teleoperation of field robots. Robotica 33, 2166–2185.
[49] Mast, M., Materna, Z., Španěl, M., Weisshardt, F., Arbeiter, G., Burmester, M., Smrž, P., Graf, B., 2015. Semi-autonomous domestic service robots: Evaluation of a user interface for remote manipulation and navigation with focus on effects of stereoscopic display. International Journal of Social Robotics 7, 183–202.
[50] Moore, K., Gugerty, L., 2010. Development of a novel measure of situation awareness: The case for eye movement analysis, in: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, SAGE Publications Sage CA: Los Angeles, CA. pp. 1650–1654.
[51] Nielsen, C.W., Goodrich, M.A., Ricks, R.W., 2007. Ecological interfaces for improving mobile robot teleoperation. IEEE Transactions on Robotics 23, 927–941.
[52] Nuzzi, C., Ghidini, S., Pagani, R., Pasinetti, S., Coffetti, G., Sansoni, G., 2020. Hands-free: a robot augmented reality teleoperation system, in: 2020 17th International Conference on Ubiquitous Robots (UR), IEEE. pp. 617–624.
[53] Ogden, G.D., Levine, J.M., Eisner, E.J., 1979. Measurement of workload by secondary tasks. Human Factors 21, 529–548.
[54] Ondas, S., Juhar, J., Pleva, M., Cizmar, A., Holcer, R., 2013. Service robot scorpio with robust speech interface. International Journal of Advanced Robotic Systems 10, 3.
[55] Oyama, E., Shiroma, N., Niwa, M., Watanabe, N., Shinoda, S., Omori, T., Suzuki, N., 2013. Hybrid head mounted/surround display for telexistence/telepresence and behavior navigation, in: 2013 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), IEEE. pp. 1–6.
[56] Pashler, H., Carrier, M., Hoffman, J., 1993. Saccadic eye movements and dual-task interference. The Quarterly Journal of Experimental Psychology 46, 51–82.
[57] Pierce, R.S., Vu, K.P.L., Nguyen, J., Strybel, T.Z., 2008. The relationship between spam, workload, and task performance on a simulated atc task, in: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Sage Publications Sage CA: Los Angeles, CA. pp. 34–38.
[58] Prexl, M., Struebig, K., Harder, J., Hoehn, A., 2017. User studies of a head-mounted display for search and rescue teleoperation of uavs via satellite link, in: 2017 IEEE Aerospace Conference, IEEE. pp. 1–8.
[59] Ragan, E.D., Kopper, R., Schuchardt, P., Bowman, D.A., 2012. Studying the effects of stereo, head tracking, and field of regard on a small-scale spatial judgment task. IEEE Transactions on Visualization and Computer Graphics 19, 886–896.
[60] Riley, J., Kaber, D., Draper, J., 2004. Situation awareness and attention allocation measures for quantifying telepresence experiences in teleoperation. Human Factors and Ergonomics in Manufacturing 14, 51–67. doi:10.1002/hfm.10050.
[61] Ruff, H., Narayanan, S., Draper, M., 2002. Human interaction with levels of automation and decision-aid fidelity in the supervisory control of multiple simulated unmanned air vehicles. Presence: Teleoperators and Virtual Environments 11, 335–351. doi:10.1162/105474602760204264.
[62] Salmon, P., Stanton, N., Walker, G., Green, D., 2006. Situation awareness measurement: A review of applicability for c4i environments. Applied Ergonomics 37, 225–238.
[63] Schmitt, K.U., Seeger, R., Fischer, H., Lanz, C., Muser, M., Walz, F., Schwarz, U., 2015. Saccadic eye movement performance as an indicator of driving ability in elderly drivers. Swiss Medical Weekly 145, w14098.
[64] Schwind, V., Knierim, P., Haas, N., Henze, N., 2019. Using presence questionnaires in virtual reality, in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–12.
[65] Selcon, S.J., Taylor, R., 1990. Evaluation of the situational awareness rating technique (sart) as a tool for aircrew systems design. AGARD, Situational Awareness in Aerospace Operations, 8 p. (SEE N 90-28972 23-53).
[66] Shafiq-Antonacci, R., Maruff, P., Masters, C., Currie, J., 2003. Spectrum of saccade system function in alzheimer disease. Archives of Neurology 60, 1272–1278.
[67] Smolensky, M., 1993. Toward the physiological measurement of situation awareness: The case for eye movement measurements, in: Proceedings of the Human Factors and Ergonomics Society 37th Annual Meeting, Human Factors and Ergonomics Society, Santa Monica.
[68] Terao, Y., Fukuda, H., Ugawa, Y., Hikosaka, O., 2013. New perspectives on the pathophysiology of parkinson's disease as assessed by saccade performance: a clinical review. Clinical Neurophysiology 124, 1491–1506.
[69] Tokuda, S., Obinata, G., Palmer, E., Chaparro, A., 2011. Estimation of mental workload using saccadic eye movements in a free-viewing task, in: 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE. pp. 4523–4529.
[70] Vidulich, M.A., 2000. The relationship between mental workload and situation awareness, in: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, SAGE Publications Sage CA: Los Angeles, CA. pp. 3–460.
[71] Wickens, C.D., 1991. Processing resources and attention. Multiple-task Performance 1991, 3–34.
[72] de Winter, J.C., Eisma, Y.B., Cabrall, C., Hancock, P.A., Stanton, N.A., 2019. Situation awareness based on eye movements in relation to the task environment. Cognition, Technology & Work 21, 99–111.
[73] Zhang, G., Hansen, J.P., 2019. A virtual reality simulator for training gaze control of wheeled tele-robots, in: 25th ACM Symposium on Virtual Reality Software and Technology, pp. 1–2.
[74] Zhang, G., Hansen, J.P., Minakata, K., 2019a. Hand- and gaze-control of telepresence robots, in: Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, pp. 1–8.
[75] Zhang, G., Minakata, K., Hansen, J.P., 2019b. Enabling real-time measurement of situation awareness in robot teleoperation with a head-mounted display, in: Nordic Human Factors Society Conference, p. 169.
B Appendix Article in Proceedings
B.1 Head and Gaze Control of a Telepresence Robot with an HMD
Authors: John Paulin Hansen, Alexandre Alapetite, Martin Thomsen, Zhongyu Wang, Katsumi Minakata, and Guangtao Zhang
In Proceedings of the ACM Symposium on Eye Tracking Research & Applications (ETRA 2018)
DOI: 10.1145/3204493.3208330
Head and Gaze Control of a Telepresence Robot With an HMD

John Paulin Hansen, Technical University of Denmark, Kgs. Lyngby, Denmark
Alexandre Alapetite, Technical University of Denmark, Kgs. Lyngby, Denmark, [email protected]
Martin Thomsen, Technical University of Denmark, Kgs. Lyngby, Denmark, [email protected]
Zhongyu Wang, Technical University of Denmark, Kgs. Lyngby, Denmark, [email protected]
Katsumi Minakata, Technical University of Denmark, Kgs. Lyngby, Denmark, [email protected]
Guangtao Zhang, Technical University of Denmark, Kgs. Lyngby, Denmark, [email protected]
ABSTRACT
Gaze interaction with telerobots is a new opportunity for wheelchair users with severe motor disabilities. We present a video showing how head-mounted displays (HMD) with gaze tracking can be used to monitor a robot that carries a 360° video camera and a microphone. Our interface supports autonomous driving via way-points on a map, along with gaze-controlled steering and gaze typing. It is implemented with Unity, which communicates with the Robot Operating System (ROS).
CCS CONCEPTS
• Human-centered computing → Accessibility technologies;
KEYWORDS
gaze interaction, head-mounted displays, telerobot, telepresence, human-robot interaction, accessibility, assistive technology, virtual reality, experience prototyping
ACM Reference format:
John Paulin Hansen, Alexandre Alapetite, Martin Thomsen, Zhongyu Wang, Katsumi Minakata, and Guangtao Zhang. 2018. Head and Gaze Control of a Telepresence Robot With an HMD. In Proceedings of 2018 Symposium on Eye Tracking Research and Applications, Warsaw, Poland, June 14–17, 2018 (ETRA '18), 3 pages.
https://doi.org/10.1145/3204493.3208330
1 INTRODUCTION
We present a system for controlling a telerobot that allows individuals with severe motor disabilities to be virtually present at a remote location. The system features field-of-view (FOV) navigation via gaze, steering of the robots, way-point selection on a map and typing – all of which may be done by gaze only, cf. Figure 1. Future HMDs are likely to have built-in head- and gaze-tracking. People with severe motor disabilities might use an HMD for interaction while lying in bed if remote gaze tracking is inconvenient. While an HMD provides a 360° view for those who can move their head, this is not an option for people who are paralysed.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
ETRA '18, June 14–17, 2018, Warsaw, Poland
© 2018 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-5706-7/18/06.
https://doi.org/10.1145/3204493.3208330
Telepresence is an option to participate in social gatherings or events when lack of mobility prevents a person from being at the locations where they take place. Some students, for instance, use telerobots to go to school when they cannot leave their home because of medical conditions [Newhart and Olson 2017].
While most of the existing telerobot systems offer a limited FOV, the use of 360° cameras allows for a complete transmission from the telerobot's camera location. When people remotely control the camera by moving a dolly, they are not dependent on assistants carrying the camera around [Misawa and Rekimoto 2015].
Prior work has shown that people can control a floor-driving robot by gaze interaction with the live video-stream from the telerobot [Tall et al. 2009]. Gaze typing in an HMD is a viable option even with a low-cost HMD [Rajanna and Hansen 2018]. We previously examined the precision of remote gaze tracking in a bed scenario [Hansen et al. 2011]. In the present project, we use an HMD because it provides an immersive experience and supports both head- and gaze-tracking. The drawback, when compared to a remote gaze tracking setup, is that the user might get socially isolated by wearing the HMD. It may also be uncomfortable to have an HMD strapped to the head for long periods of time. However, we expect HMDs to become lightweight and less obtrusive in the near future.
2 SYSTEM
We focus on affordable mobile robots that can navigate semi-autonomously indoors. Our platform¹ builds upon the open-source Robot Operating System (ROS) and its ecosystem, to maximise reusability. This includes modules for positioning, navigation, interaction, and scenarios. To widen the possible use-cases, we apply our software on several robot platforms, ranging from off-the-shelf toys to developer-oriented robotic platforms, and modified wheelchairs, cf. Figure 2.
In the present video, we feature a Parallax Arlo robot², with ultrasound proximity sensors and wheel encoders. The robot carries a Ricoh Theta S 360° camera at approx. 1.3 m above the floor.
We built a user interface in Unity³ that features two modes: parking and driving. Parking mode allows the operator to use a panning tool in order to get a full 360° view of the current location. Driving mode displays a front camera view in low resolution in order to minimise delay in the video transmission [Minakata et al. 2018]. The robot can be steered manually with gaze and head movements recorded by a FOVE HMD.⁴ The headset has a resolution of 2560 × 1440 px, renders at a maximum of 70 fps, and has an FOV of 100°. The manufacturer indicates that tracking accuracy is < 1° of visual angle. The headset weighs 520 grams and has IR-based position tracking plus IMU-based orientation tracking. By looking upwards (i.e. in the upper quarter of the live video image), the robot will go forward, cf. Figure 1. Looking down will reverse it. Turning is done by looking at the sides of the image.

¹ https://dtu-r3.github.io [last accessed - March 2018]
² https://www.parallax.com/product/arlo-robotic-platform-system [last accessed - March 2018]
³ https://unity3d.com [last accessed - March 2018]

Figure 1: Person in bed wearing a headset equipped with gaze tracking. The monitor behind shows how he is steering the telerobot forward by looking at the top of the live-stream image transmitted from the robot (the red circle shows his gaze point).
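The gaze-region steering scheme (look up to go forward, down to reverse, sides to turn) can be sketched as a mapping from a normalized gaze point to a velocity command. The function name, thresholds, and speed values below are illustrative assumptions, not the actual Unity/ROS implementation.

```python
def gaze_to_command(x, y, speed=0.3, turn_rate=0.5):
    """
    Map a normalized gaze point on the live video image (x, y in [0, 1],
    origin at the top-left) to a (linear, angular) velocity pair, following
    the region scheme described above: upper quarter drives forward, lower
    quarter reverses, left/right edges turn. Thresholds are illustrative.
    """
    linear = angular = 0.0
    if y < 0.25:            # upper quarter of the image -> forward
        linear = speed
    elif y > 0.75:          # lower quarter -> reverse
        linear = -speed
    if x < 0.2:             # left edge -> turn left (positive z, ROS style)
        angular = turn_rate
    elif x > 0.8:           # right edge -> turn right
        angular = -turn_rate
    return linear, angular

# Gazing at the top-centre of the image drives straight ahead:
print(gaze_to_command(0.5, 0.1))  # (0.3, 0.0)
```

In a ROS-based system such a pair would typically be published as a velocity message; the central region of the image acts as a dead zone so the operator can inspect the scene without moving the robot.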
When engaged, a semi-transparent virtual keyboard floats in front of the visual outlook. The keys may be activated by gaze selections or by head pointing. The final selection is made by dwelling at the key for 500 milliseconds, cf. Figure 3.
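The 500 ms dwell selection can be sketched as a small state machine that tracks how long the gaze has rested on the same key. This is an illustrative sketch, not the actual Unity implementation; the class and method names are assumptions.

```python
class DwellSelector:
    """Dwell-time key selection: a key is activated after the gaze has
    rested on it continuously for `dwell_ms` (500 ms in the interface)."""

    def __init__(self, dwell_ms=500):
        self.dwell_ms = dwell_ms
        self.current_key = None
        self.enter_time = None

    def update(self, key, t_ms):
        """Feed the key currently under the gaze at time t_ms; return the
        key once the dwell threshold is reached, else None."""
        if key != self.current_key:
            # Gaze moved to a new key (or off the keyboard): restart timer.
            self.current_key = key
            self.enter_time = t_ms
            return None
        if key is not None and t_ms - self.enter_time >= self.dwell_ms:
            self.enter_time = t_ms  # reset so a held gaze can repeat the key
            return key
        return None

sel = DwellSelector()
events = [("A", 0), ("A", 300), ("B", 400), ("B", 700), ("B", 950)]
for key, t in events:
    out = sel.update(key, t)
    if out:
        print(f"selected {out} at {t} ms")  # selected B at 950 ms
```

Resetting the timer after a selection is a design choice: holding the gaze on a key then repeats it once per dwell period, which is the usual behaviour for repeated characters in dwell typing.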
3 EXPERIENCES
We used experience prototyping [Buchenau and Suri 2000] to understand what it will be like to control a telerobot from an HMD. The present video opens with a person lying in bed who gets an HMD strapped to his face. A large monitor in the background shows what he sees in the HMD and a moving pink circle marks where he is looking (this pink circle is not seen by him, though). Next, we show how he can steer the robot by looking at the fringe of the video stream. He then types an instant message to a friend, via gaze typing, announcing that he will come to his room in 3 minutes. He now uses his gaze to mark a waypoint for that room on a digital map, which launches the telerobot to drive to this room autonomously. Once inside the room, the robot switches to manual mode and when parked, the user can explore the room in full 360°. If the user is not able to turn his head, he may turn the FOV by gazing at an arrowhead icon layered on top of the live images.
The video ends by showing how the telerobot may also be controlled by a wheelchair joystick. The wheelchair is mounted on rollers. An encoder (1024 P/R quadrature) along with a micro-controller (Teensy 3.2) is used to pick up the wheel movements from the rollers [Sørensen and Hansen 2017]. When using wheelchair motion as input, gaze is only used for shifting between driving view and parking view and for typing.

⁴ https://www.getfove.com [last accessed - March 2018]

Figure 2: Our gaze and head interactive interface can serve a range of telerobots, including a modified wheelchair, a build-yourself model (Parallax Arlo) and third-party models (Padbot).

Figure 3: Gaze typing on a transparent visual keyboard
4 RESULTS AND CONCLUSION
Our goal is to make telerobots accessible for everyone, even people who can only move their eyes. As shown in the video, this may be obtained by using an HMD with gaze and head tracking. People who have tried it are able to master controlling the robot and engage the various functions of the interface. The limited FOV in the HMD (i.e., 100°) can partly be compensated for by providing a pan option controlled by head or gaze movements. In driving mode, however, the FOV becomes even more restricted, which makes it difficult to navigate quickly through a narrow passage such as a doorway. A larger FOV in the HMD and higher bandwidth would, we hope, enhance the sense of space.
130 Gaze-controlled Telepresence: Accessibility, Training and Evaluation
Head and Gaze Control of a Telepresence Robot With an HMD ETRA ’18, June 14–17, 2018, Warsaw, Poland
In conclusion, we have demonstrated the feasibility of our telerobot platform that allows a person in bed to steer, type, and navigate with gaze only.
ACKNOWLEDGMENTS
This research has been supported by the Bevica Foundation.
REFERENCES
Marion Buchenau and Jane Fulton Suri. 2000. Experience Prototyping. In Proceedings of the 3rd Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques. ACM, 424–433.
John Paulin Hansen, Javier San Agustin, and Henrik Skovsgaard. 2011. Gaze Interaction from Bed. In Proceedings of the 1st Conference on Novel Gaze-Controlled Applications (NGCA '11). ACM, New York, NY, USA, Article 11, 4 pages. https://doi.org/10.1145/1983302.1983313
Katsumi Minakata, Martin Thomsen, and John Paulin Hansen. 2018. Bicycles and Wheelchairs for Locomotion Control of a Simulated Telerobot Supported by Gaze- and Head-Interaction. In Proceedings of the 11th International Conference on PErvasive Technologies Related to Assistive Environments (PETRA '18). ACM. https://doi.org/10.1145/3197768.3201573
Kana Misawa and Jun Rekimoto. 2015. ChameleonMask: Embodied Physical and Social Telepresence Using Human Surrogates. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '15). ACM, New York, NY, USA, 401–411. https://doi.org/10.1145/2702613.2732506
Veronica Ahumada Newhart and Judith S. Olson. 2017. My Student is a Robot: How Schools Manage Telepresence Experiences for Students. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, NY, USA, 342–347. https://doi.org/10.1145/3025453.3025809
Vijay Rajanna and John Paulin Hansen. 2018. Gaze Typing in Virtual Reality: Impact of Keyboard Design, Selection Method, and Motion. In Proceedings of the Tenth Biennial ACM Symposium on Eye Tracking Research and Applications (ETRA '18). ACM, New York, NY, USA. https://doi.org/10.1145/3204493.3204541
Lars Yndal Sørensen and John Paulin Hansen. 2017. A Low-cost Virtual Reality Wheelchair Simulator. In Proceedings of the 10th International Conference on PErvasive Technologies Related to Assistive Environments (PETRA '17). ACM, New York, NY, USA, 242–243. https://doi.org/10.1145/3056540.3064963
Martin Tall, Alexandre Alapetite, Javier San Agustin, Henrik H.T. Skovsgaard, John Paulin Hansen, Dan Witzner Hansen, and Emilie Møllenbach. 2009. Gaze-controlled Driving. In CHI '09 Extended Abstracts on Human Factors in Computing Systems (CHI EA '09). ACM, New York, NY, USA, 4387–4392. https://doi.org/10.1145/1520340.1520671
B.2 Eye-Gaze-Controlled Telepresence Robots for People with Motor Disabilities
Authors: Guangtao Zhang, John Paulin Hansen, Katsumi Minakata, Alexandre Alapetite, and Zhongyu Wang
In Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2019)
DOI: 10.1109/HRI.2019.8673093
Eye-Gaze-Controlled Telepresence Robots for People with Motor Disabilities
Guangtao Zhang, John Paulin Hansen, Katsumi Minakata, Alexandre Alapetite, Zhongyu Wang
Technical University of Denmark
Kgs. Lyngby, Denmark
guazha, jpha, katmin, alal, [email protected]
Abstract—Eye-gaze interaction is a common control mode for people with limited mobility of their hands. Mobile robotic telepresence systems are increasingly used to promote social interaction between geographically dispersed people. We are interested in how gaze interaction can be applied to such robotic systems, in order to provide new opportunities for people with physical challenges. However, few studies have implemented gaze interaction in a telepresence robot, and it is still unclear how gaze interaction within these robotic systems impacts users and how to improve the systems. This paper introduces our research project, which takes a two-phase approach towards investigating a novel interaction system we developed. Results of two studies are discussed and future plans are described.
Index Terms—Telepresence, human-robot interaction, robot-mediated communication, gaze interaction, accessibility.
I. INTRODUCTION
Over 190 million persons worldwide have severe disabilities [1], which limit their ability to seamlessly interact with daily-use devices and engage in social communication and activities. Eye-tracking technology is now well-developed and low-cost; gaze-tracking components can be built into computers and mobile devices.
Gaze as an input method can be implemented in a telerobot, in order to promote people's participation in social interactions and enhance their communication quality. The goal of our project is to design a usable and hands-free telepresence system. The system should be extremely simple, compact, non-obtrusive, and comfortable. We have developed a system for gaze-controlled telepresence robots [2]. Our robot platform (Fig. 1) is built on the open-source Robot Operating System (ROS) and its ecosystem, which supports various client platforms and several types of robots.
II. RELATED WORK
Besides eye typing and cursor control, gaze interaction has been applied in previous studies addressing gaze control of wheelchairs [3], driving [4], drone flying [5], and robot control [6].
Telepresence robots are gaining increased attention as a new option for remote communication. These systems are becoming increasingly popular within certain application domains, e.g., collaboration in geographically distributed teams [7], students at schools [8], and outdoor activities [9]. Recent studies have focused on healthcare, e.g., distant communication in healthcare environments for patients [10], distance learning for
Fig. 1. Eye-gaze-controlled telepresence system: a telepresence robot with a virtual reality head-mounted display (VR HMD). (Figure labels: VR HMD with eye trackers; gaze control; live video stream; telerobot that carries a 360° video camera.)
home-bound students [8], and independent living for older adults [10]. Speech-based interfaces for telepresence robots have been designed for people with disabilities [11].
Situation awareness (SA) is an important aspect of telepresence and a primary basis for performance [12]. One study showed that if people lacked a sense of presence, it resulted in motion sickness [13]. Hence, research is needed to explore how presence may be ensured when designing teleoperation systems.
III. RESEARCH OBJECTIVES AND METHODS
The overall objective is to investigate the potential impacts of gaze-controlled telepresence for people with motor disabilities. The final goal is to improve their social interaction and quality of communication. The current research plan (Fig. 2) has two phases and three research questions, introduced below:
• Phase A: impacts of gaze control.
In our first study [14], we hypothesized that task complexity impacted users' performance, SA, and subjective experience ratings when driving a gaze-controlled telerobot with a VR HMD. The main objective of this pilot study was to see whether task complexity needs to be taken into account in our next study, which compares gaze interaction with another control method. In our second study, we hypothesized that there were differences in users' SA, presence, performance, workload, and subjective experience between a control condition with gaze and a control condition with hands, when
978-1-5386-8555-6/19/$31.00 ©2019 IEEE 574
wearing a VR HMD connected with a telerobot. The main objective was to identify potential impacts of gaze control, compared to the commonly used method of hand control.
• Phase B: impacts of training of eye-gaze control.
Two types of studies are planned in this phase, where we hypothesize that training with gaze-controlled telepresence robots in simulation-based environments improves users' ability to operate the robots in real scenarios. We expect that, compared to users without training, trained users and their communication quality will be positively impacted. A laboratory study with able-bodied participants will be conducted. A field study with people with motor disabilities will also be made.
Fig. 2. Research plan of this project: an overview. (The diagram links three research questions: RQ1, what kinds of impacts does gaze control have; RQ2, how to improve the gaze-controlled user interface; RQ3, what kinds of impacts does training of gaze control have. It connects reviews, a laboratory study, and a field study to measures of situation awareness, presence, performance, workload, user experience, and communication quality, and covers why, what, and how to train, within a user-centered design process for improving gaze interaction.)
Study 1 and Study 2 for Phase A have been conducted using our systems with a VR HMD [2]. In Phase B, we plan to focus on the following sub-questions: (1) How can we find an affordable and suitable simulation approach for users to train gaze control? (2) What kinds of impacts does this training approach have? (3) How can it be used to improve their communication quality in a real scenario?
Throughout both phases, we have also been focusing on how to improve this kind of gaze-controlled user interface in human-robot interaction for the target users. User-centered design methods will be used for this improvement.
IV. RESULTS TO DATE
In Study 1, we found that participants' performance, SA, and experience differed between the two groups. Based on findings and observations in this pilot study, a new hypothesis was formulated for Study 2. Task complexity was set to a certain level, close to the field test. Moreover, the measures of SA were changed to overcome a limitation of the SA measure in Study 1. When comparing gaze control with hand control in Study 2, statistical analysis with two-way ANOVAs showed a significant increase in robot collisions, task completion time, and workload, and a decrease in an aspect of SA, feeling of dominance, and accuracy of post-trial reproduction of the maze layout and trial duration. Paths of test subjects using the same control method in the same maze have been compared, and differences between hand control and gaze control were found. Analysis of spontaneous comments from participants after each trial also indicated differences in the subjective experience of hand control and gaze control.
V. CURRENT STATUS AND EXPECTED CONTRIBUTIONS
In the first year of this PhD project, literature related to the topic has been reviewed. The above-mentioned Study 1 and Study 2 have been conducted. Based on the findings, we are planning the next step for Phase B, and a longitudinal study will be conducted.
This research is expected to contribute to the research field by providing: (1) insights about gaze interaction for robotic telepresence systems for people with motor disabilities; (2) empirical evidence of the potential impacts of gaze interaction and of training on these systems.
VI. ACKNOWLEDGMENT
We thank the China Scholarship Council and the Bevica Foundation in Denmark for financial support of this work.
REFERENCES
[1] World Health Organization, "World report on disability," Geneva, 2011.
[2] J. P. Hansen, A. Alapetite, M. Thomsen, Z. Wang, K. Minakata, and G. Zhang, "Head and gaze control of a telepresence robot with an HMD," in Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications. ACM, 2018, p. 82.
[3] M. A. Eid, N. Giakoumidis, and A. El-Saddik, "A novel eye-gaze-controlled wheelchair system for navigating unknown environments: Case study with a person with ALS," IEEE Access, vol. 4, pp. 558–573, 2016.
[4] M. Tall, A. Alapetite, J. San Agustin, H. H. Skovsgaard, J. P. Hansen, D. W. Hansen, and E. Møllenbach, "Gaze-controlled driving," in CHI '09 Extended Abstracts on Human Factors in Computing Systems. ACM, 2009, pp. 4387–4392.
[5] J. P. Hansen, A. Alapetite, I. S. MacKenzie, and E. Møllenbach, "The use of gaze to control drones," in Proceedings of the Symposium on Eye Tracking Research and Applications. ACM, 2014, pp. 27–34.
[6] G. Gras and G.-Z. Yang, "Intention recognition for gaze controlled robotic minimally invasive laser ablation," in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016, pp. 2431–2437.
[7] M. K. Lee and L. Takayama, "Now, I have a body: Uses and social norms for mobile remote presence in the workplace," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2011, pp. 33–42.
[8] V. A. Newhart and J. S. Olson, "My student is a robot: How schools manage telepresence experiences for students," in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 2017, pp. 342–347.
[9] Y. Heshmat, B. Jones, X. Xiong, C. Neustaedter, A. Tang, B. E. Riecke, and L. Yang, "Geocaching with a beam: Shared outdoor activities through a telepresence robot with 360 degree viewing," in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 2018, p. 359.
[10] L. D. Riek, "Healthcare robotics," Communications of the ACM, vol. 60, no. 11, pp. 68–78, 2017.
[11] K. M. Tsui, K. Flynn, A. McHugh, H. A. Yanco, and D. Kontak, "Designing speech-based interfaces for telepresence robots for people with disabilities," in Rehabilitation Robotics (ICORR), 2013 IEEE International Conference on. IEEE, 2013, pp. 1–8.
[12] M. R. Endsley, "Measurement of situation awareness in dynamic systems," Human Factors, vol. 37, no. 1, pp. 65–84, 1995.
[13] P. Howarth and M. Finch, "The nauseogenicity of two methods of navigating within a virtual environment," Applied Ergonomics, vol. 30, no. 1, pp. 39–45, 1999.
[14] G. Zhang, K. Minakata, A. Alapetite, Z. Wang, M. Thomsen, and J. P. Hansen, "Impact of task complexity on driving a gaze-controlled telerobot," in Abstracts of the Scandinavian Workshop on Applied Eye Tracking (SWAET 2018), D. Barratt, R. Bertram, and M. Nyström, Eds., vol. 11, no. 5. Journal of Eye Movement Research, 2018, p. 30.
B.3 Hand- and Gaze-Control of Telepresence Robots
Authors: Guangtao Zhang, Katsumi Minakata, and John Paulin Hansen
In Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications (ETRA 2019)
DOI: 10.1145/3317956.3318149
Hand- and Gaze-Control of Telepresence Robots

Guangtao Zhang
Technical University of Denmark
Kgs. Lyngby, Denmark

John Paulin Hansen
Technical University of Denmark
Kgs. Lyngby, Denmark
[email protected]

Katsumi Minakata
Technical University of Denmark
Kgs. Lyngby, Denmark
[email protected]
ABSTRACT
Mobile robotic telepresence systems are increasingly used to promote social interaction between geographically dispersed people. People with severe motor disabilities may use eye-gaze to control a telepresence robot. However, the use of gaze control for navigation of robots needs to be explored. This paper presents an experimental comparison between gaze-controlled and hand-controlled telepresence robots with a head-mounted display. Participants (n = 16) had similar experience of presence and self-assessment, but gaze control was 31% slower than hand control. Gaze-controlled robots had more collisions and higher deviations from optimal paths. Moreover, with gaze control, participants reported a higher workload and a reduced feeling of dominance, and their situation awareness was significantly degraded. The accuracy of their post-trial reproduction of the maze layout and their estimation of the trial duration were also significantly lower.
CCS CONCEPTS
• Human-centered computing → Accessibility technologies;
KEYWORDS
Gaze interaction, eye-tracking, telepresence robots, human-robot interaction, accessibility, assistive technology, head-mounted displays
ACM Reference format:
Guangtao Zhang, John Paulin Hansen, and Katsumi Minakata. 2019. Hand- and Gaze-Control of Telepresence Robots. In Proceedings of Communication by Gaze Interaction, Denver, CO, USA, June 25–28, 2019 (COGAIN @ ETRA '19), 8 pages. https://doi.org/10.1145/3317956.3318149
1 INTRODUCTION
Eye-tracking technology has matured and become inexpensive. Gaze-tracking sensors can be built into computers, head-mounted
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
COGAIN @ ETRA '19, June 25–28, 2019, Denver, CO, USA
© 2019 Copyright held by the owner/author(s). Publication rights licensed to Association for Computing Machinery.
ACM ISBN 978-1-4503-6728-8/19/06...$15.00
https://doi.org/10.1145/3317956.3318149
displays (HMD) and mobile devices. The main motivation for our research is to explore the use of gaze control when interacting with telerobots, since gaze has been shown to be an effective control method in gaze communication for people with profound motor deficits. By use of a telerobot, they may be able to participate in events at geographically distant locations, and they may be given the freedom to move around at their own will with gaze steering. In order to evaluate the effectiveness and the challenges of gaze-based human-robot interaction, we compare it to a well-known input method, namely hand-controlled joysticks. Joysticks would not be an option for our target group, but by comparing with them, we can identify metrics and test methods that are able to measure significant differences in the design of control principles, user interfaces, and telerobot form factors.
First we outline previous research in related areas and describe our prototype of a system for gaze-controlled telepresence. An experiment with 16 subjects comparing their performance using gaze and hand is presented. Finally, we discuss our findings and the potential challenges for the target users in utilizing gaze-controlled telepresence, and we consider possible improvements of our current system.
2 RELATED WORK
While telerobots are typically controlled with hand input through a joystick, mouse, or keyboard, the goal of our project is to develop a hands-free telepresence control method which is simple and easy to use for people who can only move their eyes [Zhang et al. 2019]. Hands-free human-(tele)robot interaction has been demonstrated with speech [Tsui et al. 2013], brain activity [Leeb et al. 2015], movements of the eyes [Tall et al. 2009], head gestures [Jackowski and Gebhard 2017], and physiological signals [Wang et al. 2018].
Previous studies have made empirical evaluations of eye-gaze-controlled typing [Majaranta et al. 2009], video games [Isokoski et al. 2009], driving [Tall et al. 2009], flying a drone [Hansen et al. 2014], wheelchair steering [Eid et al. 2016], and robot control [Gras and Yang 2016].
It has not yet been explored how situation awareness, presence, performance, workload, and subjective experience may be influenced by gaze control of telerobots. Early research [Ciger et al. 2004] argued that controlling aircraft cockpits exclusively by gaze is very unnatural and results in higher workload.
The concept of "telepresence" first appeared in the 1980s and created a vision of remote users experiencing "actually being there"
COGAIN @ ETRA’19, June 25–28, 2019, Denver, CO, USA G. Zhang et al.
without any noticeable difference between real and transmitted environments [Minsky 1980]. Telepresence robots are now gaining increased attention as a means for remote communication [Tanaka et al. 2014]. The systems combine video-conferencing capabilities with a robot vehicle that can maneuver in remote locations [Rae et al. 2013]. These systems are becoming increasingly popular within certain application domains, e.g., distributed collaboration for geographically distributed teams [Lee and Takayama 2011], shopping over distance [Yang et al. 2018], academic conferences [Rae and Neustaedter 2017], children-teacher communication [Tanaka et al. 2014], communication between long-distance couples [Yang and Neustaedter 2018], and outdoor activities [Heshmat et al. 2018].
The systems have been found to support a sense of presence at the remote location, mainly because of the remote user's ability to be mobile [Lee and Takayama 2011]. Previous studies of long-distance family relations supported by telepresence-robot-mediated communication revealed important aspects of telepresence robot communication, which include autonomy, unpredictability, movement as body language, and perspectives [Yang et al. 2017].
A typical 360° video camera is an omnidirectional camera which can capture a sphere around the camera as 360° video. 360° videos can be viewed with HMDs and tablets [Heshmat et al. 2018] and provide an omnidirectional view. Viewers can freely change their field of view by looking around and get an immersive experience [Tang and Fakourfar 2017]. It has been shown that using a 360° video view with telepresence robots in indoor settings increases task efficiency, but also increases the difficulty of use [Johnson et al. 2015]. However, in 360° view, users in both remote and local environments found it more difficult to understand directions and orientation [Tang et al. 2017], and low transmission rates may affect the quality of the videos.
Achieving a sense of "being there" is considered the biggest challenge for developing telepresence [Minsky 1980]. The sense can be described as the degree of immersion one experiences within a virtual or transmitted environment such that it elicits the feeling of existence [Sheridan 1992]. Early research [Howarth and Finch 1999] found that lack of presence resulted in motion sickness.
Situation awareness (SA) plays an important role in telepresence and in understanding the environment the telerobot is navigating through [Endsley 2000]. SA is also considered a primary basis for performance [Endsley 1995]. Current methods for measuring situation awareness in driving settings have been adapted from aviation. For instance, the Situation Present Awareness Method (SPAM) [Durso et al. 2006] uses real-time probes to measure response accuracy and response time.
3 A GAZE INTERACTIVE TELEROBOT SYSTEM
Our gaze-controlled telepresence system [Hansen et al. 2018] was built around a robot platform, with eye tracking in an HMD and a user interface developed in Unity (version 2018.1.6)1. The platform leverages the open-source Robot Operating System (ROS) and its ecosystem, which supports various client platforms and reuse. Several types of robots have been applied in our system, including an off-the-shelf robot (PadBot), developer-oriented robots
1 https://unity3d.com [last accessed: 24-01-2019]
(Parallax Arlo), and modified wheelchairs. In this study, a PadBot could be moved around with gaze control or a joystick. The robot carries a 360° video camera and a microphone. A FOVE2 HMD displays the live video stream from the 360° video camera.
Figure 1: Our gaze interactive interface can serve a range of telerobots, including a modified wheelchair, a build-yourself model (Parallax Arlo), and third-party models (PadBot). (Figure labels: VR HMD with eye trackers; telerobots; gaze control; live video stream; 360° video camera.)
Figure 2: Image of the gaze UI that overlays the live video stream. The pink circle (on the floor) shows the gaze cursor and is visible to the user. The red separator lines and direction arrows are not shown to the user. Mode shifts and overlay interfaces are done by gazing at icons at the bottom.
The user interface (UI) is connected with the ROS system. Two modes (parking and driving) can be selected by using the virtual control panel [Hansen et al. 2018]. Parking mode allows the pilot user to get a full 360° view of the local environment via a panning tool. Driving mode displays a fixed front camera view in low resolution
2 https://www.getfove.com [last accessed: 24-01-2019]
to minimize delay of the live video stream. In this study we only used the driving mode. The UI is an invisible control layout working on top of the live video stream. Gaze movements are mapped to the robot's movements via the UI. The robot turns in the direction the user is looking. When the driver closes his/her eyes or looks outside the area of the live video stream, the robot stops moving. Figure 2 shows a screenshot of the layout. The gaze point (pink cursor) is visible to the user inside the video stream, indicating the position from where continuous input is sent as commands to the telerobot. When looking in the left upper area, the robot turns left; in the middle area, it drives forward; in the right upper area, it turns right; in the left lower area, it turns left (spin turn); and in the right lower area, it turns right (spin turn). The velocity is controlled by the distance between the gaze position and the center position of the live video stream (maximum linear velocity: 1.2 m/s).
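The region-based mapping above can be sketched as follows. The five regions, the stop-on-look-away rule, and the 1.2 m/s cap are from the text; the exact region boundaries, the angular-velocity scale, and the sign convention are illustrative assumptions.

```python
import math

MAX_LINEAR = 1.2  # m/s cap, as stated in the text

def gaze_to_command(x: float, y: float):
    """Map a normalized gaze point (x, y in [-1, 1], origin at the video
    center) to a (linear m/s, angular rad/s) command. Positive angular
    values mean a left turn here (an assumed sign convention)."""
    if abs(x) > 1 or abs(y) > 1:      # gaze outside the live video stream
        return 0.0, 0.0               # -> the robot stops
    speed = MAX_LINEAR * min(1.0, math.hypot(x, y))  # distance from center
    if y < 0:                         # lower half: spin turn in place
        return 0.0, -math.copysign(1.0, x)
    if abs(x) < 0.33:                 # upper middle: drive forward
        return speed, 0.0
    return speed, -math.copysign(speed, x)  # upper left/right: arc turn

print(gaze_to_command(0.0, 0.5))  # drive forward
print(gaze_to_command(2.0, 0.0))  # looking away -> stop
```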
4 EXPERIMENTS
4.1 Participants
A total of 16 able-bodied participants (6 females, 10 males, mean age = 29 years) took part in this study. 11 participants had experience with VR glasses, 8 had experience with gaze interaction, and 2 had experience with telepresence robots.
Figure 3: Two conditions: gaze control (top image) or hand control (bottom image).
4.2 Apparatus
The test subjects were sitting in a remote control room (cf. Figure 3). The HMD (FOVE) and a joystick (Microsoft Xbox 360 Controller) were connected to a computer running Unity. The computer was connected with the telerobot via a wireless network. The FOVE HMD weighs 520 grams, has a resolution of 2560 x 1440 px, renders at a maximum of 70 fps, and has an FOV of 100°. During driving, the
live video stream is displayed in the HMD via the Unity software. Gaze tracking is handled by the two built-in infrared eye trackers of the FOVE. Tracking precision is reported to be less than 1° at a 120 Hz sampling rate. The headset also offers IMU-based sensing of head orientation and optical tracking of head position.
In the driving room (cf. Figure 4), a telerobot carried a 360° camera (Ricoh Theta S)3, a microphone, and two sensors for indoor positioning. The camera was 1.3 m above the floor. Five ultrasound sensors (GamesOnTrack)4 were mounted on the wall, and with a transmitter placed on top of the telerobot, this tracking system allowed positioning in 3D with a precision of 1-2 centimeters. Plastic sticks on the floor were used to mark up the maze tracks covering an area of 20 square meters (length: 5 m, width: 4 m, cf. Figure 5). Three sheets of A4 paper with a pie chart were hanging on the wall at different locations to show how far the robot had driven at that position.
Figure 4: The telerobot driving through the maze.
4.3 Measures
Quantitative and qualitative measures were used to cover performance metrics [Steinfeld et al. 2006] in our study. They included:
(1) Log data of the telerobot from both the GamesOnTrack ultrasound sensors and the telerobot's encoders, including a timestamp, the telerobot's position (x, y), and velocity.
(2) Log data from the UI, including response times to two types of on-screen pop-up queries (see 3 below). Each question would appear only once, in total 4 times for each trial (except an orientation-related query, "Can you tell me which direction you are facing now?"), in order to reduce learning effects.
(3) Responses to the pop-up queries. Based on task analysis and SA theory [Endsley 1995], the queries included perception-related questions (i.e., "What gender was the voice you just heard?" and "Can you tell me which direction you are facing now?"), comprehension-related queries (i.e., "What kind of information did the person tell you?"), and projection-related queries (i.e., "Can you estimate when you will be finished with the task?").
(4) A Task Load Index (NASA-TLX) [Hart and Staveland 1988] questionnaire was used to collect workload ratings after
3 https://theta360.com [last accessed: 24-01-2019]
4 http://www.gamesontrack.dk/ [last accessed: 24-01-2019]
each trial (with 6 rating scales, including mental demand, physical demand, temporal demand, performance, effort, and frustration). Each rating scale had 21 gradations.
(5) A Presence Questionnaire [Witmer and Singer 1998], revised by the UQO Cyberpsychology Lab, was also used to rate the feeling of presence on a 7-point scale after each trial.
(6) The Self-Assessment Manikin (SAM) [Bradley and Lang 1994] was used for the participants to report their feelings of pleasure, arousal, and dominance on a 5-graded facial pictorial form.
(7) The participants' responses to post-trial questions about estimated task duration and recollection of the maze layout, the positions of the person who was talking with them, and the number of times they communicated with that person. All responses were quantified as a percentage of accuracy.
Qualitative measures included: (1) video recorded in the remote environment for post-trial analysis; (2) video recorded of the Unity UI environment (including the participants' gaze point) for post-trial analysis; (3) feedback provided in post-trial interviews.
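A minimal sketch of the per-sample robot log implied by quantitative measure (1) above: timestamp, position, and velocity. The field names and the path-length helper are illustrative assumptions, not the study's actual logging code.

```python
from dataclasses import dataclass

@dataclass
class RobotSample:
    t: float  # timestamp (s)
    x: float  # position x (m), from ultrasound positioning / encoders
    y: float  # position y (m)
    v: float  # linear velocity (m/s)

def distance_travelled(samples):
    """Total path length from consecutive logged positions."""
    return sum(((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5
               for a, b in zip(samples, samples[1:]))

log = [RobotSample(0.0, 0.0, 0.0, 0.0),
       RobotSample(1.0, 1.0, 0.0, 1.0),
       RobotSample(2.0, 1.0, 2.0, 1.0)]
print(distance_travelled(log))  # 3.0
```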
4.4 Experimental Design
A within-subjects design was used in this experiment.
There were two groups of independent variables: input method (gaze control & hand control) and order of trials with the same method (Trial 1 & Trial 2). Dependent variables included the participants' SA, presence, workload, performance, post-trial estimation and recollection, self-assessment, and experience of navigating the telerobot with the control methods.
Each participant used each method to navigate through two mazes, i.e., each participant navigated through four different mazes with layouts of similar length and complexity (cf. Figure 5). The order in which participants were exposed to the mazes was counterbalanced by a Latin square. Half of the participants started with gaze control, and half started with hand control. In total, 64 trials with data collection and observation were made (i.e., 16 participants x 4 trials: two trials with joystick and two trials with gaze control).
Figure 5: Maze layouts (green: starting point, red: end point).
4.5 Procedure
The main task was to navigate the telerobot through a maze. Half of the participants did this first by driving two trials with gaze control, and half of them started with two trials using joystick control. Each participant had four trials in total, two for each of the control conditions. The maze layouts were similar in terms of
length and difficulty but different in path structure, cf. Figure 5. The order of the maze layouts was balanced in a Latin square design across participants and control conditions. Before each gaze trial, the standard gaze calibration procedure for the FOVE headset was conducted.
During the trials, the experimenter observed the participants' operation via an LCD display. When the telerobot passed certain areas in the maze, or when a maneuver, e.g., a turn, had been completed, a query pop-up in the control display was prompted by the experimenter. When the participants had given a verbal response, the experimenter closed the query pop-up, and the response times were logged in the system.
In the remote driving room, a person was standing at three different positions during the trials. When the telerobot passed by, this person faced the camera and talked to the participants via the telerobot, providing information related to the remote environment. For instance, when the telerobot reached the middle of the maze, the person would say, "Hello, I would like to inform you that the status of your progress is 50 percent."
After each trial, the participant took off the HMD and answered the set of questionnaires described above (SAM, TLX, Presence, estimation, and recollection). At the very end of the test, the participant was briefly interviewed with the following questions: "How did you feel about wearing the HMD? Did you experience any sickness, headache, or gaze control problems?" Each session lasted approximately 65 minutes in total.
5 RESULTS

5.1 Performance
(1) Task completion time:
With a two-way ANOVA and follow-up pairwise comparisons with a Bonferroni correction, we found a significant main effect of input method, F(1,64) = 5.40, p = .023, η2 = 0.083, on the task completion time (driving task only). Neither the main effect of trial order nor the interaction between input method and trial order was statistically significant.
The mean task completion time for gaze control was 93.86 seconds (SD = 61.51). Hand control was 30.72% faster than gaze control, with a mean task completion time of 65.03 seconds (SD = 31.43).
In addition, we analysed the time between the moment the participant had finished answering a pop-up query and the moment driving resumed. With a two-way ANOVA, we found no significant main effect of input method or trial order, and no interaction between input method and trial order, on the length of this time slot.
(2) Deviation from optimal path:
By comparing each driven path with the corresponding optimal path for the same maze layout, we calculated each path's root-mean-square deviation (RMSD) using the following equation, where $P_t$ is a position on the driven path and $\hat{P}_t$ is the position on the optimal path closest to $P_t$:

\[ \mathrm{RMSD} = \sqrt{\frac{1}{n}\sum_{t=1}^{n}\left(P_t - \hat{P}_t\right)^2} \tag{1} \]
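Equation (1) can be evaluated directly on logged positions. The sketch below assumes 2-D positions in metres sampled along both the driven and the optimal path, and approximates $\hat{P}_t$ by the nearest sampled point of the optimal path (an assumption of this sketch, not a detail stated in the paper):

```python
import numpy as np

def path_rmsd(path, optimal):
    """RMSD of a driven path from the optimal path, cf. Eq. (1).
    For each logged position P_t, the reference point is the nearest
    sampled point of the optimal path; positions are (x, y) pairs."""
    path = np.asarray(path, dtype=float)
    optimal = np.asarray(optimal, dtype=float)
    # Pairwise distances: one row per path sample, one column per optimal sample.
    d = np.linalg.norm(path[:, None, :] - optimal[None, :, :], axis=2)
    nearest = d.min(axis=1)            # ||P_t - P_hat_t|| for each t
    return float(np.sqrt(np.mean(nearest ** 2)))

# A driven path hugging the optimal line y = 0 except for one 0.3 m deviation.
optimal = [(x / 10, 0.0) for x in range(51)]
driven = [(1.0, 0.0), (2.0, 0.3), (3.0, 0.0)]
print(path_rmsd(driven, optimal))  # sqrt(0.09 / 3) ≈ 0.1732
```

The accuracy of the nearest-sample approximation depends on how densely the optimal path is sampled relative to the positioning accuracy (about 1 cm in this setup).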
Gazecontrolled Telepresence: Accessibility, Training and Evaluation 139
Hand- and Gaze-Control of Telepresence Robots COGAIN @ ETRA’19, June 25–28, 2019, Denver, CO, USA
With a two-way ANOVA and follow-up pairwise comparisons with a Bonferroni correction, we found a significant main effect of input method, F(1,64) = 20.05, p = .000073, η2 = 0.35, on the deviation from the optimal path. Neither the main effect of trial order nor the interaction between input method and trial order was statistically significant. The mean RMSD for gaze control was 0.40 (SD = 0.90); hand control had a 58.75% smaller deviation than gaze control, with a mean RMSD of 0.25 (SD = 0.11).
Figure 6: Mean and standard deviation (SD) of task completion time and RMSD with eye control and hand control.
Figure 7: A path plot from a participant using gaze control. One collision (a maze divider hit) is seen in the top right corner.
(3) Number of collisions:
There were two types of collisions: room wall hits and maze divider hits. With a two-way ANOVA and follow-up pairwise comparisons with a Bonferroni correction, we found
Table 1: Mean accuracy (proportion of correct responses) [Mean (SD)].

        Perception    Comprehension   Projection
Hand    0.86 (0.20)   0.47 (0.51)     0.59 (0.23)
Gaze    0.85 (0.25)   0.47 (0.51)     0.36 (0.29)

Table 2: Mean response time (s) to the pop-up queries [Mean (SD)].

        Perception    Comprehension   Projection
Hand    6.00 (2.96)   13.23 (7.94)    22.41 (10.22)
Gaze    7.43 (2.67)   13.41 (5.28)    21.89 (8.26)
a significant main effect of input method, F(1,64) = 4.75, p = .033, η2 = 0.073, on the number of collisions. Neither the main effect of trial order nor the interaction between input method and trial order was statistically significant. The mean number of collisions for gaze control was 1.68 (SD = 1.73), while hand control had a mean number of collisions of 0.75 (SD = 1.67).
(4) Task completion:
Analysis of the path plots of all 64 trials (cf. Fig. 7) and the video recordings showed that only 1 trial was not completed (i.e. gaze control in the first trial).
5.2 Situation Awareness
The results from the SPAM measures included the participants' percentage of accuracy on the SA-related queries and two types of response time.
A two-way ANOVA was conducted on the percentage of accuracy on the projection-related queries, followed up with pairwise comparisons with a Bonferroni correction. We found that input method had a significant main effect, F(1,64) = 12.36, p = .00084, η2 = 0.17, on the percentage of accuracy on the projection-related queries.
With two-way ANOVAs, we found no significant effect of input method on the percentage of accuracy on perception- or comprehension-related queries. Moreover, we found no significant effect of trial order on any of the above-mentioned measures, and no interaction of input method and trial order.
Gaze control (M = 0.36, SD = 0.29) yielded a lower percentage of accuracy in response to the projection-related SA queries than hand control (M = 0.59, SD = 0.23).
A two-way ANOVA was conducted on the response time (s) to the perception-related queries, followed up with pairwise comparisons with a Bonferroni correction. We found a significant main effect of input method, F(1,64) = 4.01, p = .049, η2 = 0.062, on the response time to the perception-related queries.
Specifically, gaze control (M = 7.43, SD = 2.67) yielded a longer response time to the perception-related queries than hand control (M = 6.00, SD = 2.96).
However, with two-way ANOVAs, we found no significant effects of input method on response time to the comprehension- or
projection-related queries. We found no significant effect of trial order on response time to all three types of queries, and no interaction of input method and trial order.
Moreover, response time to the pre-query pop-up was analyzed. Specifically, we focused on the participants' mean response time in the first trial of each method. In the first trial, gaze control (M = 3.53, SD = 2.02) yielded a longer response time to the pre-query than hand control (M = 2.85, SD = 0.76).
5.3 Workload
Based on the participants' responses to NASA-TLX, six two-way ANOVAs were conducted on the six measures, followed up with pairwise comparisons with a Bonferroni correction. We found that input method had significant main effects on mental demand, F(1,64) = 5.04, p = .028, η2 = 0.076; on physical demand, F(1,64) = 2.67, p = .030, η2 = 0.042; on effort, F(1,64) = 8.14, p = .00059, η2 = 0.12; on frustration, F(1,64) = 6.60, p = .016, η2 = 0.098; and on performance, F(1,64) = 2.25, p = .019, η2 = 0.035.
Figure 8: Mean and SD of ratings of workload for eye control and hand control.
However, we found no significant effect of input method on temporal demand. Gaze control (M = 10.75, SD = 5.14) yielded a higher mental demand than hand control (M = 7.97, SD = 4.65). Gaze control (M = 6.88, SD = 4.46) yielded a higher physical demand than hand control (M = 5.22, SD = 3.52). Gaze control (M = 9.69, SD = 4.73) required more effort than hand control (M = 6.47, SD = 4.33). Gaze control (M = 8.41, SD = 3.99) yielded more frustration than hand control (M = 5.91, SD = 3.75). Gaze control (M = 9.41, SD = 4.83) yielded a higher performance rating than hand control (M = 7.19, SD = 4.88).
Neither the main effect of trial order nor the interaction between input method and trial order was statistically significant.
5.4 Post-trial Estimation and Recollection
With a two-way ANOVA followed up by pairwise comparisons with a Bonferroni correction, we found that input method had a significant main effect, F(1,64) = 5.93, p = .018, η2 = .090, on the participants' ability to draw a correct sketch of the maze they had just driven through.
Gaze control yielded a lower percentage of accuracy in the maze sketch (M = 58.43, SD = 37.60) than hand control (M = 79.68, SD = 30.85).
With two-way ANOVAs, we found no significant effect of input method on correct memory of the positions at which a person had shown up in the maze.
The participants' duration estimation was quantified as an error score (score = |participant's duration estimate − actual duration|). With a two-way ANOVA followed up by pairwise comparisons with a Bonferroni correction, we found that input method had a significant main effect, F(1,64) = 5.80, p = .019, η2 = 0.09, on their estimation of the duration of driving. Moreover, input method had a significant main effect, F(1,64) = 7.55, p = .0079, η2 = 0.11, on their estimation of the number of collisions.
Duration-estimation scores differed between gaze control (M = 1.21, SD = 0.99) and hand control (M = 2.13, SD = 2.03).
5.5 Presence
Six two-way mixed ANOVAs were conducted on the six measures included in the Presence Questionnaire. We found no significant effect of trial order, no significant effect of input method, and no interaction of input method and trial order for scores on the following aspects: realism, quality of interface, possibility to act, self-evaluation of performance, and sounds.
With one of the ANOVAs, followed up by pairwise comparisons with a Bonferroni correction, we found that input method had a significant main effect, F(1,64) = 6.00, p = .017, η2 = 0.091, on scores for one aspect, namely possibility to examine. However, we found no significant effect of trial order on scores for this aspect, and no interaction of input method and trial order.
Specifically, gaze control (M = 4.35, SD = 0.77) yielded a lower mean score for possibility to examine than hand control (M = 4.91, SD = 2.21).
5.6 Self-Assessment
The participants' pre- and post-trial self-assessments of pleasure, arousal, and dominance were analyzed. The post-trial value minus the pre-trial value indicated the change during a trial.
With a two-way ANOVA, followed up with pairwise comparisons with a Bonferroni correction, we found a significant main effect of input method, F(1,64) = 13.10, p = .00067, η2 = 0.182, on the change in the dominance aspect. However, we found no significant effect of trial order, and no interaction of input method
and trial order. Specifically, gaze control (M = -0.43, SD = 1.11) yielded a feeling of reduced dominance, while hand control yielded a feeling of increased dominance (M = 0.43, SD = 0.80). With two-way ANOVAs, we found no significant effects of input method on the changes in the pleasure or arousal aspects, no significant effect of trial order, and no interaction between input method and trial order.
5.7 Path Analysis
The paths of the gaze-controlled and the hand-controlled telerobot were compared (cf. Figure 9). When turning the gaze-controlled telerobot at a corner, users usually kept a larger distance to the maze markers than when using the hand-controlled telerobot. This phenomenon was most obvious in 180-degree turns.
Figure 9: Visualisation of gaze-controlled telerobots' paths (left) and hand-controlled telerobots' paths (right).
5.8 User comments
At the end of the experiment, each participant was asked to share their observations. A few common themes appeared in the comments.
Four participants specifically mentioned that they preferred gaze control. One typical piece of feedback was "fun but hard to control". Eight participants mentioned that they preferred hand control. Here are some typical examples of what they said about the control methods:
"I .... prefer the gaze but it is harder to control when it moves forward. The controller is more natural ...."

"Joystick is much easier, very responsive. I had never tried VR before, so ... first time sickness was a distracting factor for me."

"With the joystick there is the possibility of stopping the movement with a button, for example to look around and making better choices for the next movements. In gazing mode, I couldn't do that and every time I was looking somewhere to gain information about the environment, the robot was moving and I was losing control. I feel the necessity, also in gazing mode, for a move and stop switch."

"... the VR-glasses remain heavy and they have to sit low on my head to be able to calibrate. A bit too low in terms of comfort."

"The VR headset is heavy and uncomfortable to wear for more than 5 minutes ..."
6 DISCUSSION
A main observation was that telepresence robots could actually be controlled by gaze. All the participants were able to use gaze control to finish the trials, except for one. Achieving the feeling of presence is a key issue and the biggest challenge for telepresence [Minsky 1980]. Adding gaze control to a telepresence system had a significant impact on one aspect of presence, namely the feeling of "possibility to examine".
There are still serious challenges for the use of gaze driving in teleoperation when it comes to performance and workload, which are both of great importance for human-robot interaction [Steinfeld et al. 2006]. Gaze control of robots was clearly more difficult than hand control in all the aspects we measured. Novice users obviously need more practice with gaze control of telerobots, since they are unfamiliar with both gaze interaction and telerobots. Consequently, our future research plan is to study the possible effects of practice on performance. Moreover, sensors for collision avoidance and an interface map view may be considered. In particular, we will improve the interface for gaze control to allow for freer examination of the environment without moving the telerobot at the same time.
The challenges of gaze control in performance were also reflected in the difference in the participants' level of situational awareness between the two input methods. The differences in their responses to our projection-related queries between the two control methods were obvious, which indicates a higher SA when using hand control. We also observed a difference in mean response time to the projection-related queries between the two control methods. For example, when the query "Can you estimate when you will be finished with the task" appeared, participants using gaze control needed more time to think about their answer.
With gaze-controlled telerobots, the self-reported workload was higher on all six measures, and the SPAM method also found a longer response time to a pre-question before each query (i.e. "Are you ready to answer a question?"). The participants needed more time to switch from the navigation task to a mental task.
The self-assessments indicated that feelings of dominance were lower when using gaze control but, interestingly, the feelings of pleasure and arousal increased. This was consistent with comments like "fun but hard" about the experience with gaze control. When using gaze control, the participants usually kept larger distances from obstacles to avoid collisions. This was consistent with the lower self-reported feeling of dominance.
The interviews also suggested severe problems with wearing an HMD for a longer time. In our future study, we will examine whether there is a difference between using an HMD and a monitor.
There were several limitations in this study. First and foremost, the participants only had a limited number of trials. Considering that gaze interaction was completely new to most of them, the large difference we found between hand and gaze control may be reduced with more training. Due to space constraints, the size of the maze was rather small. If participants had driven for a longer
time, made more turns, and tested more different layouts than the four we used, a training effect might also have become clearer. In fact, we saw no training effect in our present experiment (i.e. no differences between the first and second trial). The duration of the navigation task was relatively short, and the participants were frequently interrupted by pop-up queries. In our future studies we intend to reduce the number of queries but still maintain some, since some of them seem to be quite sensitive when measuring SA.
7 CONCLUSION
This study demonstrated the possibility of controlling a telerobot with gaze. Compared to hand control, the performance and subjective experience were significantly lower. We have presented a set of measures, and we have developed a maze-based test method that may be considered a common ground for future research in alternative control principles for telerobots.
ACKNOWLEDGMENTS
This research has been supported by the China Scholarship Council and the Danish Bevica Foundation. Alexandre Alapetite, Zhongyu Wang, Antony Nestoridis, and Martin Thomsen contributed to the system development.
REFERENCES
Margaret M Bradley and Peter J Lang. 1994. Measuring emotion: the self-assessment manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry 25, 1 (1994), 49–59.
Jan Ciger, Bruno Herbelin, and Daniel Thalmann. 2004. Evaluation of gaze tracking technology for social interaction in virtual environments. In Proc. of the 2nd Workshop on Modeling and Motion Capture Techniques for Virtual Environments (CAPTECH'04). 1–6.
Francis T Durso, M Kathryn Bleckley, and Andrew R Dattel. 2006. Does situation awareness add to the validity of cognitive tests? Human Factors 48, 4 (2006), 721–733.
Mohamad A Eid, Nikolas Giakoumidis, and Abdulmotaleb El-Saddik. 2016. A Novel Eye-Gaze-Controlled Wheelchair System for Navigating Unknown Environments: Case Study With a Person With ALS. IEEE Access 4 (2016), 558–573.
Mica R Endsley. 1995. Measurement of situation awareness in dynamic systems. Human Factors 37, 1 (1995), 65–84.
Mica R Endsley. 2000. Direct Measurement of Situation Awareness: Validity and Use of SAGAT. In Situation Awareness Analysis and Measurement, M. R. Endsley and D. J. Garland (Eds.). Lawrence Erlbaum Associates, Mahwah, NJ, 147–173.
Gauthier Gras and Guang-Zhong Yang. 2016. Intention recognition for gaze controlled robotic minimally invasive laser ablation. In Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE, 2431–2437.
John Paulin Hansen, Alexandre Alapetite, I Scott MacKenzie, and Emilie Møllenbach. 2014. The use of gaze to control drones. In Proceedings of the Symposium on Eye Tracking Research and Applications. ACM, 27–34.
John Paulin Hansen, Alexandre Alapetite, Martin Thomsen, Zhongyu Wang, Katsumi Minakata, and Guangtao Zhang. 2018. Head and gaze control of a telepresence robot with an HMD. In Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications. ACM, Article 82.
Sandra G Hart and Lowell E Staveland. 1988. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology, Vol. 52. Elsevier, 139–183.
Yasamin Heshmat, Brennan Jones, Xiaoxuan Xiong, Carman Neustaedter, Anthony Tang, Bernhard E Riecke, and Lillian Yang. 2018. Geocaching with a Beam: Shared Outdoor Activities through a Telepresence Robot with 360 Degree Viewing. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 359.
PA Howarth and M Finch. 1999. The nauseogenicity of two methods of navigating within a virtual environment. Applied Ergonomics 30, 1 (1999), 39–45.
Poika Isokoski, Markus Joos, Oleg Spakov, and Benoît Martin. 2009. Gaze controlled games. Universal Access in the Information Society 8, 4 (2009), 323.
Anja Jackowski and Marion Gebhard. 2017. Evaluation of hands-free human-robot interaction using a head gesture based interface. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. ACM, 141–142.
Steven Johnson, Irene Rae, Bilge Mutlu, and Leila Takayama. 2015. Can you see me now?: How field of view affects collaboration in robotic telepresence. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, 2397–2406.
Min Kyung Lee and Leila Takayama. 2011. Now, I have a body: Uses and social norms for mobile remote presence in the workplace. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 33–42.
Robert Leeb, Luca Tonin, Martin Rohm, Lorenzo Desideri, Tom Carlson, and José del R Millán. 2015. Towards independence: a BCI telepresence robot for people with severe motor disabilities. Proc. IEEE 103, 6 (2015), 969–982.
Päivi Majaranta, Ulla-Kaija Ahola, and Oleg Špakov. 2009. Fast gaze typing with an adjustable dwell time. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 357–360.
Marvin Minsky. 1980. Telepresence. Omni 2, 9 (1980), 44–52.
Irene Rae and Carman Neustaedter. 2017. Robotic Telepresence at Scale. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 313–324.
Irene Rae, Leila Takayama, and Bilge Mutlu. 2013. The influence of height in robot-mediated communication. In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction. IEEE Press, 1–8.
Thomas B Sheridan. 1992. Musings on telepresence and virtual presence. Presence: Teleoperators & Virtual Environments 1, 1 (1992), 120–126.
Aaron Steinfeld, Terrence Fong, David Kaber, Michael Lewis, Jean Scholtz, Alan Schultz, and Michael Goodrich. 2006. Common metrics for human-robot interaction. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction. ACM, 33–40.
Martin Tall, Alexandre Alapetite, Javier San Agustin, Henrik HT Skovsgaard, John Paulin Hansen, Dan Witzner Hansen, and Emilie Møllenbach. 2009. Gaze-controlled driving. In CHI'09 Extended Abstracts on Human Factors in Computing Systems. ACM, 4387–4392.
Fumihide Tanaka, Toshimitsu Takahashi, Shizuko Matsuzoe, Nao Tazawa, and Masahiko Morita. 2014. Telepresence robot helps children in communicating with teachers who speak a different language. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction. ACM, 399–406.
Anthony Tang and Omid Fakourfar. 2017. Watching 360 videos together. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 4501–4506.
Anthony Tang, Omid Fakourfar, Carman Neustaedter, and Scott Bateman. 2017. Collaboration in 360 Videochat: Challenges and Opportunities. Technical Report. University of Calgary.
Katherine M Tsui, Kelsey Flynn, Amelia McHugh, Holly A Yanco, and David Kontak. 2013. Designing speech-based interfaces for telepresence robots for people with disabilities. In Rehabilitation Robotics (ICORR), 2013 IEEE International Conference on. IEEE, 1–8.
Ker-Jiun Wang, Hsiao-Wei Tung, Zihang Huang, Prakash Thakur, Zhi-Hong Mao, and Ming-Xian You. 2018. EXGbuds: Universal Wearable Assistive Device for Disabled People to Interact with the Environment Seamlessly. In Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. ACM, 369–370.
Bob G Witmer and Michael J Singer. 1998. Measuring presence in virtual environments: A presence questionnaire. Presence 7, 3 (1998), 225–240.
Lillian Yang, Brennan Jones, Carman Neustaedter, and Samarth Singhal. 2018. Shopping Over Distance through a Telepresence Robot. Proceedings of the ACM on Human-Computer Interaction 2, CSCW, Article 191 (2018).
Lillian Yang and Carman Neustaedter. 2018. Our House: Living Long Distance with a Telepresence Robot. Proceedings of the ACM on Human-Computer Interaction 2, CSCW, Article 190 (2018).
Lillian Yang, Carman Neustaedter, and Thecla Schiphorst. 2017. Communicating through a telepresence robot: A study of long distance relationships. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, 3027–3033.
Guangtao Zhang, John Paulin Hansen, Katsumi Minakata, Alexandre Alapetite, and Zhongyu Wang. 2019. Eye-Gaze-Controlled Telepresence Robots for People with Motor Disabilities. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 574–575.
B.4 Accessible Control of Telepresence Robots based on Eye Tracking
Authors: Guangtao Zhang, and John Paulin Hansen
In Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications (ETRA 2019)
DOI: 10.1145/3314111.3322869
144 Gazecontrolled Telepresence: Accessibility, Training and Evaluation
Accessible Control of Telepresence Robots based on Eye Tracking
Guangtao Zhang
Technical University of Denmark
Kgs. Lyngby, [email protected]

John Paulin Hansen
Technical University of Denmark
Kgs. Lyngby, [email protected]
ABSTRACT
Gaze may be a good alternative input modality for people with limited hand mobility. Accessible control based on eye tracking can be implemented in telepresence robots, which are widely used to promote remote social interaction and provide a feeling of presence. This extended abstract introduces a Ph.D. research project, which takes a two-phase approach to investigating gaze-controlled telepresence robots. A system supporting gaze-controlled telepresence has been implemented. However, our current findings indicate that there are still serious challenges with regard to gaze-based driving. Potential improvements are discussed, and plans for future studies are also presented.
CCS CONCEPTS
• Human-centered computing → Accessibility technologies;
KEYWORDS
Gaze interaction, eye tracking, telepresence robots, human-robot interaction, accessibility, assistive technology

ACM Reference format:
Guangtao Zhang and John Paulin Hansen. 2019. Accessible Control of Telepresence Robots based on Eye Tracking. In Proceedings of 2019 Symposium on Eye Tracking Research and Applications, Denver, CO, USA, June 25–28, 2019 (ETRA '19), 3 pages. https://doi.org/10.1145/3314111.3322869
1 INTRODUCTION
Telepresence robots have become useful communication tools for people who are physically prevented from participating in events [Neustaedter et al. 2016]. According to [World Health Organization 2011], more than 190 million individuals suffer from severe disabilities. Efficient social communication may be extremely difficult and frustrating for some of them due to motor control difficulties. Enabling accessible control of telepresence robots may bring new possibilities and potential benefits to them. Telerobots are typically controlled with the hands, but a few previous studies have also demonstrated accessible hands-free control methods based on speech [Tsui et al. 2013], brain activity [Leeb et al. 2015], and gaze [Tall et al. 2009].
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
ETRA '19, June 25–28, 2019, Denver, CO, USA
© 2019 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-6709-7/19/06.
https://doi.org/10.1145/3314111.3322869
2 PROBLEM STATEMENTS AND OBJECTIVES
The goal of our project is to develop a hands-free telepresence control method that is simple and easy to use for people who can only move their eyes, in order to improve their social interaction and quality of communication [Zhang et al. 2019]. Gaze can be used as an accessible control method for telepresence robots by people with profound motor deficits. However, it needs to be explored how this control may influence users, and how to improve it.
In the first phase of our research, studies were conducted based on our gaze-controlled telepresence system [Hansen et al. 2018]. Observations in a pilot study [Zhang et al. 2018] suggested that task complexity impacted users' performance, situation awareness (SA), and subjective experience ratings when driving a telerobot with gaze control.
3 APPROACH
We then aimed to evaluate the effectiveness and the challenges of gaze control in an experimental comparison. We hypothesized that there were differences in users' SA, presence, performance, workload, and subjective experience between a control condition with gaze and a control condition with hands, when wearing a virtual reality head-mounted display (VR HMD). A within-subjects design was used in the experiment with a total of 16 able-bodied participants. The test subjects were sitting in a remote control room. The HMD (FOVE) and a joystick (Microsoft Xbox 360 Controller) were connected to a computer running Unity. The computer was connected to the telerobot via a wireless network. In the driving room, a telerobot carried a 360-degree camera and a microphone. The camera was 1.3 m above the floor. Five ultrasound receivers were mounted on the wall, and a transmitter placed on top of the telerobot tracked its position with an accuracy of approximately 1 cm. Plastic sticks on the floor were used to mark up the maze track, which covered an area of 5 x 4 m. Three sheets of A4 paper with a pie chart hung on the wall to show how far the robot had driven at that position.
Each participant used gaze and hand control to navigate through two mazes with each control method. In total, four different mazes with layouts of similar length and complexity were used for the experiment. The order by which participants were exposed to each maze was counterbalanced. In total, 64 trials were conducted. There were two groups of independent variables: input method (gaze control vs. hand control) and order of trials with the same method (Trial 1 vs. Trial 2). Dependent variables included the participants' SA, presence, workload, performance, post-trial estimation and recollection, self-assessment, and experience with the control methods.
The main task was to navigate the telerobot through a maze. Halfof the participants did the first two trials using gaze control, and
half of them did the first two trials using joystick control. Before each gaze trial, a gaze calibration procedure for the headset was conducted. The experimenter observed the operation via an LCD display. When the telerobot passed certain areas in the maze or when a maneuver, e.g. a turn, had been completed, a query pop-up in the control display was prompted by the experimenter. When the participants had given a verbal response, their response time was recorded in the system. In the remote driving room, a person was standing at three different positions during the trials. When the telerobot passed by, this person faced the camera and talked to the participants via the telerobot, providing information related to the remote environment. Log data from Unity and the telerobot, and the participants' responses to questionnaires and pop-up queries, were collected. At the end of the experiment, the participants were interviewed.
4 PRELIMINARY RESULTS
Our main observation was that telepresence robots could be controlled by gaze. All the participants were able to finish the trials using gaze control. However, our results also suggested that there are still serious challenges for users of gaze control. When comparing gaze control with hand control, statistical analysis with two-way ANOVAs showed that participants had a similar experience of presence and self-assessment, but gaze control was 31% slower than hand control. Gaze-controlled robots had more collisions and higher deviations from the optimal paths. Moreover, with gaze control, participants reported a higher workload and a reduced feeling of dominance, and their situation awareness was significantly degraded. The accuracy of their post-trial reproduction of the maze layout and of the trial duration was also significantly lower. These aspects are of great importance to human-robot interaction [Steinfeld et al. 2006].
5 PLANS FOR FUTURE WORK
To address these challenges, more features can be added to the system, e.g. collision avoidance and an interface map view. The interface for gaze control also needs to be improved to allow for freer examination of the environment without moving the telerobot at the same time. Most importantly, novice users need more practice with gaze-controlled telerobots in order to master this unfamiliar control method. Besides practice with the robots in real scenarios, training of gaze control in simulation-based environments (e.g. VR environments) might be a potential solution. VR provides totally immersive environments and is widely used in training for real task scenarios, e.g. in mining [Tichon and Burgess-Limerick 2011], for medical skill training [Izard and Méndez 2016; Reznek et al. 2002], and for skill training for people with intellectual disabilities [Brown et al. 2016].
In the next phase of our plan, we aim to investigate potential impacts of gaze-control training in VR. A VR simulation environment (cf. Figure 1) has been built for practice with a virtual gaze-controlled telerobot. A between-group design with 32 participants is planned in our forthcoming study. We hypothesize that users who are tested in a real driving scenario show no difference in their performance, situation awareness and workload between having been trained with the telerobot in a real driving task and having
Figure 1: Control panels for teleoperation in a VR training environment (left) and in a real scenario (right). The small pink circles show the gaze cursor. In the driving mode, gaze movements are mapped to the virtual (left) or real (right) robot's movements. The live streams show the view of the telerobots.
been trained with a virtual telerobot in VR. There will be two groups of independent variables: training type (virtual robot in VR vs. real robot in reality) and training environment (same as or different from the layout tested in). Dependent variables include the participants' SA, workload, performance, eye behaviour, post-trial estimation and recollection, self-assessment, and subjective experience with the control methods.
The test person will be seated in the lab and wearing a FOVE headset. The test person is then introduced to the devices and task: to remotely drive a gaze-controlled robot around an obstacle course, interacting with a live person, and reaching the track goal. After this first assessment, the test person will be given training in VR or in reality for five trials. After the five training sessions, the test person is asked to do a real remote driving of the gaze-controlled robot again. During the first and last driving task, SA queries and saccade tests appear as pop-ups. When the experiment is finished, there will be an interview.
The main test is to navigate the telerobot through a maze in a lab twice. In between, each participant will be given the same number of training trials.
There will be four training conditions, given to eight participants each: the same layout in VR as the final test layout, a different layout in VR from the final test layout, the same layout in reality, and a different layout in reality. In the driving room, a person will introduce himself to the participants and inform them about their next task via the telerobot.
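As a rough illustration of this 2×2 between-group design, the balanced random assignment of 32 participants to the four training conditions could be sketched as follows (the condition labels, function name, and seed are our own, not from the study):

```python
import random

# Hypothetical labels for the 2x2 design: training type x training layout.
CONDITIONS = [
    ("VR", "same layout"),
    ("VR", "different layout"),
    ("reality", "same layout"),
    ("reality", "different layout"),
]

def assign_participants(n_participants=32, seed=1):
    """Randomly assign participants to the four training
    conditions, eight per condition (balanced groups)."""
    assert n_participants % len(CONDITIONS) == 0
    per_group = n_participants // len(CONDITIONS)
    slots = [cond for cond in CONDITIONS for _ in range(per_group)]
    random.Random(seed).shuffle(slots)
    return {pid: cond for pid, cond in enumerate(slots, start=1)}

assignment = assign_participants()
```

Fixing the seed keeps the assignment reproducible across sessions, while the shuffle avoids systematic order effects between groups.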
ACKNOWLEDGMENTS
We would like to thank the China Scholarship Council and the Danish Bevica Foundation for financial support of this work.
146 Gazecontrolled Telepresence: Accessibility, Training and Evaluation
Accessible Control of Telepresence Robots based on Eye Tracking ETRA ’19, June 25–28, 2019, Denver, CO, USA
REFERENCES
Ross Brown, Laurianne Sitbon, Lauren Fell, Stewart Koplick, Chris Beaumont, and Margot Brereton. 2016. Design insights into embedding virtual reality content into life skills training for people with intellectual disability. In Proceedings of the 28th Australian Conference on Computer-Human Interaction. ACM, 581–585.
John Paulin Hansen, Alexandre Alapetite, Martin Thomsen, Zhongyu Wang, Katsumi Minakata, and Guangtao Zhang. 2018. Head and gaze control of a telepresence robot with an HMD. In Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications. ACM, Article 82.
Santiago González Izard and Juan Antonio Juanes Méndez. 2016. Virtual reality medical training system. In Proceedings of the Fourth International Conference on Technological Ecosystems for Enhancing Multiculturality. ACM, 479–485.
Robert Leeb, Luca Tonin, Martin Rohm, Lorenzo Desideri, Tom Carlson, and José del R Millán. 2015. Towards independence: a BCI telepresence robot for people with severe motor disabilities. Proc. IEEE 103, 6 (2015), 969–982.
Carman Neustaedter, Gina Venolia, Jason Procyk, and Daniel Hawkins. 2016. To Beam or not to Beam: A study of remote telepresence attendance at an academic conference. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing. ACM, 418–431.
Martin Reznek, Phillip Harter, and Thomas Krummel. 2002. Virtual reality and simulation: training the future emergency physician. Academic Emergency Medicine 9, 1 (2002), 78–87.
Aaron Steinfeld, Terrence Fong, David Kaber, Michael Lewis, Jean Scholtz, Alan Schultz, and Michael Goodrich. 2006. Common metrics for human-robot interaction. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction. ACM, 33–40.
Martin Tall, Alexandre Alapetite, Javier San Agustin, Henrik HT Skovsgaard, John Paulin Hansen, Dan Witzner Hansen, and Emilie Møllenbach. 2009. Gaze-controlled driving. In CHI '09 Extended Abstracts on Human Factors in Computing Systems. ACM, 4387–4392.
Jennifer Tichon and Robin Burgess-Limerick. 2011. A review of virtual reality as a medium for safety related training in mining. Journal of Health & Safety Research & Practice 3, 1 (2011), 33–40.
Katherine M Tsui, Kelsey Flynn, Amelia McHugh, Holly A Yanco, and David Kontak. 2013. Designing speech-based interfaces for telepresence robots for people with disabilities. In 2013 IEEE 13th International Conference on Rehabilitation Robotics (ICORR). IEEE, 1–8.
World Health Organization. 2011. World report on disability. Geneva: WHO (2011).
Guangtao Zhang, John Paulin Hansen, Katsumi Minakata, Alexandre Alapetite, and Zhongyu Wang. 2019. Eye-Gaze-Controlled Telepresence Robots for People with Motor Disabilities. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 574–575.
Guangtao Zhang, Katsumi Minakata, Alexandre Alapetite, Zhongyu Wang, Martin Thomsen, and John Paulin Hansen. 2018. Impact of task complexity on driving a gaze-controlled telerobot. In Abstracts of the Scandinavian Workshop on Applied Eye Tracking (SWAET 2018) (Journal of Eye Movement Research), Daniel Barratt, Raymond Bertram, and Marcus Nyström (Eds.), Vol. 11. Frederiksberg, Denmark, 30.
B.5 Enabling Real-time Measurement of Situation Awareness in Robot Teleoperation with a Head-mounted Display
Authors: Guangtao Zhang, Katsumi Minakata, and John Paulin Hansen
In Proceedings of the 50th Nordic Ergonomics and Human Factors Society Conference 2019 (NES 2019)
DOI: 10.11581/dtu:00000061 (Proceedings)
50th Nordic Ergonomics and Human Factors Society Conference 2019
Enabling Real-Time Measurement of Situation Awareness in Robot Teleoperation with a Head-mounted Display
Guangtao ZHANG, Katsumi MINAKATA, John Paulin HANSEN
Management Engineering, Technical University of Denmark
Abstract: Situation awareness plays an important role in robot teleoperation tasks, which are
essential in digitalisation and automation. Head-mounted displays have unique advantages for
virtual reality and augmented reality environments. We present our approach to enabling real-time measurement of situation awareness in robot teleoperation with a head-mounted display.
Test results showed that it provided more reliable data for situation awareness measurement
than the post-trial subjective measure.
Keywords: situation awareness, robot teleoperation, head-mounted display.
1. Introduction
Operation of a mobile robot by a person from a distance has been applied across a variety of
domains. Robot teleoperation has potentials to become an essential part in digitalization and
automation of future work. Situation awareness (SA) is the ability to perceive elements within a
volume of space, be able to comprehend the meaning of these elements and be able to predict the
status of these elements in the future (Endsley 2000). It is the perception, comprehension, and
projection of information relevant to the user regarding their immediate environment, which plays
an important role in teleoperation tasks. It is also a primary basis for operators' performance
(Endsley 1995). The present research investigates how to enable real-time measures of SA in
robot teleoperation. In a robot teleoperation project (Hansen et al. 2018), a telepresence robot can
be teleoperated by a user wearing a head-mounted display (HMD). The user-interface for
teleoperation is presented as Augmented Reality (AR). This interface supports multimodal control
by gaze, head, and hands for a variety of use-cases, including disabled people confined to bed who
can drive a telerobot with head or gaze pointing only. Existing techniques for evaluation of SA,
e.g., SART (Taylor 1990), and SA-SWORD (Snow and Reising 2000), based on subjective
measures, may reflect subjective preferences rather than SA itself. Post-test questionnaires only
capture SA at the end of the task, where early misperceptions or temporary confusions may be
forgotten.
Real-time probes with objective measures may overcome the above-mentioned problems.
However, challenges still exist in the implementation of such measures for robot teleoperation
with an HMD. For example, when operators do not prioritize their primary task, then the
secondary task (i.e., SA queries) is known to interfere with the primary task performance (Pierce
2012). Detailed analysis of tasks and SA are required for implementing this measure in
teleoperation, and task-relevant queries need to be designed based on SA theories. In the present
work, we examined the use of real-time queries that we have developed for research in
teleoperation.
2. Methodology
We adapted the Situation Present Assessment Method (SPAM), which was originally developed
for air traffic controllers (Durso et al. 1998), to robot teleoperation. Perception-related queries
focused on the participants' ability to perceive relevant elements in the environment, e.g.
orientation, voice, presence, and position of a person present in the remote environment.
Comprehension-related queries were used to investigate the operator's understanding of external
signs in the room, i.e., a pie chart indicating their current progress, and of oral information about
their progress status. Projection-related queries examined their ability to estimate the future status
based on their perception and comprehension of the current situation in the environment, i.e.,
estimate the time or distance to finish the task.
A total of 16 participants participated in our experiment (Zhang et al. 2019). Each participant
teleoperated a robot in 2 trials with hand control and 2 trials with gaze control. An HMD was
used to provide a full 360-degree view when turning the head around. The participants' task was
to drive a telerobot through a maze marked on the floor in a remote room. The progress pie-charts
were displayed on the walls of the room, and a person talked to them via the telerobot. The
experimenter manually prompted a SPAM query when the robot passed certain areas in the maze
or when a maneuver, e.g. a turn, had been done. When the participant passed graphical progress
information on the wall, a query about comprehension was given. Queries appeared as a pop-up
text box in the HMD (see Fig. 1). All of the queries were initiated with an introductory question:
“Are you ready to answer a question?” This question was only used to measure response time,
and the only possible answer the operator could give was clicking “Yes”. Then the actual query
appeared. Once the query was answered orally, the participant's response time was again
measured and the answer was written down by the experimenter. After each of the four trials, the participants completed a subjective workload questionnaire (NASA-TLX) and answered questions about their estimation of the total distance they had driven, the task time duration, and their recollection of collisions with the maze barriers and of persons in the maze. In order to compare the two methods of SA measurement, a subjective SA questionnaire (SART) was also included in the post-trial questionnaires.
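The two response times described above — one for the introductory question and one for the actual query — could be logged roughly as in the following sketch (class and method names, and the injectable clock, are hypothetical; the actual experiment software is not shown in the paper):

```python
class SpamQuery:
    """Records the two response times of a SPAM-style probe:
    (1) time to accept the introductory question ("Are you ready
    to answer a question?"), usable as a workload proxy, and
    (2) time to answer the actual query, usable as the SA measure."""

    def __init__(self, query_text, clock):
        self.query_text = query_text
        self.clock = clock  # callable returning seconds

    def show_intro(self):
        # Introductory question pops up in the HMD.
        self.t_intro = self.clock()

    def accept_intro(self):
        # Operator clicks "Yes"; the actual query then appears.
        self.intro_rt = self.clock() - self.t_intro
        self.t_query = self.clock()

    def answer(self, spoken_answer):
        # Experimenter logs the oral answer; second RT is recorded.
        self.query_rt = self.clock() - self.t_query
        self.answer_text = spoken_answer
        return self.intro_rt, self.query_rt

# Example with a fake clock (seconds): intro shown at 0.0,
# accepted at 1.5, query shown at 1.5, answered at 4.0.
times = iter([0.0, 1.5, 1.5, 4.0])
q = SpamQuery("Where is the person standing?", clock=lambda: next(times))
q.show_intro()
q.accept_intro()
intro_rt, query_rt = q.answer("behind me, to the left")
```

In a live system the clock argument would be a monotonic timer; injecting it keeps the timing logic testable.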
Fig. 1 A user wearing an HMD, and teleoperating a remote telepresence robot (left). Via the
HMD display, he can see a pop-up with a query about orientation (right).
3. Research outcomes
Log data, video recordings from the room, and screen recordings captured during the experiments
were analysed. The analysis included the real-time objective measure, post-trial subjective
measures, and accuracy of estimation and recollection. The workload analysis was based on responses to the subjective workload questionnaire; the two types of response times recorded in real time measured workload and SA, respectively.
Data from the real-time objective measure (SPAM) were analysed with an ANOVA followed by pairwise comparisons with a Bonferroni correction. The results showed that the control method
had significant impacts on the participants’ SA. The hand control method yielded a higher level
of SA than the gaze control.
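For reference, a Bonferroni correction simply multiplies each raw p-value by the number of comparisons, capped at 1. A minimal sketch (the example p-values are illustrative only, not from the study):

```python
def bonferroni(p_values):
    """Return Bonferroni-adjusted p-values: p_adj = min(1, p * m),
    where m is the number of pairwise comparisons."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# E.g. three pairwise comparisons following an ANOVA:
adjusted = bonferroni([0.010, 0.040, 0.300])
```

Each adjusted p-value is then compared against the usual alpha (e.g. 0.05), which controls the family-wise error rate across the set of comparisons.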
Our results also supported comparative analyses of two measurement approaches (SPAM and
SART). With SPAM, we found a correlation of the response times to the introductory question
from SPAM with the NASA-TLX responses. With an ANOVA followed by pairwise comparisons with a Bonferroni correction, we found that the control method had significant impacts on the response times to the introductory question and the NASA-TLX responses. When using
the hand control, the participants self-reported a lower level of workload and needed less response
time to answer the introductory questions.
A correlation between the real-time objective SA measure (SPAM) and the post-trial SA measure
(SART) was also found. With ANOVAs followed by pairwise comparisons with a Bonferroni
correction, we found that the control method had significant impacts on the participants’
performance. The participants had a higher percentage of accuracy in post-trial recollection and
estimation when using hands compared to gaze. Distance estimation was closer to the actual
operation for the real-time queries relative to the post-trial distance estimations.
4. Conclusion
A real-time measurement of SA in robot teleoperation with a head-mounted display has been
implemented. Despite the small sample size in our test (n = 16), it provided more reliable data and results
than the post-trial subjective SA questionnaire.
References
Durso, F. T., Hackworth, C. A., Truitt, T. R., Crutchfield, J., Nikolic, D., & Manning, C. A.
(1998). Situation awareness as a predictor of performance for en route air traffic
controllers. Air Traffic Control Quarterly, 6(1), 1-20.
Endsley, M. R. (2000). Direct measurement of situation awareness in simulations of dynamic
systems: Validity and use of SAGAT. In Endsley MR, Garland DJ, eds. Situation awareness
analysis and measurement. Mahwah, NJ: Lawrence Erlbaum Associates, 2000:147–74.
Endsley, M. R. (1995). Measurement of situation awareness in dynamic systems. Human
factors, 37(1), 65-84.
Hansen, J. P., Alapetite, A., Thomsen, M., Wang, Z., Minakata, K., & Zhang, G. (2018). Head
and gaze control of a telepresence robot with an HMD. In Proceedings of the 2018 ACM
Symposium on Eye Tracking Research & Applications (ETRA ’18). ACM, Article 82.
Pierce, S. R. (2012). The effect of SPAM administration during a dynamic simulation. Human
Factors, 54, 838–848.
Snow, M. P., & Reising, J. M. (2000). Comparison of two situation awareness metrics: SAGAT
and SA-SWORD. In Proceedings of the Human Factors and Ergonomics Society Annual
Meeting (Vol. 44, No. 13, pp. 49-52). Sage CA: Los Angeles, CA: SAGE Publications.
Taylor, R. M. (1990). Situation awareness rating technique (SART): the development of a tool
for aircrew systems design. In Situational Awareness in Aerospace Operations (Chapter 3).
France: Neuilly sur-Seine, NATO-AGARD-CP-478.
Zhang, G., Hansen, J. P., & Minakata, K. (2019). Hand- and Gaze-Control of Telepresence
Robots. In Proceedings of the 11th ACM Symposium on Eye Tracking Research &
Applications. ACM, Article 70.
B.6 A Virtual Reality Simulator for Training Gaze Control of Wheeled Telerobots
Authors: Guangtao Zhang, and John Paulin Hansen
In Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology (VRST 2019)
DOI: 10.1145/3359996.3364707
A Virtual Reality Simulator for Training Gaze Control of Wheeled Tele-Robots
Guangtao Zhang
[email protected]
Technical University of Denmark, Kgs. Lyngby, Denmark

John Paulin Hansen
[email protected]
Technical University of Denmark, Kgs. Lyngby, Denmark
ABSTRACT
People who cannot use their hands may use eye-gaze to interact with robots. Emerging virtual reality head-mounted displays (HMD) have built-in eye-tracking sensors. Previous studies suggest that users need substantial practice for gaze steering of wheeled robots with an HMD. In this paper, we propose to apply a VR-based simulator for training of gaze-controlled robot steering. The simulator and preliminary test results are presented.
CCS CONCEPTS
• Human-centered computing → Virtual reality; Interactive systems and tools; Accessibility.
KEYWORDS
Virtual reality, human-robot interaction, eye tracking, simulator, training, head-mounted display, gaze interaction, tele-robots

ACM Reference Format:
Guangtao Zhang and John Paulin Hansen. 2019. A Virtual Reality Simulator for Training Gaze Control of Wheeled Tele-Robots. In 25th ACM Symposium on Virtual Reality Software and Technology (VRST '19), November 12–15, 2019, Parramatta, NSW, Australia. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3359996.3364707
1 INTRODUCTION
Recent advances in mixed, augmented and virtual reality (VR) provide new options for human-robot interaction (HRI) [6]. Robot steering is essential for a range of wheeled robots, for instance, telepresence robots [4]. Tele-robots allow immobile people to move around while lying in a bed, and thereby participate remotely in social activities [8].
Alternative control methods for robots have been studied previously, for instance, brain-computer interface [1] and speech [4]. Head-mounted displays (HMDs) for VR are becoming widely available and affordable. Gaze tracking sensors are built into several recent HMD models. The increased accuracy of eye-tracking equipment makes it feasible to utilize this technology for explicit control tasks with robots [5]. Using eye-tracking in HMDs for steering of wheeled robots has additional advantages compared to traditional
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
VRST '19, November 12–15, 2019, Parramatta, NSW, Australia
© 2019 Association for Computing Machinery.
ACM ISBN 978-1-4503-7001-1/19/11. . . $15.00
https://doi.org/10.1145/3359996.3364707
displays, since it offers an immersive experience (i.e. a sense of presence), flexibility in where it can be applied, including from bed or at outdoor locations, and robustness towards uncontrolled head movements. However, results from a previous study showed that navigating a gaze-controlled telepresence robot with an HMD is quite a challenge and control exclusively by gaze increases the cognitive workload [7]. Consequently, novices need training in a safe environment where they can acquire the needed skills.
Simulators may be used as a cost-effective and safe solution for the acquisition of skills to operate a robot [3]. VR-based simulation has been used extensively in other robotic domains, for instance to improve medical skills [2] and robotic tele-operation [5]. However, training of eye-tracking-based robot steering with a VR-based simulator has not yet been explored, so it is still unclear how to design the simulator and what the learning effects might be.
In this paper we present a simulator built for this purpose and our interface for gaze control of tele-robots (cf. Figure 1), together with some preliminary results of a test including 32 participants.
Figure 1: A gaze-interactive user interface that overlays the live video stream from a real environment. In the VR-simulator, it overlays a real-time stream of a virtual environment instead. The pink circle on the floor shows the gaze cursor, visible to the user. The blue arrows and shaded areas show the gaze-steering interface, but they are not visible to the user. Above the gaze-steering area there is a rear-view mirror. Mode shifts and overlay typing interfaces are selected by gazing at icons in the bottom.
2 IMPLEMENTATION
Our system consists of three parts: (i) a controller for the operator, (ii) a robotic platform for use in reality, and (iii) a VR-based robot
and environment simulator. Figure 2 outlines the system architecture and the components. The robot controller in Unity1 processes movement instructions based on the operators' gaze location, and sends gaze steering commands to the virtual or real robot.
[Figure 2 diagram: an operator wearing a VR HMD with built-in eye trackers uses a control panel in Unity; eye tracking data yield control commands that go either to a real robot carrying a Ricoh Theta S camera in a remote maze (via a Unity-ROS bridge) or to a virtual robot (a Rigidbody controlled by a VirtualUnityController GameObject) with a virtual camera in a virtual maze; the camera view is returned to the operator as a live stream.]
Figure 2: Overview of the system. A person can steer a real tele-robot in a remote environment or a virtual robot in a VR simulator.
A user wearing an HMD (model FOVE2) navigates a wheeled tele-robot or a virtual robot by gazing in the direction in which the robot should drive [7]. The control panel, including an optional overlay keyboard, was built in Unity. Figure 1 shows the control panel. The shared controller module ensures that the control mechanism is the same for both the real and the virtual robot with regard to mapping of eye-movements and the control commands. The real robot carries a camera, transmitting a live video stream to the user from the remote reality. In the simulator, the live stream is generated from a pre-rendered 3D model of the same room as that which the real robot is driving in.
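The paper does not give the exact control law, but the core idea — translating a gaze point in the steering area into forward and turning motion, with a central stop region — could be sketched as follows (all thresholds, gains, and names here are hypothetical, not the actual shared controller):

```python
def gaze_to_command(gx, gy, dead_zone=0.1, max_linear=0.5, max_turn=1.0):
    """Map a normalized gaze point (gx, gy in [-1, 1], origin at the
    centre of the steering area) to (linear, angular) velocities.
    Gazing far ahead drives forward, gazing sideways turns, and gaze
    inside the central dead zone stops the robot."""
    if abs(gx) < dead_zone and abs(gy) < dead_zone:
        return (0.0, 0.0)                # dwell near the centre: stop
    linear = max_linear * max(gy, 0.0)   # forward only, scaled by height
    angular = max_turn * gx              # left/right turn rate
    return (linear, angular)

# Example: gaze at the upper middle of the steering area -> drive forward
cmd = gaze_to_command(0.0, 0.8)
```

Because the same function would feed both the virtual Rigidbody and the real robot's velocity commands, the mapping stays identical across simulator and reality, which is the point of the shared controller module described above.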
We use a maze for our driving tests [7], marked by white room dividers on the floor. In the simulator, a virtual room with the same maze layout was modeled. In reality, our robotic system builds upon an open-source Robot Operating System (ROS) architecture
1 https://unity.com [last accessed September 2019]
2 https://www.getfove.com [last accessed September 2019]
that we have developed for a range of wheeled tele-robots [8]. The tele-robots were modeled with the same essential features (e.g. velocity, shape and size) as the real robot. We took pictures of e.g. the floor maze markers and the walls, and used them as textures in the model. Real sounds of e.g. collision and wheel rotation were recorded and used as sound effects in the simulation. A shaking effect from collision with the maze dividers or the wall was also included in the simulation, approximating how this would look in reality.
3 PRELIMINARY RESULTS
We have conducted an experiment to investigate potential learning effects. A total of 32 participants took part in a study to examine whether a group of subjects trained in the VR simulator and tested in a real driving scenario (n = 16) would show significant differences in their performance, situation awareness and workload, when compared to a group (n = 16) that had been trained with the tele-robot in a real environment.
Preliminary results indicate that performance improved after training for both groups of subjects, and there was no significant difference between training in reality and training in the VR simulator.
ACKNOWLEDGMENTS
This research has been supported by the China Scholarship Council and the Bevica Foundation. Alexandre Alapetite, Zhongyu Wang, Antony Nestoridis, and Martin Thomsen contributed to the system development. Oliver Repholtz Behrens and Sebastian Hedegaard Hansen assisted in conducting the experiment.
REFERENCES
[1] Robert Leeb, Luca Tonin, Martin Rohm, Lorenzo Desideri, Tom Carlson, and José del R Millán. 2015. Towards independence: a BCI telepresence robot for people with severe motor disabilities. Proc. IEEE 103, 6 (2015), 969–982.
[2] Eoin MacCraith, James C Forde, and Niall F Davis. 2019. Robotic simulation training for urological trainees: a comprehensive review on cost, merits and challenges. Journal of Robotic Surgery (2019), 1–7.
[3] Luis Pérez, Eduardo Diez, Rubén Usamentiaga, and Daniel F García. 2019. Industrial robot control and operator training using virtual reality interfaces. Computers in Industry 109 (2019), 114–120.
[4] Katherine M Tsui, Kelsey Flynn, Amelia McHugh, Holly A Yanco, and David Kontak. 2013. Designing speech-based interfaces for telepresence robots for people with disabilities. In 2013 IEEE 13th International Conference on Rehabilitation Robotics (ICORR). IEEE, 1–8.
[5] Ginger S Watson, Yiannis E Papelis, and Katheryn C Hicks. 2016. Simulation-based environment for the eye-tracking control of tele-operated mobile robots. In Proceedings of the Modeling and Simulation of Complexity in Intelligent, Adaptive and Autonomous Systems 2016 (MSCIAAS 2016) and Space Simulation for Planetary Space Exploration (SPACE 2016). Society for Computer Simulation International, 4.
[6] Tom Williams, Daniel Szafir, Tathagata Chakraborti, and Elizabeth Phillips. 2019. Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI). In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 671–672.
[7] Guangtao Zhang, John Paulin Hansen, and Katsumi Minakata. 2019. Hand- and gaze-control of telepresence robots. In Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications. ACM, Article 70, 1–8.
[8] Guangtao Zhang, John Paulin Hansen, Katsumi Minakata, Alexandre Alapetite, and Zhongyu Wang. 2019. Eye-Gaze-Controlled Telepresence Robots for People with Motor Disabilities. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 574–575.
B.7 People with Motor Disabilities Using Gaze to Control Telerobots
Authors: Guangtao Zhang, and John Paulin Hansen
In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (CHI 2020)
DOI: 10.1145/3334480.3382939
Figure 1: One of the participants communicates with his friends via a telepresence robot.
People with Motor Disabilities Using Gaze to Control Telerobots
Guangtao Zhang
Technical University of Denmark
Kgs. Lyngby, Denmark
[email protected]

John Paulin Hansen
Technical University of Denmark
Kgs. Lyngby, Denmark
[email protected]
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
CHI '20 Extended Abstracts, April 25–30, 2020, Honolulu, HI, USA.
© 2020 Copyright is held by the author/owner(s).
ACM ISBN 978-1-4503-6819-3/20/04.
http://dx.doi.org/10.1145/3334480.3382939
Abstract
Telerobots may give people with motor disabilities access to education, events and places. Eye-gaze interaction with these robots is an option when hands are not functional. Gaze control of telerobots has not yet been evaluated by people from this target group. We conducted a field study with five users in a care-home to investigate their preferences and challenges when driving telerobots via our gaze-controlled robotic telepresence system. We used a Wizard of Oz method to explore gaze and speech interaction, and experience prototyping to consider robot designs and types of displays (i.e. monitors versus head-mounted displays).
Author Keywords
Human-robot interaction, gaze interaction, telepresence, accessibility, motor disabilities, assistive technology, inclusion, robot-mediated communication, navigation, head-mounted displays, experience prototyping, Wizard of Oz.
CCS Concepts
• Human-centered computing → Accessibility; Accessibility systems and tools;
Introduction
Telerobots have several potential usages for people with motor disabilities (MD) [26]. Recent development of telepresence robots [11] may provide people with MD a sense of
presence within a remote environment. Several manufacturers offer wifi-based telerobots, e.g. Double Robotics and Padbot (see Fig. 2).
Figure 2: The two robots applied in the field study: a) A Padbot robot (left) that we attached with a 360 degree camera (but without a monitor showing the user's face); b) A Double robot (right) with a live face video.
Robotic telepresence systems [32] have become increasingly popular for collaboration between geographically distributed teams [12], at academic conferences [23], for children-teacher communication [29], for relationships of long-distance couples [36], for seniors with mobility impairments [27], and for outdoor activities [8]. However, people with MD have been ignored when studying telerobot interaction [34]. Previous studies on assistive technology emphasized the importance of independence for people with MD, who often preferred to retain as much control authority as possible [18]. So while fully autonomous telepresence robots have been suggested in previous work, for instance automatically following a person in a remote environment [4], it is an open question what level of autonomy people with MD would prefer.
Alternative interaction with telerobots
Keyboard and mouse or joysticks are most commonly used for telerobot systems. Alternative vehicle- and wheelchair-control methods have been suggested for people with MD, for instance based on brain-computer-interaction (BCI) [30, 31, 13, 2, 21], speech [33, 34], gaze interaction [5, 39, 20], and fMRI recording of covert visuospatial attention [1].
Brain-computer interaction [30] may be used by people with severe motor neuron diseases such as Amyotrophic Lateral Sclerosis (ALS), even when they cannot move any part of their body - a so-called Locked-In Syndrome. Some of the BCI-methods, however, require substantial training and set-up, since caps and gel-based electrodes can be difficult to apply. Also, information transfer with BCI systems is quite low; for example [9] reported a throughput of 0.05-1.44 bits/s.
Speech interfaces are popular for mobile phones and smart speakers. They have also been suggested for telerobot navigation. Unfortunately, some people with MD have impaired speech. Low-level direction guiding by speech can be cumbersome as a user may have to repeat the same command multiple times (e.g. left, left, left), or rely on precise timing [33].
Gaze interaction has been used by people with MD for more than twenty years, mainly for communication and pointer-input [16]. Throughput values for gaze have been reported to be 2.55 bits/s [17]. Gaze-controlled navigation of robots was first studied by Tall et al. in 2009 [28]. Since then, gaze tracking technology has become inexpensive and accurate. However, it remains a challenge to design interfaces that only respond to intentions, and not just eye movements. Head input has shown to be more precise and less erroneous than gaze [17, 6], but it may also be tiring to use for a longer time.
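Throughput figures like these are commonly derived from Fitts'-law-style pointing trials, as the index of difficulty divided by movement time. A minimal sketch under that assumption (the example numbers are illustrative, not taken from the cited studies):

```python
import math

def throughput(distance, width, movement_time):
    """Fitts'-law throughput in bits/s, using the Shannon
    formulation of the index of difficulty:
    ID = log2(D / W + 1), throughput = ID / MT."""
    index_of_difficulty = math.log2(distance / width + 1)
    return index_of_difficulty / movement_time

# Illustrative: a 350 px movement to a 50 px target selected in 1.2 s
tp = throughput(distance=350, width=50, movement_time=1.2)
```

In practice, distances, widths, and times are averaged over many trials and target configurations before computing a single throughput value for an input method.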
Besides a traditional set-up with monitors, head-mounted displays (HMD) are now being tested for telerobots with a 360-degree camera mounted on the telerobot [5, 39], with gaze or head movements changing the field of view [10, 5]. Gaze tracking sensors are built into some recent HMDs. The increased availability of eye-tracking equipment makes it feasible to utilize this technology for explicit control tasks with robotics [35]. Hence, we have enabled gaze control of telepresence robots with an HMD in previous studies [39, 38]. The key findings were that after training, able-bodied participants could effectively navigate telerobots by gaze. VR simulations also proved to be a safe and efficient training tool for gaze navigation [37]. However, like other studies, e.g. [20], we did not involve people with MD. So it is important now to engage people with MD in the design of gaze-controlled telepresence robots and explore future use-cases with them. We initiated this process with five participants in a field test presented in this paper.

CHI 2020 Late-Breaking Work, CHI 2020, April 25-30, 2020, Honolulu, HI, USA
Gaze-controlled Telepresence: Accessibility, Training and Evaluation 157
Figure 3: A participant wearing an HMD with a built-in eye tracker.
Figure 4: A participant using the Double robot in front of a screen and the Tobii eye trackers. In this case the experimenter was standing behind the user and emulating his gaze movements by use of a joystick, applying a Wizard of Oz technique.
Field study
Our study applied experience prototyping [3] and the Wizard of Oz technique [24]. The goal was to observe the use and effectiveness of various interfaces and control methods with our target users, rather than to compare a range of complete robotic systems. We therefore used the Wizard of Oz technique for some of our scenarios because of its advantages in early explorations, where complex system intelligence can be delivered by a human assistant instead of being fully implemented. Also, without an open API to the Double robots, it would be a demanding task to establish direct gaze control.
Following principles suggested by [25] for conducting accessibility research, our study plan and consent form were sent to the caregivers before the study. Observations and conversations during the study were noted by the experimenter, supported by records of the participants' spontaneous verbalisation, system log files, screen recordings, and room video recordings for post-hoc analysis.
Participants
Five people (aged 21-55 years; 3 men and 2 women) with different levels of motor impairments participated. They live in a care-home together with 90 people who use wheelchairs for daily mobility. Some of the residents use gaze interaction for communication and some use speech commands for smart-home control. All of our five participants have impaired manual activity, for instance limited gripping, fingering or holding ability, due to cerebellum disorders or cerebral palsy. None of them have cognitive impairments or literacy difficulties, but one of them has impaired speech functions. Three of them have experienced VR using an HMD and one has experienced a telerobot.
Procedure
Participants conducted the study one by one. After greetings and a brief introduction, we collected demographic information, including experience with gaze control, VR, telerobots and wheelchair control. For the one individual who is non-verbal, this information was provided by a caregiver. In particular, participants were informed that if they felt uncomfortable at any time they should stop immediately. Then we showed a demo video from one of our earlier telepresence robot projects [5], with a person controlling a telerobot by his gaze while lying in bed. Two different telerobots (see Fig. 2) were standing next to the table we were sitting at.
Initially, we conducted an interview focusing on their expectations and visions of possible usage of telerobots in their daily life activities. Then followed the two experience prototyping sessions described below. We presented a range of options to them, including types of displays (HMD versus screen), independence levels (independent versus assisted by others), and control methods (gaze, speech or hand).
Gaze navigation
In the first experience prototyping session, we used our gaze-controlled robotic telepresence system (see Fig. 5). An HMD (FOVE) was connected to a computer running Unity (see Fig. 3). We first gave the participants the possibility to train navigation of a virtual robot in VR by conducting six trials, similar to the procedure used in [37]. When using the VR simulator for training of gaze control, the simulated "live" video stream was generated by the VR engine. The point of view and view height were similar to those of the real telerobot, and so was the virtual robot's maneuverability. Both the virtual and the real robot (see next section) were navigated in a room of 4 x 5 m.
Then they got to try a real telerobot. A Padbot with a 360-degree RICOH THETA S camera was controlled via a wireless network through a Raspberry Pi (see Fig. 2). We introduced two ways of driving the telerobots, namely one mode termed "invisible control" [38] and the other termed "way-point control" (see Fig. 6). The way-point control was similar to the control mode found in the Double 3 model, where users click on the floor to point at the next location for the robot to drive to. In our case, way-points were marked by dwelling with gaze at the floor for a set dwell time.
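As a rough sketch of how such dwell-based way-point marking can work, the controller below accumulates the time gaze stays near a candidate floor point and commits it as a way-point once a threshold is exceeded. The class name, dwell time and jitter radius are hypothetical illustrations, not the parameters used in the study:

```python
import math

class DwellWaypointSelector:
    """Commit a floor point as a way-point once gaze has dwelt on it.

    Illustrative sketch; the threshold and radius are made-up values.
    """

    def __init__(self, dwell_time=1.0, radius=0.15):
        self.dwell_time = dwell_time   # seconds of steady gaze required
        self.radius = radius           # metres of tolerated gaze jitter
        self.anchor = None             # current candidate floor point
        self.elapsed = 0.0

    def update(self, gaze_point, dt):
        """Feed one gaze sample (x, y on the floor plane) and the time
        since the last sample; returns a committed way-point or None."""
        if self.anchor is None or self._dist(gaze_point, self.anchor) > self.radius:
            # Gaze moved away: restart the dwell at the new point.
            self.anchor = gaze_point
            self.elapsed = 0.0
            return None
        self.elapsed += dt
        if self.elapsed >= self.dwell_time:
            waypoint, self.anchor, self.elapsed = self.anchor, None, 0.0
            return waypoint
        return None

    @staticmethod
    def _dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
```

In use, `update` would be called once per gaze-tracker frame; resetting the timer whenever gaze leaves the tolerance radius is one simple way to avoid accidental "Midas touch" selections.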
Figure 5: System architecture of our gaze-controlled telepresence robots with an HMD: a gaze control UI, a VR HMD with eye trackers, telerobots, and a 360° video camera.
Participants were asked to use their gaze to navigate the telerobot, wearing a FOVE HMD with built-in eye tracking. They were instructed to drive around a large table standing in the middle of the room and say hello to a person they would meet while driving around it. They did this twice: once using the "invisible control", which is a direct drive-where-you-look steering principle, and once using the way-point method (see Fig. 6). Afterwards, they answered questions about how confident they felt using gaze control and about their preference for one of the two gaze control methods.
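A minimal sketch of a drive-where-you-look mapping, assuming a normalised gaze position in the video frame; the gains and dead zone are made-up illustration values, not those of our system:

```python
def gaze_to_velocity(gaze_x, gaze_y, max_speed=0.5, max_turn=1.0):
    """Map a normalised gaze point to (forward_speed, turn_rate).

    gaze_x, gaze_y: gaze position in the video frame, each in [-1, 1],
    with (0, 0) at the centre. Looking higher in the frame (further
    ahead) drives faster; horizontal offset steers. All gains are
    illustrative assumptions.
    """
    dead_zone = 0.1  # ignore tiny horizontal offsets to reduce jitter
    forward = max_speed * max(gaze_y, 0.0)          # no reverse by gaze
    turn = max_turn * gaze_x if abs(gaze_x) > dead_zone else 0.0
    return forward, turn
```

A dead zone like this is one common way to keep small fixational eye movements from producing steering commands; real systems would also need smoothing and a deliberate stop gesture.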
Wizard of Oz navigation by speech and gaze
In the second experience prototyping session, we presented a Double robot (see Fig. 2) running the Double web application in a Chrome browser on a laptop (HP EliteBook with a 15" monitor) connected to the robot via a wireless network.
Two levels of assistance were experienced by the participants. First they could ask a helper to navigate the robot from the corner of the room to their position. Then we gave the participants an open instruction: "Your next task is to move the robot outside the door. You can just tell the robot what you would like it to do..." This was done to let them imagine the level of intelligence they would expect for this type of interaction. In response, they gave different kinds of commands, for instance, simply "move outside"; or "find the door", and then "go outside"; or just simple direction commands like "left, left, stop". Their speech commands were actually executed by the experimenter standing behind them with a joystick. Participants were not informed that their speech commands were manually performed by the experimenter.
Then they were asked to use gaze control. An eye tracker (Tobii Eye Tracker 4C) was used for tracking the user's gaze point, which was shown on top of the live video stream from the Double robot (see Fig. 4). Again, steering of the Double was performed in a Wizard of Oz set-up by an experimenter standing behind the participant, manually following the user's gaze point with a joystick (Xbox 360 controller). The Double robot was driven from the experiment room through a corridor to the canteen, passing the reception of the care-home, a total distance of around 40 m.
We finally asked the 4 participants who were able to speak to try driving the telerobot by simple speech commands. Four types of commands were possible: left, right, stop and go. The experimenter executed their commands with his joystick behind their back. Three participants also tried hand control in the end.
After the sessions, we talked about their experiences and asked questions like: What would you like to use a telerobot for? How would you like to control the robot? How would you like to communicate with people you meet when driving a telerobot?
Observations
User expectations
All of our participants assumed the robots would be easy to use before they had tried them, and their initial expectations were that the telerobots would be "fun like a computer game", "exciting to drive just with the eyes", "a new way of being me". One participant was a bit nervous about using the robots, and asked a helper to hold her hand during the test.
At this early stage, it was difficult for most of them to imagine a use case that they could relate to. Nevertheless, two of them envisioned that they could use telerobots to go shopping, talking with a caregiver or a shop assistant via the robot, or just looking at the price labels themselves. Once the participants had experienced the telerobots, they got a much clearer vision of potential usage, mostly focusing on social interaction and possible inclusion in society. For instance, one participant envisioned that he could get a job in a warehouse driving robot trucks remotely or cleaning floors with a floor cleaning machine. Another participant expressed a wish for the telerobot to have an arm that could do things. Similar applications have been addressed in previous research papers, for instance the use of telepresence robots for shopping [36] and for remote work [7].
Figure 6: The two types of gaze control UI: invisible control (top), where the user drives in the direction of sight (indicated by a purple circle), and way-point control (bottom), where the user marks the next way-point by a dwell click on the floor (red circle).
Control methods
Autonomous telepresence robots [4] might seem like a good solution for our target users. However, fully autonomous systems may increase workload if users lack trust in the automated system, especially when the underlying mechanism of automation is not clear to them [19]. None of our users preferred to have the telerobot driven for them, either by other people or by a fully autonomous "intelligent" system. This is in line with previous observations [14] that our target users prefer to retain control authority. Therefore, semi-autonomous robots seem to be a more viable solution, using intelligent systems to assist in problematic situations only, and sensors to warn when obstacles are detected. However, this is a very complex issue that requires more research addressing differences in user needs, information transfer rates, differences in robot design, and differences in control principles, to name some of the factors that may impact performance and user experience.
When we asked our participants about their feelings of independence using the categories suggested by [22] (in our terms: being able to control robots on your own, being able to maintain personal mobility in a remote place, and being confident doing so), all of the participants stated that they had the same feeling of independence with the different control methods (gaze, speech and hand). Since both gaze and speech were sometimes controlled by another person in the two Wizard of Oz sessions, this is not conclusive. However, when asked which input they liked most, all of the participants except one preferred gaze. In particular, the individual who was unable to give speech commands seemed very excited when using gaze interaction; she made a lot of gestures with a smile on her face. One participant was not able to get a sufficiently good gaze calibration with the HMD. The other four participants quickly gained confidence with this input method when driving in the VR simulator.
We had great expectations for the way-point control method. However, most of our participants found it very difficult to use, and one of them did not finish driving around the table. Two of them mentioned that setting way-points forced them to look too far down and prevented them from attending to people and looking around in the room, cf. Fig. 7. Hence, in future designs, we will focus on better ways to mark a way-point on the floor by gaze.
Hardware
Comparing the Padbot (without a face monitor) and the Double robot, some of the participants stated they preferred that "...people could see you the same way you can see them", and that the Double is a robot "...where I could be with my family and they could see me and I could see and talk to them". Interestingly, none of them commented on the option of having the 360-degree view provided by the Padbot, and only rarely did they make use of it. Instead of turning their head when wearing an HMD, they simply turned the robot to change their field of view.
HMDs hold potential advantages for telerobot interaction, for instance improved immersion and natural head control of the field of view. However, two participants preferred to use the screen rather than the HMD. One of them explained that he did not like the HMD because it would prevent him from being completely independent, since he would always need assistance to strap it on and adjust its position. Another participant could not achieve a sufficiently good gaze calibration with the HMD, but he had no problems getting calibrated with a screen-based eye tracker. Another important reason to consider a screen-based display instead of an HMD is that it allows users to show their face on the remote screen and thus engage in facial communication. This is not possible when the face is covered by an HMD. Some of these problems may be due to current limitations of HMDs, and hopefully better solutions (e.g. smart glasses) will become available in the near future.
Figure 7: A participant using the way-point control to navigate a telerobot. He commented that this forced him to look too far down and prevented him from attending to people and seeing the environment.
Presence and Experience
We frequently asked the drivers "where are you now?", and the answers were always their remote position (e.g. in the corridor, reception, or canteen), not their actual physical location (i.e. in the experiment room). This indicates a strong sense of telepresence. Our final, and maybe strongest, impression from this field study was the excitement we observed among all the participants and among their fellow residents and staff in the care-home (see Fig. 1). A staff member even hugged the telerobot when she saw who was driving it. The participant who could not speak laughed and used her hand gestures when communicating with a friend she met on her way. Overall, the participants' assessment of their experiences with telerobots was very positive, with typical statements like "...super fun, easy to use", and even "proud of myself".
Conclusion
Based on new insights provided by our target users, we are confident that several additional use-cases may be uncovered, and the accessibility of human-telerobot interaction may be improved, if participants get an opportunity to familiarize themselves with this technology and become involved in the design process. Their engagement fully confirmed Liu's point [15] that providing ways to give, and not simply receive, help is an important way to support social interaction and integration.
Acknowledgements
Thanks to the people at Jonstrupvang care-home who participated in this study. The research has been supported by the China Scholarship Council and the Bevica Foundation. Thanks to Astrid Kofod Trudslev and Nils David Rasamoel for assisting with the study.
REFERENCES
[1] Patrik Andersson, Josien PW Pluim, Max A Viergever, and Nick F Ramsey. 2013. Navigation of a telepresence robot via covert visuospatial attention and real-time fMRI. Brain Topography 26, 1 (2013), 177-185.
[2] Gloria Beraldo, Morris Antonello, Andrea Cimolato, Emanuele Menegatti, and Luca Tonin. 2018. Brain-Computer Interface meets ROS: A robotic approach to mentally drive telepresence robots. In 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 1-6.
[3] Marion Buchenau and Jane Fulton Suri. 2000. Experience prototyping. In Proceedings of the 3rd Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques. 424-433.
[4] Akansel Cosgun, Dinei A Florencio, and Henrik I Christensen. 2013. Autonomous person following for telepresence robots. In 2013 IEEE International Conference on Robotics and Automation. IEEE, 4335-4342.
[5] John Paulin Hansen, Alexandre Alapetite, Martin Thomsen, Zhongyu Wang, Katsumi Minakata, and Guangtao Zhang. 2018. Head and gaze control of a telepresence robot with an HMD. In Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications. Article 82.
[6] John Paulin Hansen, Kristian Tørning, Anders Sewerin Johansen, Kenji Itoh, and Hirotaka Aoki. 2004. Gaze typing compared with input by head and hand. In Proceedings of the 2004 Symposium on Eye Tracking Research & Applications. 131-138.
[7] John Paulin Hansen, Astrid Kofod Trudslev, Sara Amdi Harild, Alexandre Alapetite, and Katsumi Minakata. 2019. Providing access to VR through a wheelchair. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems. 1-8.
[8] Yasamin Heshmat, Brennan Jones, Xiaoxuan Xiong, Carman Neustaedter, Anthony Tang, Bernhard E Riecke, and Lillian Yang. 2018. Geocaching with a beam: Shared outdoor activities through a telepresence robot with 360 degree viewing. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1-13.
[9] Elisa Mira Holz, Johannes Höhne, Pit Staiger-Sälzer, Michael Tangermann, and Andrea Kübler. 2013. Brain-computer interface controlled gaming: Evaluation of usability by severely motor restricted end-users. Artificial Intelligence in Medicine 59, 2 (2013), 111-120.
[10] Dhanraj Jadhav, Parth Shah, and Henil Shah. 2018. A Study to design VI classrooms using Virtual Reality aided Telepresence. In 2018 IEEE 18th International Conference on Advanced Learning Technologies. IEEE, 319-321.
[11] Annica Kristoffersson, Silvia Coradeschi, and Amy Loutfi. 2013. A review of mobile robotic telepresence. Advances in Human-Computer Interaction 2013 (2013).
[12] Min Kyung Lee and Leila Takayama. 2011. "Now, I have a body": uses and social norms for mobile remote presence in the workplace. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 33-42.
[13] Robert Leeb, Serafeim Perdikis, Luca Tonin, Andrea Biasiucci, Michele Tavella, Marco Creatura, Alberto Molina, Abdul Al-Khodairy, Tom Carlson, and José dR Millán. 2013. Transferring brain-computer interfaces beyond the laboratory: successful application control for motor-disabled users. Artificial Intelligence in Medicine 59, 2 (2013), 121-132.
[14] Robert Leeb, Luca Tonin, Martin Rohm, Lorenzo Desideri, Tom Carlson, and Jose del R Millan. 2015. Towards independence: a BCI telepresence robot for people with severe motor disabilities. Proc. IEEE 103, 6 (2015), 969-982.
[15] Peng Liu, Xianghua Ding, and Ning Gu. 2016. "Helping Others Makes Me Happy": Social Interaction and Integration of People with Disabilities. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing. 1596-1608.
[16] Päivi Majaranta and Andreas Bulling. 2014. Eye tracking and eye-based human-computer interaction. In Advances in Physiological Computing. Springer, London, 39-65.
[17] Katsumi Minakata, John Paulin Hansen, I Scott MacKenzie, Per Bækgaard, and Vijay Rajanna. 2019. Pointing by gaze, head, and foot in a head-mounted display. In Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications. Article 50.
[18] Paul D Nisbet. 2002. Who's intelligent? Wheelchair, driver or both?. In Proceedings of the International Conference on Control Applications, Vol. 2. IEEE, 760-765.
[19] Raja Parasuraman, Thomas B Sheridan, and Christopher D Wickens. 2008. Situation awareness, mental workload, and trust in automation: Viable, empirically supported cognitive engineering constructs. Journal of Cognitive Engineering and Decision Making 2, 2 (2008), 140-160.
[20] Alexey Petrushin, Giacinto Barresi, and Leonardo S Mattos. 2018a. Gaze-controlled Laser Pointer Platform for People with Severe Motor Impairments: Preliminary Test in Telepresence. In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 1813-1816.
[21] Alexey Petrushin, Jacopo Tessadori, Giacinto Barresi, and Leonardo S Mattos. 2018b. Effect of a Click-Like Feedback on Motor Imagery in EEG-BCI and Eye-Tracking Hybrid Control for Telepresence. In 2018 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM). IEEE, 628-633.
[22] Parvaneh Rabiee. 2013. Exploring the relationships between choice and independence: experiences of disabled and older people. British Journal of Social Work 43, 5 (2013), 872-888.
[23] Irene Rae and Carman Neustaedter. 2017. Robotic telepresence at scale. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 313-324.
[24] Laurel D Riek. 2012. Wizard of Oz studies in HRI: a systematic review and new reporting guidelines. Journal of Human-Robot Interaction 1, 1 (2012), 119-136.
[25] Dianne Rios, Susan Magasi, Catherine Novak, and Mark Harniss. 2016. Conducting accessible research: including people with disabilities in public health, epidemiological, and outcomes studies. American Journal of Public Health 106, 12 (2016), 2137-2144.
[26] Thomas B Sheridan. 1989. Telerobotics. Automatica 25, 4 (1989), 487-507.
[27] Rachel E Stuck, Jordan Q Hartley, Tracy L Mitzner, Jenay M Beer, and Wendy A Rogers. 2017. Understanding attitudes of adults aging with mobility impairments toward telepresence robots. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. 293-294.
[28] Martin Tall, Alexandre Alapetite, Javier San Agustin, Henrik HT Skovsgaard, John Paulin Hansen, Dan Witzner Hansen, and Emilie Møllenbach. 2009. Gaze-controlled driving. In CHI'09 Extended Abstracts on Human Factors in Computing Systems. ACM, 4387-4392.
[29] Fumihide Tanaka, Toshimitsu Takahashi, Shizuko Matsuzoe, Nao Tazawa, and Masahiko Morita. 2014. Telepresence robot helps children in communicating with teachers who speak a different language. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction. 399-406.
[30] Luca Tonin, Tom Carlson, Robert Leeb, and José del R Millán. 2011. Brain-controlled telepresence robot by motor-disabled people. In 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 4227-4230.
[31] Luca Tonin, Robert Leeb, Michele Tavella, Serafeim Perdikis, and José del R Millán. 2010. The role of shared-control in BCI-based telepresence. In 2010 IEEE International Conference on Systems, Man and Cybernetics. IEEE, 1462-1466.
[32] Katherine M Tsui, Munjal Desai, Holly A Yanco, and Chris Uhlik. 2011. Exploring use cases for telepresence robots. In 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 11-18.
[33] Katherine M Tsui, Kelsey Flynn, Amelia McHugh, Holly A Yanco, and David Kontak. 2013. Designing speech-based interfaces for telepresence robots for people with disabilities. In 2013 IEEE 13th International Conference on Rehabilitation Robotics (ICORR). IEEE, 1-8.
[34] Katherine M Tsui, Eric McCann, Amelia McHugh, Mikhail Medvedev, Holly A Yanco, David Kontak, and Jill L Drury. 2014. Towards designing telepresence robot navigation for people with disabilities. International Journal of Intelligent Computing and Cybernetics 7, 3 (2014), 307.
[35] Ginger S Watson, Yiannis E Papelis, and Katheryn C Hicks. 2016. Simulation-based environment for the eye-tracking control of tele-operated mobile robots. In Proceedings of the Modeling and Simulation of Complexity in Intelligent, Adaptive and Autonomous Systems 2016 (MSCIAAS 2016) and Space Simulation for Planetary Space Exploration (SPACE 2016). 1-7.
[36] Lillian Yang, Carman Neustaedter, and Thecla Schiphorst. 2017. Communicating through a telepresence robot: A study of long distance relationships. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. 3027-3033.
[37] Guangtao Zhang and John Paulin Hansen. 2019. A Virtual Reality Simulator for Training Gaze Control of Wheeled Tele-Robots. In 25th ACM Symposium on Virtual Reality Software and Technology. Article 49.
[38] Guangtao Zhang, John Paulin Hansen, and Katsumi Minakata. 2019a. Hand- and gaze-control of telepresence robots. In Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications. Article 70.
[39] Guangtao Zhang, John Paulin Hansen, Katsumi Minakata, Alexandre Alapetite, and Zhongyu Wang. 2019b. Eye-gaze-controlled telepresence robots for people with motor disabilities. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction. IEEE, 574-575.
B.8 Exploring Eye-Gaze Wheelchair Control
Authors: Jacopo Mattia de Araujo, Guangtao Zhang, John Paulin Hansen, and Sadasivan Puthusserypady
In Proceedings of the ACM Symposium on Eye Tracking Research & Applications (ETRA 2020)
DOI: 10.1145/3379157.3388933
Exploring Eye-Gaze Wheelchair Control

Jacopo de Araujo, Technical University of Denmark, Kgs. Lyngby, Denmark
John Paulin Hansen, Technical University of Denmark, Kgs. Lyngby, Denmark, [email protected]
Guangtao Zhang, Technical University of Denmark, Kgs. Lyngby, Denmark, [email protected]
Sadasivan Puthusserypady, Technical University of Denmark, Kgs. Lyngby, Denmark, [email protected]
ABSTRACT
Eye-gaze may potentially be used for steering wheelchairs or robots and thereby support independence in choosing where to move. This paper investigates the feasibility of gaze-controlled interfaces. We present an experiment with wheelchair control in a simulated, virtual reality (VR) driving experiment and a field study with five people using wheelchairs. In the VR experiment, three control interfaces were tested by 18 able-bodied subjects: (i) dwell buttons for direction commands on an overlay display, (ii) steering by continuous gaze point assessment on the ground plane in front of the driver, and (iii) waypoint navigation to targets placed on the ground plane. Results indicate that the waypoint method had superior performance, and it was also most preferred by the users, closely followed by the continuous-control interface. However, the field study revealed that our wheelchair users felt uncomfortable and excluded when they had to look down at the floor to steer a vehicle. Hence, our VR testing had a simplified representation of the steering task and ignored an important part of the use-context. In the discussion, we suggest potential improvements of simulation-based design of wheelchair gaze control interfaces.
CCS CONCEPTS
• Human-centered computing → Usability testing; Interaction design; User interface design; Graphical user interfaces; Virtual reality; • Computing methodologies → Motion path planning.
KEYWORDS
gaze interaction, eye-tracking, gaze control, vehicle control, wheelchair, virtual reality, simulation, human-robot interaction, locked-in syndrome, amyotrophic lateral sclerosis, ALS/MND
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
ETRA '20 Adjunct, June 2-5, 2020, Stuttgart, Germany
© 2020 Association for Computing Machinery.
ACM ISBN 978-1-4503-7135-3/20/06. $15.00
https://doi.org/10.1145/3379157.3388933
ACM Reference Format:
Jacopo de Araujo, John Paulin Hansen, Guangtao Zhang, and Sadasivan Puthusserypady. 2020. Exploring Eye-Gaze Wheelchair Control. In Symposium on Eye Tracking Research and Applications (ETRA '20 Adjunct), June 2-5, 2020, Stuttgart, Germany. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3379157.3388933
1 INTRODUCTION
People with severe damage to the central nervous system or motor neuron diseases such as amyotrophic lateral sclerosis (ALS) may lose motor control and in rare and extreme cases get a so-called locked-in syndrome (LIS) [Kiernan et al. 2011]. LIS patients are limited to functional use of their brain and eyes only. Gaze interaction and brain-computer interfaces (BCI) have both been suggested as wheelchair-control methods for LIS patients and people with other complex motor challenges [Arai and Mardiyanto 2011; Barea et al. 2003; Bartolein et al. 2008; Eid et al. 2016; Elliott et al. 2019; Leeb et al. 2007; Lin et al. 2006; Matsumotot et al. 2001]. Gaze interaction is less expensive than BCI, with data rates orders of magnitude faster. It is also more operable and convenient to use for vehicle control than BCI [et Al 2019; Sebastian Halder and Kansaku 2018; William W. Abbott 2011].
Gaze interaction is commonly used for communication by LIS patients [Spataro et al. 2014], as they cannot use their voice and hands. The main challenge for safe and efficient gaze control interfaces is unintentional selection. This gives rise to the problem of distinguishing intended control actions from non-intended ones, a problem commonly referred to as the Midas touch dilemma [Hachem A. Lamti and Hugel 2019; Jacob 1991]. Advances in sensor technology and artificial intelligence (AI) may partly overcome this, for instance by intelligent collision avoidance [Eid et al. 2016; Leaman and La 2017], but it is an open issue how to design the user interface for semi-autonomous gaze-controlled vehicles.
Obstacle courses in simulated environments are frequently usedfor testing the usability of alternative wheelchair control. VR hasalso been applied for training and assessment of wheelchair controlmethods [Buxbaum et al. 2008; Harrison et al. 2000, 2002]. Designof gaze interfaces for navigating in VR has been studied extensively(e.g. [Stellmach and Dachselt 2012; Tanriverdi and Jacob 2000]).Gaze control of wheelchairs has been evaluated in VR [Ktena et al.2015] as well as in real physical environments [Matsumotot et al.2001; Wästlund et al. 2010]. Gaze-controlled robots has also beenstudied in VR (e.g. [Watson et al. 2016; Zhang and Hansen 2019])
166 Gazecontrolled Telepresence: Accessibility, Training and Evaluation
ETRA ’20 Adjunct, June 2–5, 2020, Stuttgart, Germany Jacopo et al.
Figure 1: Experiment setup with the view from the head-mounted gaze-tracking display shown on the monitor. Theparticipant turns to the right by dwelling at an overlay key(indicated by a red progress circle)
Figure 2: Example of a VR obstacle course from the experiment. Pictures on the walls are used for measures of situational awareness, and the red cylinder marks the end-goal.
and in real environments (e.g., [Leaman and La 2017; Tall et al. 2009]).
Virtual simulation offers advantages compared to a full-scale physical test environment, namely low cost, easy configurability, low risk of damage, high repeatability, and portability [Ktena et al. 2015; Talone et al. 2013]. However, it may also ignore important parts of the driving situation. We therefore decided to investigate the feasibility of VR for the preliminary testing of gaze-controlled wheelchair interfaces. Since VR is a model that may simplify parts of the steering task and ignore some of the use context compared to a real-life situation, we conducted a follow-up field study with five people with motor disabilities, i.e., our target group. The main purpose was to improve the fidelity of simulation-based development of gaze control of wheelchairs and robots.
2 VR WHEELCHAIR GAZE INTERFACE
The wheelchair simulation developed for our study places the subject in a virtual wheelchair while equipped with a head-mounted display (HMD) that has gaze tracking built into it. Figure 1 shows the experimental setup with a user wearing the headset as well as the VR world shown on the monitor in front of him. Figure 2 shows a bird's-eye view of the obstacle courses. The images on the walls were used to measure situational awareness. The red cylinder marks the goal. The Unity 3D code for our experiment is available online¹.
Three different gaze-controlled interfaces were tested in ourstudy.
2.1 Overlay dwell-button interface
The overlay interface has nine buttons yielding 16 commands with a 500 ms gaze-dwell activation. Feedback on the dwell-time activation is provided by a circular progress icon. Buttons are used for selecting combinations of motor torques and steering angles, which are represented by arrow figures on the buttons. The forward button has three intensities of motor torque, and the left/right buttons have three levels of steering angles. Once a control action is executed, it will continue at this torque until there is a reset on the center button (equivalent to a neutral gear), or until it gets overwritten by another command. The brake button is a big button in the lower hemisphere and is marked with a red contour. When the vehicle stands still, the brake button will make the vehicle go in reverse. Visual inspection of the environment can only be done outside the overlay or with short fixations below the dwell-time threshold (see Figure 3).
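The dwell-activation logic described above can be sketched as follows. This is an illustrative reconstruction, not the study's actual Unity implementation; the `DwellButton` class, its method names, and the update cadence are our own assumptions.

```python
# Sketch of 500 ms dwell activation with progress feedback (assumed API,
# not the authors' Unity code).

DWELL_TIME = 0.5  # seconds of continuous gaze required to activate

class DwellButton:
    def __init__(self, name):
        self.name = name
        self.gaze_time = 0.0  # accumulated continuous gaze duration

    def update(self, gazed_at, dt):
        """Advance the dwell timer by dt seconds; return True on activation."""
        if not gazed_at:
            # Fixations shorter than the threshold reset the timer, which is
            # what allows visual inspection without triggering a command.
            self.gaze_time = 0.0
            return False
        self.gaze_time += dt
        if self.gaze_time >= DWELL_TIME:
            self.gaze_time = 0.0  # re-arm after firing
            return True
        return False

    @property
    def progress(self):
        """Fraction of the dwell completed, for the circular progress icon."""
        return min(self.gaze_time / DWELL_TIME, 1.0)
```

Calling `update` once per frame with the frame time and a hit-test result is enough to drive both the activation and the progress circle.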
Figure 3: Overlay control interface with referring red arrows and explanatory text boxes. (For illustrative purposes, the original transparent gray borders of the GUI have been highlighted in yellow.)
2.2 Continuous-control interface
With this interface, steering is done by measuring the gaze point's intersection with the ground plane in relation to the driver's current position. This vector's depth and horizontal values are mapped through a linear and an exponential function to output motor torque

¹https://github.com/wheelchairdrivingsimulation/WheelchairDrivingSimulation
Exploring Eye-Gaze Wheelchair Control ETRA ’20 Adjunct, June 2–5, 2020, Stuttgart, Germany
and a steering angle, respectively. The mapping functions were defined by upper and lower thresholds as well as slope, which were then fine-tuned through a pilot study with three subjects. Furthermore, a graphical user interface (GUI) area for braking and reversing (with rotation) is provided in the upper hemisphere; the brake button makes the wheelchair reverse provided it is already standing still. Below the brake and reverse buttons, there is an orientation space that resets any control action. This space can be used for orientation purposes and to stop the continuous control. All four areas are marked with a box and transparent text (see Figure 4).
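The depth-to-torque and horizontal-to-steering mappings can be sketched like this. All thresholds, gains, and maxima below are hypothetical placeholders; the paper's actual values were tuned in the pilot study and are not reported.

```python
# Sketch of the continuous-control mappings: linear for torque,
# exponential for steering. Numeric constants are illustrative only.
import math

DEPTH_MIN, DEPTH_MAX = 0.5, 5.0  # metres; gaze closer than min gives no torque
TORQUE_MAX = 1.0                 # normalised motor torque
STEER_MAX = 30.0                 # degrees
STEER_GAIN = 2.0                 # exponent of the steering curve

def torque_from_depth(depth):
    """Linear mapping of gaze-point distance ahead to motor torque."""
    if depth <= DEPTH_MIN:
        return 0.0
    frac = min((depth - DEPTH_MIN) / (DEPTH_MAX - DEPTH_MIN), 1.0)
    return frac * TORQUE_MAX

def steer_from_horizontal(x, x_max=2.0):
    """Exponential mapping of lateral gaze offset to steering angle:
    small offsets steer gently, large offsets steer sharply."""
    frac = min(abs(x) / x_max, 1.0)
    return math.copysign(frac ** STEER_GAIN * STEER_MAX, x)
```

The exponential steering curve is one plausible choice for making small lateral gaze offsets forgiving while still permitting sharp turns at the edge of the usable range.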
Figure 4: Continuous-control interface with referring red arrows and text boxes. (For illustrative purposes, the original transparent gray borders of the GUI have been highlighted in yellow.)
2.3 Semi-autonomous waypoint interface
The interface is semi-autonomous in that it utilizes a navigation and path-finding module included in the VR software to direct the motion of the wheelchair along the fastest route to a waypoint. Waypoints are set on the ground plane by a 750 ms dwell activation with a progress circle for feedback. This interface has an overlay GUI in the upper hemisphere for rotation and braking. As in the previous case, the brake button reverses if the wheelchair is standing still. Orientation can be done by avoiding looking at the interactive ground plane or by gazing for less than the dwell time (see Figure 5).
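The study relies on Unity's built-in navigation module for the waypoint driving. As a language-neutral stand-in for what such a module does, the sketch below finds the shortest obstacle-free route on a 2-D grid with breadth-first search; the grid representation and function name are illustrative assumptions, not the Unity NavMesh API.

```python
# Simplified stand-in for a path-finding module: BFS shortest path on a
# grid where '#' marks an obstacle (illustrative, not Unity's NavMesh).
from collections import deque

def shortest_path(grid, start, goal):
    """grid: list of equal-length strings. Returns the list of (row, col)
    cells from start to goal inclusive, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set doubling as parent links
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path, node = [], goal
            while node is not None:   # walk parent links back to start
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None
```

A waypoint controller would re-run such a query each time the user dwells on a new ground-plane target and then follow the returned route.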
3 METHODS
3.1 Participants
The experiments were conducted with 18 subjects, of which 13 were male. The mean age was 31 years ± 11.5. The participants were recruited from our university and from our personal network. The subjects had little or no experience with VR, and none had experience with gaze interaction. Two subjects did not complete all the control interface conditions, and their data were excluded from the analysis. Four of the subjects used their own glasses inside the HMD, and no issues with this were reported.
Figure 5: Waypoint control interface with referring red arrows and text boxes. (For illustrative purposes, the original transparent gray borders of the GUI have been highlighted in yellow.)
3.2 Apparatus
A FOVE virtual reality head-mounted display was used for the experiments. The HMD provides inertial measurement unit (IMU) orientation tracking of the field of view (FOV) along with IR-based binocular gaze tracking. The eye-tracking sampling rate is 120 Hz, and the manufacturer indicates a tracking accuracy of < 1 degree of visual angle. The WQHD OLED display has a resolution of 2560 x 1440 with a frame rate of 70 fps. The FOV spans up to 100 degrees horizontally. The environment was developed in Unity (ver. 0.16.0), a real-time development platform for 2D, 3D, VR, and AR. The simulation was run on an NVIDIA GTX 1080 graphics card.
3.3 Procedure
Upon arrival, participants were informed of the purpose and the procedure of the experiments and then asked to sign an informed consent form. They were then given an introduction to the control interfaces and the test courses, supported by a short video tutorial. The participants were informed of the main task of the test courses, specifically to reach the marked end-goal of the course as fast and with as few collisions as possible. Additionally, they were informed of a secondary task in test courses 4 and 6, namely to remember the images appearing on the walls. This was done to measure their situational awareness while driving.
Before starting the experiments, the participants completed a questionnaire and were seated comfortably. The HMD was attached and gaze tracking was calibrated. The order of control interfaces (i.e., overlay, continuous, and waypoint) as well as the course conditions (difficulty levels 1-3) were determined by the experimental design (see Section 3.4). One training session was given for each interface condition prior to running the five test courses used for analysis. The participants were tasked with completing all difficulty levels with each interface condition before testing a new interface. Each course was marked completed when a participant parked at the end-goal (the large red cylinder). After completion of each of the sessions, the experimenter noted the number of images that the participant remembered correctly. In between the testing of each
interface, the participants were encouraged to take a 10-minute break. After testing all interfaces, the participants completed a post-experimental questionnaire, ranking the interfaces in terms of intuitiveness, control, and reliability. They were also asked which interface they preferred. The full experiment took between one and a half and two hours per participant.
3.4 Design
The experiment used a 3 x 3 within-subject design with three control interfaces (overlay, continuous, and waypoint) and three difficulty levels (easy, medium, and hard). Each subject was to complete a training course with no obstacles followed by five test courses with each of the three interfaces. A Latin square design was utilized for randomizing the order of the control interfaces among the participants, while the levels of difficulty were completed in a fixed order by each of the participants.
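A balanced presentation order of the kind used here can be generated with a cyclic Latin square; the sketch below is our own illustration of the idea, not the study's assignment procedure.

```python
# Cyclic Latin square: every condition appears exactly once in each row
# (participant group) and each column (session position).
def latin_square_orders(conditions):
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)]
            for row in range(n)]

orders = latin_square_orders(["overlay", "continuous", "waypoint"])
# Participants could be assigned to rows in rotation, e.g. participant i
# receives orders[i % 3].
```

Note that a plain cyclic square balances position effects but not first-order carryover effects; designs such as Williams squares address the latter.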
3.4.1 Easy conditions. Courses 2 and 3 were the easy conditions. They had two fixed obstacles, 14 turns, and wide pathways. Both courses were to be completed three times to explore potential learning effects. This was only done for the easy condition, as the courses were relatively short (thus not pushing time constraints).
3.4.2 Medium condition. Course 5 was the only course with a medium level of difficulty. It was designed with 10 turns and narrow pathways without barriers that would protect the wheelchair from falling off the road. This course incorporated a single obstacle that tested the driver's ability to brake as a reactive response to a roadblock spawning in front of the driver. The roadblock disappeared after 3 seconds, clearing the path toward the end-goal.
3.4.3 Hard conditions. Courses 4 and 6 had the most difficult conditions. Like the medium-level course, they each had reactive braking episodes, with two obstacles to respond to. The courses had a total of 39 turns and were designed with narrow pathways. These courses had 10 images placed on the walls throughout the course, along with a secondary assignment to remember those. These courses were longer compared to the easy and medium ones. Five slightly different versions of courses 4 and 6 were made with different images posted on the walls to measure situational awareness and/or with different positioning of the spawning roadblocks for testing reactive braking. The different course versions presented under the medium and hard conditions were determined by a Latin square design, changing the order for each participant and ensuring that no course version was encountered twice.
As explained above, the second and third courses were repeated three times, providing a total of 27 observations per participant (i.e., 3 repetitions of the 2 easy conditions + 1 medium condition + 2 hard conditions = 9 trials, then x 3 for each interface = 27).
The dependent variables were task time (i.e., time to complete a trial), number of collisions, and situational awareness. Since collisions were not possible under the semi-autonomous waypoint interface, which utilized an anti-collision feature in the VR software, they were only measured for the overlay and the continuous-control interface conditions.
4 RESULTS
4.1 Task Time
The grand mean for task time per trial was 155.3 seconds (s). For comparison, an expert user with standard game navigation on a keyboard had a grand mean of 40.5 s. From fastest to slowest, the means were 82.2 s (waypoint), 136.6 s (continuous), and 246.9 s (overlay). In terms of difficulty level, the easy condition (M = 74.81, SD = 51.81) yielded lower mean task times compared to the medium condition (M = 179.68, SD = 117.16) and the hard condition (M = 223.51, SD = 150.94).
A repeated-measures analysis of variance (ANOVA) showed that the main effect on task time was significant for interface (F(2, 150) = 92.091, p < 0.0001) and for difficulty level (F(4, 150) = 61.476, p < 0.0001). The interaction effect of interface by difficulty level was also significant (F(8, 150) = 7.0401, p < 0.0001) (see Figure 6).
Post-hoc comparisons using Tukey's honest significant difference (HSD) test indicated that the waypoint interface (M = 82.2, SD = 47.97) was significantly faster than the continuous (M = 136.6, SD = 105.75), which in turn was significantly faster than the overlay (M = 246.9, SD = 159.26).
Figure 6: The interaction between interface condition andlevel of difficulty on task time.
4.2 Collisions
The waypoint interface was excluded from this analysis since the collision-avoidance feature automatically prevented all collisions. The grand mean was 2.5 collisions per trial. The main effect of interface (F(1, 75) = 26.1, p = 0.0001) indicated that the continuous (M = 1.6, SD = 2.09) had significantly fewer collisions than the overlay (M = 3.3, SD = 3.2). A repeated-measures ANOVA showed that the main effect of difficulty level (F(4, 75) = 20.619, p < 0.0001) indicated significantly more collisions during the hard conditions compared to the two other conditions. Post-hoc comparisons confirmed this was the case for all interfaces (see Figure 7).
Figure 7: Track plots from subject 13. Left: Overlay display with four collisions and 133.7 s trial time. Right: Continuous-control interface with no collisions and 36.8 s trial time.
4.3 Situational Awareness
Situational awareness was measured by the number of images a subject could recall upon completion of the level. There were a total of 10 images of recognizable objects placed throughout the obstacle course, and 75% of the subjects could remember up to 7 images regardless of the control interface. The grand mean was 5.3 images. A paired-samples t-test was conducted to compare the numbers of images remembered between the waypoint (M = 6.2, SD = 3.06), continuous (M = 5.23, SD = 5.08), and overlay (M = 4.6, SD = 5.01) interfaces. There was no significant difference between the overlay and continuous control (t(29) = −1.5006, p = 0.1442), but there was a significant difference between the waypoint and both the overlay (t(29) = −3.7882, p = 0.0007) and the continuous (t(29) = −2.1433, p = 0.0406) control.
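The paired-samples comparison above can be reproduced in outline with SciPy; the recall scores below are small synthetic numbers for illustration only, not the study's measurements.

```python
# Paired-samples t-test sketch on synthetic recall scores (NOT the
# study's data): each participant is measured under both interfaces.
from scipy import stats

# Images recalled per participant under two interface conditions.
waypoint   = [7, 6, 8, 5, 7, 6]
continuous = [5, 6, 6, 4, 6, 5]

t, p = stats.ttest_rel(waypoint, continuous)
```

A paired test is appropriate here because every subject contributes one score per interface, so the per-subject differences carry the signal.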
4.4 User evaluations
The user evaluation of the interfaces was conducted by a post-experimental questionnaire assessing the level of intuitiveness, control, and reliability of the interfaces on a Likert scale of 1-10. The grand mean was 6.56. From highest to lowest, the means were 7.63 (waypoint), 6.78 (continuous), and 5.28 (overlay). A repeated-measures ANOVA found significant differences among the interfaces (F(2, 96) = 71.48, p < 0.0001). In terms of intuitiveness, the waypoint (M = 7.47, SD = 1.81) yielded significantly higher ratings than the overlay (M = 5.88, SD = 2.03), with no significant differences between the continuous (M = 6.77, SD = 1.64) and either of the two other interfaces. In terms of control, the waypoint (M = 7.53, SD = 1.38) and the continuous (M = 7.06, SD = 1.14) both yielded significantly higher ratings than the overlay (M = 4.77, SD = 2.31). In terms of reliability, the waypoint (M = 7.88, SD = 1.36) yielded significantly higher ratings than the continuous (M = 6.53, SD = 1.51) and the overlay (M = 5.18, SD = 1.94). There were no significant differences among the categories or interactions between categories and interfaces.
Subjects were finally asked to rank the three control interfaces against each other. From a total of 16 completed rankings, nine subjects ranked the waypoint interface first, compared to six subjects
Figure 8: Subjective ratings of the interface design in termsof intuitiveness, control and reliability
who preferred the continuous control, while only one participant had a preference for the overlay control.
4.5 Summary of VR experiment
The semi-autonomous waypoint control method with anti-collision outperformed the two other interfaces in terms of performance, situational awareness, and user preference.
The continuous-control interface performed similarly to the waypoint interface under easy conditions but fell behind under medium and hard conditions, implying shortcomings in the continuous interface's ability to perform cautious and accurate control. The overlay interface with GUI control performed the worst in the tests and on subjective ratings.
5 FIELD OBSERVATIONS
We also did a field study in a long-term care facility with five people who use wheelchairs to compensate for motor disabilities caused by cerebral palsy or other neurological conditions. The observations were conducted as prototype experience sessions with different telerobots and control methods (hand, voice, and gaze); see [Zhang and Hansen 2020] for further details. Here, we will report only on gaze control of a telerobot by a continuous-control interface and a waypoint interface, both much like those used in the VR experiment. In the field study, the participants were controlling a telerobot, not a virtual wheelchair. The telerobot is also a wheeled device, and its motion characteristics are much like those of the virtual wheelchair.
5.1 Procedure
The participants experienced gaze control in two different set-ups: using the FOVE HMD with gaze pointing, and steering with their gaze on a monitor by use of a Tobii 4C gaze tracker. One of the participants could only be calibrated for the monitor set-up, not the FOVE. The monitor condition was done using a Wizard of Oz method [Salber and Coutaz 1993]. The experimenter stood behind
the participant and moved a pointer to the locations of the user's gaze point, which was shown on a monitor in front of them.
Figure 9: Testing gaze-controlled navigation in a long-term care facility. Participants complained that looking at the floor plane felt uncomfortable and made it difficult to see the surroundings.
The four participants who could use the FOVE were offered training sessions in a VR environment where they would then drive the robot around in a room about the size of the room they were in. The participants were first trained to navigate a virtual robot in VR by conducting six trials. When using our VR simulator application for training, the simulated live video stream was generated from our 3D modeling in Unity. Once participants felt comfortable with the setup, they were asked to try driving a real robot. The two types of telerobots used were a modified PadBot with a 360-degree camera and a Double robot. Both of these would stream live video from the telerobot camera to the user's interface, on top of which the gaze navigation was done. Our application provided users with two ways of driving the telerobots, namely through the continuous interface and through the waypoint control. In our case, waypoints could be marked by gaze dwelling at the floor for a set time.
The task was to drive around a table in the room, noticing what was on the table and greeting a person they would meet on their way. All of the participants were able to do this using the continuous
Figure 10: A participant driving a Double telerobot in front of a screen with a gaze tracker. The experimenter was standing behind the user and emulating his gaze movements by use of a joystick, applying a Wizard of Oz simulation of gaze interaction.
interface, but three of them had great difficulty doing the same task using the waypoint interface. In fact, only one person was able to drive all around the table using waypoints. This person was notably slower in driving with waypoints, and both he and the other subjects had several collisions with the table. We observed that the participants only moved with very short waypoint settings because they were afraid to lose control on a long leg. This also meant that they had to look far down at the ground, something that one of them remarked felt uncomfortable. Two other participants noticed that looking down made it difficult to observe the surroundings and engage in social interaction (i.e., greeting the person they met) (see Figure 9).
5.2 Observations
When we asked the participants which of the gaze interaction methods they liked the most, the unanimous answer was the remote setup with a monitor (Figure 10). There were several reasons for this. First, it allowed them to attend to the people in the room while driving and to show their face uncovered on the telerobot monitor. Second, one participant found it limiting that he would not be able to put on the HMD himself or adjust it if needed. The remote setting, he imagined, would allow him to just drive up in front of the monitor and start using the telerobot without the need of assistance. Finally, some of the participants expected that the HMD would be uncomfortable to wear for an extended time.
Figure 11: Testing the "GazeDriver" in a wheelchair simulator. The wheel turns are transformed to movements in a VR game environment shown on the big monitor in front of the user. The user looks through the three circles above the tracker to steer left, straight, or right.
Figure 12: Driving with the "GazeDriver" in the real world felt much like driving it in the simulator. The green light indicates that the user is making a right turn because he looks through the right circle.
6 DISCUSSION
Our experiment in the VR environment found superior performance and user preference for the semi-autonomous waypoint method. However, the wheelchair users in our field study did not like being forced to look down at the floor to use this method. They only tried this method with an HMD set-up, and they might have felt differently if the waypoint method had been used in a remote gaze-tracking setup that would allow them to attend to the surroundings. The training sessions applied a continuous-control interface. The participants may have felt more comfortable with the waypoint interface if they
had been trained with this also. Without training, they tended to set the waypoints very close to each other, which forced them to look down a lot. The semi-autonomous functions of the waypoint interface that prevented the wheelchair from colliding were much appreciated by the participants in the VR experiment. However, collision avoidance in a complex and dynamic environment is an open research area [Wang et al. 2013], and it is not likely that the "perfect" avoidance system of the VR environment will be available for real-world wheelchair navigation soon. The participants in the long-term care facility collided with obstacles such as chairs. This never happened in the VR setting, where the courses were clear except for the few spawning roadblocks. Finally, they were driving a telerobot with a slight delay on the wireless transmission of commands, while the VR world responded immediately to gaze input. All of these differences point to the importance of testing navigation interfaces not only in VR but also in real-world settings involving target users.
We have built a platform where turns of the wheels are transferred into steering commands in a game world. This world may be explored in full 3D wearing an HMD or on a monitor located in front of the wheelchair user. The simulator provides hardware-in-the-loop realism with regard to the wheelchair's motor and control system and a realistic vibration of the turning wheels [Hansen et al. 2019]. We have used this platform to test a commercial product, the "GazeDriver", in front of a large monitor. This product uses a Tobii remote gaze tracker, and the user looks through three small circles above the tracker to steer left, straight, or right. Looking outside the circles makes the wheelchair stop. This simulation set-up allowed the user to attend to the full environment via peripheral vision while driving with gaze. With a large monitor set-up, the tests worked well and driving felt much like real gaze-driving (see Figures 11 and 12).
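Region-based steering of the kind the "GazeDriver" uses can be sketched as follows; the circle positions, radius, and coordinate convention below are hypothetical and not taken from the product.

```python
# Sketch of circle-based gaze steering: the gaze point selects
# left / straight / right, and gazing outside all circles stops the
# wheelchair. Geometry values are illustrative assumptions.
CIRCLES = {"left": -0.2, "straight": 0.0, "right": 0.2}  # centre x (normalised)
RADIUS = 0.05
ROW_Y = 0.0  # vertical position of the row of circles

def steering_command(gaze_x, gaze_y):
    """Return the steering command for a gaze point, or 'stop'."""
    for command, cx in CIRCLES.items():
        if (gaze_x - cx) ** 2 + (gaze_y - ROW_Y) ** 2 <= RADIUS ** 2:
            return command
    return "stop"
```

Making "stop" the default for any gaze outside the circles is what lets the user inspect the environment with peripheral or direct vision without issuing a command.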
7 CONCLUSION
This paper presented our ongoing work on the exploration of alternative control of wheelchairs. We conducted an experiment in VR that resulted in significantly better performance and higher user preference for a semi-autonomous waypoint gaze interface compared to a continuous-control and an overlay display. A field study revealed that VR testing had a simplified representation of the steering task, especially with regard to collision avoidance, and also ignored important parts of the use context, for instance the feeling of being excluded behind the HMD. Finally, we suggested using a wheelchair platform for a more realistic user experience, where hardware-in-the-loop simulations preserve the true characteristics of the wheelchair motion.
ACKNOWLEDGMENTS
Thanks to the people at the Jonstrupvang long-term care home who participated in this study. The research was supported by the China Scholarship Council and the Bevica Foundation. Thanks also to Astrid Kofod Trudslev and Nils David Rasamoel for assisting with the study, and to Martin Sølvmose for providing the "GazeDriver".
REFERENCES
Kohei Arai and Ronny Mardiyanto. 2011. A prototype of electric wheelchair controlled by eye-only for paralyzed user. Journal of Robotics and Mechatronics 23, 1 (2011), 66.
Rafael Barea, Luciano Boquete, Luis Miguel Bergasa, Elena López, and Manuel Mazo.
2003. Electro-oculographic guidance of a wheelchair using eye movements codification. The International Journal of Robotics Research 22, 7-8 (2003), 641–652.
Christian Bartolein, Achim Wagner, Meike Jipp, and Essameddin Badreddin. 2008. Easing wheelchair control by gaze-based estimation of intended motion. IFAC Proceedings Volumes 41, 2 (2008), 9162–9167.
Laurel J Buxbaum, Mary Ann Palermo, Dina Mastrogiovanni, Mary Schmidt Read, Ellen Rosenberg-Pitonyak, Albert A Rizzo, and H Branch Coslett. 2008. Assessment of spatial attention and neglect with a virtual wheelchair navigation task. Journal of Clinical and Experimental Neuropsychology 30, 6 (2008), 650–660.
Mohamad A Eid, Nikolas Giakoumidis, and Abdulmotaleb El Saddik. 2016. A novel eye-gaze-controlled wheelchair system for navigating unknown environments: case study with a person with ALS. IEEE Access 4 (2016), 558–573.
Michael A Elliott, Henrique Malvar, Lindsey L Maassel, Jon Campbell, Harish Kulkarni, Irina Spiridonova, Noelle Sophy, Jay Beavers, Ann Paradiso, Chuck Needham, et al. 2019. Eye-controlled, power wheelchair performs well for ALS patients. Muscle & Nerve 60, 5 (2019), 513–519.
Md. Fahim Bhuyain et al. 2019. Design and development of an EOG-based system to control electric wheelchairs for people suffering from quadriplegia or quadriparesis. In International Conference on Robotics, Electrical and Signal Processing Techniques (2019).
Hachem A. Lamti, Mohamed Moncef Ben Khelifa, and Vincent Hugel. 2019. Cerebral and gaze data fusion for wheelchair navigation enhancement: case of distracted users. Robotica 37 (2019), 246–263.
John Paulin Hansen, Astrid Kofod Trudslev, Sara Amdi Harild, Alexandre Alapetite, and Katsumi Minakata. 2019. Providing access to VR through a wheelchair. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems. 1–8.
A Harrison, G Derwent, A Enticknap, F Rose, and E Attree. 2000. Application of virtual reality technology to the assessment and training of powered wheelchair users. In Proceedings of the 3rd International Conference on Disability, Virtual Reality and Associated Technologies.
A Harrison, G Derwent, A Enticknap, FD Rose, and EA Attree. 2002. The role of virtual reality technology in the assessment and training of inexperienced powered wheelchair users. Disability and Rehabilitation 24, 11-12 (2002), 599–606.
Robert JK Jacob. 1991. The use of eye movements in human-computer interaction techniques: what you look at is what you get. ACM Transactions on Information Systems (TOIS) 9, 2 (1991), 152–169.
Matthew C Kiernan, Steve Vucic, Benjamin C Cheah, Martin R Turner, Andrew Eisen, Orla Hardiman, James R Burrell, and Margaret C Zoing. 2011. Amyotrophic lateral sclerosis. The Lancet 377, 9769 (2011), 942–955.
Sofia Ira Ktena, William Abbott, and A Aldo Faisal. 2015. A virtual reality platform for safe evaluation and training of natural gaze-based wheelchair driving. In 2015 7th International IEEE/EMBS Conference on Neural Engineering (NER). IEEE, 236–239.
Jesse Leaman and Hung Manh La. 2017. A comprehensive review of smart wheelchairs: past, present, and future. IEEE Transactions on Human-Machine Systems 47, 4 (2017), 486–499.
Robert Leeb, Doron Friedman, Gernot R Müller-Putz, Reinhold Scherer, Mel Slater, and Gert Pfurtscheller. 2007. Self-paced (asynchronous) BCI control of a wheelchair in virtual environments: a case study with a tetraplegic. Computational Intelligence and Neuroscience 2007 (2007).
Chern-Sheng Lin, Chien-Wa Ho, Wen-Chen Chen, Chuang-Chien Chiu, and Mau-Shiun Yeh. 2006. Powered wheelchair controlled by eye-tracking system. Optica Applicata 36 (2006).
Yoshio Matsumoto, Tomoyuki Ino, and Tsukasa Ogasawara. 2001. Development of intelligent wheelchair system with face and gaze based interface. In Proceedings of the 10th IEEE International Workshop on Robot and Human Interactive Communication (ROMAN 2001). IEEE, 262–267.
Daniel Salber and Joëlle Coutaz. 1993. Applying the Wizard of Oz technique to the study of multimodal systems. In International Conference on Human-Computer Interaction. Springer, 219–230.
Sebastian Halder, Kouji Takano, and Kenji Kansaku. 2018. Comparison of four control methods for a five-choice assistive technology. IEEJ Transactions on Electrical and Electronic Engineering 13 (2018), 1795–1803.
Rosella Spataro, Maria Ciriacono, Cecilia Manno, and Vincenzo La Bella. 2014. The eye-tracking computer device for communication in amyotrophic lateral sclerosis. Acta Neurologica Scandinavica 130, 1 (2014), 40–45.
Sophie Stellmach and Raimund Dachselt. 2012. Designing gaze-based user interfaces for steering in virtual environments. In Proceedings of the Symposium on Eye Tracking Research and Applications. ACM, 131–138.
Martin Tall, Alexandre Alapetite, Javier San Agustin, Henrik HT Skovsgaard, John Paulin Hansen, Dan Witzner Hansen, and Emilie Møllenbach. 2009. Gaze-controlled driving. In CHI'09 Extended Abstracts on Human Factors in Computing Systems. ACM, 4387–4392.
Andrew Talone, Thomas Fincannon, David Schuster, Florian Jentsch, and Irwin Hudson. 2013. Comparing physical and virtual simulation use in UGV research: lessons learned from HRI research with two test beds. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 57. SAGE Publications, Los Angeles, CA, 2017–2021.
Vildan Tanriverdi and Robert JK Jacob. 2000. Interacting with eye movements in virtual environments. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 265–272.
Chao Wang, Alexey S Matveev, Andrey V Savkin, Tuan Nghia Nguyen, and Hung T Nguyen. 2013. A collision avoidance strategy for safe autonomous navigation of an intelligent electric-powered wheelchair in dynamic uncertain environments with moving obstacles. In 2013 European Control Conference (ECC). IEEE, 4382–4387.
Erik Wästlund, Kay Sponseller, and Ola Pettersson. 2010. What you see is where you go: testing a gaze-driven power wheelchair for individuals with severe multiple disabilities. In ETRA, Vol. 10. 133–136.
Ginger S Watson, Yiannis E Papelis, and Katheryn C Hicks. 2016. Simulation-based environment for the eye-tracking control of tele-operated mobile robots. In Proceedings of the Modeling and Simulation of Complexity in Intelligent, Adaptive and Autonomous Systems 2016 (MSCIAAS 2016) and Space Simulation for Planetary Space Exploration (SPACE 2016). 1–7.
William W. Abbott and A. Aldo Faisal. 2011. Ultra-low cost eye tracking as a high-information throughput alternative to BMIs. BMC Neuroscience 12 (2011), 103.
Guangtao Zhang and John Paulin Hansen. 2019. Accessible control of telepresence robots based on eye tracking. In Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications. 1–3.
Guangtao Zhang and John Paulin Hansen. 2020. People with motor disabilities using gaze to control telerobots. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. 1–9.
Gazecontrolled Telepresence: Accessibility, Training and Evaluation 173
C Appendix Document

C.1 Experiment 2 (Training)

C.1.1 Participants
A total of 32 able-bodied participants (15 females, 17 males) took part in the experiment. The mean age of the participants was 28.19 years (SD = 7.31). Each participant was compensated with a gift card valued at 100 DKK. 22 participants had experience with VR glasses, mostly for entertainment; 10 had experience with gaze-controlled devices; 6 had experience with mobile telepresence robots; and 6 experienced VR sickness to some degree.
C.1.2 Experimental Design
A between-group design was used in this experiment. There were three groups of independent variables: 1) training status (before or after training); 2) training type (with a real robot or with a virtual robot in a VR simulator); 3) maze layout (whether the maze layout used for training was the same as or different from the layout used in the final teleoperation test).
Dependent variables included the participants' performance, workload, situation awareness (SA), post-trial recollection, estimation, self-assessment, and experience of navigating the telerobot.
C.1.3 Conditions
There were four test conditions in total.
1. Pre-trial + 5 x VR training with same layout + final trial;

2. Pre-trial + 5 x VR training with different layout + final trial;

3. Pre-trial + 5 x Reality training with same layout + final trial;

4. Pre-trial + 5 x Reality training with different layout + final trial.
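The four conditions are simply the crossing of the two between-group factors, training environment and maze-layout relation. A minimal sketch enumerating them (factor labels are illustrative, not taken from the experiment software):

```python
from itertools import product

# Illustrative labels for the two between-group factors: training
# environment (VR vs. Reality) and whether the training maze layout
# matched the final-test maze (same vs. different).
training_environments = ["VR", "Reality"]
maze_layouts = ["same", "different"]

# Crossing the factors yields the four test conditions listed above.
conditions = list(product(training_environments, maze_layouts))

for i, (env, layout) in enumerate(conditions, start=1):
    print(f"{i}. Pre-trial + 5 x {env} training with {layout} layout + final trial")
```

Each participant was assigned to exactly one of these four cells, with the pre-trial and final trial common to all.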
C.1.4 Apparatus
The experiment was conducted in a lab. The test subjects sat outside the room where the telerobot was located. A FOVE headset was connected to a computer running Unity, and the computer was connected to the telerobot via a wireless network. The headset has a resolution of 2560 x 1440 px, rendered at a maximum of 70 fps, and a field of view of 100 degrees. A laptop ran LimeSurvey on a local server and was connected to the GamesOnTracks (GOT) sensors.
In the driving room (inside the lab), three GOT sensors were mounted on the wall. A telerobot carried a 360-degree camera (Ricoh Theta S), a microphone, and two sensors for indoor positioning connected to the GOT sensors. White plastic sticks on the floor were used as maze separators.
C.1.5 Procedure
Participants performed the experiment in a single session lasting around 60 minutes. Each participant signed a consent form at the beginning, and their demographics were collected with a questionnaire together with a SAM questionnaire. The main task for the participants was navigation. First, the participants navigated the telerobot through a maze in a room (pre-trial). Afterwards, they navigated five times for training purposes (training session). The training environment was VR for half of the participants and reality for the other half. After training, they navigated the telerobot again (final trial). Regarding
the maze layout, the pre-trial and final-trial layout was the same as the training layout for half of the participants. The two maze layouts were similar in terms of length and difficulty. Before each trial and the training session, the standard eye-calibration procedure for the headset was conducted. Before starting the pre-trial and the final trial, participants were informed that what they were facing would provide the information needed for answering one of the situation-awareness queries. During each trial, a saccade test and two SPAM queries were activated manually.
C.1.6 Results (Charts)
Besides the task completion time chart (Fig. 6.4) shown in Section 6, the other charts are presented here.

Performance
[Chart: number of collisions per trial (Trial 1, Trial 2) for maze layouts DIFF and SAME, split by training type (REAL, VR); y-axis: number of collisions, 0–6.]

Figure C.1: Number of collisions (training effects)
Workload
[Chart: NASA-TLX effort score per trial (Trial 1, Trial 2) for maze layouts DIFF and SAME, split by training type (REAL, VR); y-axis: score, 5–15.]

Figure C.2: NASA-TLX: effort (training effects)
[Chart: NASA-TLX frustration score per trial (Trial 1, Trial 2) for maze layouts DIFF and SAME, split by training type (REAL, VR); y-axis: score, 4–16.]

Figure C.3: NASA-TLX: frustration (training effects)
SA (SPAM-based Pop-up)
[Chart: response time to the preliminary question per trial (Trial 1, Trial 2) for maze layouts DIFF and SAME, split by training type (REAL, VR); y-axis: score, 2–3.]

Figure C.4: SA: response time to the preliminary question (training effects)
Saccade Test
[Chart: latency of the first correct saccade per trial (Trial 1, Trial 2) for maze layouts DIFF and SAME, split by training type (REAL, VR); y-axis: latency (ms), 250–300.]

Figure C.5: Latency of the first correct saccade (training effects)
Self-assessment (SAM)
[Chart: SAM pleasure score per trial (Trial 1, Trial 2) for maze layouts DIFF and SAME, split by training type (REAL, VR); y-axis: score, 2–5.]

Figure C.6: SAM pleasure (training effects)
[Chart: SAM dominance score per trial (Trial 1, Trial 2) for maze layouts DIFF and SAME, split by training type (REAL, VR); y-axis: score, 2–4.]

Figure C.7: SAM dominance (training effects)
Confidence
[Chart: confidence score per trial (Trial 1, Trial 2) for maze layouts DIFF and SAME, split by training type (REAL, VR); y-axis: score, 2–4.]

Figure C.8: Level of confidence (training effects)
C.1.7 Results (Table)
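Several rows in the table report Mann-Whitney U results with an effect size r. Assuming the common convention r = |Z| / sqrt(N), with N the total number of observations, a minimal sketch of the computation follows; with 32 participants and 2 trials (pre-trial and final trial), N = 64 reproduces the r = 0.275 reported for Z = 2.2:

```python
import math

def mannwhitney_r(z: float, n_total: int) -> float:
    """Effect size for a Mann-Whitney U test under the common
    convention r = |Z| / sqrt(N), where N is the total number
    of observations across both samples."""
    return abs(z) / math.sqrt(n_total)

# 32 participants x 2 trials = 64 observations; Z = 2.2.
print(round(mannwhitney_r(2.2, 64), 3))  # → 0.275
```

The same convention is consistent with the trial-order row (Z = -3 gives r = 0.375 at N = 64).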
Dependent variable | Independent variable | Statistical value | p-value | Effect: mean (SD) or median / mean rank | Test type
Training environment: F(1,60)=6.69, p<0.05, η²=0.076; Reality=2.25 (SD=2.67), VR=3.46 (SD=3.30) [3-way ANOVA]
Trial order: F(1,60)=20.95, p<0.001, η²=0.239; Pre-trial=4.50 (SD=3.20), Final trial=1.13 (SD=1.56) [3-way ANOVA]
Maze layout: Different=3.06 (SD=3.39), Same=2.56 (SD=2.63) [3-way ANOVA]
Trial order: F(1,60)=34.91, p<0.001, η²=0.366; Pre-trial=3.59 (SD=0.50), Final trial=2.58 (SD=0.65) [3-way ANOVA]
Training environment: Reality=3.10 (SD=0.87), VR=3.07 (SD=0.63) [3-way ANOVA]
Maze layout: Different=3.11 (SD=0.73), Same=3.06 (SD=0.81) [3-way ANOVA]
Trial order: F(1,60)=36.76, p<0.001, η²=0.377; Pre-trial=4.58 (SD=0.65), Final trial=3.59 (SD=0.58) [3-way ANOVA]
Training environment: Reality=4.11 (SD=0.87), VR=4.06 (SD=0.69) [3-way ANOVA]
Maze layout: Different=4.14 (SD=0.73), Same=4.04 (SD=0.85) [3-way ANOVA]
Maze layout: VR=3.62 (SD=3.42), =1.87 (SD=2.68) [2-way ANOVA]
Training environment: F(1,29)=11, p<0.01, η²=0.2536; Final trial=1.18 (SD=1.75), =4.31 (SD=3.49) [2-way ANOVA]
Maze layout: Same=2.87 (SD=2.21), =1.37 (SD=1.31) [2-way ANOVA]
Training environment: Final trial=1.43 (SD=1.20), =2.81 (SD=2.31) [2-way ANOVA]
Maze layout: VR=2.06 (SD=1.84), =0.75 (SD=1.18) [2-way ANOVA]
Training environment: F(1,29)=16.69, p<0.01, η²=0.3065; Same=0.5 (SD=0.81), =2.31 (SD=1.81) [2-way ANOVA]
Maze layout: Final trial=1.37 (SD=1.62), =1.43 (SD=2.27) [2-way ANOVA]
Training environment: VR=0.68 (SD=1.07), =2.12 (SD=2.36) [2-way ANOVA]
Maze layout: Same=1.62 (SD=2.06), =0.87 (SD=1.02) [2-way ANOVA]
Training environment: =0.5 (SD=0.81), =2.0 (SD=1.93) [2-way ANOVA]
Maze layout: Different=1.73 (SD=1.21), Same=1.44 (SD=0.68) [2-way ANOVA]
Training environment: F(1,29)=17.37, p<0.001, η²=0.371; Reality=1.09 (SD=0.77), VR=2.08 (SD=0.93) [2-way ANOVA]
Maze layout: Different=1.05 (SD=0.47), Same=1.10 (SD=0.51) [2-way ANOVA]
Training environment: F(1,29)=12.51, p<0.01, η²=0.3; Reality=0.81 (SD=0.47), VR=1.33 (SD=0.34) [2-way ANOVA]
Maze layout: Different=0.98 (SD=0.50), Same=0.88 (SD=0.39) [2-way ANOVA]
Training environment: F(1,29)=13.67, p<0.001, η²=0.318; Reality=0.68 (SD=0.36), VR=1.1 (SD=0.39) [2-way ANOVA]
Maze layout: Different=0.97 (SD=0.55), Same=0.88 (SD=0.50) [2-way ANOVA]
Dependent variables (performance) covered by the rows above: number of collisions; task completion time; total drive time; training session 1–5 collisions; training session 1–4 task completion times. Rows without a reported statistic showed no significance; one training-session measure fails homogeneity.
Training environment: F(1,29)=7.73, p<0.01, η²=0.2089; Reality=0.69 (SD=0.34), VR=1.16 (SD=0.57) [2-way ANOVA]
Maze layout: Different=0.95 (SD=0.48), Same=0.85 (SD=0.41) [2-way ANOVA]
Training environment: F(1,29)=36.78, p<0.001, η²=0.553; Reality=0.58 (SD=0.25), VR=1.22 (SD=0.35) [2-way ANOVA]
Trial order: Pre-trial=12.23 (SD=5.48), Final trial=9.93 (SD=4.93) [3-way ANOVA]
Training environment: Reality=11.09 (SD=5.36), VR=11.07 (SD=5.32) [3-way ANOVA]
Maze layout: Different=11.96 (SD=5.59), Same=10.20 (SD=4.92) [3-way ANOVA]
Trial order: Pre-trial=7.36 (SD=4.75), Final trial=7.03 (SD=4.97) [3-way ANOVA]
Training environment: Reality=6.65 (SD=4.53), VR=7.82 (SD=5.15) [3-way ANOVA]
Maze layout: Different=8.03 (SD=4.83), Same=6.36 (SD=4.75) [3-way ANOVA]
Trial order: Pre-trial=10.43 (SD=4.09), Final trial=9.06 (SD=4.46) [3-way ANOVA]
Training environment: Reality=9.93 (SD=4.53), VR=9.53 (SD=4.09) [3-way ANOVA]
Maze layout: Different=9.8 (SD=4.5), Same=9.7 (SD=4.12) [3-way ANOVA]
Trial order: F(1,60)=7.88, p<0.01, η²=0.108; Pre-trial=9.20 (SD=4.66), Final trial=6.03 (SD=3.59) [3-way ANOVA]
Training environment: Reality=7.18 (SD=3.83), VR=8.10 (SD=5.05) [3-way ANOVA]
Maze layout: F(1,60)=4.16, p<0.05, η²=0.056; Different=6.86 (SD=4.08), Same=8.36 (SD=4.69) [3-way ANOVA]
Trial order: F(1,60)=8.87, p<0.01, η²=0.128; Pre-trial=12.13 (SD=4.01), Final trial=9.00 (SD=4.63) [3-way ANOVA]
Training environment: Reality=10.34 (SD=5.02), VR=10.82 (SD=4.09) [3-way ANOVA]
Maze layout: Different=10.13 (SD=4.82), Same=11.00 (SD=4.36) [3-way ANOVA]
Trial order: F(1,60)=8.22, p<0.01, η²=0.118; Pre-trial=9.63 (SD=5.16), Final trial=6.2 (SD=4.23) [3-way ANOVA]
Training environment: Reality=8.40 (SD=4.77), VR=7.35 (SD=5.27) [3-way ANOVA]
Maze layout: Different=8.10 (SD=5.05), Same=7.73 (SD=5.01) [3-way ANOVA]
Trial order: F(1,50)=5.19, p<0.05, η²=0.091; Pre-trial=5.31 (SD=3.07), Final trial=9.29 (SD=6.33) [3-way ANOVA]
Training environment: Reality=8.15 (SD=6.25), VR=6.81 (SD=4.57) [3-way ANOVA]
Maze layout: Different=8.16 (SD=5.69), Same=6.80 (SD=5.20) [3-way ANOVA]
Trial order: Pre-trial=6.60 (SD=3.77), Final trial=7.80 (SD=2.97) [3-way ANOVA]
Dependent variables covered by these rows: comprehension response time; perception response time; situational awareness (SPAM); workload (NASA-TLX) subscales mental demand, physical demand, temporal demand, overall performance, effort, and frustration; training session 4 and 5 task completion times. Rows without a reported statistic showed no significance.
Training environment: Reality=6.83 (SD=2.25), VR=7.64 (SD=4.18) [3-way ANOVA]
Maze layout: Different=7.37 (SD=3.17), Same=7.14 (SD=3.63) [3-way ANOVA]
Trial order: Pre-trial=10.67 (SD=4.00), Final trial=9.01 (SD=4.21) [3-way ANOVA]
Training environment: Reality=10.12 (SD=4.01), VR=9.46 (SD=4.34) [3-way ANOVA]
Maze layout: Different=10.94 (SD=3.84), Same=8.70 (SD=4.22) [3-way ANOVA]
Trial order: F(1,50)=19.94, p<0.001, η²=0.215; Pre-trial=3.37 (SD=1.62), Final trial=2.34 (SD=1.01) [3-way ANOVA]
Training environment: Reality=2.70 (SD=0.97), VR=2.92 (SD=1.74) [3-way ANOVA]
Maze layout: Different=2.78 (SD=0.89), Same=2.85 (SD=1.78) [3-way ANOVA]
Trial order, training environment, and maze layout: chi-square tests (repeated for three dependent variables)
Trial order: Pre-trial=2.05 (SD=1.62), Final trial=1.50 (SD=2.06) [3-way ANOVA]
Training environment: Reality=1.58 (SD=1.03), VR=1.99 (SD=2.5) [3-way ANOVA]
Maze layout: Different=1.67 (SD=1.65), Same=1.88 (SD=2.07) [3-way ANOVA]
Trial order: F(1,58)=11.996, p<0.01, η²=0.165; Pre-trial=0.63 (SD=0.64), Final trial=0.72 (SD=1.05) [3-way ANOVA]
Training environment: Reality=0.67 (SD=0.67), VR=0.69 (SD=1.06) [3-way ANOVA]
Maze layout: Different=0.75 (SD=0.94), Same=0.61 (SD=0.79) [3-way ANOVA]
Trial order: F(1,57)=18.67, p<0.001, η²=0.249; Pre-trial=2.90 (SD=0.71), Final trial=3.80 (SD=0.89)
Training environment: Reality=3.33 (SD=0.88), VR=3.37 (SD=0.97)
Maze layout: Different=3.39 (SD=0.89), Same=3.31 (SD=0.96)
Trial order: Pre-trial=73.0 (SD=14.83), Final trial=66.16 (SD=17.50) [3-way ANOVA]
Training environment: Reality=71.40 (SD=16.02), VR=67.50 (SD=16.9) [3-way ANOVA]
Maze layout: Different=69.83 (SD=13.80), Same=69.33 (SD=18.97) [3-way ANOVA]
Trial order: Pre-trial=69.00 (SD=15.99), Final trial=62.83 (SD=18.36) [3-way ANOVA]
Dependent variables covered by these rows: SPAM answers to the perception, comprehension, and projection questions; average response time of the preliminary question; comprehension response time; projection response time; recollection: |duration − estimated duration| and |collisions − estimated collisions|; estimated confidence; saccade test results: number of successful trials and number of correct trials. Rows without a reported statistic showed no significance.
Training environment: Reality=67.65 (SD=16.89), VR=63.92 (SD=17.96) [3-way ANOVA]
Maze layout: Different=65.50 (SD=15.33), Same=66.33 (SD=19.42) [3-way ANOVA]
Trial order: Pre-trial=300.15 (SD=33.53), Final trial=277.36 (SD=43.67)
Maze layout: Reality=291.0 (SD=46.72), VR=286.19 (SD=32.04)
Training environment [3-way ANOVA]
Maze layout: F(1,29)=6.33, p<0.05, η²=0.179 [2-way ANOVA]
Training environment [2-way ANOVA]
Trial order: Different=293.12 (SD=36.48), Same=279.9 (SD=42.35) [3-way ANOVA]
Training environment: Reality=288.17 (SD=45.78), VR=284.61 (SD=32.24) [3-way ANOVA]
Maze layout: Different=285.80 (SD=32.76), Same=287.21 (SD=46.26) [3-way ANOVA]
Trial order: Pre-trial=9.17 (SD=1.37), Final trial=9.54 (SD=1.69) [3-way ANOVA]
Training environment: Reality=9.39 (SD=1.68), VR=9.32 (SD=1.38) [3-way ANOVA]
Maze layout: Different=9.32 (SD=1.62), Same=9.39 (SD=1.47) [3-way ANOVA]
Trial order: mean rank Pre-trial=27.85, Final trial=1.3854; median Pre-trial=4, Final trial=4 [Mann-Whitney U test]
Training environment: U=670, Z=2.2, p<0.05, r=0.275; mean rank Reality=34.40, VR=26.03; median Reality=4, VR=4 [Mann-Whitney U test]
Maze layout: mean rank Different=28.66, Same=32.33; median Different=4, Same=4 [Mann-Whitney U test]
Trial order: U=300, Z=-3, p<0.01, r=0.375; mean rank Pre-trial=24.41, Final trial=36.58; median Pre-trial=3, Final trial=4 [Mann-Whitney U test]
Training environment: mean rank Reality=32.65, VR=28.03; median Reality=4, VR=3 [Mann-Whitney U test]
Maze layout: mean rank Different=29.41, Same=31.58; median Different=3, Same=3.5 [Mann-Whitney U test]
Trial order: mean rank Pre-trial=31.4, Final trial=29.6; median Pre-trial=4, Final trial=3.5 [Mann-Whitney U test]
Latency of first saccade, interaction effect: F(1,56)=5.271, p<0.05, η²=0.073 [3-way ANOVA]
Dependent variables covered by nearby rows: latency of first saccade (pre-trial); latency of first correct saccade; amplitude of any correct saccade; number of correct trials; SAM: self-assessed arousal, self-assessed pleasure, self-assessed dominance. Rows without a reported statistic showed no significance.
Training environment: mean rank Reality=32.46, VR=28.25; median Reality=4, VR=4 [Mann-Whitney U test]
Maze layout: mean rank Different= [Mann-Whitney U test]
Trial order [Mann-Whitney U test]
Training environment: U=197, Z=2.17, p<0.01, r=0.375 [Mann-Whitney U test]
Maze layout [Mann-Whitney U test]
Trial order [Mann-Whitney U test]
Training environment [Mann-Whitney U test]
Maze layout: U=66.5, Z=-2.51, p<0.05, r=0.444 [Mann-Whitney U test]
Number of correct saccades on pre-trial
Performance group: F(1,14)=3.913, p<0.1, η²=0.2185; Best=70.00 (SD=15.11), Worst=46.87 (SD=29.391) [1-way ANOVA]
Amplitude of first wrong saccade on pre-trial
Performance group: F(1,7)=12.53, p<0.001, η²=0.641; Best=3.02 (SD=0.67), Worst=7.39 (SD=2.02) [1-way ANOVA]
Reaction time, second projection-related question
Performance group: F(1,9)=6.589, p<0.05, η²=0.351; Best=7.57 (SD=2.28), Worst=12.05 (SD=5.42) [1-way ANOVA]
Reaction time, preliminary question of first projection-related question
Performance group: F(1,9)=6.163, p<0.05, η²=0.406; Best=1.83 (SD=0.45), Worst=9.84 (SD=15.65) [1-way ANOVA]
Latency of any correct saccade, pre-trial
Performance group: U=4, Z=-2.777, p<0.01, r=1.694; mean rank Best=5, Worst=12; median Best=273.84, Worst=346.00 [Mann-Whitney U test]
Latency of any correct saccade, final trial
Performance group: U=12, Z=-2.1, p<0.05, r=0.525; mean rank Best=6, Worst=11; median Best=266.85, Worst=323.83 [Mann-Whitney U test]
Latency of first correct saccade, pre-trial
Performance group: U=2.5, Z=-2.95, p<0.01, r=0.73; mean rank Best=4, Worst=12.18; median Best=290.41, Worst=313.33 [Mann-Whitney U test]
NASA-TLX effort, pre-trial
Performance group: U=13, Z=-2, p<0.05, r=0.501; mean rank Best=6, Worst=10.87; median Best=9.5, Worst=14.5 [Mann-Whitney U test]
NASA-TLX physical demand, after training
Performance group: U=11.5, Z=-2.16, p<0.05, r=0.541; mean rank Best=5, Worst=11.06; median Best=4.5, Worst=10.5 [Mann-Whitney U test]
NASA-TLX physical demand, final trial
Performance group: U=5.5, Z=-2.78, p<0.01, r=0.697; mean rank Best=5, Worst=11.81; median Best=4.5, Worst=10 [Mann-Whitney U test]
Self-assessed pleasure after pre-trial
Performance group: U=11.5, Z=-2.161, p<0.05, r=0.541; mean rank Best=1, Worst=5.5; median Best=4.5, Worst=3.5 [Mann-Whitney U test]
Self-assessed pleasure after final trial
Performance group: U=56, Z=2.63, p<0.05, r=0.659; mean rank Best=1, Worst=5.5; median Best=4.5, Worst=3 [Mann-Whitney U test]
8 best vs. 8 worst performers (grouping used for the rows above)
No significance: self-assessed pleasure after training; self-assessed dominance after training; self-assessed arousal.