
Socially Assistive Robot HealthBot: Design, Implementation, and Field Trials

Chandimal Jayawardena, Senior Member, IEEE, I-Han Kuo, Elizabeth Broadbent, and Bruce A. MacDonald, Senior Member, IEEE

Abstract—Socially assistive robotics is an important emerging research area. Socially assistive robotics is challenging because it requires moving robots out of laboratories and industrial settings to interact with ordinary human beings as peers, which demands social skills. The design process usually requires multidisciplinary research teams, which may comprise subject matter experts from various domains such as robotics, systems integration, medicine, psychology, gerontology, social and cognitive sciences, and neuroscience, among many others. Unlike most other robotic applications, socially assistive robotics faces some unique software and systems integration challenges. In this paper, the HealthBot robot architecture, which was designed to overcome these challenges, is presented. The presented architecture was implemented and used in several field trials. The details of the field trials are presented, and lessons learned are discussed with field trial results.

Index Terms—Robot programming, socially assistive robots, software architecture, software engineering for robotics.

I. INTRODUCTION

Robotics research in the past was mainly concentrated on control problems. In recent years, socially assistive robotics has emerged as a key robotics research area [1], [2]. This is a result of the increasing needs of people for various kinds of help and the increasing abilities of robots. In particular, while the median age of the world population is increasing [3] and the numbers of health professionals and caregivers are decreasing [4], there is a huge demand for robots that can provide personal and health services.

Socially assistive robotics is challenging because it requires moving robots out of laboratories and industrial settings to interact with ordinary human beings as peers, which demands social skills. Designing successful socially assistive robots usually requires multidisciplinary research teams, which may comprise subject matter experts (SMEs) from various domains such as robotics, medicine, psychology, gerontology, social and cognitive sciences, and neuroscience, among many others [1], [2].

Manuscript received October 31, 2013; revised March 8, 2014; accepted May 17, 2014. Date of publication August 5, 2014; date of current version August 23, 2016.

C. Jayawardena and I-H. Kuo are with the Department of Computing, Unitec Institute of Technology, Auckland 1142, New Zealand (e-mail: cjayawardena@unitec.ac.nz).

E. Broadbent and B. A. MacDonald are with the University of Auckland, Auckland 1142, New Zealand.

Digital Object Identifier 10.1109/JSYST.2014.2337882

The work presented in this paper comes out of a large project (the HealthBots Project of the University of Auckland, Auckland, New Zealand) to evaluate socially assistive robots for helping older people [5]–[8]. The design of the HealthBot robot presented in this paper derives from the authors' experience in deploying socially assistive robotic software over five years for a number of long field trials in a retirement village with older people. The design was also influenced by the authors' experience and lessons learned in creating such software to be reliable and robust enough for long field trials conducted by a group of 22 researchers from a number of disciplines, including robotics engineering, computer science, health care, and psychology.

Previous research shows that involvement of SMEs, rapid prototyping, customizability, and addressing special end-user needs are key software challenges [6] in the development of socially assistive robots. These are briefly discussed below.

a) Involvement of SMEs: SMEs are professionals with expert knowledge in a particular domain. For example, doctors, nurses, caregivers, health psychologists, and health care researchers are the SMEs in the health care domain. Inputs from SMEs are mandatory to provide a successful robotic solution in the application domain. In traditional software development approaches, SMEs are mainly involved in the requirements gathering and validation phases but are largely excluded from the development phase. In agile approaches, SMEs are heavily involved in the software design and development phases, but the programming is still done by programmers. In our approach, we tried to extend the involvement of SMEs to the point of co-developing with them or having them author an application by themselves.

b) Rapid prototyping: Because of the unsolved research questions, there is essentially a trial-and-error step in development, which is managed by extensive field trials and stakeholder feedback. Therefore, the ability to rapidly adjust the robot's behavior in applications is very important.

c) Customizability: This is related to rapid prototyping. Customizability enables the inclusion of real-time feedback from the SMEs, pilot groups, end users, and other stakeholders while reducing the introduction of new bugs and minimizing additional software testing. The software architecture should be flexible enough to accommodate customization modules that can be modified to address new findings, suggestions, and new requirements, even during the testing and deployment phases.



d) Addressing special end-user needs: Since socially assistive robots are for people, the ability to cater for individual requirements is very important. From the software point of view, customizability is required, individually for each user and, in some cases, under the user's direct control.

When socially assistive robots are in wide use and better understood, there will still be a need for rapid prototyping and customization, since the requirements for robots in real-world situations will be varied and changing.

In order to overcome the above challenges, three key features must be embedded in a robot software framework.

1) Ability to develop software for a given scenario within a short period of time: If this feature is available, it is possible to develop the prototypes required for demonstrations and feedback sessions. It also enables developing robot applications within a short period of time and starting field trials. This feature solves issue (b) mentioned above. On the other hand, if a particular user needs a completely new or customized robotic application, it can be delivered easily and quickly, thus solving issue (d) above.

2) Ability to develop software for a given scenario with as little coding as possible: This feature is required for increasing the involvement of SMEs, who are not necessarily familiar with programming. It enables SMEs to be involved in development or to become co-developers, since the programming has been simplified. This feature solves issue (a) mentioned above.

3) Ability to change the software easily and quickly, without introducing bugs, with a view to changing the robot behavior: This feature is essential for customizing the robot behavior, so that small changes do not trigger a time-consuming full software testing process. SMEs can then see the effects of small changes immediately and in the field. This feature solves issue (c). On the other hand, if an end user needs to introduce changes to the robot behavior to suit his or her needs, these changes can be made on the fly without introducing bugs, thus solving issue (d) mentioned above.

The architecture of the HealthBots robot was carefully designed to include these features with a view to overcoming the difficulties mentioned above (a to d).

Our team spent considerable time working together with programmers and SMEs to define the robot behavior for trials in a real eldercare environment. Our vision for the future wide-scale deployment of socially assistive robots involves SMEs in eldercare and other healthcare environments defining the robot behavior, not at the low level, but at the level of defining the robot's involvement in the workflow of the organization. This paper is a step in that direction.

The rest of this paper is organized as follows. Section II presents related work. Section III presents an overview of the robot design. Section IV presents the implementation of the design in detail. Section V discusses the effects of the design on the software development process. Section VI discusses some case studies and results. Finally, Section VII presents the summary and conclusion of this paper.

II. RELATED WORK

There have been many attempts in past research to design a markup language to describe the behavior or interaction flow of a robot (e.g., Honda [9], [10], Robovie [11], and Nao [12]) or of a computer agent in multimedia applications [13]. In [9], Kushida et al. described the development of an XML-based markup language (MPML-HR) to represent a bipedal robot's (Honda ASIMO) physical actions, movements, and gestures for presentations delivered alongside slides on a projected screen [9]. This language, MPML-HR, allows a person with minimal understanding of the underlying systems in the humanoid robot to script the robot's actions in reference to a set of presentation slides. A further extension, MPML-HR ver. 3.0, enabled the scripting of the robot's reactions to verbal speech through the use of a speech recognizer.

In [11] and [14], Kanda et al. and Glas et al. separately presented two different methods of scripting the behavior of a robot called Robovie. Kanda et al. in [11] presented an earlier development of Robovie as a research platform for applications in human environments. In this work, Robovie's behavior is divided into modules of behavior sequences that are built from smaller elemental behavior modules, including speech, gesture, and gaze. For longer behavior sequences, the work introduced a tool that allows the execution order of these predefined modules to be linked up in a network-like diagram. However, this work did not consider the involvement of experts from other areas, who are essential for healthcare service applications and socially assistive robotics. The experiment described in [11] demonstrated Robovie as a guide robot giving directions to people in an indoor environment.

In [14], Glas et al. pointed out that research in robotic applications is often small in scale and involves only a group of highly experienced engineers and programmers. For real-world applications, there is a need for collaboration between experts in different areas. In this extension of Kanda's work on Robovie, Glas et al. also defined the roles of programmers and interaction designers in a typical robotic application [14]. As a step toward involving domain experts and SMEs in the design of robotic applications, Glas et al. described a new interaction design framework in [14]. The framework clearly defined four different abstraction layers in the robot's control architecture and their intended developers/designers. For example, it allows nonprogrammers to work on the application layer and design Robovie's interaction flow (or behavior sequence) by using and configuring a range of predefined behavior modules and decision blocks. In an experiment, the framework was evaluated by 32 pairs of a nonprogrammer and a programmer working in parallel on the different layers. The aim of each team was to come up with a working interaction flow on Robovie to greet customers, briefly explain the features of some products, and respond to some questions in a laptop shop. The results showed that most teams using the interaction design framework could complete a better interaction flow on Robovie in a shorter time. The experiment also provided some evidence of the benefits of a parallel workflow between interaction designers (nonprogrammers) and programmers for the development of service robotic applications.


Fig. 1. Overview of the design.

In comparison, the work presented in [9], [10], and [14] shares similar motivations and goals with the work presented in this paper. Both [9] and [10] aim to develop a language that is independent of a robot's underlying system configuration and setup and to allow nonprogrammers to control and script the robot behavior in an application. Glas' work in [14] shares similar goals by dividing Robovie's control architecture into different layers and assigning different roles to the layers. Taking a step further to realize robotic applications in the healthcare/eldercare domains in the real world, this paper further explores the requirements of domain experts/SMEs in healthcare domains, e.g., health psychologists and gerontologists, for a language that facilitates their participation in the design of the robot's interactive behavior in applications. In addition, this work implemented and deployed the language [robot behavior description (RBD)] and healthcare service applications written using the language to experiment on multiple robots in longer-term user trials. The deployment and the user trials posed much higher requirements on the stability of the language, the feasibility of the development method, and the robustness of the applications developed. In the user trial presented in this paper, RBD was used for customization for end users (older people in a retirement village) and for rapid prototyping with a group of SMEs to test out different ideas and scenarios. In this paper, we used RBD to handle a wider range of applications. For example, the blood pressure measurement application alone consisted of 38 states/screens. (The states include introduction, video-based instruction, error handling, and delivering results.) The development presented is larger in scale compared with the applications or test cases developed in [10] (six screens explaining four places in Tokyo) and the shopping scenario developed in about 3.5 h by participants in [14].

III. OVERVIEW OF THE DESIGN

The three key features mentioned in Section I were introduced into the robot design, primarily by separating the robot behavior from its execution. Fig. 1 gives a simplified view of this design.

The robot design includes three key features:

1) separation of RBD and behavior execution;
2) definition of the robot behavior as a combination of robot actions, events, user inputs, and audiovisual output;
3) a behavior description language for describing the robot behavior as a finite-state machine (FSM).

These three features are further explained below.

A. Separation of RBD and the Behavior Execution

The robot behavior is the complete range of actions that the robot can make in conjunction with the environment. In the HealthBot robot design, the following were included in the robot behavior:

1) things displayed on the screen (parts of the GUI such as text, buttons, images, movies, etc.);
2) actions that the robot can perform, e.g., producing speech, body movement, and navigation;
3) events that the robot can receive;
4) reactions to incoming events;
5) background actions, i.e., things that the robot can do transparently to the user.

Behavior description is done using an XML-based language, which has the features of a domain-specific language. Although it has not been perfected in the current version of the robot, it is powerful enough to define the robot behavior for a given scenario.

In the software design, the RBD is completely isolated from the behavior execution. The Behavior Execution Engine (BEE) is responsible for generating the robot behavior as per the RBD. The BEE and the collection of distributed software components shown in Fig. 1 form the core software of the robot application (a component is a piece of software that implements a robotic functionality, e.g., navigation, face detection, and text-to-speech generation). However, they do not contain any information about the robot behavior, i.e., what the robot is supposed to do in a given application scenario. On the other hand, the RBD contains all the information pertaining to a given application scenario. Therefore, to develop a new application to suit a given scenario, or to customize an existing application, changes to the core software (BEE + distributed components) are not necessary unless a new action or sensing ability is added. This approach provides the following benefits.

1) Since modifications to the core software are not necessary to develop a new robotic application, the core software can be well tested and made highly reliable.

2) The same core software can be reused on multiple robots to deliver completely different applications, without much software development effort.

3) Since the RBD is much simpler than a computer program, it can be authored and modified by someone without any programming knowledge (for example, by an SME).

4) New behaviors can be defined rapidly, since programming is not involved and the need for full software testing is reduced.

5) Customization of the robot behavior can be made at any time (even at deployment with end users) by editing the RBD.

The syntax of the RBD, the implementation of the BEE, and the implementation of the distributed components are implementation specific and do not constitute the core concept described above. The implementation-specific details are explained in Section IV.

Fig. 2. State behavior.

B. Definition of the Robot Behavior as a Combination of Robot Actions, Events, User Inputs, and Audiovisual Output

Usually, in robotic applications, the GUI is not included in the robot behavior design. Instead, the focus is on robot behaviors such as path planning, navigation, and other actions. However, in most socially assistive robotic applications, the GUI is a dominant part of the robot behavior; the user experience is highly dependent on the robot's audiovisual output. In most socially assistive robots available in the market, touch screens are used as the main mode of interaction [15].

Therefore, to have effective human–robot interaction, robot actions, user inputs, and changes in the GUI should be synchronized to produce a coherent interaction experience. This feature is particularly needed in designing assistive robots for the healthcare domain [6]. Therefore, in the HealthBot robot, the robot behavior was defined to include all of these features.

C. Behavior Description Language for Describing the Robot Behavior as an FSM

The RBD defines the robot behavior as an FSM. Fig. 2 depicts a state in the FSM. A state description includes visual output (GUI), expected events, robot actions, and speech. The structure of the RBD is described in Section IV.

IV. IMPLEMENTATION DETAILS

A. Overview of the Robot Architecture

The robot architecture can be viewed as a layered architecture, which consists of three logical layers: hardware, middleware, and behavior layers.

The hardware layer consists of all the hardware, low-level drivers, and the operating systems. The middleware layer consists of distributed software components and the middleware framework, as well as the BEE. The behavior layer consists of the RBD.

The function of the hardware and middleware layers should be familiar to robotics researchers, as this is a common way of developing robotic applications using robot software frameworks.

In most robot applications, only these two logical layers exist, whereas in the HealthBot architecture, a third logical layer was defined.

A detailed view of these layers is given in Fig. 3. The functions and interdependence of these layers are described in detail below.
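To make the layering concrete, the following minimal Python sketch models the three logical layers; all class and method names here are our own illustrative assumptions, not the actual HealthBot or ROCOS interfaces.

```python
from abc import ABC, abstractmethod

# Hardware layer: devices and low-level drivers behind a narrow
# interface (hypothetical; the real platform uses proprietary drivers).
class HardwareDriver(ABC):
    @abstractmethod
    def execute(self, command: str, **params) -> None: ...

# Middleware layer: distributed components (navigation, face detection,
# text-to-speech, ...) that exchange messages and events, plus the BEE.
class Component(ABC):
    @abstractmethod
    def handle_message(self, message: dict) -> None: ...

# Behavior layer: pure data. An RBD is interpreted by the BEE at run
# time and is never compiled into the core software.
class BehaviorDescription:
    def __init__(self, rbd_xml: str):
        self.rbd_xml = rbd_xml
```

The point of the sketch is the asymmetry between the layers: the bottom two layers are code, whereas the behavior layer is data that can be edited even in the field.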

B. Hardware Layer

The mobile platform used to develop the HealthBot robot was developed by Yujin Robot Company, Korea (see Fig. 4). This platform consists of a differential drive mobile base, two single-board computers, a laser range finder, sonar sensors, microphones, speakers, a touch screen mounted on an actuated head, a camera, and Universal Serial Bus ports. The robot comes with a proprietary software platform. To develop the HealthBot robot, both the hardware and the software were extended.

As shown in Fig. 3, the hardware layer consists of two different types of hardware: proprietary and nonproprietary. Proprietary hardware is the hardware associated with the basic robot platform; this comes from the robot hardware manufacturer. Nonproprietary hardware is the hardware added by the HealthBot research team to provide various functionalities (e.g., blood pressure meters, a blood oxygen saturation meter, a blood glucose meter, cameras, and microphones).

C. Middleware Layer

The HealthBot robot used a proprietary middleware framework known as ROCOS, which was developed by Yujin Robot Company. This is a result of a research collaboration between the University of Auckland and Yujin Robot. The concepts and implementation strategies discussed in this paper are not dependent on the features of the middleware framework and can be extended to other middleware frameworks. The middleware framework and the software components developed by the robot manufacturer are shown as the proprietary software platform in Fig. 3. Everything outside the shaded area in Fig. 3 was developed by the HealthBots research team.

As shown in the figure, distributed software components communicate with each other, or with the proprietary software platform, through the ROCOS middleware. In addition, some software components can directly control relevant hardware devices.

1) BEE: The BEE is yet another software component in the middleware layer, with a special responsibility. It reads and parses the RBD XML files and executes the FSM that is contained in the RBD. By executing the FSM, the BEE coordinates all the other components in the middleware layer, thus generating the behavior specified in the RBD.
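As a rough illustration of what such an engine does, the Python sketch below parses a tiny RBD-like XML document and executes it as an FSM. The tag names, the event interface, and the print-based "dispatch" are assumptions made for illustration; the actual RBD schema and the ROCOS messaging API are not reproduced here.

```python
import xml.etree.ElementTree as ET

# A minimal RBD-like document with two states and event-driven
# transitions. Tag and attribute names are illustrative only and
# do not reproduce the actual RBD schema.
RBD_XML = """
<rbd initial="greeting">
  <state id="greeting">
    <speech>Hello, how can I help you?</speech>
    <event name="ScreenTouched" next="main_menu"/>
  </state>
  <state id="main_menu">
    <speech>Please choose a service.</speech>
    <event name="Timeout" next="greeting"/>
  </state>
</rbd>
"""

class BehaviorExecutionEngine:
    """Executes the FSM in an RBD; contains no scenario-specific code."""

    def __init__(self, rbd_xml: str):
        root = ET.fromstring(rbd_xml)
        self.states = {s.get("id"): s for s in root.findall("state")}
        self.current = root.get("initial")

    def enter_state(self) -> None:
        # In the real system, the BEE would dispatch these outputs to
        # the distributed components (TTS, GUI, navigation) via ROCOS.
        speech = self.states[self.current].find("speech")
        if speech is not None:
            print(f"[{self.current}] say: {speech.text}")

    def on_event(self, name: str) -> None:
        for event in self.states[self.current].findall("event"):
            if event.get("name") == name:
                self.current = event.get("next")
                self.enter_state()
                return
        # Events not expected in the current state are ignored.

engine = BehaviorExecutionEngine(RBD_XML)
engine.enter_state()              # [greeting] say: Hello, how can I help you?
engine.on_event("ScreenTouched")  # [main_menu] say: Please choose a service.
```

Note that swapping RBD_XML for a different behavior document changes what the robot does without any change to the engine, which is exactly the separation the architecture relies on.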

2) Distributed Software Components: Each software component has a well-defined behavior and an interface written adhering to the specifications of the ROCOS middleware. The BEE can send messages to these components and can receive events from them. Only a few components are shown in Fig. 3 due to space limitations.


Fig. 3. Overview of the architecture.

Fig. 4. HealthBot robot.

All software components are independent from each other, and the service provided by the complete collection of software components is the feature set of the robot. In the current version, the HealthBot robot has the following features.

1) Dynamic screen generation: as defined by the current state in the RBD, the screen layout and components are dynamically created [6] (see the sketch after this list).
2) Text-to-speech generation using a New Zealand English voice [16].
3) Speech recognition [17].
4) Face detection.
5) User identification.
6) Invoking services provided by distributed components: blood pressure/blood oxygen saturation/blood glucose/pulse rate/arterial stiffness measurement and entertainment (video clips, photos, songs).
7) Receiving events sent by distributed components.
8) Sending/receiving messages to/from the proprietary software components: navigation.
9) Invoking third-party applications: Skype, brain fitness applications.
10) Falls detection through a ZigBee network.
11) Medication reminding [18].
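As a sketch of how feature 1), dynamic screen generation, can be realized, the fragment below builds a widget list from a state's screen description at run time. The element names and the Widget class are hypothetical; the idea is only that the display is generated from data rather than hard-coded.

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class Widget:  # stand-in for a real GUI toolkit widget
    kind: str
    value: str

# Illustrative screen description for one state (not the real schema).
SCREEN_XML = """
<screen>
  <text>Blood pressure measurement</text>
  <image>bp_instructions.png</image>
  <button event="StartPressed">Start</button>
</screen>
"""

def build_screen(screen_xml: str) -> list:
    """Create the widget list for the current state from its RBD entry."""
    return [Widget(el.tag, (el.text or "").strip())
            for el in ET.fromstring(screen_xml)]

for widget in build_screen(SCREEN_XML):
    print(widget)  # e.g., Widget(kind='text', value='Blood pressure measurement')
```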

Some of the above features have been implemented using several components. For example, "falls detection" has been implemented using several software components running on an external server. These software components are connected to a ZigBee network and to wearable accelerometer devices on users. In the HealthBot project, the above features were developed by different subgroups and later integrated to deliver the final robotic application. The architecture shown in Fig. 3 allows different groups to develop software components independently and to integrate them later. Most of the above are independent research areas, and their details are not within the scope of this paper.


Fig. 5. RBD.

The software components (see Fig. 3) are distributed on the robot and on some external platforms. For example, the software components for face detection, user identification, falls detection, and medication reminding run on an external server.

D. Behavior Layer

The behavior layer consists of a number of behavior description files. The relationship between the RBD and the BEE is shown in Fig. 3. In the current implementation, an XML-based method was used to describe the RBD. However, the format of the RBD is not important for the key design concepts presented in this paper. Therefore, any other suitable representation may be used for the RBD, with modifications to the RBD parser in the BEE.

The RBD describes the robot behavior as an FSM. Fig. 5 shows the format of the RBD in the current implementation. A screen, background actions, and expected events are defined for each state. The complete RBD is a collection of states defined in the same fashion, thus representing an FSM.
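The exact schema of Fig. 5 is not reproduced here, but a single state entry of the kind described above might look like the following; all tag and attribute names are illustrative assumptions, not the actual RBD syntax.

```xml
<!-- Illustrative sketch of one FSM state: a screen, background
     actions, and expected events, as described in the text. -->
<state id="bp_introduction">
  <screen>
    <text>We will now measure your blood pressure.</text>
    <button event="NextPressed">Next</button>
  </screen>
  <backgroundActions>
    <speech>We will now measure your blood pressure.</speech>
  </backgroundActions>
  <expectedEvents>
    <event name="NextPressed" next="bp_instructions"/>
    <event name="Timeout" next="main_menu"/>
  </expectedEvents>
</state>
```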

Although the aforementioned XML-based representation is sufficient to describe the RBDs, tools that allow authoring the RBDs without editing XML files directly are much more convenient for nonprogrammers such as SMEs. Therefore, a tool named RoboStudio has been developed as the front end of the RBD. The details of RoboStudio have been presented elsewhere [19].

Fig. 13 shows the transition through some sample states with the corresponding screens. This may help the reader understand what happens during the execution of a robot behavior. The figure depicts a simplified FSM, showing the state transitions from the starting state (the robot's default state) to "vital signs measurement."

V. CHANGES TO THE DEVELOPMENT PROCESS

The architecture presented above implies a change to the traditional software development methodologies, which were deemed unfit for the assistive robotics domain [6]. As presented in [5] and [6], the user requirements are often hard to elicit, and some become available only during field testing. Therefore, the iterative approach illustrated in Fig. 6 has been found more suitable [6]. An example is the robot's greeting behavior and the action sequence to start a service for the older people; finding the most attractive and appropriate action sequence requires input from the actual users and usually takes several iterations. These inputs include both explicit feedback through conversations with the participants and implicit feedback from our observation of their interactions with the robots. Fig. 6 illustrates the development process that was adopted in actual practice when using the framework.

The development began with a set of rough requirements received from the SMEs. These were not complete; the SMEs were not aware of all the requirements. The inherent nature of the problem domain means that the SMEs could only suggest a minimal set of requirements to develop a quick prototype, and many detailed requirements had to wait until a prototype was tested with real users. Since these kinds of robot applications are not yet widely available, it is easier to understand the end users' actual requirements by engaging them with a working prototype.

Based on the SME requirements, an initial prototype was developed and then iterated with SMEs and end users in field evaluations. It is very important to emphasize that the development was a collaborative effort between software engineers, robotics researchers, SMEs, and end users, rather than a purely technical endeavor. The requirements from all stakeholders were satisfied only at the field site.

Fig. 7 shows the interactions between the three main groups (SMEs, engineers, and the target users) in the development team during software iterations. The main focus of the electrical and software engineers in each iteration is to configure the subsystems and compose suitable short sequences of behaviors in the framework, whereas the main focus of the SMEs is to fine-tune the robot's behavior (speech utterances, words to be displayed on screen, screen layout, etc.). The end users (the older people) interact with both groups and provide input to both of these areas to some degree during the testing and user trial periods.

More details of the software development process and the roles of the above groups in the development process are given in [20].

VI. CASE STUDIES AND DISCUSSION

Using the approach supported by the HealthBot design, multiple scenarios were developed on a robot and tested with end users (older people in a retirement village). This section discusses the advantages of the HealthBot design with lessons learned from the case studies.

Two main case studies are presented below. In case study 1, a single robot was deployed in three field trials, with three different behaviors. In case study 2, multiple robots with different behaviors were deployed in a field trial simultaneously. The objective of these case studies was to study the psychological factors that promote acceptance or rejection of robots by older people. The psychological results of these case studies are not discussed in this paper and are published elsewhere [21].

Fig. 6. Software development process.

Fig. 7. Three groups of developers and their interaction in software iterations.

A. Case Study 1: Single-Robot Field Trials

Using the architecture presented above, a complete robot system was designed, developed, tested, and deployed at Selwyn Retirement Village (Auckland, New Zealand) for a user trial with older people aged 65 years or older. The objective of the field trials was to investigate whether robots have benefits and disbenefits for residents and staff at the retirement village. The trial consisted of three separate studies at different places in the retirement village.

Fig. 8. HealthBot robot in Selwyn Retirement Village. (a) Researcher showing the HealthBot to a participant. (b) HealthBot visiting a participant.

The studies are as follows:

1) study 1: in public spaces (resident lounges/lobby areas in an independent apartment building);
2) study 2: in private spaces (independent living apartments);
3) study 3: monitoring studies with falls monitoring, wandering, and activity monitoring in the rest home.

1) Trial Procedures: The robot spent approximately two weeks in an independent living building (studies 1 and 2) and approximately two weeks in a rest home (study 3). At scheduled times every morning, the robot visited the participants in their apartments or rooms (study 2). The remainder of the day was spent in public places (study 1). When the robot was in the public places, anyone could approach the robot and interact with it. In study 3, a ZigBee sensor network and other systems were implemented to receive falls events from wearable accelerometer devices. When a fall occurred, a falls signal was relayed to the robot, and the robot reacted by navigating to the fallen person's location and starting a remote monitoring session.
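The falls workflow above is essentially event driven: a wearable accelerometer raises a fall event, the ZigBee network relays it to the robot, and the robot navigates to the person and opens a monitoring session. A schematic Python sketch, with entirely hypothetical names, is given below; the real system used ROCOS components and a ZigBee gateway, whose APIs are not shown here.

```python
# Hypothetical sketch of the study-3 falls workflow.
class Robot:
    def navigate_to(self, location: str) -> None:
        print(f"navigating to {location}")

    def start_remote_monitoring(self) -> None:
        print("remote monitoring session started")

def on_fall_event(robot: Robot, wearer_location: str) -> None:
    """Called when a wearable accelerometer's fall event reaches the robot."""
    robot.navigate_to(wearer_location)
    robot.start_remote_monitoring()

on_fall_event(Robot(), "room 12")
```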


Fig. 9. Overall user satisfaction. (a) How much did you enjoy interacting with that robot? (b) How well would you rate your interaction with the robot overall? (c) How much would you like to interact with the robot again?

TABLE I. APPLICATION STATISTICS

A total of 67 people interacted with the robot in studies 1 and 2. There were 42 participants in study 1, 25 in study 2, and 5 in study 3. Fig. 8 shows the HealthBot robot in Selwyn Retirement Village.

2) Results: Several evaluation questions were put on the robot as a part of its interactive behavior to retrieve users' feedback and ratings (see Fig. 9). This feedback includes relevant ratings right after a service is finished. For example, the robot asks what the user thinks of its blood pressure monitoring service after it finishes a reading. Furthermore, three evaluation questions were put on the robot to retrieve users' feedback after each interaction, i.e., when the user has finished everything and is about to leave. Table I shows the ratings collected by the robot during this user trial.

Participants were asked to rate the robot using the following questions at the end of each interaction.

1) Q1: How much did you enjoy interacting with that robot?
2) Q2: On a scale of 0 to 100, how well would you rate your interaction with the robot overall?
3) Q3: On a scale of 0 to 100, how much would you like to interact with the robot again?

B. Case Study 2: Multiple-Robot User Trial

With the same goal in mind, the first case study was extended to involve both staff and residents at the retirement village and to collect more statistics to confirm the hypotheses laid out in the previous case study. In this extension, six robots were deployed. This involved more participants and ran for a longer period (approximately three months, between November 2011 and March 2012).

For this extension, the requirements of the robot system were revised by both the SMEs and the engineers. Revisions were made to the robot's behaviors, but the robot's functionalities and services remained the same. Since the functional requirements of the robot system were not significantly changed, the BEE and the other software components were not changed.

Fig. 10. Service usage by week during the second case study.

Two field trials were carried out with multiple robots.

1) Study 1: A crossover randomized trial of robots in independent units (private apartments or villas). This involved four HealthBot robots.

2) Study 2: A nonrandomized trial of robots in the lounge areas for residents and staff in the rest homes and hospitals. This involved two HealthBot robots.

1) Trial Procedures: In study 1, the participants were introduced to the robots in their own apartments. The robots were set up, and the participants were shown through all of the modules available on the robots. Each participant was also provided with a user manual, which explained how to perform basic troubleshooting steps. All residents were provided with the duty phone contact to use if they had any questions. Participants were able to interact with the robots within their apartments as much or as little as they liked over the six-week period.

In study 2, the robots were deployed to all areas of the rest home. After informed consent was obtained, but prior to robot placement, baseline questionnaires were administered to staff and resident participants. During the study, structured observations were made of robot interactions in order to evaluate the potential effect on residents not enrolled in the study (as well as those who were) and on the workings of the facility. We were interested in whether the robots changed social dynamics, such as the number of positive and negative interactions.

2) Results: The two HealthBot robots deployed in study 2 were widely used by the residents of the retirement village. Fig. 10 shows the average use of the robots in terms of how many times the services on the robots were used.


Fig. 11. Some example scenarios employed on the robot. Key: FR = face recognition; DB = database; BP = blood pressure; SPO2 = blood oxygen levels.

C. Example Scenarios

For the above case studies, a number of scenarios were developed. For these scenarios, several RBDs were developed. It is not possible to depict the complete scenarios in this paper, as each scenario contains hundreds of states and several RBD files. Fig. 11 illustrates some of the scenarios (in a simplified way) used in the case studies for the understanding of the reader.

In Fig. 11, the robot starts in its default position, and certain events trigger the different behaviors. By default, the robot was kept docked at its charging station. If someone approached or touched the robot, either a "Face Detected" or a "Screen Touched" event was triggered. This started the initial interaction phase, which includes face recognition, authentication, and self-introduction. At the end of the initial interaction phase, the robot displays the "main menu." At this junction, users could select any available service (vital signs measurement, calling, entertainment, brain fitness, etc.). The interaction session ends when the user finishes the session or when no interaction is detected for a certain time period.
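Expressed in the FSM terms that an RBD encodes, the Fig. 11 scenario reduces to a transition table of roughly the following shape; the state and event names below are taken from the description above, and the Python representation is our own illustrative sketch.

```python
# Illustrative transition table for the Fig. 11 scenario:
# (current state, event) -> next state.
TRANSITIONS = {
    ("docked", "FaceDetected"): "initial_interaction",
    ("docked", "ScreenTouched"): "initial_interaction",
    ("initial_interaction", "UserIdentified"): "main_menu",
    ("main_menu", "ServiceSelected"): "service",  # vital signs, calling, ...
    ("service", "ServiceFinished"): "main_menu",
    ("main_menu", "SessionEnded"): "docked",
    ("main_menu", "InteractionTimeout"): "docked",
}

def next_state(state: str, event: str) -> str:
    # Events that a state does not expect leave the robot where it is.
    return TRANSITIONS.get((state, event), state)

assert next_state("docked", "FaceDetected") == "initial_interaction"
assert next_state("main_menu", "InteractionTimeout") == "docked"
```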

D. Advantages of the Proposed Development Process

The separation of the RBD and the BEE eliminated the need for code recompilation when SMEs or engineers wanted to change the words, the screen components (text boxes, etc.), the screen layout and design, or the robot's action sequences. This removes the need for proprietary integrated development environments or editors; only a text editor is required to change the robot behavior. Anyone from the engineers' group or the SMEs' group could edit the RBD files and make the changes that enabled the expected behavior. The key advantage is that these changes could be tested and tried with the end users over many iterations in a short amount of time. In HealthBot version 1, presented in [5], one software iteration (starting from SMEs suggesting changes to the system, through the implementation and testing of these changes, to revision) usually took up to a week, but the time required for one design iteration is minimized through the current design.

The range of modifications possible through the RBD was identified early by the engineers. SMEs and engineers could also sit down together and modify the RBD script. A software iteration could take as little as 1–2 h instead of more than a day. This encourages design iterations, since it shortens the testing of different robot action sequences (behaviors) for different studies or purposes and shortens the turnaround time in the co-design between any two of the three groups in the team.

Fig. 12. SVN activity of BEE and RBD development. (a) SVN activity. (b) Accumulated SVN activity.

E. Development for the User Trials

A total of around ten different scenarios were developed during the course of the above case studies. A few more scenarios were developed and used for demonstrations and publicity events, in which cases authoring, testing, and deployment almost all happened in a few hours or in one day. In the beginning of study 1, two to three different scenarios were authored and tested to find the best way to engage passersby in public spaces. The efforts were concentrated on the robot's behavior for initiating interaction with potential users.

Fig. 12 presents the effect of the architecture on the software development process. These results explain the advantage of the architecture stated in Section III-A. Fig. 12(a) shows the number of commits to the Subversion (SVN) repository of both the BEE and the RBD, against time (the number of SVN commits on each day). This clearly shows that at the beginning of the development phase, there were many changes to the BEE, and eventually, it stabilized. RBD development started much later, and the latter part of the development phase was entirely dedicated to RBD development, with almost no changes to the BEE. Please note that the four scenarios shown in the graphs are the four scenarios developed for studies 1 and 2. They are the most complicated ones and took the longest time. The rest of the scenarios were developed in one to two days, and they did not have a sequential record in the SVN repository (see Fig. 13).

Fig. 13. Sample FSM.

Fig. 12(b) shows the accumulated number of SVN commits. BEE development started before RBD development, as the BEE does not require the exact scenario or application requirements. A state model for behavior authoring was developed at this stage. Therefore, developers could start with a set of features and focus on making the BEE stable. As and when scenario requirements were developed, the RBDs could be developed independently. It is possible for SMEs to understand the RBDs and co-develop with a developer. This figure shows the accumulated SVN commits of the four scenarios (RBDs). RBD development times are relatively short, since the development requires only authoring an XML file and testing. In this graph, the period from 18/11/2011 to 18/01/2012 includes several holidays due to the Christmas period, and therefore, the actual RBD development time is shorter than what is depicted in the graph.

VII. CONCLUSION

In conclusion, the socially assistive robot HealthBot was successfully designed, implemented, and tested with its target users: older people aged 60 years and over. The robot system is an integration of many subsystems, including a text-to-speech synthesizer, a speech recognition engine, a navigation engine, a falls detector (ZigBee) network, and a face recognition engine. To fulfill the unique requirements of a newly emerging application area, socially assistive robotics, a software framework (BEE and RBD) was successfully developed to separate the integration of subsystems into a complete robot system from the composition of a robot service application, including the robot's behaviors in service applications.

The software framework promotes a range of changes to the development process. To fulfill the requirements of the socially assistive robotic application domain, the framework and its development process provide the following important features.

a) Increased involvement of SMEs: In this new development process, SMEs become co-developers alongside the engineers in a team developing a robot service application. The RBD allows a designated scope of authoring to both engineers and SMEs while keeping the integration of the subsystems intact. The RBD is much easier to understand and edit than a programming language; thus, the involvement of SMEs in the development process was encouraged. In the case study presented, some SMEs learned the syntax of the RBD quickly and were able to author and modify robot behavior themselves, with some help from the software engineers.

b) Rapid prototyping: During testing and requirements analysis, it was necessary to develop several robot behaviors for demonstrations and discussions. Software engineers were able to rapidly develop these applications, since changes to the BEE (source code) were not required.

c) Increased participation from end users: During pilot testing and even during field trials, valuable suggestions and comments were received from stakeholders such as end users, caregivers, nurses, and other health professionals. Most of the suggested changes were incorporated into the robot behavior with minimal effort, just by modifying the RBD.

d) Improved testing: In robotic applications, it is usually difficult to resolve all software issues in the laboratory, and a substantial period of field testing is required to correct all errors. Due to the separation of the RBD and the BEE, it was possible to handle errors related to robot features and errors related to robot behavior separately. Most errors related to features were resolved in the laboratory, since those errors were confined to the BEE, which does not contain any behavior-specific data. On the other hand, behavior-specific errors were resolved during field trials. Fixing behavior-specific errors in the field was quick, as it did not involve any changes to the source code.

e) Software integration: The BEE is a complex and distributed software module, which was built by integrating several research software modules. Therefore, considerable time and effort were spent on BEE development and testing. However, during the development of the BEE software, engineers did not have to cater to all end-user requirements, since the BEE did not contain robot-behavior-specific code. Through the BEE and the RBD, end-user-specific requirements were separated from the requirements on robotic functions and features, which are engineering specific. This separation provides a way to separate the concerns of the engineering team and the SME team, which often dictates the requirements of the robot applications. In other words, requirements on robot behavior and the GUI can be delayed until they can be best identified, developed, and tested, i.e., during field testing and trials.

f) Cost of software errors: In software development, it is generally true that errors introduced at the beginning of the specification phase are likely to be detected only during the use of the product [22]. We also observed this to be true in socially assistive robot design. In particular, because the application domain is not well understood, errors in requirements are discovered only during field trials [6]. However, in the proposed architecture, since the robot behavior (RBD) is separated from its execution (BEE), user requirements are not required at the beginning, and therefore, user requirement errors are not introduced into the core software (BEE). User requirements are required only for the development of the RBD, and errors in these requirements can be identified through field testing and trials and can be quickly fixed without touching the source code.

The robot system and the software framework were evaluated through two case studies in two separate user trials in a retirement village in New Zealand. The first case study involved a single robot system in private apartments and public areas in an independent living apartment building. From the evaluation, it was observed that the software framework successfully shortened software iterations from weeks per iteration to a few hours per iteration and encouraged more interactions with SMEs and end users. Data collected from the development of around ten different robot scenarios showed that the development of the robot behavior was successfully separated from the integration of the subsystems.

The second case study provided evidence of the robustness and scalability of the software framework and the complete robot system. The ratings and feedback collected from the user trial showed that the robot system was well perceived and provided the healthcare services successfully over several months.


REFERENCES

[1] L. Boccanfuso and J. M. O'Kane, "Charlie: An adaptive robot design with hand and face tracking for use in autism therapy," Int. J. Social Robot., vol. 3, no. 4, pp. 337–347, Nov. 2011.

[2] Y. Yamaji, T. Miyake, Y. Yoshiike, P. R. S. Silva, and M. Okada, "STB: Child-dependent sociable trash box," Int. J. Social Robot., vol. 3, no. 4, pp. 359–370, Nov. 2011.

[3] W. Lutz, W. Sanderson, and S. Scherbov, "The coming acceleration of global population ageing," Nature, vol. 451, no. 7179, pp. 716–719, Feb. 2008, doi: 10.1038/nature06516.

[4] Establishing and Monitoring Benchmarks for Human Resources for Health: The Workforce Density Approach, World Health Organization, Department of Human Resources for Health, Geneva, Switzerland, 2008, no. 6.

[5] C. Jayawardena et al., "Deployment of a service robot to help older people," in Proc. IEEE/RSJ Int. Conf. IROS, Oct. 2010, pp. 5990–5995.

[6] C. Jayawardena et al., "Design, implementation and field tests of a socially assistive robot for the elderly: HealthBot version 2," in Proc. 4th IEEE RAS EMBS Int. Conf. BioRob, 2012, pp. 1837–1842.

[7] I. Kuo, C. Jayawardena, P. Tiwari, E. Broadbent, and B. MacDonald, "User identification for healthcare service robots: Multidisciplinary design for implementation of interactive services," in Social Robotics, vol. 6414, S. Ge, H. Li, J.-J. Cabibihan, and Y. Tan, Eds. Berlin, Germany: Springer-Verlag, 2010, ser. Lecture Notes in Computer Science, pp. 20–29.

[8] I.-H. Kuo, C. Jayawardena, E. Broadbent, and B. MacDonald, "Multidisciplinary design approach for implementation of interactive services," Int. J. Social Robot., vol. 3, no. 4, pp. 443–456, 2011. [Online]. Available: http://dx.doi.org/10.1007/s12369-011-0115-x

[9] K. Kushida et al., "Humanoid robot presentation through multimodal presentation markup language MPML-HR," in Proc. AAMAS Workshop Creating Bonds Humanoids, 2005, pp. 23–29.

[10] Y. Nishimura et al., "A markup language for describing interactive humanoid robot presentations," in Proc. 12th Int. Conf. Intell. User Interfaces, 2007, pp. 333–336.

[11] T. Kanda et al., "Development of Robovie as a platform for everyday-robot research," Electron. Commun. Japan (Part III: Fundam. Electron. Sci.), vol. 87, no. 4, pp. 55–65, 2004.

[12] Q. A. Le and C. Pelachaud, "Generating co-speech gestures for the humanoid robot Nao through BML," in Proc. Gesture Sign Language Human-Comput. Interaction Embodied Commun., 2012, pp. 228–237.

[13] H. Prendinger, S. Descamps, and M. Ishizuka, "MPML: A markup language for controlling the behavior of life-like characters," J. Visual Languages Comput., vol. 15, no. 2, pp. 183–203, 2004.

[14] D. Glas, S. Satake, T. Kanda, and N. Hagita, "An interaction design framework for social robots," Robot., Sci. Syst., vol. 7, p. 89, 2012.

[15] K. Tsui, M. Desai, H. Yanco, and C. Uhlik, "Exploring use cases for telepresence robots," in Proc. 6th ACM/IEEE Int. Conf. HRI, 2011, pp. 11–18.

[16] R. Tamagawa, C. I. Watson, I. H. Kuo, B. A. MacDonald, and E. Broadbent, "The effects of synthesized voice accents on user perceptions of robots," Int. J. Social Robot., vol. 3, no. 3, pp. 253–262, 2011.

[17] A. A. Abdelhamid, W. H. Abdulla, and B. A. MacDonald, "RoboASR: A dynamic speech recognition system for service robots," in Social Robotics. Berlin, Germany: Springer-Verlag, 2012, pp. 485–495.

[18] P. Tiwari et al., "Feasibility study of a robotic medication assistant for the elderly," in Proc. AUIC, 2011, pp. 57–66.

[19] C. Datta, C. Jayawardena, I. H. Kuo, and B. A. MacDonald, "RoboStudio: A visual programming environment for rapid authoring and customization of complex services on a personal service robot," in Proc. IEEE/RSJ Int. Conf. IROS, 2012, pp. 2352–2357.

[20] C. Datta, B. MacDonald, C. Jayawardena, and I.-H. Kuo, "Programming behaviour of a personal service robot with application to healthcare," in Social Robotics, vol. 7621, S. Ge, O. Khatib, J.-J. Cabibihan, R. Simmons, and M.-A. Williams, Eds. Berlin, Germany: Springer-Verlag, 2012, ser. Lecture Notes in Computer Science, pp. 228–237.

[21] R. Stafford, B. MacDonald, C. Jayawardena, D. Wegner, and E. Broadbent, "Does the robot have a mind? Mind perception and attitudes towards robots predict use of an eldercare robot," Int. J. Social Robot., vol. 6, no. 1, pp. 17–32, Jan. 2014. [Online]. Available: http://dx.doi.org/10.1007/s12369-013-0186-y

[22] B. Cohen, The Specification of Complex Systems. Reading, MA, USA: Addison-Wesley, 1986.

Chandimal Jayawardena (M'00–SM'13) received the B.Sc.Eng. (Hons.) and M.Eng. degrees in electronic and telecommunication engineering from the University of Moratuwa, Moratuwa, Sri Lanka, and the Ph.D. degree in robotics and intelligent systems from Saga University, Saga, Japan.

From 1999 to 2001, he was with Sri Lankan Airlines, and from 2001 to 2003, he was with Sri Lanka Telecom as an Engineer. In 2006, he joined the Sri Lanka Institute of Information Technology, Malabe, Sri Lanka, as a Senior Lecturer. In 2009, he joined the University of Auckland, Auckland, New Zealand, as a Research Fellow. He is currently with the Department of Computing, Unitec Institute of Technology, Auckland, as a Senior Lecturer. His research interests include human–robot interaction, software engineering for robotics, and machine intelligence.

I-Han Kuo received the Ph.D. degree in electrical and computer engineering from the University of Auckland, Auckland, New Zealand.

He is currently a Lecturer with the Department of Computing, Unitec Institute of Technology, Auckland. His research interests include human–robot interaction and software engineering for robotics.

Elizabeth Broadbent received the B.E. degree in electrical and electronic engineering from the University of Canterbury, Christchurch, New Zealand, and the Ph.D. degree in health psychology from the University of Auckland, Auckland, New Zealand.

She is currently a Senior Lecturer in health psychology with the Faculty of Medical and Health Sciences, University of Auckland. Her research interests include the effects of psychological stress on health, how psychological interventions can improve health outcomes, and how humans interact with robots.

Bruce A. MacDonald (SM'06) received the B.E. (first-class) and Ph.D. degrees from the University of Canterbury, Christchurch, New Zealand.

He spent ten years in the Department of Computer Science, University of Calgary, Calgary, AB, Canada, and then returned to New Zealand in 1995, joining the Department of Electrical and Computer Engineering, University of Auckland, Auckland, New Zealand. He is the Director of the Robotics Laboratory. His long-term goal is to design intelligent robotic assistants to improve the quality of peoples' lives. His research interests include human–robot interaction and robot programming systems, with applications in areas such as healthcare and agriculture.