Towards designing more accessible interactions of older people with digital TV


PAGE 1 SIGACCESS NEWSLETTER, ISSUE 99, JAN 2011

ISSUE 99, January 2011

Accessibility and Computing

A regular publication of the ACM Special

Interest Group on Accessible Computing

Inside This Issue

1 A Note from the Editor

2 SIGACCESS Officers and Information

3 Overview of ASSETS2010 Doctoral Consortium

4 Online Mathematics Manipulation for the

Visually Impaired

9 Inclusive Technologies for Enhancing the

Accessibility of Digital Television

13 REACT: Intelligent Authoring of Social Skills

Instructional Modules for Adolescents with

High-Functioning Autism

24 Towards Designing More Accessible

Interactions of Older People with Digital TV

30 Promoting Sharing Behaviours in Children

through the Use of a Customised Novel

Computer System

37 Accessible Design Through Shared User

Modelling

43 Generating Efficient Feedback and Enabling

Interaction in Virtual Worlds for Users with Visual

Impairments

47 The Automation of Spinal Non-manual

Signals in American Sign Language

50 Developing a Phoneme-based Talking

Joystick for Nonspeaking Individuals

55 Immersive Virtual Exercise Environment for

People with Lower Body Disabilities

60 Accessible Electronic Health Record:

Developing a Research Agenda

A Note from the Editor

Dear SIGACCESS member:

Welcome to the January issue of the SIGACCESS

newsletter. This issue mainly reports the research

work of the doctoral students who participated in

the ASSETS 2010 Doctoral Consortium (Orlando,

FL). Dr. Matt Huenerfauth provided an overview

of the consortium. The proposed dissertation work

of 10 doctoral students was presented starting on

page 4. In addition, this issue presents a report

from the Workshop 'Accessible Electronic Health

Record: Developing a Research Agenda' written

by Dr. Andrew Sears and Dr. Vicki Hanson.

Jinjuan Heidi Feng

Newsletter editor


SIGACCESS Officers and Information

Chairperson

Andrew Sears

Information Systems Dept.

UMBC

1000 Hilltop Circle

Baltimore, MD 21250, USA.

+1-410-455-3883 (work)

[email protected]

Vice-chairperson

Enrico Pontelli

Computer Science Dept.

New Mexico State University

Las Cruces, NM 88003, USA.

[email protected]

Secretary/Treasurer

Shari Trewin

IBM T. J. Watson Research Center

19 Skyline Drive,

Hawthorne, NY 10532, USA.

[email protected]

Newsletter Editor

Jinjuan Heidi Feng

Computer and Information Sciences Dept.

Towson University

7800 York Road

Towson MD 21252, USA

+1 410 704 3463

[email protected]

Who we are

SIGACCESS is a special interest group of

ACM. The SIGACCESS Newsletter is a regular

online publication of SIGACCESS. We

encourage a wide variety of contributions,

such as: letters to the editor, technical

papers, short reports, reviews of papers or

products, abstracts, book reviews,

conference reports and/or announcements,

interesting web page URLs, local activity

reports, etc. Actually, we solicit almost

anything of interest to our readers.

Material may be reproduced from the

Newsletter for non-commercial use with

credit to the author and SIGACCESS.

Deadlines will be announced through

relevant mailing lists one month before

publication dates.

We encourage submissions as word-

processor files, text files, or e-mail. Postscript

or PDF files may be used if layout is

important. Ask the editor if in doubt.

Finally, you may publish your work here

before submitting it elsewhere. We are a

very informal forum for sharing ideas with

others who have common interests.

Anyone interested in editing a special issue

on an appropriate topic should contact the

editor.


ASSETS 2010 Doctoral Consortium

On Sunday, October 24, 2010, the day before the ASSETS 2010 conference in Orlando,

Florida, twelve Ph.D. students gave presentations of their research projects to each other

and to a panel of distinguished researchers including Jinjuan Heidi Feng (Towson

University), Simon Harper (University of Manchester), Matt Huenerfauth (City University of

New York), Sri Kurniawan (UC Santa Cruz), and Kathleen McCoy (University of Delaware).

Matt Huenerfauth served as the chair and organizer of the ASSETS 2010 Doctoral

Consortium.

During the event, the participating students received feedback from the faculty panelists; the purpose of this feedback was to help the students understand and articulate how their work was positioned relative to other research on access technologies, universal access, and universal usability. The feedback also addressed whether the students' topics were

adequately focused for thesis research projects, whether their methods were correctly

chosen and applied, and whether their results were being appropriately analyzed and

presented.

The participating students also attended the ASSETS conference for the next three days,

and they presented their work during a poster session at the conference that was

dedicated specifically to the work of the Doctoral Consortium participants. This allowed

the attendees to interact with, and ask questions of, the Doctoral Consortium participants,

and it allowed the students the opportunity to get feedback from the broader ASSETS

community.

Some comments from the student participants included:

"The doctoral consortium helped me to know people that are working in similar research areas. Also it helped me to get to know other students in the field and learn about what they are working on. I also received a lot of great feedback about my work. It helped me to identify new literature sources related to my work, to identify weak areas of my work, and to think about new things to explore."

"The doctoral consortium was a good opportunity to think of various ways to approach my research topic. Also, the act of preparing my short presentation was a valuable way to organize my work, think about current problems, and consider potential solutions."

"The doctoral consortium provided me with guidance about which areas of my research I should focus on – rather than trying to take on the world!"

"Through the doctoral consortium, I have received lots of useful advice on new evaluation methods that I can try in my research and issues to consider – e.g., user group's abilities, screening criteria for studies, etc. This will assist me as I design my evaluation plan."

"The constructive feedback during the doctoral consortium shaped my ideas about my research topic. I also found the chance to discuss different research subjects with other participating students during the free times during the day very beneficial."

The ASSETS 2010 Doctoral Consortium was funded by the National Science Foundation,

Grant Number IIS-1035382.


Online Mathematics Manipulation for the Visually Impaired

Nancy Alajarmeh

New Mexico State University

Department of Computer Science

Las Cruces, NM 88003

[email protected]

Abstract

Mathematics manipulation accessibility is a newly introduced research area that extends general approaches to mathematics accessibility on the web. This project goes beyond existing work on accessible mathematics rendering: the outcome we aim for is to allow visually impaired students to exercise some manipulation controls over rendered algebraic expressions, e.g. solving an equation, and to keep a complete history of the steps they perform to reach what they are looking for, exactly as if they had been given a pencil and a piece of paper to do whatever they see appropriate to arrive at a final simplified equation or result.

Categories and Subject Descriptors

H.5.4 [Information Systems]: Information Interfaces and Presentation—User Issues.

General Terms

Design, Human Factors.

Keywords

Accessibility, Mathematics manipulation, Visual impairments.

Introduction

As foundational knowledge at every educational stage, mathematics is important for everybody, especially learners, since it is a core part of many areas of science; learning mathematics up to a certain level is considered a must for everyone, regardless of what disabilities they have.

Unlike text, which is linear in nature and has well-unified standards, mathematical notation cannot be implemented and rendered in a straightforward, standard way, especially for visually impaired individuals. The complexity of expressions, their two-dimensionality and non-linearity, the differing semantics and notations, the variety of ways to express one idea, and their spatial nature make mathematics challenging to represent, interpret, access, and manipulate for visually impaired individuals. Figure 1 illustrates how mathematical information is more challenging than text for students with visual impairments.


Figure 1: Mathematics 2-D complexity, extra symbols, and visual nature

To address the issues of mathematics availability and accessibility, a great deal of work has already been done to provide powerful solutions that overcome these barriers. Mathematics went through many phases before it became available and accessible to sighted and unsighted users. MathML, the W3C-recommended standard for bringing mathematics to the web, is one aid for rendering mathematics online [4]. As a result, mathematics became available online in a convenient format that can be neatly and easily rendered.

But availability is not everything on the way to usability: many categories of users were unable to make use of mathematics until great efforts were made to turn mathematics accessible for visually impaired individuals. Accessibility approaches are mainly classified into static approaches, which use Braille to render mathematics, and dynamic approaches, in which audio is used for rendering.

Math2Braille [Crombie et al. 2004], LAMBDA [Edwards et al. 2006], and MAVIS [Karshmer et al. 1998] are examples of static approaches, for which several new Braille notations were created. AsTeR [Raman 1994], MathGenie [Stanley and Karshmer 2006], and MathPlayer [Soiffer 2005] are examples of dynamic approaches, which deploy audio in rendering. InftyEditor [Suzuki et al. 2003], the Infty converters [Suzuki et al. 2003], and WinTriangle [Gardner et al. 2002] are tools for creating and editing mathematics. Another useful offline tool is LiveMath Maker, an application that enables users to edit formulas offline and request that they be solved or their graphs sketched, where applicable; in a number of steps, LiveMath Maker shows how the final result was reached.

The goal of this project is to develop a system that enables students who are visually impaired to access online mathematical content and to perform the algebraic manipulations required to solve algebraic problems. Unlike LiveMath and other online creation and editing tools, our proposed approach is not meant to give results; instead, it allows visually impaired individuals to practice algebra and solving techniques without any guidance or results provided by the system. Moreover, it keeps a history of all the manipulation steps carried out so far by the visually impaired student, so that they can easily go back and forth to review what has been done, which helps them build a full, clear picture of what they have been doing.

Such work gives visually impaired individuals the ability to manipulate mathematics on the web. Just like books, a webpage may contain equations that are not clearly solved, or abstract equations that support the topic under consideration. For such pages, it is highly desirable to make some manipulation of these mathematical expressions possible, e.g. solving an equation. The easy-to-access history of manipulations lets visually impaired students verify what they are doing, see how they handled the expression across the consecutive manipulation steps they made, and check what expression they obtained at the end of each step.


The New Approach

Structure

Starting from navigating and browsing a webpage, the system first detects the mathematical components included in that page by parsing their MathML code. Next, the detected components are rendered in a separate space. With manipulation functions and accessibility support, the system is then presented to the visually impaired student to work on and experience. Figure 2 shows the system's flow of operations.

Figure 2: The logical flow of functions in the system being introduced
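The detection step at the start of this flow can be sketched in a few lines. This is only an illustrative sketch, not the authors' implementation: it assumes a well-formed XHTML page with embedded Presentation MathML, and simply collects every element in the MathML namespace named math using Python's standard XML parser.

```python
# Illustrative sketch (not the authors' code): detect MathML content
# in a page by collecting every <math> element for later rendering.
import xml.etree.ElementTree as ET

MATHML_NS = "http://www.w3.org/1998/Math/MathML"

def detect_math(page_xml: str):
    """Return all MathML <math> subtrees found in a well-formed page."""
    root = ET.fromstring(page_xml)
    return list(root.iter(f"{{{MATHML_NS}}}math"))

# A toy page containing one equation, x + 3 = 7, marked up in MathML.
page = f"""<html xmlns:m="{MATHML_NS}">
  <body>
    <p>Solve for x:</p>
    <m:math>
      <m:mi>x</m:mi><m:mo>+</m:mo><m:mn>3</m:mn>
      <m:mo>=</m:mo><m:mn>7</m:mn>
    </m:math>
  </body>
</html>"""

found = detect_math(page)
print(len(found))  # -> 1 equation detected
```

In the real system each detected subtree would then be handed to the separate manipulation space rather than merely counted.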

To enable algebraic manipulations for visually impaired students, the basic operations one needs to solve an equation are taken into consideration. It is important to stress again that no solution hints or guidance will be given, either implicitly or explicitly, since the approach acts just like giving a student a piece of paper and a pencil and asking them to solve an equation; no directions towards the right operations will be included. For example, visually impaired students will not be given an operation called "move to the other side of the equation" (whether the term in question is being added or subtracted), since this would be an indirect aid: many students forget to change the sign of a term moved to the other side, and some users do not even know that a term should be moved from one side to the other. Based on the same argument, some manipulation operations were deliberately excluded, e.g. multiplying a whole side by a factor or term, as many users make errors when doing that; instead, the system will only allow the student to multiply the factor by each term of the side under consideration, one after another, as they are supposed to do. Accessibility is supported through audio feedback, access hot keys, and the MathPlayer plug-in [6].
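The term-by-term restriction described above can be illustrated with a toy model (our own illustration, with hypothetical names, not the authors' code): instead of one "multiply the whole side" action, the student applies the factor to each term in turn.

```python
# Toy illustration of the design choice: the system exposes only a
# per-term multiply, so the student must apply the factor term by term.
def multiply_term(terms, index, factor):
    """Multiply a single term of one side of the equation by a factor."""
    updated = list(terms)
    updated[index] = f"{factor}*({terms[index]})"
    return updated

side = ["2x", "3"]              # left-hand side: 2x + 3
side = multiply_term(side, 0, 5)  # student multiplies the first term
side = multiply_term(side, 1, 5)  # ...then the second, as required
print(" + ".join(side))           # -> 5*(2x) + 5*(3)
```

The point of the restriction is pedagogical: the student, not the system, is responsible for remembering to apply the factor to every term.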

Initial Design and Functions of the System

The system includes a plug-in that parses MathML source code to capture any mathematical content on the page being navigated. After the mathematical-content detection step, and under the visually impaired student's control, a new space opens that enables the student to manipulate the mathematical expressions freely. Moreover, TTS and audio feedback are used to guide the student through the content of the page. In addition, access hot keys will be provided for ease of access, to help visually impaired users carry out manipulations. Initially, the following operations (distinguished in bold), which are mainly derived from an algebraic perspective, are considered:


Delete or Eliminate: remove a term from the whole expression being manipulated. When we speak of a term, it can be a factor, variable, number, operator, etc.

Combine like terms: collect terms that fall into the same category.

Cancel terms against each other.

Enclose: make a set of consecutive terms look like one unit.

Add or Subtract an existing term, which is clear and straightforward in algebraic manipulation.

Add or Subtract a term that is not part of the expression being manipulated.

Replace and Insert a term: these operations are commonly used in the context of manipulation, although the terminology is not directly drawn from the formal mathematical definition of operations for doing and manipulating mathematics.

Multiply or Divide by an existing or non-existing term, analogous to the Add and Subtract operations.

Factor, Expand, and Apply roots or powers are also supported by the system.

Work on selected parts: an option provided so that the overall manipulation becomes more manageable.

Save for later: keep interesting components or results for later use; in other words, bookmark desired components.

Undo: go back one step from the step currently in focus, so that visually impaired students can correct mistakes made in any step prior to the active one.

Mouse and keyboard navigation are supported; later, voice commands and haptic-device navigation and control might be supported.
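The step history and the Undo operation described above can be sketched with a small data structure. This is an illustrative sketch under our own naming, not the authors' implementation: expressions are modelled simply as strings, and each manipulation appends a labelled snapshot that the student can step back through.

```python
# Illustrative sketch (not the authors' code): a manipulation history
# that records every expression state so the student can review steps
# and undo back to any earlier one.
class ManipulationHistory:
    def __init__(self, expression: str):
        self.steps = [("start", expression)]  # full trail of snapshots
        self.pos = 0                          # index of the step in focus

    def apply(self, description: str, new_expression: str):
        """Record a manipulation, discarding any previously undone steps."""
        self.steps = self.steps[: self.pos + 1]
        self.steps.append((description, new_expression))
        self.pos += 1

    def undo(self) -> str:
        """Move focus back one step, as the Undo operation specifies."""
        if self.pos > 0:
            self.pos -= 1
        return self.steps[self.pos][1]

    def current(self) -> str:
        return self.steps[self.pos][1]

h = ManipulationHistory("2x + 3 = 7")
h.apply("subtract 3 from each side", "2x = 4")
h.apply("divide each term by 2", "x = 2")
print(h.current())  # -> x = 2
print(h.undo())     # -> 2x = 4
```

Keeping the descriptions alongside the snapshots is what allows the system to read back, via TTS, how each expression was reached.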

Future Work

While studying the issues of accessibility, availability, and usability for people with different kinds of disabilities, we found that despite the massive amount of work done in the field in general, and in mathematics accessibility for visually impaired students in particular, there is still an obvious lack of attention to enhancing the accessibility of mathematics manipulation for students with visual impairments. This work is the starting point of an extensible line of research addressing the lack of frameworks that help students who are visually impaired overcome the barriers to accessing and manipulating mathematical content available online. Beyond the options and features the current framework provides, there is great interest in covering more kinds of mathematical content, e.g. geometry. In addition, further modalities, such as haptic devices, are being considered for inclusion in the framework. As this framework targets middle-school-level mathematics, a later, bigger step is to provide a complete framework for all school stages.

More attention will be paid to enhancing the efficiency and usability of the framework. Because they are very valuable to the development process, more user groups will be involved to help better shape the features included, or to be included later.

Conclusions

This paper briefly describes a system framework being developed to enhance the accessibility of mathematics manipulation for students with visual impairments. The system is meant to make doing and manipulating mathematics available and accessible in a convenient manner, allowing students who are visually impaired to practice and do more mathematics than they currently can, given the limitations they face in using the available tools made for this purpose. With such an approach, mathematics manipulation, an essential skill for many fields of study and many careers, is made more accessible, which in turn benefits visually impaired students.

References


Crombie, D., Lenoir, R., McKenzie, N., and Barker, A. math2braille: Opening access to mathematics. Proceedings of the International Conference on Computers Helping People with Special Needs (ICCHP 2004), Paris, France, July 7-9, 2004. LNCS 3118, Springer, Berlin, 2004, 670-677.

Edwards, A., McCartney, H., and Fogarolo, F. Lambda: a multimodal approach to making mathematics accessible to blind students. Proceedings of the 8th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2006), Portland, Oregon, USA, October 23-25, 2006. ACM, 2006, 48-54.

Gardner, J., Stewart, R., Francioni, J., and Smith, A. Tiger, AGC, and WinTriangle, removing the barrier to SEM education. CSUN International Conference on Technology and Persons with Disabilities, Los Angeles, CA, 2002.

Karshmer, A.I., Gupta, G., Geiger, S., and Weaver, C. Reading and writing mathematics: the MAVIS project. Proceedings of the Third International ACM Conference on Assistive Technologies, ACM Press, Marina del Rey, California, USA, 1998, 136-143.

Raman, T.V. Audio Systems for Technical Reading. Ph.D. thesis, Department of Computer Science, Cornell University, NY, USA, 1994.

Soiffer, N. MathPlayer: web-based math accessibility. Proceedings of the 7th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2005), ACM Press, Baltimore, Maryland, USA, 2005.

Stanley, P.B. and Karshmer, A.I. Translating MathML into Nemeth Braille Code. Proceedings of the International Conference on Computers Helping People with Special Needs (ICCHP 2006), Linz, Austria, July 12-14, 2006. LNCS 4061, Springer, Berlin, 2006, 1175-1182.

Suzuki, M., Tamari, F., Fukuda, R., Uchida, S., and Kanahori, T. INFTY: an integrated OCR system for mathematical documents. Proceedings of the ACM Symposium on Document Engineering (DocEng 2003), Grenoble, France, 2003, 95-104.

About the Author

I am Nancy Alajarmeh, currently enrolled in the CS Ph.D. program at NMSU. I obtained my bachelor's and master's degrees in computer science in Jordan, in 2004 and 2007 respectively. I have a strong background in computer science, and I chose this field of study because of my interest and my strong belief that life can be made much easier with the aid of computer systems and innovations. During my studies I worked in many areas of research, such as software engineering and systems analysis and design, until recently, when I started researching in the field of Human-Computer Interaction under the supervision of Dr. Enrico Pontelli, who opened my eyes to this wonderful area of research.


Inclusive Technologies for Enhancing the Accessibility of Digital Television

Amritpal Singh Bhachu

University of Dundee

School of Computing

Dundee, DD1 4HN

[email protected]

Abstract

Digital Television (DTV) is full of interactive content that was not previously available

through analogue television. However, the interactive content can be complex to use,

which in turn creates obstacles for users, and in particular older adults, to access this

content. This project work looks into ways in which DTV content can be made accessible

for older adults through novel DTV menu interface design and alternative input types.

Proposed Problem

Digital Television (DTV) has brought with its introduction a multitude of new channels,

interactive services and features. Included in these services are the Electronic Programme

Guide (EPG), Personal Video Recorder (PVR), Video on Demand (VoD), Red Button services

and Digital Text. Future development will see Internet functionality brought to Set Top Boxes

(STBs). This additional functionality has the potential to enhance the TV experience for

viewers as well as increasing audiences for DTV services.

However, the additional functionality brings with it an increase in complexity (Kurniawan

2007) and an increase in the physical and mental demands put on the user to operate

these functions. This is at a time where the older population is growing throughout modern

day society (Centre of Economic Policy 2008), and because of age related decline of

physical and mental attributes, this group in particular will find DTV a challenge to use.

There is therefore a need to make DTV content more accessible to older adults.

Motivation Behind Research

Older Adults

As we grow older, we develop multiple minor impairments (Gregor et al. 2002; Keates and Clarkson 2004). Our eyesight is affected: by the age of 65 vision is severely impaired, although in most cases it can be corrected to 20/40 for at least 80% of over-65s. Motor abilities decline in both the control and the speed of movement, and there may be a loss of kinesthetic sense. Our working memory, where we temporarily keep information active while we work on it or until it is used, is also affected by age. Older adults are likely to perform less well under dual-task conditions and to have difficulty developing new automatic processes (Fisk et al. 2009).


Definition of Impairments

It must be stressed that throughout this document, when impairments are discussed, the reference is to impairments caused by age-related decline. It must also be recognized that, despite these generalizations, older adults are not a homogeneous group: their impairments progress at different rates, and each person has their own ways of dealing with their issues (coping strategies) and differing life experiences.

The Remote Control and its Issues

Remote controls have evolved as television has evolved. The remote control can be looked upon as a second interface in the use of a TV (Springett and Griffiths 2007). The issues posed by modern-day remote controls apply to all users, though because of age-related decline they are magnified for older adults. There are now more buttons on a remote control than before, needed to control the additional functionality that DTV offers. The growth in buttons adds complexity to the remote control, with many of them rarely used. Buttons are also likely to be smaller, and often have multiple functions attached to them. The typical remote control also constrains interface design because of its conventional navigational inputs.

For those who have visual impairments, the buttons can be difficult to find with labeling

hard to read. For those with motor impairments, it can be a problem to press the required

button. There are also issues for those with cognitive impairments, as remembering the

function of each button can be a challenge. This is increased as the user constantly

switches focus between the TV screen and the remote control while trying to complete a

task, and further complicated if the user requires different corrective lenses for each

interface (Carmichael et al. 2006).

The EPG and its Issues

The EPG was developed to give viewers an overview of what was on DTV without having to 'channel surf,' as this had become difficult with the introduction of many new channels.

However, in most cases the EPG is designed from a computing perspective, taking on a file

system type layout. This is inappropriate for many reasons. The screen resolution of a TV has

traditionally been smaller than that of a Personal Computer (PC) monitor, although modern

TV sets do balance this out. There are limitations on screen space available due to the low

resolution and the inclusion of a 'safe zone' for display cut-off during design (Cooper 2008).

The viewer is also likely to sit further away from the TV screen than a PC monitor and will

rarely sit directly in front of the screen with an angle being introduced (Carmichael 1999).

Television watching is also considered to be a ‗sit-back‘ activity compared to PC usage,

which is a ‗lean-forward‘ activity.

For older adults with visual impairments, DTV menu text can be difficult to make out and read, and with too much information on screen, older adults can become cognitively overloaded and confused by the options available. It can also be laborious for older adults to learn and remember the sequence of steps required to achieve a goal, putting additional strain on their memory.

Progress So Far

Research work began in June 2009 with an extensive literature review, which is an ongoing process. In October/November 2009, focus groups were conducted with older adults to


elicit their opinions on the issues that they encounter with DTV. In general, the group found

no benefit to using the EPG, and so used paper guides or memorized channel numbers. It

was decided at this point that a new input format may aid accessibility for older adults,

and focus was turned to a pointer based remote. This allows the user to keep attention on

the TV screen without the need to look down at the controller so often. As a result, user

testing was conducted with a Wii running BBC iPlayer to establish if this was a valid

approach.

Participants were recruited as a group of younger adults (under 38) and a group of older

adults (over 65). Overall, both groups felt that this was a system that they could get used

to. The user attention was felt to be more on the TV screen than with traditional remotes

and the on screen controls were found to be, in the main, easily usable. The main observed

differences between the groups were that although both took time to get used to the

system, the older adults did take slightly longer, and in some cases required more

instruction. The older adults were also more likely to get stuck in an area trying to solve a

problem and would be less likely to experiment and find other solutions.

Future Work

Future work will focus on developing DTV menu systems, and in particular the EPG, to give

greater accessibility to older adults. Quantitative experiments will be conducted to test

how different text sizes and the use of zoom functions affect how older adults interact with

DTV. Experiments will also look at how images can be used to represent programming, with a comparison made between the use of images and the use of text for this purpose.

Pointer-based control devices will be used in the experiments. These are less likely to restrict designs in the way that traditional remote controls do, and they keep the user's focus mainly on the TV screen. The flow through the sequence of required steps will also be

experimented with after further reading into novel interface designs for older adults, with

an aim to reduce the cognitive load on the user while also addressing the visual and motor

issues faced by older adults when using DTV. For this, comparison experiments will be

conducted with any developed novel designs and the traditional DTV designs.

Measurements will include the time taken to complete tasks and use will be made of the

NASA Task Load Index.
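For reference, the NASA Task Load Index mentioned above combines six subjective workload ratings (mental, physical, and temporal demand, performance, effort, and frustration), each on a 0-100 scale; in the weighted variant, each dimension's weight is the number of times it was chosen across the 15 pairwise comparisons. A minimal sketch of that computation (our own illustration with made-up example ratings, not part of the planned experiments):

```python
# Illustrative sketch: computing a weighted NASA-TLX workload score.
# Ratings are 0-100; weights count how often each dimension won one of
# the 15 pairwise comparisons, so the weights must sum to 15.
def nasa_tlx(ratings: dict, weights: dict) -> float:
    assert sum(weights.values()) == 15, "weights come from 15 comparisons"
    return sum(ratings[d] * weights[d] for d in ratings) / 15.0

# Hypothetical ratings and weights for one participant and task.
ratings = {"mental": 70, "physical": 30, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 50}
weights = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 2}

print(round(nasa_tlx(ratings, weights), 2))  # -> 56.67
```

The unweighted ("raw TLX") variant simply averages the six ratings instead.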

This work will contribute to the accessibility of DTV for older adults. User centered design

leading to older adults becoming more comfortable in a DTV environment may make

content easier to access and more likely to be frequently used.

Doctoral Consortium

By attending the doctoral consortium, I look to get valuable feedback on my work to date

as well as feedback on my future work and research methods. The constructive comments

from experts in the field of accessibility will help further focus and progress my research.

Acknowledgements

Thanks go to the BBC and EPSRC for providing support and funding for the PhD research

being conducted.


REFERENCES

CARMICHAEL, A. 1999. Style Guide for the Design of Interactive Television Services for

Elderly Viewers. Independent Television Commission, Kings Worthy Court, Winchester.

CARMICHAEL, A., RICE, M., SLOAN, D. AND GREGOR, P. 2006. Digital switchover or digital

divide: a prognosis for usable and accessible interactive digital television in the UK.

Univers. Access Inf. Soc. 4, 400-416.

CENTRE FOR ECONOMIC POLICY RESEARCH. The Growing Elderly Population. Online. Accessed

April 2008. http://www.cepr.org/pubs/bulletin/meets/416.htm

COOPER, W. 2008. The interactive television user experience so far. In Proceedings of the

1st International Conference on Designing Interactive User Experiences for TV and Video,

Silicon Valley, California, USA, 2008. ACM, 133-142.

FISK, A.D., ROGERS, W.A., CHARNESS, N., CZAJA, S.J. AND SHARIT, J. 2009. Designing for

Older Adults: Principles and Creative Human Factors Approaches.

GREGOR, P., NEWELL, A.F. AND ZAJICEK, M. 2002. Designing for dynamic diversity: interfaces

for older people. In Proceedings of the Fifth International ACM Conference on Assistive

Technologies, Edinburgh, Scotland, 2002. ACM, 151-156.

KEATES, S. AND CLARKSON, P.J. 2004. Assessing the accessibility of digital television set-top

boxes. In Designing a More Inclusive World, 183-192.

KURNIAWAN, S. 2007. Older Women and Digital TV: A Case Study. In Proceedings of

ASSETS '07, 2007, 251-252.

SPRINGETT, M.V. AND GRIFFITHS, R.N. 2007. Accessibility of Interactive Television for Users

with Low Vision: Learning from the Web. In EuroITV 2007, P. Cesar et al., Eds., 76-85.

About the Author

I completed my undergraduate degree in Applied Computing at The

University of Dundee in 2007. From 2007 to 2009 I worked as a Research

Assistant on the SAPHE and EduWear projects. I started my PhD at Dundee in

June 2009. The working title for my work is 'Inclusive Technologies for

Enhancing the Accessibility of Digital Television.' It is BBC- and EPSRC-sponsored

research into how digital television can be made more

accessible to all. In particular the research is focused on older adults. As

part of this work I am an associate member of the Digital Economy Hub.


REACT: Intelligent Authoring of Social Skills Instructional Modules for Adolescents with High-Functioning Autism

Fatima A Boujarwah, Mark O Riedl, Gregory D Abowd, Rosa I Arriaga School of Interactive Computing, Georgia Institute of Technology

85 5th St. NW, Atlanta, GA, 30308

{fatima, riedl, abowd, arriaga}@gatech.edu

Abstract

Difficulties in social skills are generally considered defining characteristics of High-

functioning Autism (HFA). These difficulties interfere with the educational experiences and

quality of life of individuals with HFA, and interventions must be highly individualized to be

effective. For these reasons, we are interested in exploring the way technologies may play

a role in assisting individuals with the acquisition of social skills. In this article we present an

approach to developing an authoring tool designed to help parents and other caregivers

to create social problem solving skills instructional modules. The authoring tool, which we

call REACT, will help non-experts create interactive social scenarios personalized to

individual adolescents with HFA. The technology will assist by advising the author on

instructional strategies, and on scenario creation.

Introduction

Adolescents with high-functioning autism (HFA) are a heterogeneous population with a

wide range of needs and abilities. Difficulties in social skills, however, are generally

considered defining characteristics of HFA. Because deficits in socialization interfere with

the educational experiences and quality of life of individuals with HFA, and because

interventions must be highly individualized to be effective, we are interested in exploring

the way technologies may play a role in assisting individuals with the acquisition of social

skills. Indications are that our target population, adolescents with HFA, responds well to

computer-assisted instruction [Williams and Wright, 2002]. Furthermore, there has been a general

call for more technologies that specifically target social skills training [Putman and Chong,

2008].

We explore the use of interactive software to provide social skills training. We choose to

target adolescents because they are underrepresented with respect to applicable

therapies, they are more likely to have complex social skills needs, and research indicates

that they should be targeted for social skills intervention [Rao et al., 2008]. For example, an

adolescent with HFA may want to go to a movie theatre without the assistance of a

parent. Can a software module be developed to help that individual prepare for that

social context? Furthermore, can a system be developed that helps parents and

caregivers to author these modules themselves? Such a system would address one of the

most challenging aspects of teaching children with autism: the need for individualized

instruction for a highly heterogeneous population. These are questions that we would like to

answer with our research.

It is our goal to develop a system that can help teachers and caregivers author and share

instructional modules that adolescents with HFA can use to practice their social problem

solving skills. In an earlier study, a computer-based system called Refl-ex was built and tested

[Boujarwah et al., 2010]. Refl-ex is designed to allow the individual with autism to practice

these skills by experiencing social situations and choosing appropriate responses to

unexpected events. This study yielded promising results with respect to the effectiveness of


the instructional approaches used. In addition, the study logs showed that the participants

did not all struggle with the same social situations. These findings support existing

knowledge of the importance of individualizing interventions for this population of users.

We present an approach to creating an authoring tool designed to help parents, teachers,

and other caregivers to create Refl-ex instructional modules. Our approach to aiding

authors in creating these modules uses automated critiquing and collaborative problem

solving techniques. The authoring tool, which we call the Refl-ex Authoring and Critiquing Tool

(REACT), will help non-experts (i.e., authors who have little or no knowledge of

instructional strategies) create interactive social scenarios personalized to individual

adolescents with HFA. The technology will assist by advising the author on instructional

strategies, and on scenario creation. For clarity, throughout this document, the user of Refl-

ex will be referred to as the student, and the individual using REACT will be referred to as

the author.

This article is arranged as follows. We begin by presenting some background on autism and

describing our prior work on the Refl-ex instructional modules, which are the goal output of

REACT. Next, we present related work in the areas of computer-assisted instruction and

automated critiquing. Finally, we present our approach to creating REACT, and a plan for

its evaluation.

Autism Background and Prior Work

Autism Background

Kanner [1943] and Asperger [1944] are generally credited with first identifying and

describing individuals with autism in the 1940s. Today we have improved our

understanding and awareness of autism and recognize it as a spectrum, clinically referred

to as autism spectrum disorders (ASD). Individuals who are diagnosed with ASD, but do not

exhibit language impairments, are often referred to as having high-functioning autism

(HFA). Impaired social functioning is the central feature of HFA. A lack of social

competency can result in significant difficulties in daily living and academic achievement,

and in poor adult outcomes related to employment and social relationships [Howlin, 2003].

Researchers and educators have attempted to develop and implement interventions that

lead to social competency. The results of one recent meta-analysis, however, suggest that

current school-based interventions are minimally effective for children with autism [Bellini

et al., 2007]. In order to improve the status quo they recommend increasing the intensity or

frequency of the intervention, and developing interventions that address the individual

needs of the child. Our research provides the means to implement these changes; the

child can practice these skills as frequently and for as long as necessary via Refl-ex, and

REACT will enable the parent or other caregiver to easily create modules that address the

child‘s needs.

Social Skills Interventions

Social skills training interventions are an important part of the education of children with

HFA. Educators use a variety of techniques, often in combination, to teach these skills. In

Social Stories™, which is a commonly used paradigm, parents or teachers develop stories

that are related to some event in the child‘s life. Each story is meant to help the child to

learn appropriate behavior for the particular situation. These stories are written in a manner

that is instructive; however, they do not demand active involvement from the child [Reynhout and

Carter, 2009]. Refl-ex augments this approach by engaging the student in the creation of

the story and guiding them as they practice social skills.


Several interventions exist that incorporate technology. The Junior Detective Training

Program [Beaumont and Sofronoff, 2008], for instance, consists of group social skills training,

parent training, teacher hand-outs, and a computer game. The "I Can Problem-Solve"

program [Bernard-Opitz et al., 2001] is a software-based intervention used with children

between the ages of 5 and 9 in which they are presented with problem situations and

asked to verbally produce solutions. In the evaluations of both interventions the children

with ASD showed improvement as a result of the intervention. These findings reinforce the

importance of developing such software systems, and confirm that they are an effective

way to teach social skills. The student cannot use these systems independently, however.

Refl-ex enables this independent practice.

Other experimental technological approaches to ASD intervention include virtual reality

simulations, and virtual peers for language learning, an important aspect of social

interaction. In several studies researchers use virtual animated characters to invite

language learning [Tartaro and Cassell, 2008]. Researchers have also created virtual reality

environments designed to familiarize individuals with ASD with social settings [Parsons et al.,

2004]. This work involved learning to identify roles and procedures in a social environment.

Refl-ex differs by simulating the progression through a social situation in which the student

must exhibit social problem solving skills.

Prior Work

Refl-ex is designed such that each module presents the student with a real life social

situation in which an unexpected obstacle arises, using language and visuals inspired by

Social Stories™ [Gray, 1995]. These modules are the target output of REACT, and each has

two parts: an experience section and a reflection section. In the experience section the

student is guided as they navigate a social scenario and overcome an unexpected

obstacle. Then, in the reflection section, the student revisits the decisions they made during

the experience portion. In this work we will only address authoring the experience portion.

Throughout the module, an invisible recording system logs the student‘s interaction with the

system. This data-gathering feature allows the system to keep track of how the individual

progresses and particular difficulties they may be having. This information will be used to

augment the knowledge base of the critic in REACT thereby allowing it to advise the

parent or teacher of particular problem areas. Following is a description of the important

characteristics of the instructional modules.

The Experience Section

In the experience section Refl-ex presents the social scenario through text, audio narration,

and picture-book style images that correspond

to the specifics of the situation (fig. 1). This is

similar to the approach used in Social Stories™,

which uses cartoon figures and text-bubbles to

represent speech. One advantage of this visual

presentation is that it allows for the inclusion of

the meaningful information, and the exclusion

of distracting material. In addition, the system

presents the scenarios in the first person

perspective. This is intended to help the student

to identify with the character, and immerse

herself in the storyline. Individuals with HFA prefer environments with high degrees of

certainty, repetition, and predictability.

Figure 1. Rethink Screen in Refl-ex

Dealing with others in social settings introduces a high degree of uncertainty for those with autism. It

is common for individuals with HFA to rehearse situations beforehand. Unfortunately,

rehearsal is not always effective as those with HFA might learn cues specific to only one

environment or person [Heflin and Alberto, 2001]. Our visual design choices are meant to

help avoid the learning of incorrect cues by limiting the information in each picture.

Scenario Representation. Each scenario is represented as a narrative with multiple paths,

where a narrative is defined as a series of actions that describe how a situation unfolds. All

the paths together resemble a graph, or branching story, where nodes are scenes and

edges are branching points. The canonical branching story systems are Choose-Your-Own-

Adventure novels. However, recent research has resulted in a spate of computational

approaches to branching stories [Riedl et al., 2008]. Refl-ex can be considered to be an

interactive narrative where each possible narrative is based on productive, unproductive,

and counter-productive possible executions of social problem solving skills in specific

contexts. The student has to make a series of choices to proceed to the obstacle that is

introduced and ultimately successfully navigate the social situation. REACT will help the

author to create this interactive narrative by leading the author in the creation of narrative,

and decision pages.
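A branching story of this kind can be represented directly as a graph whose nodes are scenes and whose edges are choices at branching points. The sketch below is illustrative only; the scene ids and texts are invented, not taken from Refl-ex:

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    """A node in the branching story: narrative text plus outgoing choices."""
    text: str
    choices: dict = field(default_factory=dict)  # choice label -> next scene id

# A tiny hypothetical restaurant scenario.
story = {
    "enter":   Scene("You walk into the restaurant.", {"join the line": "line"}),
    "line":    Scene("You wait in line to order.", {"order food": "order"}),
    "order":   Scene("The cashier says your favorite meal is sold out.",
                     {"choose something else": "ok", "leave upset": "rethink"}),
    "ok":      Scene("You pick another meal and pay."),
    "rethink": Scene("Leaving might mean missing lunch. What else could you try?"),
}

def paths(story, node, trail=()):
    """Enumerate every complete path through the branching story."""
    trail = trail + (node,)
    if not story[node].choices:
        return [trail]          # leaf scene: one finished narrative
    result = []
    for nxt in story[node].choices.values():
        result += paths(story, nxt, trail)
    return result

print(len(paths(story, "enter")))  # two complete narratives in this toy graph
```

Each complete path corresponds to one productive, unproductive, or counter-productive execution of the social scenario.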

Errorless Learning. We follow an approach of errorless learning, which means that the

student is not allowed to fail. When the student makes a choice that is considered

unproductive or counter-productive the system explains the possible consequences of that

action without using negative language, and directs the student to rethink their choice (fig.

1). When the student returns to the decision point the undesirable choice that they have

already explored is grayed out to prevent it from being chosen again. In this way the

system provides immediate feedback and helps the student to correct their error. Errorless

learning is often used with individuals with HFA to avoid the possibility that they acquire

incorrect skills; individuals with HFA are extremely prone to repetition, so it is essential to

avoid reinforcing anything other than the desirable execution [Heflin and Alberto, 2001].

REACT will also aid the author in the creation of the rethink pages.
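The gray-out mechanic of errorless learning can be sketched as a decision point that disables each unproductive choice once it has been explored. The labels below are illustrative; this is a sketch of the idea, not the Refl-ex implementation:

```python
class DecisionPoint:
    """Decision screen that 'grays out' unproductive choices once explored."""

    def __init__(self, choices):
        # choices: label -> True if the choice is productive, False otherwise
        self.choices = choices
        self.explored = set()

    def available(self):
        """Choices still selectable (explored ones are grayed out)."""
        return [c for c in self.choices if c not in self.explored]

    def pick(self, label):
        if self.choices[label]:
            return "proceed"
        # Errorless learning: explain consequences without negative
        # language, disable the choice, and return the student to rethink.
        self.explored.add(label)
        return "rethink"

dp = DecisionPoint({"ask for help": True, "shout": False})
assert dp.pick("shout") == "rethink"
assert dp.available() == ["ask for help"]  # the explored choice is disabled
```

Because the undesirable option cannot be chosen twice, the student is always steered back toward the desirable execution.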

Related Work from Artificial Intelligence

Computer-Assisted Instruction

Computer-assisted instruction (CAI) refers to any use of software in the instruction of

students. Research in this area has evolved significantly in the more than 50 years since it

began, and has been shown to effectively augment traditional instruction and

interventions [Suppes and Morningstar, 1969; Anderson et al., 1995]. When the software

incorporates artificial intelligence to model real human tutoring practices, it is referred to as

an Intelligent Tutoring System (ITS). ITSs employ models of instructional content that specify

what to teach, and strategies that specify how to teach. Intelligent Tutoring is a subclass of

CAI that is theoretically more suited for personalized interactions.

Refl-ex shares many similarities with ITSs. In particular, both Refl-ex and ITSs require labor-

intensive authoring of new content and problems, requiring knowledge from diverse teams

of experts. To facilitate the development of these systems, a great deal of research has

been done on ways to develop ITS Authoring Tools (ITSAT). Blessing and his co-authors

[2008], for instance, have developed and evaluated an authoring tool to aid in the

development of model-tracing ITSs. The aim of the work described in this article is to

facilitate the authoring of social problem solving skills instructional modules. While our work

is not strictly considered an ITS, it is greatly informed by research in ITSs and ITS authoring.


Authoring tools that aid users in the creation of instructional stories for children with autism

have begun to appear, both for the classroom and for parents to use at home. Intellitools

[Intellitools, 2010] is an example of a tool developed for use in the classroom that is

currently being used to develop instructional stories. Stories2Learn is one of many iPad apps

that have recently been developed for use with children with autism [Stories2Learn, 2010].

It is designed for use both by teachers and parents, and facilitates the creation of

personalized instructional stories in the standard sequential narrative approach. These

systems enable the creation of the story, but do not guide the author with respect to the

content. REACT will not only provide feedback on the content, but will also allow the

creation of interactive multi-path scenarios.

Critiquing and Cooperative Problem Solving Systems

The design of the interaction between the author and the system in REACT is modeled after

that used in critiquing and cooperative problem solving systems. Such systems have been

designed for varied applications ranging from architectural design to education. In a

cooperative problem solving system, instead of simply providing input, and waiting for the

system to provide answers, the user is an active agent and works with the system in the

problem solving and decision making process [Fischer, 1990]. The precise roles played by

the user and the system are chosen based on their different strengths with respect to

knowledge of the goals and the task domain. Critics are an important component of these

systems, and can be designed such that they detect inferior or inappropriate content,

provide explanations and argumentation for their "opinion," and suggest alternative

solutions.

JANUS, for instance, is a critiquing system for architecture design that distinguishes "good"

designs from "bad" designs based on design rules. After the user changes a design, the

critic‘s messages are displayed, and the user can click on them to be shown alternative

designs that have been successful [Fischer et al., 1991]. One approach to developing

critiquing systems that is of particular interest is the incremental model used in Java

Critiquer [Qiu and Riesbeck, 2008]. In addition to providing students learning to program in

Java with individualized critiques, this system allows teachers to refine existing critiques and

author new critiques, thereby creating a system that is always evolving and improving to

meet the user‘s needs (i.e. getting smarter). We will employ this approach in the design of

REACT.

REACT

The Refl-ex Authoring and Critiquing Tool (REACT) will help non-experts create interactive

social scenarios personalized to individual adolescents with HFA. We envision the REACT

authoring experience proceeding as follows. The author will be presented with a simple

user interface that allows them to create the various types of screens needed for Refl-ex

(decision, rethink, etc.). The author will also be presented with suggestions for what could

happen next in the scenario, including appropriate instances in which the obstacle may

be introduced, what that obstacle is, and possible solutions to populate the decision point.

The author can choose to either drag the suggestion to the text field and use it as is, drag

the suggestion and then modify it as they see fit, or use it as inspiration for their own text.

Once the author has entered text, the critic will provide feedback on the appropriateness

of the text. This process will continue until the author is satisfied with the text. In this way, the

author and the critic will collaborate in the creation of the module. Following is a more

detailed description of the interaction, including a description of how the suggestions are

generated.


The Critiquing Rules

In REACT we will use an analytic critiquing approach. Analytic critiquing requires that a

system have sufficient knowledge to detect possible problems. Such a system offers

critiques by evaluating a user‘s solution against stored rules [Robbins, 1998]. In REACT the

critiquing rules will be based on correct and complete cognitive scripts of

the situation to be presented, and on instructional strategies for teaching social problem

solving skills to adolescents with HFA.

Script Rules. How do neurotypical individuals organize their knowledge in such a way that

allows them to know what appropriate behavior is in a particular situation? For instance,

how do you know that you are supposed to pay for your food before you sit down at a fast

food restaurant, but not until after you have eaten at other restaurants? Schank and

Abelson [1977] present the notion that we develop scripts, or standard event sequences,

which enable us to subconsciously know what to expect. These scripts are not rigid, and

instead contain multiple contingencies. They explain that people develop these scripts

early in life, based on their experiences. Research has shown that these scripts develop

early in childhood, and that children with autism generally generate fewer well-organized

scripts [Volden and Johnston, 1999].

We are currently conducting a study where we use crowd-sourcing techniques to elicit the

cognitive scripts of many neurotypical individuals for everyday tasks, like going to a

restaurant. In addition, we are asking participants what could go wrong at each step, and

how they would handle these situations. In a manner inspired by Orkin and Roy‘s [2009]

Collective Artificial Intelligence, the data collected will be analyzed to develop a model

similar to a Plan Network. A Plan Network is a graph showing probabilistically how events

follow each other; as illustrated by the Restaurant Game, a plan network shows all the

ways in which a restaurant experience can unfold. The introduction of an obstacle is salient

to our pedagogical approach. For this reason our model will also enable REACT to provide

the author with ideas for obstacles and possible solutions to insert into the modules. This

model will be used to develop the critiquing rules. For instance, one rule might be "wait in

line then order food" or "restaurant runs out of [favorite food]." This class of rules will

address the completeness, and sequencing of the content in the instructional module and

will be called script rules. The steps in these scripts will also be modified in accordance with

the instructional strategies described in the next section to generate suggestions for the

author. We will begin by developing scripts for a subset of situations, and progressively

develop more scripts to incrementally increase the REACT knowledge base.
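A plan-network-style model can be approximated by counting step-to-step transitions across the elicited scripts. The sketch below is a rough illustration with invented script data; Orkin and Roy's actual model is considerably richer than these raw transition probabilities:

```python
from collections import Counter, defaultdict

def plan_network(scripts):
    """Estimate step-transition probabilities from crowd-sourced scripts.

    scripts: list of event sequences, each a list of step labels.
    Returns step -> {next step -> probability}.
    """
    counts = defaultdict(Counter)
    for script in scripts:
        for a, b in zip(script, script[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

# Hypothetical elicited scripts for "going to a fast-food restaurant":
scripts = [
    ["enter", "wait in line", "order food", "pay", "sit down"],
    ["enter", "wait in line", "order food", "pay", "find a table"],
    ["enter", "order food", "pay", "sit down"],
]
net = plan_network(scripts)
print(net["enter"])  # most respondents wait in line before ordering
```

The high-probability edges suggest canonical next steps, while low-probability or missing edges point to places where an obstacle could plausibly be introduced.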

Strategy Rules. Rules will also be developed based on research-validated guidelines and

strategies for teaching social skills to this population of students. For instance, in order for an

instructional story to be considered a Social Story™, it has to follow specific guidelines

[Gray, 1995]. In particular, the story must include several types of sentences, including

descriptive statements used to describe the situation and people involved in it, perspective

statements that describe the reactions, feelings, and responses of others, and directive

statements that identify an appropriate response and guide the student‘s behavior. Gray

recommends a ratio of one directive sentence for every two or more sentences of

another type. Other researchers provide additional strategies for social skills instruction. Klin

and Volkmar emphasize the importance of teaching theory of mind: explicitly explaining

what others are feeling and doing [Klin and Volkmar 2000]. Others indicate that the story

should be presented in the first person, use vocabulary that is of an appropriate level of

difficulty, and avoid terms like "always" (because of the literal-mindedness of individuals

with HFA) [Heflin and Alberto, 2001; Bernard-Opitz et al., 2001]. This is only a subset of the


guidelines, but it can be seen how they can be used in combination with the steps in the

cognitive scripts to generate suggestions for the author on every screen. The guidelines will

also be transformed into rules we will call strategy rules, and will address such factors as

reading level, the use of first person language, and the avoidance of problem words and

phrases.
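A few of these strategy rules lend themselves to simple automated checks. The sketch below is heavily hedged: the rule set, sentence-kind labels, and banned-word list are illustrative stand-ins, not the rules REACT will actually use:

```python
import re

BANNED = {"always", "never"}  # absolutes to avoid, given literal-mindedness

def check_strategy_rules(sentences, kinds):
    """Flag violations of a few illustrative strategy rules.

    sentences: list of sentence strings.
    kinds: parallel list labelling each sentence 'directive',
           'descriptive', or 'perspective' (assumed supplied by the author).
    """
    problems = []
    # Gray's ratio: at least two non-directive sentences per directive.
    directives = kinds.count("directive")
    others = len(kinds) - directives
    if directives and others < 2 * directives:
        problems.append("ratio: need 2+ non-directive sentences per directive")
    # Avoid absolute terms in any sentence.
    for s in sentences:
        words = {w.lower() for w in re.findall(r"[A-Za-z']+", s)}
        if words & BANNED:
            problems.append(f"absolute term in: {s!r}")
    # The story should address the student directly (first/second person).
    all_words = {w.lower() for s in sentences
                 for w in re.findall(r"[A-Za-z']+", s)}
    if not ({"i", "you"} & all_words):
        problems.append("story is not written in the first/second person")
    return problems

sents = ["You walk to the counter.", "The cashier always smiles.",
         "You should say hello."]
kinds = ["descriptive", "perspective", "directive"]
print(check_strategy_rules(sents, kinds))  # flags the absolute term "always"
```

Checks like reading level would sit alongside these as further strategy rules.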

The User Interface

In a cooperative problem solving system, the user is an active agent and collaborates with

the system in the problem solving and decision making process. Our tool is designed such

that the role of the user is to act as the author of the modules, and contribute social skills

expertise, and knowledge of the particular needs of the student. We assume that, as

neurotypicals, the authors have social problem solving skills expertise. Our system will

employ the incremental approach used in Java Critiquer [Qiu and Riesbeck, 2008], and will

evolve and improve incrementally, both as a result of the data collected by the logging

system on the student‘s interactions with the modules, and the knowledge provided by the

author.

In our system the author will be provided with an interface that facilitates the creation of

the branching structure, and prompts them to input the content. As described earlier, the

instructional modules are made up of three different screens: the narrative screens that

present the scenario, the decision screens where the student chooses a solution, and the

rethink screens where the consequences of an unproductive or counterproductive solution

are explained.

The interaction for the three types of screens will be similar. Imagine the author is working

on a scenario related to going to a restaurant and they are currently working on a

narrative screen. It can be very intimidating for an author to be presented with a blank

screen and asked to input content. In REACT we try to make the process as easy as

possible for the author. We do this by offering the author several suggestions for text they

can use. REACT generates these suggestions based on its knowledge of the instructional

needs of students with HFA, and of the most commonly taken next step in the script. The

suggestions are ranked based on their appropriateness for the student. For instance,

imagine the author has just completed a screen in which the student has ordered their

food. The suggestions in this case might be:

1. You ask the cashier to repeat your order so you make sure she heard you correctly.

2. You listen as the cashier repeats your order. This lets you make sure she heard you correctly.

3. You ask the cashier how much your food costs. You know you have $10, and you want to make sure you didn't order too much.

4. The cashier tells you your food costs $8.50. You take out your wallet, and hand the cashier a $10 bill to pay for your food.

In this case the first suggestion is ranked highest because it involves the student taking

initiative to prevent a potential problem situation. The author can choose to either drag

the suggestion to the text field and use it as is, drag the suggestion and then modify it as

they see fit, or use it as inspiration for their own text. Once the author has entered text,

REACT provides feedback on the appropriateness of the text based on any script or

strategy rules that have been broken. This process will continue until the author is satisfied

with the text. REACT will similarly lead the author in the creation of the other screens (e.g.


indicating options for when to introduce obstacles, and providing suggestions for possible

solutions on a decision screen).
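The ranking of suggestions described above could, for instance, combine a step's script frequency with a bonus for initiative-taking. The scoring function below is purely illustrative; the article does not specify how REACT weighs these factors:

```python
def rank_suggestions(suggestions):
    """Order candidate next-step suggestions, highest score first.

    Each suggestion is (text, script_probability, takes_initiative), where
    script_probability would come from a plan-network-style model and
    takes_initiative marks steps where the student pre-empts a problem.
    The weights here are invented for illustration.
    """
    def score(s):
        text, prob, initiative = s
        return prob + (0.5 if initiative else 0.0)
    return sorted(suggestions, key=score, reverse=True)

# Hypothetical candidates for the screen after the student orders food:
candidates = [
    ("You listen as the cashier repeats your order.", 0.6, False),
    ("You ask the cashier to repeat your order.", 0.3, True),
    ("The cashier tells you your food costs $8.50.", 0.5, False),
]
print(rank_suggestions(candidates)[0][0])
# the initiative-taking suggestion ranks first despite its lower frequency
```

This mirrors the worked example above, where the first suggestion is ranked highest because the student takes initiative to prevent a potential problem.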

On each screen the author will be asked to record the narration, or import previously

recorded audio files. As in existing systems, authors will be able to import pictures they have

taken to use as the visuals in the module. In addition, REACT will aid the user in creating

picture-book style images by providing the user with a drag and drop interface that will

allow them to construct the image using various components.

Plan for Evaluation

The goal of REACT is to enable novice authors to develop expert-like social skills

instructional modules for adolescents with HFA. Our plan for evaluating REACT will involve

asking parents to develop Refl-ex instructional modules with and without REACT and

acquiring expert evaluations of the various modules. In order to evaluate REACT, a second

interface, called Refl-ex Builder, will be developed that simply allows the author to create

the Refl-ex instructional modules, but provides no feedback on the content.

Participants will be recruited from our target author population: neurotypical parents or

other caregivers of adolescents with HFA who have little or no knowledge of instructional

strategies. After a brief presentation of the Refl-ex instructional modules the participants will

be asked to create their own modules. Each participant will be asked to create 3 modules,

one each using REACT, Refl-ex Builder, and Stories2Learn [Stories2Learn, 2010]. The order of

using the systems will be such that it counterbalances any learning effects. We ask the

authors to create stories using Stories2Learn in order to compare REACT to a system that is

currently available. All of the modules that are created will be presented to experts in

social skills instruction for evaluation. The experts will be asked to rate the modules on their

instructional soundness, and their appropriateness for the target student population. For

instance, are the situations presented with an appropriate level of detail, and using

appropriate language? This study design should enable us to evaluate the influence of the

intelligence in REACT.
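Counterbalancing the order of the three systems can be done by cycling participants through all 3! = 6 possible orderings. A small sketch (the participant count is hypothetical, and a Latin square would be an equally valid design):

```python
from itertools import permutations

def counterbalanced_orders(systems, participants):
    """Assign each participant a system order, cycling through every
    permutation so learning effects are balanced across conditions."""
    orders = list(permutations(systems))
    return [orders[i % len(orders)] for i in range(participants)]

systems = ["REACT", "Refl-ex Builder", "Stories2Learn"]
plan = counterbalanced_orders(systems, 12)
# With 12 participants and 6 orderings, each ordering is used exactly twice,
# so each system appears first for exactly 4 participants.
print([p[0] for p in plan].count("REACT"))
```

Balancing which system each participant meets first prevents practice with one tool from systematically inflating ratings of the modules built with the next.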

Conclusions

Adolescents with HFA can function effectively and independently, but they need

assistance learning to handle the complexities of society. We believe that the approach to

creating REACT described in this article will put the ability to easily create individualized

instructional modules in the hands of parents and other caregivers. This will empower the

parents to develop modules in whose quality they are confident. Most importantly, it will

enable adolescents with HFA to have access to a larger variety of modules that they can

use to practice social problem solving skills independently.

References

Anderson, J. R., Corbett, A. T., Koedinger, K. R., & Pelletier, R. (1995). Cognitive tutors:

Lessons learned. The Journal of the Learning Sciences, 4 (2) 167-207.

American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders

(4th Edition, text-revised). Washington, DC. 2000.

Asperger, H. Die 'Autistischen Psychopathen' im Kindesalter [The Autistic Psychopathy of Childhood]. Archiv für Psychiatrie und Nervenkrankheiten, 117: 76-136. 1944.

Beaumont, R. and K. Sofronoff. A Multi-Component Social Skills Intervention for Children

with Asperger's Syndrome: The Junior Detective Training Program. J. of Child Psychology

and Psychiatry, 49(7): 743-753. 2008.

Bellini, S., Peters, J.K., Benner, L., Hopf, A. A Meta-Analysis of School-Based Social Skills

Interventions for Children with Autism Spectrum Disorders. Remedial & Special

Education, 28: 153-162. 2007.

Bernard-Opitz, V., Sriram, N., and Nakhoda-Sapuan, S. Enhancing Social Problem Solving in

Children with Autism and Normal Children Through Computer-Assisted Instruction.

Journal of Autism and Developmental Disorders, 31(4): 377-384. 2001.

Blessing, S.B. and Gilbert, S. Evaluating an Authoring Tool for Model-Tracing Intelligent

Tutoring Systems. Proc. of Intelligent Tutoring Sys, 204–215, 2008.

Boujarwah, F. A., Hong, H., Isbell, J., Arriaga, R.I., Abowd, G.D. Training Social Problem

Solving Skills in Adolescents with High-Functioning Autism. In Proc. of the 4th Int. Conf. on

Pervasive Computing Technologies for Healthcare. 2010.

David Elson and Mark O. Riedl. A Lightweight Intelligent Virtual Cinematography System for

Machinima Production. In Proc. of the 3rd Annual Conf. on AI and Interactive Digital

Entertainment. 2007.

Fischer, G., Communication Requirements of Cooperative Problem Solving Systems. Int. J.

of Info. Systems (Special Iss. on Knowledge Engineering), 15(1):21-36. 1990.

Fischer, G., Lemke, A. C., Mastaglio, T. Critics: An Emerging Approach to

Knowledge-Based Human Computer Interaction. Int. Journal of Man-Machine Studies,

35(5):695-721. 1991.

Gray, C. Social Stories Unlimited: Social Stories and Comic Strip Conversations. Jenison, MI:

Jenison Public Schools, 1995.

Heflin, L.J. and Alberto, P.A. Establishing a Behavioral Context for Learning with Students

with Autism. Focus on Autism and Other D. D., 16, 93-101. 2001.

Howlin, P. Outcome in High-Functioning Adults with Autism with and without Language

Delays: Implications for the Differentiation Between Autism and Asperger's Syndrome. J.

of Autism and Developmental Disorders, 33: 3-13. 2003.

Intellitools Classroom Suite: http://www.intellitools.com.

Kanner, L. Autistic Disturbances of Affective Contact. The Nervous Child, 2: 217-250. 1943.

Klin, A. and Volkmar, F.R. Treatment and Intervention Guidelines for Individuals with

Asperger's Syndrome. In Asperger's Syndrome. Guilford Press. 2000.

Oh, Y., Gross, M.D., and Do, E.Y.-L. Computer-Aided Critiquing Systems, Lessons Learned

and New Research Directions, In Proc. of the 13th Int. Conf. on Comp. Aided

Architectural Design Research in Asia, Chiang Mai , 9-12 April 2008, pp.161-167.

Orkin, J. and Roy, D. Automatic Learning and Generation of Social Behavior from

Collective Human Gameplay. Proc. of the 8th International Conference on

Autonomous Agents and Multiagent Systems (AAMAS), Budapest, Hungary, 2009.

Parsons, S. Mitchell, P. and Leonard, A. The Use and Understanding of Virtual Environments

by Adolescents with Autistic Spectrum Disorders. J. of Autism and Developmental

Disorders, 34(4): 449-466. 2004.

Putnam, C. and Chong, L. Software and technologies designed for people with autism:

What do users want? In Proc. of the 10th ACM Conf. on Computers and Accessibility

(ASSETS). 2008.

Qiu, L. and Riesbeck, C., An Incremental Model for Developing Educational Critiquing

Systems: Experiences with the Java Critiquer. J. of Interactive Learning Research,

19(1):119-145. 2008.

Rao, P.A. Beidel, D.C. and Murray, M.J. Social Skills Interventions for Children with Asperger's

Syndrome or High-Functioning Autism: A Review and Recommendations. J. of Autism

and Developmental Disorders, 38(2): 353-361. 2008.

Reynhout, G. and Carter, M. The Use of Social Stories by Teachers and their Perceived

Efficacy. Research in ASDs, 3: 232-251. 2009.

Riedl, M., Stern, A., Dini, D., and Alderman, J. Dynamic Experience Management in Virtual

Worlds for Entertainment, Education, and Training. Int. Transactions on Systems Science

and Applications, 4(2), 2008.

Robbins, J.E. Design Critiquing Systems. Tech Report UCI-98-41. Department of Information

and Computer Science, University of California, Irvine.

Schank, R.C, Abelson, R.P. Scripts. In Scripts, Plans, Goals, and Understanding: An Inquiry

into Human Knowledge Structures. Lawrence Erlbaum Associates Publishers, Hillsdale,

NJ. 1977.

Stories2Learn: http://itunes.apple.com/us/app/stories2learn.

Suppes, P. and Morningstar, M. Computer-Assisted Instruction. Science, 166:343-350. 1969.

Tartaro, A. and Cassell, J. Playing with Virtual Peers: Bootstrapping Contingent Discourse in

Children with Autism. In Proc. of the Int. Conf. of the Learning Sciences (ICLS). 2008.

Volden, J. and Johnston, J. Cognitive Scripts in Autistic Children and Adolescents. J. of

Autism and Dev. Disorders, 29(3):203-211. 1999.

Williams, C. Wright, B., Callaghan, G., Coughlan. B. Do Children with Autism Learn to Read

more Readily by Computer Assisted Instruction or Traditional Book Methods? Autism,

6(1): 71-91, 2002.

About the Authors

Fatima A. Boujarwah is a computer science PhD student in the School of Interactive Computing at Georgia Tech.

Mark O. Riedl is an Assistant Professor in the School of Interactive Computing at Georgia Tech.

Gregory D. Abowd is a Distinguished Professor in the School of Interactive Computing at Georgia Tech.

Rosa I. Arriaga is a developmental psychologist in the School of Interactive Computing at Georgia Tech.

Towards designing more accessible interactions of older people with Digital TV

Susan Ferreira (a), Sergio Sayago (b), Ernesto Arroyo (a), Josep Blat (a)

(a) Interactive Technologies Group, Universitat Pompeu Fabra, Roc Boronat 138, E-08018 Barcelona, Spain

[email protected], (ernesto.arroyo, josep.blat)@upf.edu

(b) Digital Media Access Group, School of Computing, University of Dundee, Dundee DD1 4HN, Scotland

[email protected]

Abstract

This paper outlines an ongoing PhD thesis about accessibility, DTV (Digital TV) and older

people. We aim to design and evaluate accessible interactions of older people with DTV

services enabling communication and access to information about health and wellbeing.

DTV opens up a wide range of opportunities for older people to communicate with their

social circles and access online information. However, DTV services are far from being

accessible to older people. We aim to fill this gap by addressing three relevant and up to

now relatively unexplored areas in DTV accessibility research with older people: interaction

barriers, everyday use of DTV services enabling communication and access to information

about health, and cultural differences in both interaction and use. We will use key

ethnographical methods to understand these aspects with two user groups of older people

living in developed and developing countries (Spain and Brazil). We will also design and

evaluate interactive prototypes of related DTV services by drawing upon the

ethnographical data, and involving older people in the design and evaluation

(participatory design) process.

Introduction

This paper outlines an ongoing PhD thesis about older people, accessibility and Digital

TeleVision (DTV). This paper is an extended version of the one presented in ASSETS 2010

Doctoral Consortium1.

The overall aim of the PhD carried out by Susan Ferreira is to design and evaluate

accessible interactions of older people (65+) with DTV services enabling communication

and access to information about health and wellbeing by adopting an ethnographical

approach. Communication serves critical functions in ageing [Nussbaum et al., 2000].

Previous Human-Computer Interaction (HCI) research with older people has also shown

that communication is a motivation for them to both take up and use Information and

Communication Technologies ([Dickinson et al., 2005], [Sayago and Blat, 2009]). Seeking

information about health and wellbeing is also very common amongst those older people

who go online [Becker, 2004]. DTV opens up a wide range of opportunities for older people

to communicate with their social circles, and access both information and services related

1 Designing accessible interactions of older people with Digital TV: http://www.sigaccess.org/assets10/doctoral.html

to health and wellbeing that can (and should) improve their quality of life. Our review of

previous HCI research with older people and DTV, which is outlined later, shows that this is

an emerging research area. However, very little is known about how older people interact

with DTV in their everyday lives - and therefore, how we can design better and more

accessible (and useful) DTV services enabling communication and access to e-health for

them. HCI has turned to ethnography to get a deeper insight into the everyday

interactions of people with technologies. This has been motivated by the fact that

understanding interactions in out-of-laboratory conditions is a key aspect to design better

technologies [Moggridge, 2007]. However, very little ethnographical research has been

carried out in HCI research with older people to date [Sayago and Blat, 2010]. Thus, we

consider that this thesis can make several contributions (e.g. accessibility barriers, everyday

use of DTV, methodology), which can potentially push forward current HCI research.

Why a PhD on older people, accessibility and Digital TV?

Both population ageing and the increasing role of DTV in current society create a need to

enhance DTV accessibility for older people. Whereas DTV opens up numerous opportunities for them to keep in touch with their social circles and to access online information and services, older people run the risk of being unable to exploit DTV. This is mainly due to their general lack of

experience with digital technologies, and a tendency towards excluding older people

from software and hardware developments.

What is this PhD about?

A number of previous studies have addressed DTV accessibility for older people ([Rice and

Alm, 2008], [Svensson and Sokoler, 2008], [Kurniawan, 2007]). They have mainly focused on

developing novel and interesting prototypes. Rice and Alm proposed and evaluated four prototype layouts of communication systems related to sharing multimedia content. Svensson and Sokoler proposed a prototype that facilitates and stimulates communication among older people while watching TV. Kurniawan interviewed older women to investigate the reasons that make digital TV unappealing to older people.

Yet, none of them have explored the everyday interactions of older people with DTV.

However, there is growing awareness in HCI that understanding everyday interactions is a

crucial element to design better, and therefore, more accessible interactions [Moggridge,

2007]. This PhD addresses this overall problem by breaking it into three specific aspects: (i)

interaction barriers; (ii) everyday use; and (iii) cultural differences in accessibility and use of

DTV content. Next we explain the reasons why we concentrate on these areas.

Many interaction barriers, but…with DTV?

Whereas the barriers that older people face while interacting with the web, e-mail

systems and other technologies have been explored ([Dickinson et al., 2005], [Sayago

and Blat, 2009]), very little is known about those they experience when interacting with

DTV. However, understanding these barriers is a crucial step towards enabling older people to use and interact with DTV. As

stated in [Rice and Alm, 2008], significant work still needs to be done to design interfaces that are more usable for older adults and can support their skills and abilities.

Everyday use

According to [Chorianopoulos and Lekakos, 2007]: “the major open research question in

ITV is when, and how much audiences want to interact with devices, content, and other

people”. Previous studies have shown that communication systems (such as e-mail) are very

popular among older people ([Dickinson et al., 2005], [Sayago and Blat, 2010]). Some

studies have also suggested that communication and social interaction through TV are

important applications for them ([Rice and Alm, 2008], [Svensson and Sokoler, 2008]).

However, there is a dearth of information as to how older people interact with DTV in

out-of-laboratory conditions.

Culture matters, but little has been done so far in DTV

[Piccolo and Baranauskas, 2008] point out that different aspects should be considered

in relation to the design of DTV services in both developed and developing countries.

Culture pervades our interactions and this might be even more important for older

people, due to their lifelong experiences. However, we are not aware of any studies of

cultural differences (and similarities) in the everyday use and accessibility of DTV with

older people.

Overall PhD approach and expected results

We are going to adopt a user-centered design approach, paying special attention to

everyday aspects of older people's interactions with DTV (e.g. with whom? where? when? how?). We will use basic and essential ethnographical methods (in-situ

observations and conversations for prolonged periods of time) to understand older

people's needs, patterns of use and interaction barriers due to age-related changes in

functional abilities and previous experience with DTV. We will analyze this data by using

qualitative data analysis techniques in order to identify key themes and common patterns emerging from this stage. We will also carry out participatory design with our two user

groups in order to discuss DTV services enabling communication and access to information

about health and wellbeing, which will emerge from our analysis of the ethnographical

data. We will also design low- and high-fidelity user interface prototypes. We will evaluate

these prototypes in real-life settings. The concrete evaluation protocol still needs to be

defined and agreed with our users. We will conduct interviews, questionnaires, focus groups and log-file analysis to gather data that help us understand the overall picture.

We expect to work with 40 older people (20 living in Spain, 20 living in Brazil), ranging from

65 to 80 years old. Our user group will include older people with low education levels, and

with some previous experience with information technologies. All the participants will have

DTV at home and be motivated to interact with and learn more about DTV services.

Whereas the number of participants might be insufficient to draw significant conclusions

from an experimental point of view, we expect to gain deeper insights into their

interactions with DTV, which future research can use to make more general claims.

The main results of this PhD can be divided into three groups: (A) identification of the

interaction barriers that, more or less severely, hinder the everyday interactions of older people with DTV, together with prototypes related to communication and access to online information

about health; (B) examination of how older people interact with and use some DTV

services in real-life settings; and (C) exploration of cultural differences in interaction barriers

and use of DTV amongst older people living in developed and developing countries.

Work done so far, and future perspectives

We have carried out a literature review on HCI research with older people on DTV and

defined a methodological approach for the research to be done. We are in contact with

companies in order to explore the possibility of doing our PhD research with either

experimental or already running commercial prototypes. We are also creating the pool of

users; designing the material for the ethnographical studies and exploring the technologies

to develop interactive DTV prototypes.

We have also started to work on Life 2.0, an ICT PSP project which aims to research and develop geographical positioning services to support the independent living and social interaction of elderly people2; technologies such as DTV are to be explored to achieve this goal.

Acknowledgments

Susan thanks the Spanish government for her grant (Ministry of Foreign Affairs and Cooperation and the Spanish Agency for International Development Cooperation, MAEC-AECID), her colleagues in the Interactive Technologies Group and at Universitat Pompeu Fabra for their support, and the participants of the ASSETS Doctoral Consortium for their feedback.

Sergio also acknowledges the support of the Commission for Universities and Research of

the Ministry of Innovation, Universities and Enterprise of the Autonomous Government of

Catalonia.

References

Becker, S.A. (2004). A Study of Web Usability for Older Adults Seeking Online Health

Resources. ACM Transactions on Computer-Human Interaction, 11 (4). 387–406.

Chorianopoulos, K., & Lekakos, G. (2007). Learn and play with interactive TV. Computers in

Entertainment (CIE). 5. New York: ACM.

Dickinson, A., Newell, A. F., Smith, M. J., & Hill, R. L. (2005). Introducing the Internet to the over-60s: Developing an email system for older novice computer users. Interacting with Computers, 17(6), December, pp. 621-642.

Holtzblatt, K., Wendell, J., & Wood, S. (2004). Rapid contextual design: a how-to guide to

key techniques for user-centered design. San Francisco: Morgan Kaufmann.

Kurniawan, S. (2007). Older Women and Digital TV: A Case Study. ASSETS´07, (pp. 251-252).

Tempe, Arizona.

Moggridge, B. (2007). Designing Interactions. Cambridge, MA. The MIT Press.

Nussbaum, J.F., Pecchioni, L.L., Robinson, J.D. and Thompson, T.L. (2000). Communication

and Aging. Second Edition. Lawrence Erlbaum Associates.

Piccolo, L. G., & Baranauskas, M. C. (2008). Understanding iDTV in a Developing Country and Designing a T-gov Application Prototype (p. 7).

2 http://ec.europa.eu/information_society/activities/livinglabs/openinnovation/index_en.htm.

Rice, M., & Alm, N. (2008). Designing New Interfaces for Digital Interactive Television Usable by Older Adults. Computers in Entertainment (CIE) - Social Television and User Interaction, 6(1), January 2008.

Sayago, S., & Blat, J. (2009). About the relevance of accessibility barriers in the everyday

interactions of older people with the web. W4A2009 - Technical, (pp. 104-113). Madrid,

Spain.

Sayago, S., & Blat, J. (2010). Telling the story of older people e-mailing: An ethnographical study. International Journal of Human-Computer Studies, 68(1-2), January-February, pp. 105-120.

Svensson, M., & Sokoler, T. (2008). Ticket-to-Talk-Television: Designing for the circumstantial

nature of everyday social interaction. Proceedings of the 5th Nordic conference on

Human-computer interaction: building bridges, October 20-22, Lund, Sweden.

About the Authors

Susan Moller Ferreira is a PhD Student in the Interactive Technologies Group,

Department of Information and Communication Technologies at Universitat

Pompeu Fabra (UPF, Barcelona, Spain). She received her master's in Information, Communication and Audiovisual Media Technologies (TICMA) from UPF in 2009. She graduated in 2005 in Computer Science from the Federal University of Santa Catarina, Brazil, where she obtained a master's degree in

Electrical Engineering in 2007. With a background in network management

and communication, her research interests are related to Human Computer

Interaction and older people.

Sergio Sayago is currently a visiting post-doctoral researcher (Beatriu de

Pinós Fellowship) in the School of Computing at the University of Dundee

(Dundee, Scotland). He received his PhD in Computer Science and

Digital Communication from University Pompeu Fabra in 2009. His

research focuses on ethnography, Human-Computer Interaction and

older people. He received the Best Technical Paper Award at the International Cross-Disciplinary Conference on Web Accessibility (W4A) in 2009.

Ernesto Arroyo holds an associate teaching position at the Department

of Information and Communication Technologies of the Universitat

Pompeu Fabra (UPF). He earned a PhD from MIT Media Laboratory

(2007). Ernesto Arroyo coordinates the Interaction research line of the

Interactive Technologies Group at UPF. His work focuses on developing

intelligent interactions, subtle visualizations and user centred interfaces

that respond appropriately to context, enabling and preserving fluent user engagement.

Josep Blat has been a professor of Computer Science at Universitat Pompeu

Fabra (Barcelona) since 1998. He is the head of the Interactive

Technologies Group in the Department of Information and

Communication Technologies (DTIC) at UPF. He was the director of the

Institut Universitari de l'Audiovisual and the head of the DTIC. He was

previously at Universitat de les Illes Balears, where he was head of its Department of Mathematics & Computer Science from 1988 to 1994. He graduated from

Universitat de València in 1979, obtained his PhD (Mathematics) at Heriot-Watt

University (Edinburgh) in 1985 and has developed post-doctoral work at

Université Paris-Dauphine, where he has been a visiting professor. After his

initial research work in Applied Nonlinear Analysis, he moved to modelling

and mathematical analysis of images (collective prize Philip Morris France

1991). His current research interests are human computer interaction,

including cooperative environments; intelligent web portals; educational

telematics; multimedia and GIS; computational educational toys;

advanced 3D graphics (human modeling and animation).

Promoting sharing behaviours in children through the use of a customised novel computer system

Rachel Menzies School of Computing, University of Dundee

Dundee, Scotland, United Kingdom, DD1 4HN

[email protected]

Abstract

Asperger Syndrome is a condition on the Autism Spectrum, characterised by a lack of

ability to recognise and understand the thoughts, beliefs, desires and intentions of others.

Thus, the ability to make sense of others‘ behaviours and predict their social actions is

reduced.

This paper describes research that aims to develop a technology-enhanced learning environment in which

children with Asperger Syndrome can explore and improve their sharing skills. This may

increase their social awareness and interaction. The system is scenario-based, with each

scenario focussing on a specific aspect of sharing. It is expected that, through the user-centred development and implementation of this system, children who were previously socially isolated will become better able to engage as socially networked individuals. This will have implications for learning and interaction.

Background

Asperger Syndrome

Asperger Syndrome is a condition on the Autistic Spectrum, a continuum of developmental

disorders of varying severity, collectively termed Autistic Spectrum Disorders (ASD),

affecting at least 1% of the UK population (Baird, 2006). Asperger Syndrome is at the higher

functioning end of the spectrum, differentiated from other conditions by the development

of spoken language. Classical autism lies at the opposite, lower-functioning end of the spectrum, within the most severe range. Within each condition, the severity of symptoms may differ between individuals.

As first described by Hans Asperger (Asperger, 1944 [1991]), the condition involves a lack of appreciation of

the thoughts, beliefs, feelings and desires of others, which manifests as a difficulty in

predicting the social actions of others. The individual is also likely to experience difficulties in

joint attention; that is, the act of attending to an object in the environment together with a peer

(Mundy, Sigman, & Kasari, 1994). For a diagnosis of Asperger Syndrome an individual should

have normal development of spoken language, widely accepted to be single words by 2

years of age and communicative phrases by 3 years (WHO, 1992), along with a triad of

impairments in social interaction, social communication and social imagination (Wing &

Gould, 1979). These impairments map onto many diagnostic frameworks for Autism

Spectrum Disorders, e.g. (WHO, 1992). Individuals may display only a few or many

symptoms in each triad to differing degrees of severity (Wing & Gould, 1979).

Examples of such impairments are shown below in Table 1.

Social Interaction
Description: A difficulty in recognising the thoughts, feelings and emotions of others.
Examples: Appearing rude or unresponsive; displaying absent or inappropriate eye contact, gestures and expressions; showing no spontaneous sharing of interest with peers, or a perceived unwillingness to interact with and befriend peers.

Social Communication
Description: A difficulty understanding and utilising non-verbal cues, such as gestures, facial expressions, tone of voice and abstract language.
Examples: Literal/functional conversations; a difficulty in expressing themselves in a subtle way; use of unusual or repetitive language; finding it difficult to initiate and sustain a conversation.

Social Imagination
Description: A difficulty in imagining social situations outwith their established routine; difficulty with engaging in pretend play.
Examples: Play is functional; fascination with object parts; pre-occupation with narrow interests; anxiety when changes are made to routine or environment.

Table 1: The Triad of Impairments (Wing & Gould, 1979)

Individuals with Asperger Syndrome may become socially isolated both as children and

throughout adulthood and can be cut off from learning with and through other people in

ways that their typically developing (TD) peers are not (Jordan & Jones, 1999).

Sharing

The importance of social and communication skills has long been emphasised as the

basis of learning by all children (Piaget, 1962; Scottish-Executive, 2004). However, for

children with Autism Spectrum Disorders (ASD), including Asperger Syndrome (AS), the lack

of these skills can be particularly problematic and becomes more pronounced in prolonged and/or intense social environments such as school. One particular area that

has been noted as being of importance in Asperger Syndrome is a difficulty in sharing. The

act of sharing is a complex behaviour and can be characterised as:

1. The division and distribution of objects amongst a number of people

2. Turn-taking (division of time)

3. Lending and borrowing personally owned or controlled items

4. Negotiating shared resources

Individuals display sharing behaviours for a variety of reasons, which often depend on the

circumstances in question. As the basis of many friendships, children may share items of

importance, such as toys, to develop a bond and mutual respect. In many cases, this can

be seen in terms of reciprocity, whereby an act of sharing by one individual can result in a

reciprocal act of sharing by the others (van den Bos, Westenberg, & van Dijk, 2010). This

reciprocity is necessary to maintain social relationships, since a lack of such may result in

short-lived relationships and friendships (Lahno, 1995). Reciprocity may be immediate or

delayed and this will depend on the development of trust within a given relationship, along

with the cognitive development of the individuals involved (van den Bos, et al., 2010).

Sharing behaviours can also be identified in relation to the avoidance of conflict, social

responsibility, conformity and the rule of law.

The difficulties in sharing within Asperger Syndrome (Attwood, 2006) may be attributed to

poorly developed Theory of Mind abilities. For example, there may be no understanding or

interpretation of social cues that would initiate sharing, such as eye contact or pointing. In

addition, the person concerned may not realise that the sharing partner desires the object

in question and so may appear rude by seemingly refusing to share.

Current Interventions

There is a variety of intervention methods used in addressing the difficulties seen in sharing,

with mixed results across a range of studies (Kuoch & Mirenda, 2003; Swaggart, et al., 1995;

Tartaro, 2007). One of the established interventions used in therapy to improve social

communication in children is Social Stories (Gray & Garand, 1993). These are used

extensively and are based on explicitly teaching the child how to "read" specific social situations and how to unearth "hidden" (non-verbal) aspects of social communication. The

format can be varied depending on the cognitive abilities of the individual, as well as their

specific needs and preferences. Social Stories are highly individualized in order to cater for

the diverse range of difficulties seen in Asperger Syndrome and also to engage the child by

falling within their narrow range of interests. The use of Social Stories has yielded positive

results in a variety of situations (Adams, Gouvousis, VanLue, & Waldron, 2004).

Role-play activities are also frequently used (Rajendran, 2000). This type of activity allows

for experience and exploration of social situations, which is crucial to the acquisition of

deep social understanding, rather than the approach of providing specific rules (e.g.

Social Stories) which may be difficult to generalise.

Proposed Solution

The PhD research aims to draw on the strengths of both Social Stories and role-play to

create an interactive sharing task, designed using participatory design influences, to which

a customisation tool can be applied. This will allow for individualisation of the scenario to fit the abilities and preferences of the individual child; such customisation has previously proved effective (e.g. Ozdemir, 2008).
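The customisation of a scenario to an individual child could be represented along the following lines. This is an illustrative sketch only: the class and field names are assumptions, not the actual design of the system described here.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SharingScenario:
    aspect: str                       # one of the four aspects of sharing
    prompt: str                       # the situation presented to the child
    characters: List[str] = field(default_factory=lambda: ["child", "peer"])
    interests: List[str] = field(default_factory=list)  # child's interests, for engagement

def customise(base: SharingScenario, child_interests: List[str]) -> SharingScenario:
    """Produce an individualised copy of a base scenario, weaving in the
    child's own interests (mirroring an initial, manual customisation step)."""
    return SharingScenario(
        aspect=base.aspect,
        prompt=base.prompt,
        characters=list(base.characters),
        interests=list(child_interests),
    )
```

Keeping the base scenario immutable and returning a tailored copy would allow the same scenario suite to be reused across children, which is the kind of reuse that automated customisation and scenario generation would later build on.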

The system will comprise a 42-inch touch screen with scenario-based software. Each scenario

will represent one of the four aspects of sharing as outlined earlier. The graphics used will

be simple pictorial representations, which may aid in the generalisation of skills from the

software setting into other settings, such as the classroom.

Iteration 1

The first stage of the research is to develop, through adapted and innovative design

methodologies, a suite of scenarios focussing on the promotion of sharing skills. These will

allow the user to explore their sharing skills, while directing them towards socially

acceptable solutions in conjunction with explanations of why this is the case. This mirrors the

standardised teaching practices currently used within schools for those with Asperger

Syndrome.

User-centred design methodologies will be adapted to include children with Asperger

Syndrome, since maintaining familiar routines and reduced social anxiety are of utmost

importance when working with this user group. The design of the system will be developed

by means of Design Workshops and Focus Groups.

Iteration 2

This iteration aims to explore the scenarios further through the development of

customisation opportunities for these scenarios. This will serve to promote engagement and

motivation to use the system, as well as providing a way to ensure that the scenarios can

best fit the needs of an individual. To begin with, this will focus on initial customisation

before the child begins to use the system but the aim is to progress to automated

customisation and scenario generation.

User Centred Design

The research outlined will follow user-centred design methodologies. This can ensure that

the needs and preferences of the end user groups can be considered from an early stage,

resulting in a more meaningful integration of design and functionality (Preece, Rogers, &

Sharp, 2002; Preece, et al., 1994). In addition, the engagement of the user groups may be

elevated through an increased sense of ownership of the system.

Of importance in this research is the inclusion of children with Asperger Syndrome in the

design of the system. Children, in particular those with disabilities, are infrequently involved

in the design and testing of computer systems (Druin, 1998). In this case, the various user

groups will be involved in the design of the system from the outset in a variety of roles from

informant to designer (Olsson, 2004). However, current methodologies may not be useful as

they may be incompatible with the difficulties encountered in Asperger Syndrome. For

example, the think-aloud method is often used in the development and evaluation of

software, but requires the articulation of thoughts, feelings and emotions; this ability is often

delayed or absent in Asperger Syndrome.

With this in mind, there will certainly need to be some adaptation of current methodologies

in use within user-centred design. The children are likely to be anxious in an unknown


situation, so there must be provision of a clear routine and structure to combat this anxiety.

For example, involvement in the research will be preceded by an information session that

prepares the children by giving them an idea of what to expect. Familiar routines will be

used, such as introductions and exploration of mood before the beginning of an activity

and reflection on completion of the task. In addition, every effort should be made to

ensure that the research is conducted within a familiar environment such as school and

with familiar adults available to the children, at least at the beginning of the activities. This

is useful for the children as a familiar face can be reassuring and will also be initially

beneficial for the researcher since they may be unfamiliar with the child's particular

communication methods. Specific needs and preferences will be determined on a case-

by-case basis in conjunction with the child's parent or caregiver, teacher and/or therapist.

In addition, there will be opportunities for the children to provide opinions and preferences

in various forms, such as ranking lists, drawings and spoken descriptions. Individual children

will have preferences on the type of interaction they prefer (e.g. some may prefer to write

whereas others may prefer to draw). The provision for different activities is helpful in

encouraging the children to engage fully with the tasks. The use of their preferred

communication method, such as augmentation with a particular symbol set such as PECS,

will further reduce any anxiety.

Stage of Study

The research is currently focussing on the development of the scenarios, in particular the

scenario based on the division of objects. Social narratives have been developed to

explain the need for the sharing process and to give the users access to a number of

situations where this may be required. For example, you may have to divide sweets

amongst classmates or pick flowers and share them with a family member. These narratives

were created in the form of storyboards together with therapists and teachers to ensure

that the system's interaction flow and narrative were appropriate for the end users.

The scenarios will have a number of "skins" that can be applied; these change the look-

and-feel of the situation without altering the underlying process. The themes currently

being used are 'space', 'fantasy', 'animals' and 'typical garden'. These are based on

design workshops conducted with children who were involved in creating an ideal place

to play, with the resulting drawings being analysed using Grounded Theory analysis (Glaser

& Strauss, 1967). The method of development is such that these themes can be expanded,

with custom themes implemented for specific individuals at a later date.
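The separation between a skin and its underlying process might be sketched as follows (a hypothetical illustration; the theme data, item names and function names are all illustrative assumptions, not the actual system's design):

```python
# Sketch: a "skin" changes the look-and-feel while the sharing logic stays fixed.
# All names and theme contents here are illustrative assumptions.

THEMES = {
    "space": {"item": "moon rock", "background": "starfield"},
    "animals": {"item": "bone", "background": "farmyard"},
}

def divide_items(total, recipients):
    """Underlying sharing process: split items as evenly as possible."""
    share, leftover = divmod(total, recipients)
    return [share + (1 if i < leftover else 0) for i in range(recipients)]

def render_prompt(theme_name, total, recipients):
    """Apply a skin to the same division task without changing its logic."""
    theme = THEMES[theme_name]
    shares = divide_items(total, recipients)
    return (f"On the {theme['background']}, share {total} "
            f"{theme['item']}s between {recipients} friends: {shares}")
```

Because the division logic is theme-independent, a custom theme for a specific child only requires a new entry in the theme data, not a change to the task itself.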

Further work will involve evaluating the scenarios to determine whether use of the system

can result in an increase of sharing behaviours or indeed an awareness of these

behaviours. This will be conducted in a school setting, with children with Asperger

Syndrome aged 6-10 years.

Conclusions

The research outlined in this paper indicates a novel computer-based learning tool for

those with Asperger Syndrome to develop their awareness of social skills, in particular

sharing. It is proposed that this research can add to the literature since the intervention has

been developed through a User-Centred Design process and so will be better suited to the

task, whilst promoting engagement in the primary users.

The involvement of children in the design of software to date is minimal and this research

highlights the possibilities of working with minority user groups. There is potential to further

this work with other groups, such as those with severe autism or learning difficulties, with the

development of alternative User-Centred Design methodologies to ensure the

engagement of users in the design process.

References

Adams, L., Gouvousis, A., VanLue, M., & Waldron, C. (2004). Social story intervention:

Improving communication skills in a child with autism spectrum disorders. Focus on

Autism and other developmental disabilities, 19, 8-16.

Asperger, H. (1944 [1991]). Autistic psychopathy in childhood (U. Frith, Trans.). In U. Frith

(Ed.), Autism and Asperger Syndrome (pp. 37-92). Cambridge: Cambridge University Press.

Attwood, T. (2006). The Complete Guide to Asperger's Syndrome: Jessica Kingsley

Publishers.

Baird, G., Simonoff, E., Pickles, A., Chandler, S., Loucas, T., Meldrum, D., & Charman, T. (2006).

Prevalence of disorders of the autism spectrum in a population cohort in South Thames:

the Special Needs and Autism Project (SNAP). The Lancet, 368, 210-215.

Druin, A. (1998). Beginning a discussion about kids, technology, and design. In The design of

children's technology. Morgan Kaufmann Publishers Inc.

Glaser, B., & Strauss, A. (1967). The Discovery of Grounded Theory: Strategies for Qualitative

Research. Chicago: Aldine Transaction.

Gray, C., & Garand, J. (1993). Social Stories: Improving Responses of Students with Autism

with Accurate Social Information. Focus on Autism and other developmental disabilities,

8(1), 1-10.

Jordan, R., & Jones, G. (1999). Meeting the Needs of Children with Autistic Spectrum

Disorders. London: Fulton.

Kuoch, H., & Mirenda, P. (2003). Social Story interventions for young children with autism

spectrum disorders. Focus on Autism and other developmental disabilities, 18(4), 219-

227.

Lahno, B. (1995). Trust, reputation, and exit in exchange relationships. Journal of Conflict

Resolution, 39(3), 495-510.

Mundy, P., Sigman, M., & Kasari, C. (1994). Joint attention, developmental level, and

symptom presentation in autism. Development and Psychopathology, 6, 389-401.

Olsson, E., (2004). What active users and designers contribute in the design process.

Interacting with Computers, 16(2), 377-401.

Ozdemir, S. (2008). Using multimedia social stories to increase appropriate social

engagement in young children with autism. [Article]. Turkish Online Journal of

Educational Technology, 7(3), 80-88.


Piaget, J. (1962). Play, Dreams and Imitation in Childhood. New York: Norton.

Preece, J., Rogers, Y., & Sharp, H. (2002). Interaction design: Beyond human-computer

interaction. New York: John Wiley & Sons, Inc.

Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S., & Carey, T. (1994). Human-

Computer Interaction. Essex, England: Addison-Wesley Longman Limited.

Rajendran, G., Mitchell, P. (2000). Computer mediated interaction in Asperger's Syndrome:

the Bubble Dialogue program. Computers & Education, 35, 189-207.

Scottish-Executive (2004). A Curriculum For Excellence: The Curriculum Review Group.

Edinburgh.

Swaggart, B., Gagnon, E., Bock, S. J., Earles, T. L., Quinn, C., & Myles, B. S. (1995). Using

social stories to teach social and behavioral skills to children with autism. Focus on

Autistic Behavior, 10, 1-16.

Tartaro, A. (2007). Authorable virtual peers for children with autism. Paper presented at the

CHI '07 extended abstracts on Human factors in computing systems.

van den Bos, W., Westenberg, M., & van Dijk, E. (2010). Development of trust and

reciprocity in adolescence. Cognitive Development, 25(1), 90-102.

WHO (1992). International Classification of Diseases and Health Related Problems (10th

ed.). Edinburgh: Churchill Livingstone.

Wing, L., & Gould, J. (1979). Severe impairments of social interactions and associated

abnormalities in children: epidemiology and classification. Journal of Autism and

Developmental Disorders, 9, 11-29.

About the Author

Rachel Menzies is a PhD student in the School of Computing at the

University of Dundee. She completed her MSc Applied Computing at the

University of Dundee in 2008 and started her PhD in November 2008,

funded by the Teaching and Learning Research Programme (TLRP),

which is jointly funded by the Economic and Social Research Council

(ESRC) and the Engineering and Physical Sciences Research Council (EPSRC). Her primary

research interests focus on User-Centered Design and the use of

technology in learning, in particular with children who are under-

represented in the design of software. Rachel Menzies is supervised by Dr

Annalu Waller (University of Dundee, Scotland) and Dr Helen Pain

(University of Edinburgh, Scotland).


Accessible Design Through Shared User Modelling

Kyle Montague School of Computing

University of Dundee, Dundee, DD1 4HN

[email protected]

Abstract

The proposed thesis examines how shared user modelling can be exploited to improve the

accuracy and scope of the models used for personalisation on touch screen mobile

devices, and thereby the accessibility and usability of software applications. Previous work

in adaptive user interfaces has relied on local, domain-specific user models, which lack

scope and detail. Shared user models can increase the accuracy and depth of the data

used to adapt interfaces and user interactions.

Introduction

Personalised applications and websites are becoming increasingly popular, and many

computer users interact with some form of user-adaptive system on a daily basis.

Contemporary adaptive educational systems (AES) tend to be driven by two factors: the

user's domain knowledge and the user's interests within the session [Brusilovsky et al. 2005].

Currently domain knowledge is more dominant within the user model and, whilst non-

educational systems apply a more equal weight to the user's interests, they are still by no

means perfect and fail to address the effects of external factors outwith the application's

scope.

A Digital Economy requires that computer technologies and software are usable by

people with very diverse skills and needs, needs that often call for accommodation of a

disability. For many web designers, accommodating these needs may simply mean

meeting the minimum requirements of the WAI guidelines [Chisholm et al. 2001] and

assuming that all users will be able to use their website. While this may make a site

technically 'accessible', it does not necessarily make it usable [Sloan et al. 2006].

Guidelines exist to give a generalised view of accessibility issues and techniques to reduce

these issues, not to eliminate them. From this thinking the universal design movement

emerged, encouraging developers to design accessible, usable, flexible, and intuitive

solutions. Whilst this encouraged designers to take a more user-centred approach

when developing, the final products were still one-size-fits-all solutions. Technology users

are unique, with individual needs and requirements of software tools, and may not

necessarily fit designers' generalised abstract group representations. Attempts have been

made to bridge the gaps and provide personalised designs using Adaptive Systems

[Brusilovsky and Millan, 2007]. Such systems can be used to personalise system content

based on domain specific user models, which provides a valuable structure for

educational systems intended to develop and progress users‘ domain knowledge. The


adaptive systems often use a multi-layer interface design [Shneiderman 2003], with a

spectrum of complexity and functionality layers for users with different skill sets.

Adaptive systems allow content to be presented and tailored to meet the needs and

interests of individual users. Therefore, the more realistic the user model, the higher the

accuracy of the personalisation and the accessibility of applications.

This research will focus on the use of shared user models with touch-capable mobile

devices. Thanks to application stores like the Apple App Store, the Android Marketplace

and the Nokia Ovi Store, smartphone users have access to thousands of single-purpose

apps built to perform a specific task, such as creating a voice memo or showing the

closest ATM. The latest Ofcom reports [Ofcom, 2009, 2010] show an increase in the number

of smartphone users in the UK. The same documents detailed the top 10 smartphones in

2009 and 2010; of these, 2 and 8 respectively had touch screen interfaces (7 of

which were multi-touch). While apps are a popular choice, these devices are also

being used to access existing web content that has been developed and optimised

against guidelines intended for a different interaction method. Although these

technologies present new challenges and complications, touch screens also allow for very

flexible interfaces and interaction methods.

As well as focusing on touch screen mobile devices, the research will concentrate on

improving the accessibility of the technology for users with low vision and mobility needs.

Proposed Solutions

Multi-layer Adaptive Interfaces

The goal of adaptive interfaces is to improve the interactions a user has with the software

based on knowledge of the user and their experience within the application [Langley,

1999]. For many systems this can simply mean filtering content to present the user with

only relevant information. An example of this is Amazon's product recommendations:

products are filtered based on the user's previous purchases and searches, and on similar users' activity.

Adaptive interfaces tend to retain the same overall layout and interaction style and it is

solely the content that changes. Multi-layer interface design [Shneiderman 2003] works

differently: users are presented with an interface layer that matches their

application skill level. Novice users are given a limited subset of functionality and

a stripped-down interface to begin with. As they progress and become more skilled, new features

are opened up and added to the interface. The goal of multi-layer interface design is to

provide a user with a feature set and interaction method that suits their abilities. Multi-Layer

Adaptive Interfaces combine the user modelling and domain knowledge monitoring

techniques of Adaptive Systems, with designing custom interface layers for different

characteristics or situations.
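The layer-selection idea described above might be sketched as follows (a hypothetical illustration; the skill thresholds and feature names are assumptions, not part of the proposed system):

```python
# Sketch of multi-layer interface selection: the user is shown the layer
# matching their skill level, and features unlock as skill grows.
# Thresholds and feature sets are illustrative assumptions.

LAYERS = [
    (0, ["open", "save"]),                                     # novice: stripped-down
    (10, ["open", "save", "search"]),                          # intermediate
    (25, ["open", "save", "search", "macros", "scripting"]),   # expert
]

def features_for(skill_score):
    """Return the richest feature set whose threshold the user has reached."""
    available = LAYERS[0][1]
    for threshold, features in LAYERS:
        if skill_score >= threshold:
            available = features
    return available
```

In a Multi-Layer Adaptive Interface, the skill score itself would come from the user model's domain-knowledge monitoring rather than being supplied by hand.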

Shared User Modelling

User models are structured data sources for representing user characteristics such as

interests, preferences and domain knowledge. All the data is recorded in the context of the

system it was stored for, and is self-contained within the application. Shared user modelling

expands the range of variables being recorded, and centralises the location of the user model

to allow multiple applications to make use of the same data. Shared user modelling

already exists in technology; the Facebook Open Graph protocol

(OGP) [Facebook, 2010] is a good example of this. OGP allows third-party applications to

request profile data of a user, which is then used to customise the user's experience with

the application. The data available is currently limited to the user's interests and a few

personal details such as date of birth and email address. Facebook's OGP does not show

great promise for more accessible designs because of its limited model scope. However,

it does possess the foundations of a suitable structure to support this research into shared

user modelling to improve the accessibility of software applications.

Shared user models need to accommodate a wide and diverse range of systems,

requiring them to contain additional data that allows the abstract shared data to be put into

context for individual applications or uses. Therefore we use the following architecture of

model sections: User, Application, Device and Environment. The User section covers

interests and user preferences or attributes. The Application section records data on the user's

domain knowledge and goals within the current application. Any data about

hardware preferences and capabilities is recorded in the Device section, and attributes of

the user's surroundings during system use are logged in the Environment section.
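The four-section model architecture might be represented as follows (a minimal sketch; the field contents and the `context_for` method are illustrative assumptions, not the proposed system's actual schema):

```python
# Sketch of the four-section shared user model (User, Application, Device,
# Environment). Field contents and method names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class SharedUserModel:
    user: dict = field(default_factory=dict)         # interests, preferences, attributes
    application: dict = field(default_factory=dict)  # per-app domain knowledge and goals
    device: dict = field(default_factory=dict)       # hardware preferences and capabilities
    environment: dict = field(default_factory=dict)  # attributes of the user's surroundings

    def context_for(self, app_id):
        """Put the abstract shared data into context for one application."""
        return {
            "preferences": self.user,
            "domain": self.application.get(app_id, {}),
            "device": self.device,
            "environment": self.environment,
        }
```

The point of the per-application lookup is that the User, Device and Environment sections are shared across all consumers of the model, while the Application section stays domain specific.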

Structure

It is vital that the applications can all obtain and update information stored within the user

model. This is why the User Model Web Service (UMoWS) [Bielikova and Kuruc, 2005]

structure was selected. The user models are contained within the database accessed only

by the UMoWS. Any application wishing to make use of the shared user models must

communicate via the web service methods of the UMoWS using unique authentication

credentials. Applications can request and update information on a selected user as long

as they have permissions to do so. Connected to the UMoWS is a User Management

control panel, allowing users to view and edit their model information and access

permissions for this content. The diagram in Figure 1 shows the overall structure and how

applications can make use of the shared user models.

Figure 1. UMoWS structure showing how applications access the shared user model data via SOAP requests.


Projects

As an example of the intended work a prototype tool to assist with indoor navigation has

been created (Accessible Indoor Navigation, submitted to the ASSETS ACM Student

Research Competition 2010). The prototype also demonstrates the acquisition of the

above interaction data into a centralised modelling source, and the personalisation of the

system based on this data. Indoor navigation can involve fairly complex sequences, with

few or no landmarks to assist the way-finder. In an attempt to reduce the cognitive load of

detailed instructions the prototype combines text directions with rich media elements such

as audio transcriptions and images associated with the instructions. By loading in the user

model the tool is able to customise the route taken from a start point to destination, based

on data that might affect a user's way-finding capabilities. The navigation tool is also able

to customise the interface and interaction style the user has with the system based on the

shared user model. An example of this is the user pressing the audio button to request an

audio transcription when given text instructions. The tool would then update the user

model to reflect the preference for audio and as a result when the next set of text

instructions are delivered to the user, the system will automatically play the audio

transcription as well.
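The audio-preference adaptation described above might be sketched as follows (a hypothetical illustration; the class and method names are assumptions, not the prototype's actual API):

```python
# Sketch: an explicit request for audio is recorded in the shared user model,
# and later instruction steps then auto-play the audio transcription.
# Names are illustrative assumptions.

class NavigationClient:
    def __init__(self, user_model):
        self.model = user_model  # shared user model, here just a dict

    def on_audio_button_pressed(self):
        # Record the explicit request in the shared model.
        self.model["prefers_audio"] = True

    def deliver_instruction(self, text):
        """Return the modalities used to deliver this instruction step."""
        modalities = ["text"]
        if self.model.get("prefers_audio"):
            modalities.append("audio")  # auto-play the transcription from now on
        return modalities
```

Because the preference lives in the shared model rather than in the navigation tool itself, any other application reading the same model could honour it too.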

Future Work

To further investigate the potential of shared user modelling more research will need to be

carried out using an application within a different domain set, whilst still applying the same

shared model to make assumptions and personalise the interactions of the system. Work

would also need to be done to incorporate other external factors into the user model. The

environment (will the user hear the audio? is the contrast sufficient in this light?)

and device capabilities (is the screen large enough to show all the toolbars? is the text

readable at this font size on the display?) play an important role in interactions. On top

of this we must also consider how much the user is willing to share: the level to which a user

is willing to share information will be proportional to the possible personalisation. A key factor

that will influence the success of this research is the security of these shared user models. It

is vital that people are comfortable and have control over the data being stored about

them. As such it is important that the user has the final word on access permissions, and

modelled data. The considered foundation architecture for storing the models is the ‗User

Model Web Service‘ (UMoWS) structure [Bielikova and Kuruc, 2005] which uses HTTPS and

user credentials to authenticate with the service. Changes would need to be made to

incorporate the external factors mentioned in this proposal.

Conclusions

Can shared user modelling improve the accessibility of touchscreen mobile devices? What

information would users be willing to share? Which factors of the user model are most

important when personalising interactions? With such diverse technologies and users, we

must bridge the gap between what the user knows and what they need to know [Shneiderman

2002]. Access to detailed user models would allow developers to move away from the

current one-size-fits-all universal design and instead build applications which can be

customised and tailored to suit each individual's needs at runtime. This additional layer of


personalisation would help to ensure that applications and websites are accessible and

usable by all, regardless of individual user differences, devices and environments; for

example allowing users with visual impairments to access the same applications as users

with no visual impairments. Through using a multi-layered design [Shneiderman 2003], systems would just

select the appropriate interface based on the user model data and provide the user with

an interaction style that suits their needs and preferences.

References

Bielikova, M. and J. Kuruc, Sharing user models for adaptive hypermedia applications. 5th

International Conference on Intelligent Systems Design and Applications, Proceedings,

2005: p. 506-511 572.

Brusilovsky, P. and E. Millan, User Models for Adaptive Hypermedia and Adaptive

Educational Systems. The Adaptive Web, 2007. 4321/2007: p. 3 – 53.

Brusilovsky, P., S. Sosnovsky, and O. Shcherbinina, User Modeling in a distributed E-learning

architecture. User Modeling 2005, Proceedings, 2005. 3538: p. 387-391.

Chisholm, W., G. Vanderheiden, and I. Jacobs, Web content accessibility guidelines 1.0.

interactions, 2001. 8(4): p. 35-54.

Facebook. Open Graph protocol. 2010 [cited 2010 11/06/2010]; Available from:

http://developers.facebook.com/docs/opengraph.

Langley, P., User modeling in adaptive interfaces. Um99: User Modeling, Proceedings,

1999(407): p. 357-370, 393.

Ofcom, Communications Market Report. 2009.

Ofcom, Communications Market Report. 2010.

Shneiderman, B., Leonardo's laptop : human needs and the new computing technologies.

2002, Cambridge, Mass.: MIT Press. xi, 269 p.

Shneiderman, B., Promoting universal usability with multi-layer interface design, in

Proceedings of the 2003 conference on Universal usability. 2003, ACM: Vancouver,

British Columbia, Canada. p. 1-8.

Sloan, D., et al., Contextual web accessibility - maximizing the benefit of accessibility

guidelines, in Proceedings of the 2006 international cross-disciplinary workshop on Web

accessibility (W4A): Building the mobile web: rediscovering accessibility? 2006, ACM:

Edinburgh, U.K. p. 121-131.


About the Author

I recently completed an undergraduate degree in Applied Computing

at the University of Dundee, and have since started a postgraduate

degree within the School of Computing. I am currently reading for a PhD as part

of the Social Inclusion through the Digital Economy hub. At present I am

researching tools for ‗Accessible Indoor Navigation‘, and ‗Shared User

Modelling‘ to create more personalised and accessible mobile

applications. I have recently won 3rd place in the ASSETS 2010 student

research competition with my mobile accessible indoor navigation application.


Generating Efficient Feedback and Enabling Interaction in Virtual Worlds for Users with Visual Impairments

Bugra Oktay Department of Computer Science and Engineering

University of Nevada, Reno

[email protected]

Abstract

Despite efforts to make virtual worlds accessible to users with visual impairments (VI),

we still lack an effective way of describing the virtual environment and of having users

interact with it. This work aims to fill these gaps by generating meaningful

textual/audio feedback as necessary and by providing text-based interaction

capabilities.

1 Introduction

Virtual worlds are accommodating tens of millions of users worldwide. These 3D

environments are being used for socializing, educational purposes, business meetings and

so on. However, until recent research efforts [Folmer et al. 2009; IBM Human Ability and

Accessibility Center, 2010; Pascale et al. 2008; Trewin et al. 2008], users with VI, who can surf

through web pages, check e-mails and follow news online over the text-based 2D internet,

could not be a part of these highly graphical 3D virtual worlds. Screen readers, which are

essential tools for users with VI, could not be used to access and explore virtual worlds as

these have no textual representation [Abrahams, 2006].

TextSL [2010] is our research effort towards the inclusion of users with VI in virtual worlds. It is a

screen-reader-accessible, command-line interface which lets users explore the

environment, communicate and interact in virtual worlds. Our experiences and the user

studies with TextSL revealed that even though our approach is efficient for communication,

the most common activity in virtual worlds [Griffiths et al. 2003], it has important drawbacks

in other aspects: (1) Describing a crowded environment results in long speech feedback

which easily overwhelms the user; (2) Compared to the clients used by sighted users

[Imprudence Viewer, 2010; Linden Research, 2010; Snowglobe Viewer, 2010], TextSL offers

slower interaction due to its iterative nature; (3) Users cannot interact with the scripted

objects (e.g. cars, tour balloons, elevators) or even observe the changing environment; (4)

Users cannot create content.

2 Research Summary

2.1 Feedback

Creating more effective feedback is one of the two main goals of this research. However,

it is quite challenging to effectively describe the graphical virtual environment only with


sound/speech and without overwhelming the user or missing important details. The

following two subsections describe our feedback creation approaches.

2.1.1 Synthesizer

TextSL lets the users query their environment to get information about the location and the

objects/avatars around. In densely populated areas, describing all objects at once usually

results in overwhelming feedback. On the other hand, not providing enough information

when there are only a few objects around causes more queries, which simply means loss of

time. To solve this problem, a synthesizer [Oktay and Folmer, 2010] was designed

that creates highly descriptive yet not overwhelming feedback.

The synthesizer will expand or summarize the initial feedback based on the user‘s

preferences (e.g. word limit, object detection range limit) and the object population of the

environment. Objects with non-descriptive names (e.g. "object") will be removed from the

feedback. Objects judged to be of less importance (i.e. objects that are far away, small,

etc.) will be filtered out by thresholding. Objects of the same type (e.g. cats) will be

grouped together. A taxonomy of virtual world objects will be created with the help of a

Second Life object labeling game called Seek-n-Tag [Yuan et al. 2010]. With the use of this

taxonomy, object names will be aggregated if they are members of the same class (e.g.

cats and dogs become animals). The level of aggregation will be decided based on the desired

length of feedback.

How can we generate the most descriptive feedback for a specified length?
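The pipeline described above (filtering, grouping, then taxonomy aggregation when the word limit would be exceeded) might be sketched as follows; the taxonomy, thresholds and phrasing are illustrative assumptions, not the actual synthesizer's behaviour:

```python
# Sketch of the synthesizer pipeline: drop non-descriptive names, filter
# distant objects, group same-type objects, and aggregate up the taxonomy
# when the feedback would exceed the word limit. Taxonomy and thresholds
# are illustrative assumptions.
from collections import Counter

TAXONOMY = {"cat": "animal", "dog": "animal", "oak": "tree"}

def synthesize(objects, max_words=6, max_range=20.0):
    """objects: list of (name, distance) pairs observed around the avatar."""
    # Remove non-descriptive names and objects beyond the detection range.
    kept = [name for name, dist in objects if name != "object" and dist <= max_range]
    # Group objects of the same type together.
    counts = Counter(kept)
    phrases = [f"{c} {n}s" if c > 1 else f"a {n}" for n, c in counts.items()]
    if sum(len(p.split()) for p in phrases) > max_words:
        # Too long: aggregate object names up the taxonomy (cats, dogs -> animals).
        counts = Counter(TAXONOMY.get(n, n) for n in kept)
        phrases = [f"{c} {n}s" if c > 1 else f"a {n}" for n, c in counts.items()]
    return "You see " + ", ".join(phrases) if phrases else "Nothing nearby."
```

A fuller version would aggregate level by level up a deeper taxonomy until the feedback fits the desired length, rather than making a single pass as here.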

2.1.2 Ambient Sound

Even with the use of the synthesizer, the textual feedback may lack some important

information due to high population. Since we aim to minimize the number of queries, this

information should best be provided with a supplementary mechanism. Playing ambient

sound effects for objects that have distinctive sounds would enhance the descriptiveness

of the feedback while avoiding overwhelming the user, as speech and sound are perceived

differently. The use of spatial sound will also give the user an idea of the object's

location. Specific sound effects may be played for interactive objects or for objects on

sale. That way, apart from the object type and location, the user may learn about different

object properties.

To what extent can ambient sound be included in feedback without overwhelming

the user?

2.2 Interaction

The second main goal of this research is to make interaction with the environment possible

for the users with VI. Nevertheless, this is even more challenging because it is the interaction

more than any other aspect that depends on vision. The following two sections describe

our approaches on command-based interaction.

2.2.1 Interacting with Scripted Objects

Virtual worlds are highly populated with scripted (interactive) objects. Interacting with the

objects and having an idea of what is happening around are crucial for being part of that

environment. However, activating a scripted object typically results in a behavior that can

only be perceived visually.


A method called "virtual eye" will help users with VI to observe what happens when a

scripted object is touched/activated. The virtual eye is a black-box observer in which a number

of descriptors are employed. Each of these descriptors will look for a different

change (e.g. a change in size, color, position, etc.) and they will help to derive a meaningful

textual description of the changes resulting from an interaction (e.g. "the billboard

started to play a video tagged 'wild life'", "the car changed its color from red to black

and started to go west").

How can the 'virtual eye' observe and describe the changes in a virtual environment?
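One way the descriptor idea might work is to snapshot an object's state before and after activation and let each descriptor narrate the difference it detects (a hypothetical sketch; the state fields, descriptor set and wording are all assumptions):

```python
# Sketch of "virtual eye" descriptors: each one looks for a different kind
# of change between two snapshots of an object's state. Fields and wording
# are illustrative assumptions.

def color_descriptor(before, after):
    if before.get("color") != after.get("color"):
        return f"changed its color from {before['color']} to {after['color']}"

def position_descriptor(before, after):
    if before.get("position") != after.get("position"):
        return "started to move"

DESCRIPTORS = [color_descriptor, position_descriptor]

def describe_change(name, before, after):
    """Combine all descriptor observations into one textual description."""
    observations = [d(before, after) for d in DESCRIPTORS]
    observations = [o for o in observations if o]
    if not observations:
        return f"the {name} did not visibly change"
    return f"the {name} " + " and ".join(observations)
```

New kinds of change (size, texture, sound) could then be covered simply by registering further descriptor functions.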

2.2.2 3D Content Creation

Almost all the objects (from animals to buildings) in non-game-like virtual worlds are

created by the users. The economies of virtual worlds also mostly depend on user-created

object sales. However, since this is an entirely visual activity, users with VI cannot contribute

to the virtual environments by creating content.

Command-based content creation from scratch is a very challenging task, but since all the objects in virtual worlds are made of small geometric blocks called 'prims', letting users apply basic operations (e.g., create, resize, paint, stretch, transform, connect) on simple prims would be a good start. During the content creation process, the user will receive detailed feedback about the current status of the content. Alternatively, with the help of the taxonomy mentioned above, ready-made object templates will be made available and users will be able to make any changes they like to those objects (e.g., "create a car named myCar", "paint myCar blue").
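A minimal sketch of what such command handling could look like, assuming a simple verb-first grammar; the `World`/`Prim` classes and the feedback strings are illustrative, not TextSL's actual implementation.

```python
# Hypothetical sketch of command-based manipulation of template objects.

class Prim:
    def __init__(self, name, shape="box", color="grey"):
        self.name, self.shape, self.color = name, shape, color

class World:
    def __init__(self):
        self.objects = {}

    def execute(self, command):
        """Parse a simple verb-first command and return detailed feedback."""
        words = command.split()
        verb = words[0]
        if verb == "create":                 # e.g. "create a car named myCar"
            name = words[words.index("named") + 1]
            self.objects[name] = Prim(name, shape=words[2])
            return f"created a {words[2]} named {name}"
        if verb == "paint":                  # e.g. "paint myCar blue"
            obj = self.objects[words[1]]
            obj.color = words[2]
            return f"{obj.name} is now {obj.color}"
        return "unknown command"

w = World()
print(w.execute("create a car named myCar"))
print(w.execute("paint myCar blue"))
```

The point of the feedback strings is that every operation produces text a screen reader can speak, mirroring the detailed-feedback requirement above.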

What is the most efficient way of command-based content creation?

2.3 Status

A web-based version of TextSL has been implemented and it is available for public use

(along with the previous standalone version) [TextSL, 2010]. These implementations

currently have communication and exploration capabilities as well as limited interaction

features. The first version of the synthesizer, which uses a basic taxonomy, has also been integrated into the web-based TextSL.

Future work includes improvements to the synthesizer (and taxonomy), the inclusion of supplemental spatial ambient sound, the design of the descriptors for the virtual eye, the development of command-based content creation, and the generalization of these solutions to all open-source virtual worlds. Further user studies will be conducted to evaluate and improve our solutions.

3 Conclusion

Our work is an effort to make virtual worlds more accessible for users with VI by generating meaningful, descriptive yet concise feedback, supplementing their experience with spatial ambient sounds, letting them interact with objects while observing the changes in the environment, and providing 3D content creation abilities. Thanks to these features, users with VI will be able to truly experience virtual worlds as they never have before.

Acknowledgement

This work is supported by NSF Grant IIS-0738921.

References

Abrahams, P. (2006). Second life class action. http://www.it-analysis.com/blogs

/Abrahams_Accessibility/2006/11/second_life_class_action.html.

Folmer, E., Yuan, B., Carr, D., Sapre, M. (2009). TextSL: A Command-Based Virtual World Interface for the Visually Impaired. Proc. ACM ASSETS '09, 59-66.

Griffiths, M., Davies, M., and Chappell, D. (2003). Breaking the stereotype: The case of online

gaming. Cyber Psychology and Behavior. Vol. 6 Iss.1, 81-91.

IBM Human Ability and Accessibility Center, Virtual Worlds User Interface for the Blind.

http://services.alphaworks.ibm.com/virtualworlds/.

Imprudence Viewer. http://imprudenceviewer.org/.

Linden Research, Second Life. http://www.secondlife.com/.

Oktay, B., Folmer, E. (2010). Synthesizing Meaningful Feedback for Exploring Virtual Worlds Using

a Screen Reader. Proc. CHI 2010, 4165-4170.

Pascale, M., Mulatto, S., Prattichizzo, D. (2008). Bringing haptics to Second Life for visually

impaired people. Proc. EuroHaptics 2008, 896–905.

Snowglobe Viewer. http://snowglobeproject.org/.

TextSL: Screenreader accessible interface for Second Life. http://textsl.org

Trewin, S., Hanson, V., Laff, M., Cavender, A. (2008). PowerUp: An accessible virtual world. Proc. ACM ASSETS '08, 171-178.

Yuan, B., Sapre, M., Folmer, E. (2010). Seek-n-Tag: A Game for Labeling and Classifying Virtual

World Objects. Proc. GI'10.

About the Author

Bugra Oktay is a Ph.D. student in the Computer Science and Engineering Department at the University of Nevada, Reno. He received his B.S. degree from Middle East Technical University, Turkey, in 2009. His research is on enabling

accessibility in virtual worlds.

The Automation of Spinal Non-manual Signals in

American Sign Language

Lindsay Semler DePaul University

1 E. Jackson, Chicago, IL 60604

[email protected]

Abstract

American Sign Language (ASL) is the first language of the Deaf in the United States. The most effective method for communication between the Deaf and hearing is certified ASL interpreters; however, interpreters are not always available. In such cases, a translator producing 3D animations of ASL could improve communication; however, the characterization of ASL for automated 3D animation remains an open question. The goal of this research is to determine how and when non-manual signals are used in ASL sentences and to develop a methodology to portray them as 3D animations.

Introduction

American Sign Language (ASL) is the preferred language of the Deaf in the United States and most of Canada. It is a dynamic combination of hand shapes, locations, orientations, hand movements, and non-manual signals. The most effective way for the Deaf and hearing to communicate is through certified ASL interpreters. However, ASL interpreters are typically in high demand and are not always available, which means that communication between the Deaf and hearing may be impaired or nonexistent. In these situations, a 3D animated display of ASL could help break the communication barrier.

Non-manual Signals in ASL

A prerequisite to such a display is the ability to create well-formed, understandable sentences automatically. Previous efforts have created a method for properly conjugating agreement verbs in declarative sentences [Toro 2004]; however, that work addresses only the motion and configuration of the hands and arms. An important aspect of understanding ASL is non-manual signals: signals that are not expressed on the hands [Valli 1995]. These include facial expressions, eye gaze, and movements of the head, spine, and shoulders. Non-manual signals have a wide-ranging and diverse set of functions, playing a role in phonology, morphology, syntax, and semantics [Baker 1983]. Previous linguistic studies have identified eye gaze and head tilt [Bahan 1996] but have not examined subtler, yet important, aspects of non-facial non-manual signal production such as shoulder and torso movement. This research examines the types of non-manual signals used in ASL and develops a methodology to portray them in 3D animations.

Research Goal

The goal of this research is to automate rotations of the shoulder and spine as they relate to

non-manual signals. There are three main milestones of this research: a motion study of

non-manual signals, geometric characterization of the observed motions, and

implementation of the mathematics as animation. The characterization of these motions

must be general enough that it can be applied to any sentence, as well as robust enough

to produce correct results on any figure with human proportions.

Results

Preliminary results from motion studies indicate that significant shoulder and body movements occur only when the hands are outside the region we call the 'zone of comfort'. This is a rectangular area roughly bounded on the top and sides by the shoulders, and on the bottom by the waist.

Several phenomena of ASL production occur outside the zone of comfort, including a) fingerspelling, b) two-handed raise, c) one-handed lateral, and d) two-handed asymmetry. In the case of fingerspelling, the dominant hand is raised above and/or to the side of the 'zone of comfort'. The one-handed lateral occurs when one hand is outside the width of the 'zone of comfort'; this case involves a twist in the body. In the case of two-handed asymmetry, both hands are on one side of the midline of the body. This situation involves a similar, but more pronounced, twist in the waist, spine, and shoulders.
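These geometric tests could be sketched as follows; the coordinates (metres from the body midline and floor) and thresholds are illustrative assumptions, since the actual bounds would come from the motion study.

```python
# Hypothetical sketch of the 'zone of comfort' and torso-twist tests.

def in_zone_of_comfort(hand, shoulder_width=0.5,
                       shoulder_height=1.4, waist_height=1.0):
    """True if the hand (x, y) lies inside the rectangle bounded on the
    top and sides by the shoulders and on the bottom by the waist."""
    x, y = hand
    return abs(x) <= shoulder_width / 2 and waist_height <= y <= shoulder_height

def needs_torso_twist(left_hand, right_hand):
    """Two-handed asymmetry: both hands on one side of the body midline."""
    return (left_hand[0] > 0 and right_hand[0] > 0) or \
           (left_hand[0] < 0 and right_hand[0] < 0)

assert in_zone_of_comfort((0.0, 1.2))        # resting position
assert not in_zone_of_comfort((0.6, 1.2))    # one-handed lateral: outside width
assert needs_torso_twist((0.3, 1.2), (0.4, 1.2))
```

In a filter pipeline, the first test would gate whether any shoulder or body movement is generated at all, and the second would select the more pronounced twist case.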

In some cases, several of these situations occur at the same time, and multiple filters are applied following the guidelines above. Generally, the shoulders should remain balanced with one another, except when both hands are raised. Additional observations from the motion study concerned the motion of the head and eye gaze. The head typically tilts with the shoulders and rotates towards where the hands are pointing. Eye gaze tends to follow the dominant hand and typically begins moving just before the sign.

These findings relate to kinesiology; however, they also need to be analyzed in the context of ASL linguistics. Previous research studied the use of facial expressions as non-manual signals and as emotional affect in ASL [Schnepp 2010], and this new research effort has the potential to complement the facial non-manual signals by exploring the linguistic and extra-linguistic influences on the spinal column, including body shifting, questions, negation or negative expressions, and pointing signs, as well as affect.

Conclusions

Plans for future work include implementing general filters for the above observations of non-manual signals. The filters will then be applied to sentences based on ASL gloss information, and will enable body movement and rotation appropriate to the context of the sentence. Further research will be conducted to determine when non-manual signals are used linguistically, as opposed to kinesiologically, as well as how to combine the non-manual signals for a 3D animated model.

The addition of non-manual signals to the current 3D animations will improve comprehension. When viewing previous animations without non-manual signals, Deaf viewers commented that the animations appeared mechanical. When reviewing animations with non-manual signals, viewers were strongly positive in their comments. They remarked on the more life-like quality of the motion, as well as on how much easier the signing was to understand. With additional research, context-based non-manual signals will allow the model to convey a wider range of expression and further aid understanding of the 3D animations.

References

Bahan, B. [1996] Non-Manual Realization of Agreement in American Sign Language.

Doctoral dissertation, Boston University, Boston, MA.

Baker, C. [1983] A microanalysis of the non-manual components of questions in American

Sign Language. In P. Siple (Ed.), Understanding language through sign language research.

New York: Academic Press. 1983. pp. 27-57.

Toro, J. [2004] Automated 3D Animation System to Inflect Agreement Verbs. Proceedings of

the Sixth High Desert Linguistics Conference. Albuquerque, New Mexico, November 4-6,

2004.

Valli, C. and Lucas, C. [1995] Linguistics of American Sign Language: An Introduction. 2nd ed. Washington, DC: Gallaudet University Press, 1995.

Schnepp, J., Wolfe, R., and Donald, J. [2010] Synthetic Corpora: A Synergy of Linguistics

and Computer Animation. 4th Workshop on the Representation and Processing of Sign

Languages: Corpora and Sign Language Technologies. Malta, May 17-23, 2010.

Developing a Phoneme-based Talking Joystick for Nonspeaking Individuals

Ha Trinh School of Computing, University of Dundee

Dundee, Scotland, United Kingdom, DD1 4HN

[email protected]

Abstract

This research investigates the potential of developing a novel phoneme-based assistive

communication system for pre-literate individuals with severe speech and physical

impairments. Using a force-feedback joystick-like game controller as the access tool, the

system enables users to select forty-two phonemes (i.e., English sounds) used in literacy

teaching and combine them to generate spoken messages. What distinguishes

this phoneme-based device from other communication systems currently available on the

market is that it allows users who have not mastered literacy skills to create novel words

and sentences without the need for a visual interface. Natural Language Processing (NLP)

technologies, including phoneme-to-speech synthesis, phoneme-based disambiguation

and prediction, and haptic force feedback technology are being incorporated into the

device to improve its accessibility and usability.

Background

Speech Generating Devices

Over the last thirty years there has been a substantial increase in the use of speech

generating devices (SGDs) to provide an augmentative and alternative means of spoken

communication for individuals with little or no functional speech (Glennen, 1997). Most of

the commercially available SGDs can be classified into two categories, namely

pictographic-based and letter-based SGDs, each of which has its own advantages and

disadvantages. Pictographic-based SGDs employ graphical symbols to encode a pre-

stored vocabulary, enabling users to sequence those symbols to produce spoken output.

Although this type of SGD allows for quick retrieval of commonly used words and

utterances, it requires users to browse through the symbol set and decode the visual

representations of pre-stored items, which might be problematic for individuals with

intellectual disabilities. Moreover, the restricted vocabulary encoded in these devices

might not satisfy the needs of those who would like to spontaneously create novel words

and messages in everyday conversations. To overcome this limitation, a number of letter-

based SGDs have been developed to enable nonspeaking users to spell their own

messages and generate them in the form of synthesized speech. Although this type of SGD

provides users with an open vocabulary, it requires users to master literacy skills; skills which

many children and adults with severe speech and physical impairments (SSPI) struggle to

acquire (Koppenhaver, 1992). This raises the question of how to design a communication

device that could provide users with access to an unrestricted vocabulary and enable

them to generate novel messages without being literate.

The PhonicStick Project

The PhonicStick project, conducted at the University of Dundee, aims to develop a

phonics-based literacy learning tool for pre-literate children with SSPI. This talking joystick,

known as the PhonicStick, provides pre-literate children with access to the 42 spoken

phonemes introduced in the Jolly Phonics literacy program (Lloyd, 1998), and allows users

to blend these phonemes into spoken words (Black, 2008). The most prominent feature of

the PhonicStick is that it enables users to access the phonemes without the need for visual

displays. The 42 phonemes are divided into eight groups and mapped onto discrete

locations on the joystick workspace using a method adapted from the Quikwriting stylus

text input system (Perlin, 1998), as shown in Figure 1. The selection of each phoneme

consists of three stages: (1) Push the joystick from the centre of the joystick workspace into

the group to which the target phoneme belongs (this movement is shown by the straight

lines in Figure 1); (2) Move the joystick around its circumference to find the target phoneme

within that group; (3) Move the joystick back to the centre to select the target phoneme. Auditory cues are utilized to assist users in moving the joystick along pre-defined paths on

the joystick interface towards the location of target phonemes. The use of such a gestural

access method will help reduce the cognitive workload imposed on the users by

eliminating the need to decode any pictographic or orthographic representations of the

phonemes.
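The three-stage gesture above can be sketched in code; the eight-sector angular layout follows the Quikwriting-style description, but the placeholder group contents are assumptions (the actual Jolly Phonics grouping shown in Figure 1 differs).

```python
import math

# Hypothetical sketch of the three-stage phoneme selection gesture,
# assuming eight phoneme groups at 45-degree intervals around the centre.
# The group contents below are placeholders, not the Figure 1 layout.

GROUPS = [
    ["s", "a", "t"], ["i", "p", "n"], ["c", "k", "e"], ["h", "r", "m"],
    ["d", "g", "o"], ["u", "l", "f"], ["b", "j", "z"], ["w", "v", "y"],
]

def group_for_push(dx, dy):
    """Stage 1: map the initial push direction onto one of eight sectors."""
    angle = math.degrees(math.atan2(dy, dx)) % 360
    return GROUPS[int((angle + 22.5) % 360 // 45)]

def phoneme_for_sweep(group, steps):
    """Stage 2: moving around the circumference steps through the group;
    stage 3 (returning to the centre) confirms the current phoneme."""
    return group[steps % len(group)]

group = group_for_push(1.0, 0.0)      # push due "east" from the centre
print(phoneme_for_sweep(group, 1))    # one step around the rim
```

Auditory cues would be triggered at each sector boundary and at each step within the group, so no visual display is needed.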

Figure 1. The mapping of the 42 phonemes onto the joystick workspace

To evaluate the potential of the joystick-based phoneme access paradigm, the

PhonicStick project undertook a pilot study within which a prototype joystick was

developed to enable access to a subset of six phonemes, including /s/, /a/, /t/, /p/, /n/,

and /i/. Evaluations conducted with seven children with varying degrees of speech and

physical impairments demonstrated that the participants were able to remember the

joystick movements required to access the six phonemes and create their own words

without the presence of visual cues (Black, 2008).

Since the PhonicStick makes use of speech sounds as the base units for word creation, it

enables users to generate novel spoken words and messages without the need for literacy

skills. The question thus arises as to whether such a literacy learning tool could be optimized

to be an interactive communication device used by non-literate individuals with SSPI in real-time conversation.

Proposed Solution

This PhD research highlights two major problems that could prevent the PhonicStick from

being an effective communication system. First, the six-phoneme PhonicStick prototype

employs pre-recorded speech to produce spoken output (Black, 2008). With the full set of

42 phonemes, it would be infeasible to pre-record all possible combinations of these

phonemes. To solve this problem, the research explores the feasibility of incorporating

speech synthesis technology into the PhonicStick. It is expected that a phoneme extension

to existing text-to-speech systems will be integrated into the PhonicStick, allowing the

device to generate computerized speech from phoneme sequences. Experiments will be

conducted to evaluate the intelligibility of the phoneme-based synthetic speech using

well-established methodologies such as the Modified Rhyme Test (House, 1965). The

second major problem with the current PhonicStick is that the word creation process is very

slow and may require many physical movements, which may cause great difficulties for

users with physical disabilities. Thus, this research aims to investigate whether predictive

techniques, such as phoneme-based disambiguation and prediction, can be applied to

the PhonicStick to achieve joystick movement savings and increase the communication

rate. These techniques would enable the system to predict a list of probable next

phonemes or words given the previously selected phoneme sequence. Two methods have

been proposed to present the predicted list to the users without the need for a visual

interface. The first method employs auditory feedback by which predicted phonemes and

words will be spoken out. The second method utilizes force feedback technology by which

physical forces will be applied to guide the users towards the location of highly probable

next phonemes. The idea of incorporating force feedback into the prediction process is

inspired by an existing dynamic, probabilistic text entry system (Williamson, 2005).
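As an illustration of phoneme-based prediction, a toy bigram model can rank likely next phonemes from counts over words spelled as phoneme sequences; the training data and function names here are placeholders, not the project's actual language model.

```python
from collections import Counter, defaultdict

# Toy sketch of phoneme-based next-symbol prediction with a bigram model.

def train_bigrams(words):
    """Count which phoneme follows which across a list of phoneme-spelled words."""
    model = defaultdict(Counter)
    for phonemes in words:
        for prev, nxt in zip(phonemes, phonemes[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, prev, k=3):
    """Rank the k most probable next phonemes after `prev`."""
    return [p for p, _ in model[prev].most_common(k)]

model = train_bigrams([["s", "a", "t"], ["s", "a", "n"], ["s", "i", "t"]])
print(predict_next(model, "a"))
```

The ranked list is what would be spoken aloud (auditory feedback) or turned into attractive forces on the joystick (force feedback) to guide the user towards probable next phonemes.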

A user-centered design methodology will be adopted in the research to ensure that the

needs and the desires of the end users will be properly addressed. There will be two groups

of stakeholders involved in the research, including: (i) non-literate individuals with SSPI who

will participate in the evaluation phases, (ii) speech and language therapists and

rehabilitation engineers who will be involved in the design of the optimal phonic layout (i.e.

the method of mapping the phonemes onto the joystick interface) and force feedback

features. An iterative, incremental approach will be employed throughout the system

design and development process to take advantage of early and continuous feedback

obtained from evaluations with a focus group. The system will be evaluated against a

number of criteria, including efficiency (i.e., the communication rate achieved by using

the system), learnability (i.e., how easily users can memorize the phonic layout and learn to create spoken messages from phonemes with the support of predictive features), and

user satisfaction.

Stage of Study

The research is in the process of implementing the first working prototype of the phoneme-based communication system. The prototype uses the Novint Falcon, a newly developed haptic game controller, as its access tool. This haptic device is capable of generating high-fidelity three-dimensional force feedback, allowing the proposed force feedback features to be implemented at later stages of development. A phoneme-to-speech synthesis module, which streams phoneme sequence input into the Microsoft text-to-speech engine to produce spoken output, has been

integrated into the prototype. Results from evaluations with thirty native English speakers with normal hearing showed that the synthetic speech produced by the prototype

achieved a satisfactory level of intelligibility as it scored 92.6% correct on average in a

closed-response Modified Rhyme Test.

The research is currently focusing on the development of the phoneme-based predictive techniques. A phoneme-based n-gram statistical language model (Jurafsky, 2009) is being built, from which the proposed predictive features will be developed. The frequency of occurrence of each phoneme computed from this model is also being used to rearrange the current phoneme layout so that phonemes with high frequencies are located at easy-to-access positions on the joystick workspace. A series of experiments with both disabled and non-disabled participants is being planned to gather input for the design of the predictive features and to evaluate their usability upon completion.

Conclusions

This research proposes a new type of communication system that has the potential to overcome the limitations of the picture-based and letter-based systems currently available on the market. The solutions developed from this research would provide the large proportion of people with SSPI who experience literacy difficulties with a new approach to speech generation, thereby adding more flexibility to their communication strategies.

Since the proposed system explores the use of a novel gestural joystick-based interface with the application of innovative technologies such as haptic force feedback, the findings obtained from its design and development would contribute to research on non-classical interfaces, an interesting topic in accessibility and usability research.

References

Black, R., Waller, A., Pullin, G., Abel, E. (2008). Introducing the PhonicStick: Preliminary

evaluation with seven children. Paper presented at the 13th Biennial Conference of

the International Society for Augmentative and Alternative Communication, Montreal, Canada.

Glennen, S. L., DeCoste, D. C. (1997). The Handbook of Augmentative and Alternative

Communication. Thomson Delmar Learning.

House, A., Williams, C., Hecker, M., Kryter, K. (1965). Articulation testing methods:

Consonantal differentiation with a closed response set. Journal of the Acoustical

Society of America, 37, 158-166.

Jurafsky, D. S., & Martin, J. H. (2009). Speech and language processing: an introduction to

natural language processing, computational linguistics, and speech recognition (2nd

ed.). London: Pearson Education, Inc.

Koppenhaver, D. A., & Yoder, D. E. (1992). Literacy issues in persons with severe speech and

physical impairments. In R. Gaylord-Ross (Ed.), Issues and research in special education (Vol. 2, pp. 156-201). New York: Teachers College Press, Columbia University.

Lloyd, S. M. (1998). The Phonics Handbook. Chigwell: Jolly Learning Ltd.

Perlin, K. (1998). Quikwriting: continuous stylus-based text entry. Paper presented at the

11th Annual ACM Symposium on User Interface Software and Technology, California,

United States.

Williamson, J., Murray-Smith, R. (2005). Hex: Dynamics and probabilistic text entry. In Switching and Learning in Feedback Systems (Vol. 3355, pp. 333-342). Springer.

About the Author

Ha Trinh is a PhD student in the School of Computing at the University

of Dundee. She completed her BSc (Hons) in Applied Computing at

the University of Dundee in June 2009 and started her PhD in August

2009, funded by the Scottish Informatics and Computer Science

Alliance (SICSA) and the University of Dundee. Her primary research

interests focus on Augmentative and Alternative Communication

(AAC) technology, in particular the application of Natural Language

Processing and multi-modal interaction to AAC systems.

Immersive Virtual Exercise Environment for People with Lower Body Disabilities

Shaojie Zhang P. Pat Banerjee Cristian Luciano University of Illinois at Chicago

842 W Taylor St., 2039ERF, Chicago, IL 60608

[email protected] [email protected] [email protected]

Abstract

This paper presents the development and evaluation of the use of virtual reality technology, together with appropriately adapted exercises through an augmented reality interface, to create Virtual Exercise Environments (VEEs). These environments allow persons with lower body disabilities to exercise and train just as people do in the real world. The major aim of this research is an architecture for creating virtual exercise environments for people with lower body disabilities that can make exercise more enjoyable and less repetitive. This paper presents our current research on this architecture to facilitate participation and adherence, using a prototype VEE and commercial off-the-shelf hardware components.

Introduction

It is well established that physical exercise can improve an individual's health and lower the risks of developing certain types of cancer, cardiovascular disease, and other serious illnesses (Blair et al., 1989; Kampert et al., 1996). Exercise is also associated with many psychological benefits, including lower depression, anxiety, and stress (Gauvin & Spence, 1995; Byrne & Byrne, 1993). This is especially true for people with lower body disabilities, for whom physical, attitudinal, and societal barriers make it difficult to begin and continue a program of regular exercise. A recent review (Humpel et al., 2002) showed that (1) accessibility of facilities, (2) opportunities for physical activity, (3) safety, (4) weather, and (5) aesthetics were leading determinants of participation in physical activity. While the review focused on the general population, there is ample evidence that these factors are usually more important in determining participation in physical activity by people with disabilities. Weather conditions such as excessive heat in summer or excessive cold and dangerous surfaces in winter make outdoor exercise extremely hazardous for people with disabilities. One approach to overcoming these barriers is to use technology to bring equally, if not more, engaging, entertaining, and motivating indoor exercise opportunities to people with disabilities.

With the increasing performance and decreasing cost of computers, it is becoming possible to use virtual reality techniques on fitness machines in gyms and homes. Use of fitness machines is often plagued by boredom and resultant poor effort. Very few fitness centers offer fully accessible opportunities for people with lower body disabilities, and exercising at home can become boring for even the most dedicated exerciser. Virtual reality can help by adding realism and motivation. Research has revealed that individuals are more likely to engage in a program of regular exercise if it is fun (i.e., the enjoyment outweighs the discomfort) and if they have a "partner" to exercise with (Johnson et al., 1996). For persons with lower body disabilities, there are far fewer opportunities to exercise with a partner, and exercise may be more likely to be perceived as a chore than as a looked-forward-to part of the day's activities (Ravesloot et al., 1998; Heath & Fentem, 1997).

As mentioned above, much research on virtual reality has been conducted in various areas, and some applications have already become products on the market. These products include software systems for virtual athletic training and competition such as NetAthlon (www.riderunrow.com/products_na.htm), and game consoles such as the Nintendo (www.nintendo.com), Sony Playstation (www.sony.com), and X-Box (www.xbox.com). These solutions are not suitable for research and development because they are not open. Herein, we develop a Remote Exercise and Game Architecture Language (REGAL) to create high-level immersive VEEs. These environments allow people with lower body disabilities to exercise more often in a highly engaging activity.

The major objective of our research is an architecture for virtual exercise environments that makes exercise more enjoyable and less repetitive for people with lower body disabilities. For this group, accessible exercises in the real world are very limited, and it is hard or even impossible to participate in certain enjoyable exercises such as swimming, running, skiing, cycling, and football. The architecture presented in this paper is designed to provide opportunities to practice as many exercises as possible. Our VEE architecture has two features to make this happen: first, people with lower body disabilities have full access to the VEE; second, our VEE can support various exercises, including the pre-loaded exercises we designed and new exercises implemented by other developers, as well as various pieces of fitness equipment.

Current Methodology and Experiments

The software architecture is geared towards open standards for immersive virtual exercise environments, supporting multiple fitness machines and various sports, and displaying immersive 3D graphics. The architecture can support new exercises implemented by other developers; these can be uploaded to the architecture and played by participants in the same way as our pre-loaded exercises.

In the current architecture, shown in Fig. 1, two ways are provided to create the 3D virtual model: either from an existing game or on top of the software core, which consists of Open Inventor and PhysX. The architecture also provides tracking systems to obtain input from the user's body movement. For people with lower body disabilities, the main body movement comes from the hands/arms and head, so two independent tracking systems are included to track these two movements. The architecture supports exercise equipment with serial communication, like that supported by NetAthlon, via serial ports on a PC. To support arbitrary equipment more flexibly, the hand tracking system is also connected to the equipment, so that the architecture becomes independent of the exercise equipment. The rendered image is output to the stereo driver for 3D display. We also play back 3D spatialized stereo sound for each collision to provide a more realistic experience.
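The equipment-independence idea can be sketched as follows: whether speed comes from a machine's serial port or is derived from the hand tracker, the rest of the system sees one interface. The class names and the "SPEED:"-style message are invented for illustration, not the actual REGAL interfaces.

```python
# Hypothetical sketch of equipment-independent exercise input.

class ExerciseInput:
    def speed(self):
        raise NotImplementedError

class SerialEquipment(ExerciseInput):
    """Machine that reports its state over a serial line."""
    def __init__(self, read_line):
        self.read_line = read_line   # callable standing in for the serial port
    def speed(self):
        return float(self.read_line().split(":")[1])

class TrackedEquipment(ExerciseInput):
    """Arbitrary machine: derive speed from hand-tracker displacement."""
    def __init__(self, positions, dt=0.1):
        self.positions, self.dt = positions, dt
    def speed(self):
        return abs(self.positions[-1] - self.positions[-2]) / self.dt

def render_step(equipment):
    """The simulation only sees the ExerciseInput interface, not the hardware."""
    return f"advance scene by {equipment.speed():.1f} m/s"

print(render_step(SerialEquipment(lambda: "SPEED:3.2")))
print(render_step(TrackedEquipment([0.0, 0.05], dt=0.1)))
```

Because the renderer depends only on the abstract interface, a rowing machine with no PC communication can still drive the scene through the hand tracker.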

Based on the architecture, we designed a throwing game using the first option of creating the 3D virtual model with Open Inventor and PhysX. This game was demonstrated at the 2009 RESNA (Rehabilitation Engineering and Assistive Technology Society of North America) conference. We used the electromagnetic PCIbird tracker from Ascension Technologies Corp. for head tracking and the same type of sensor for hand tracking, to drive the viewpoint and hand movement respectively, and a Samsung DLP HDTV to display real-time 3D graphics. Feedback showed that participants were satisfied with the natural interaction and the consistent, responsive virtual environment but expected more, and participants with lower body disabilities showed higher satisfaction with every aspect of the environment than those without disabilities (Zhang et al., 2010).

Fig. 1 REGAL architecture

We also designed a rowing demo using the second option of creating the 3D virtual model from an existing game called FarCry. This demo also shows the support for various equipment by using a prototype rowing machine that has no PC communication. The rowing demo was shown at the 2nd State of the Science Conference of RecTech 2010. The head tracking system was the electromagnetic PCIbird tracking system from Ascension Technologies Corp., and the hand tracking system was built on the Nintendo Wii remote controller and its managed library for Windows. This demo also illustrates our concept of transferring fine motor control to gross motor control in playing games, which gives players more opportunities to exercise and have fun at the same time.


Future work

Research has shown that people are more likely to engage in a program of regular exercise if they have partners to exercise with. We plan to implement the virtual environment for multiple users, so that users can communicate and even compete with either real or virtual partners. Real partners can play either in the same place or over the internet from different locations around the world. The current architecture can be extended to run on interconnected, geographically distant computers to allow remote collaboration and multi-user participation. A networking component will be developed to handle communication between individual users.

Performance data such as speed and heart rate are important for analyzing users' participation and adherence. Collection and analysis of performance data will be one important component of the next version of the VEE. These performance data can also be used to replay previous games and to balance users' levels when they compete with each other over the internet.
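As a sketch of how such a performance component might look, the following hypothetical session log records speed and heart-rate samples and derives a simple handicap for balancing networked competitors. All names here (Sample, SessionLog, handicap) are illustrative, not part of the described system.

```python
# Hypothetical per-session performance log, supporting the two uses named
# above: replaying a previous game (the raw samples) and balancing users'
# levels in a networked competition (a mean-speed handicap).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Sample:
    t: float         # seconds since session start
    speed: float     # m/s reported by the equipment
    heart_rate: int  # beats per minute

@dataclass
class SessionLog:
    samples: List[Sample] = field(default_factory=list)

    def record(self, t: float, speed: float, heart_rate: int) -> None:
        self.samples.append(Sample(t, speed, heart_rate))

    def mean_speed(self) -> float:
        return sum(s.speed for s in self.samples) / len(self.samples)

def handicap(a: SessionLog, b: SessionLog) -> float:
    """Speed multiplier applied to player b so both race on equal terms."""
    return a.mean_speed() / b.mean_speed()

log_a, log_b = SessionLog(), SessionLog()
log_a.record(0.0, 4.0, 120)
log_a.record(1.0, 4.0, 125)
log_b.record(0.0, 2.0, 110)
log_b.record(1.0, 2.0, 115)
```

Here the slower player would have their speed scaled by `handicap(log_a, log_b)` so that both competitors finish a course in comparable time.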

A gesture library is also desirable for supporting various exercises such as skiing, walking and cycling. This library will provide individual components that recognize the typical gestures of different exercises, so that new exercises can be supported without making any change to the architecture.
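One way such a library could be structured is as a common recognizer interface with one pluggable implementation per exercise. The sketch below is purely illustrative: the RowingStroke recognizer and its threshold are our own invented example, not part of the described library.

```python
# Hypothetical gesture-library interface: each exercise contributes a
# recognizer with the same update() contract, so new exercises plug in
# without changing the surrounding architecture.
from typing import Optional, Tuple

Sample = Tuple[float, float, float]  # (x, y, z) hand position in meters

class RowingStroke:
    """Toy recognizer: a stroke is a backward (negative-x) pull larger
    than a threshold, measured from the last detected stroke."""
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.start_x: Optional[float] = None

    def update(self, sample: Sample) -> Optional[str]:
        x = sample[0]
        if self.start_x is None:
            self.start_x = x          # first sample sets the reference point
            return None
        if self.start_x - x > self.threshold:
            self.start_x = x          # re-arm for the next stroke
            return "stroke"
        return None

rec = RowingStroke()
events = [rec.update(s) for s in [(0.0, 0, 0), (-0.3, 0, 0), (-0.7, 0, 0)]]
# events == [None, None, "stroke"]
```

A skiing or cycling recognizer would implement the same `update()` contract, letting the architecture treat all exercises uniformly.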

Conclusions

This paper gives a brief description of our current research on an architecture for a virtual exercise environment that helps people with lower body disabilities pursue a more active lifestyle. The architecture provides the capability to create more enjoyable and engaging virtual exercise environments for people with lower body disabilities. Two proof-of-concept demonstrations were conducted to gather users' feedback and to evaluate the quality of the architecture. A multiple-user mode, a performance database and a gesture library will be added to the architecture in the next step.

References

Blair, S. N., Kohl, H. W., Paffenbarger, R. S., Clark, D. G., Cooper, K. H. & Gibbons, L. W. (1989)

Physical fitness and all-cause mortality. A prospective study of healthy men and

women. Journal of the American Medical Association, 262, 2394–2401.

Byrne, A. & Byrne, D. G. (1993) The effect of exercise on depression, anxiety, and other

mood states: A review. Journal of Psychosomatic Research, 37, 565–574.

Gauvin, L. & Spence, J. C. (1995) Psychological research on exercise and fitness: current

research trends and future challenges. The Sport Psychologist, 9, 434–448.

Heath, G. W. & Fentem, P. H. (1997) Physical activity among persons with disabilities -- A

public health perspective. Exercise and Sport Sciences Reviews, 25, 195-234.

Humpel, N., Owen, N. & Leslie, E. (2002) Environmental factors associated with adults'

participation in physical activity: A review. American Journal of Preventive Medicine,

22(3), 188-199.


Johnson, D. A., Rushton, S. & Shaw, J. (1996) Virtual reality enriched environments, physical

exercise and neuropsychological rehabilitation. The 1st European Conference on

Disability, Virtual Reality and Associated Technologies, Maidenhead, UK

Kampert, J. B., Blair, S. N., Barlow, C. E. & Kohl, H. W. (1996) Physical activity, physical fitness, and all-cause and cancer mortality: a prospective study of men and women. Annals of Epidemiology, 6, 452–457.

Ravesloot, C., Seekins, T. & Young, Q. R. (1998) Health promotion for people with chronic

illness and physical disabilities: The connection between health psychology and

disability prevention. Health Psychology, 5, 76-85.

Zhang, S., Banerjee, P. P. & Luciano, C. (2010) Virtual exercise environment for promoting active lifestyle for people with lower body disabilities. Proceedings of the 2010 International Conference on Networking, Sensing and Control (ICNSC), 80–84.

About the Authors

Shaojie Zhang is currently a Ph.D. student in the Department of Computer Science at the University of Illinois at Chicago. She earned her B.S. and M.S. degrees in Computer Science and Technology from Tsinghua University (China) in 2005 and 2007 respectively. Her current research interests are virtual reality, virtual exercise environments and accessibility for people with disabilities.

P. Pat Banerjee received his B.Tech. from the Indian Institute of Technology and his Ph.D. in Industrial Engineering from Purdue University. He is currently a Professor in the Departments of MIE, Computer Science, and Bioengineering at UIC, and a visiting faculty member in the Department of Surgery at the University of Chicago. He is an Associate Editor of IEEE Transactions on Information Technology in Biomedicine, and is a Fellow of ASME. His current research interests include virtual reality and haptic applications.

Cristian J. Luciano received his B.S. in Computer Science from the

Universidad Tecnológica Nacional (Argentina), his M.S. in Industrial

Engineering, his M.S. in Computer Science, and his Ph.D. in Industrial

Engineering and Operations Research from the University of Illinois at

Chicago (UIC). He is currently a Research Assistant Professor in the

Department of Mechanical and Industrial Engineering at UIC. His

research interests are virtual and augmented reality, haptics, scientific

visualization, and medical simulation.


Accessible Electronic Health Records:

Developing a Research Agenda

Andrew Sears Vicki Hanson

Information Systems Department School of Computing

UMBC University of Dundee

[email protected] [email protected]

Abstract

An NSF-sponsored workshop, convened in October 2010, addressed the question of a research agenda that would lead to accessible electronic health records. The highest-priority research areas identified included addressing the challenges people experience as a result of temporary disabilities; understanding and addressing the issues faced by older adults; investigating the efficacy of personalization technologies for improving accessibility; documenting the current state of the art with regard to the accessibility of existing electronic health records; and carefully documenting the potential benefits of accessible electronic health records for various audiences and with regard to potential improvements in one's health.

Introduction

Electronic solutions for storing, retrieving, sharing, and analyzing health-related information are being deployed rapidly. Solutions may be designed for healthcare professionals or for consumers. As a result, people may access the records infrequently or frequently; they may access a single record repeatedly (e.g., consumers reviewing their own record) or many different records occasionally (e.g., providers preparing to meet with patients); or they may need to compare information across records (e.g., researchers looking for patterns). Individuals may enter or update information, or they may simply access the information, which is likely to include a combination of text and graphics. Security must be maintained, collaboration should be supported, and privacy must be ensured. While solutions are being proposed, the accessibility of electronic health records has not been adequately addressed.

Accessibility of electronic health records is a significant concern, since all individuals, including those with disabilities, should be able to access this information. While there has been significant research on accessibility in the context of information technologies, limited attention has focused specifically on accessibility in the context of electronic health records. Careful consideration of the needs of individuals with disabilities, combined with existing accessibility solutions, can address many of the difficulties that may be experienced. Specific examples of potentially useful solutions include keyboard navigation as an alternative for individuals with visual impairments who find traditional pointing devices difficult to use; high-contrast displays and larger fonts for individuals with certain visual impairments; textual descriptions of images for individuals who are blind; captioning of videos for individuals with hearing impairments; simplified text for individuals with cognitive impairments; and alternative input devices for individuals with physical disabilities.

An NSF-sponsored workshop was held in the days preceding the ASSETS 2010 conference, with participants from both the US and the UK. The workshop was organized by Andrew Sears of UMBC and Vicki Hanson of the University of Dundee. Participants included academic researchers and representatives of non-profit organizations and government agencies, with expertise in accessibility and healthcare, including experience with electronic health records. The purpose was to discuss issues and challenges associated with ensuring that future electronic solutions for storing, retrieving, sharing, and analyzing health-related information are accessible to the widest possible segment of the population.

The goal of this specific meeting was to identify critical issues that need to be addressed by the research community if accessibility is to be effectively addressed in the context of electronic health records. Through the outcomes of this meeting, further goals were to:

- educate accessibility researchers with regard to the concerns and needs of the healthcare community;
- educate the healthcare community with regard to issues, challenges, and existing solutions for making relevant information accessible to individuals with disabilities.

Process

This one-and-one-half-day event began with a brief review of the agenda and goals for the event. This was followed by brief introductions, allowing individuals to identify themselves, their affiliations, and their research interests. Next, there was a series of brief presentations designed to ground the subsequent conversation:

- Eileen Elias of JBS International discussed policy-related issues associated with accessibility and electronic health records;
- Raja Kushalnagar of the Rochester Institute of Technology discussed legal issues;
- David Baquis of the US Access Board talked about accessibility and electronic health records;
- Sharon Laskowski and Lana Lowry of the US National Institute of Standards and Technology focused on implementation and standards-related issues.

These presentations were followed by a breakout session in which two groups discussed the critical user groups that should be considered when accessibility is addressed, as well as the policy-related issues the groups felt would be particularly important if the goal of ensuring that electronic health records are accessible is to be achieved.

The second day began with another series of four presentations:

- Jeffrey Bigham of the University of Rochester discussed issues that affect individuals with visual impairments;
- Juan Pablo Hourcade of the University of Iowa talked about older adults and electronic health records;
- Dean Karavite of the Children's Hospital of Philadelphia talked about implementation issues within a hospital environment;
- Debbie Somers of eHealth Insight also focused on implementation-related issues.

These presentations led directly into a group discussion of issues that need to be addressed through research, which in turn led to a conversation focused on prioritizing the research topics that were identified.

Outcomes

The initial discussions reaffirmed the diversity of users in terms of their backgrounds, roles, and priorities, including clinicians and administrative personnel as well as patients, their families, and their caregivers. It was also recognized that electronic health records affect many different types of organizations. There was significant discussion regarding policies and laws that could affect efforts to ensure that electronic health records are accessible. The group also discussed the relationship between usability and accessibility, as well as the goal of ensuring that electronic health records are accessible to everyone, regardless of whether they have a legally defined disability. This same discussion explored the motivation of organizations to address, or not to address, accessibility in the context of such records. The final topic explored was the concept of disability, with the group adopting the perspective that the context for disabilities can be permanent (e.g., an individual may be blind) or temporary (e.g., an individual with a broken arm may have a cast that limits movement), and can also be associated with the environment (e.g., working in a noisy environment or a room with poor lighting) or the activities in which an individual is engaged (e.g., stressful occupations).

While these initial considerations were useful, and framed the discussions that followed, the

primary goal for the workshop was to identify issues that the group felt needed to be

addressed through additional research. Many issues were discussed. While the group

agreed that more research was needed to address all of the issues that had been

identified, by the end of the workshop a short list of the highest priority areas for research

was created; an additional list of important issues that should be addressed through

research was also generated.

The most significant issues, from the perspective of the workshop attendees, include addressing the challenges individuals experience as a result of temporary disabilities (while many solutions will address both permanent and temporary disabilities, the group felt it was important not only to consider chronic disabilities but also to understand accessibility and usability from the standpoint of a person with a temporary disability, since such an individual may not normally require accessible technologies and thus may be using them for the first time); understanding and addressing the issues faced by older adults; investigating the efficacy of personalization technologies for improving accessibility; documenting the current state of the art with regard to the accessibility of existing electronic health records; and carefully documenting the potential benefits of accessible electronic health records for various audiences and with regard to potential improvements in one's health.

The second group of issues, also viewed as important but given a lower priority than those identified above, included addressing accessibility for individuals with multiple disabilities; understanding and more effectively supporting the collaborative and social aspects of electronic health records (e.g., ensuring that technology does not hinder caregiver-patient interactions); and ensuring accessibility as we transition from paper archives to electronic records.

In addition to these two lists of high-priority research foci, the group discussed many other topics considered to be in need of additional attention by the research community. While personalization was a high-priority topic, other forms of adaptation were also considered important, including one-time adaptation based on an individual's abilities and adaptation as an individual's abilities change over time (e.g., the challenges experienced due to multiple sclerosis may vary throughout the day). Input solutions continue to be inadequate for many individuals with disabilities, especially those with dexterity impairments. There was significant interest in the idea of reducing the need for expensive third-party solutions by building accessibility solutions into off-the-shelf products (e.g., the iPhone) or through the use of open source solutions.

It was recognized that additional work is needed to develop effective arguments as to why

organizations developing electronic health records should seek to ensure that their systems

are accessible. It also was acknowledged that more accessible solutions are not always

the most successful in the marketplace. The relatively poor record of successful adoption of

accessibility solutions needs to be studied with the goal of understanding the various

factors that must be addressed if adoption rates are to be increased.

Privacy and security were both considered major barriers to the success of accessible

electronic health records. While usable security has received attention, accessible security

has received significantly less attention. Participants also discussed opportunities to leverage both cloud computing as a way to deliver accessibility solutions and crowdsourcing as a tool for supporting accessibility. It was recognized that accessibility

involves ensuring that individuals with a variety of disabilities can access all forms of

information. While significant quantities of information may be stored as text, research also

needs to explore the issues involved in making images and auditory information accessible

to all users regardless of their disabilities.

As part of these efforts, and many others, it was stressed that individuals with disabilities

must be integrated into the research projects identified throughout the workshop. Ideally,

this would not just be as participants in studies seeking to evaluate solutions, but also as

members of the research team throughout the entire project.

Participants

David Baquis, US Access Board

Jeffrey Bigham, University of Rochester

Eileen Elias, JBS International

Larry Goldberg, WGBH National Center for Accessible Media

Vicki Hanson, University of Dundee

Juan Pablo Hourcade, University of Iowa

Faustina Hwang, University of Reading

Dean Karavite, Children's Hospital of Philadelphia

Raja Kushalnagar, Rochester Institute of Technology

Sharon Laskowski, National Institute of Standards and Technology

Lana Lowry, National Institute of Standards and Technology

Aqueasha Martin, Clemson University

Bambang Parmanto, University of Pittsburgh


Michelle Rogers, Drexel University

Andrew Sears, UMBC

Katie Siek, University of Colorado at Boulder

Debbie Somers, eHealth Insight

About the Authors

Andrew Sears is the Constellation Professor of Information Technology

and Engineering and a Professor and Chair of the Information Systems

Department at UMBC. Dr. Sears' research explores issues related to

human-centered computing and accessibility. Some of his research

projects have addressed issues associated with mobile computing,

health information technologies, speech recognition, managing

interruptions while working with information technologies, and

assessing an individual's cognitive status via normal daily interactions

with information technologies. Dr. Sears' research activities have been

supported by various companies, government agencies, and

foundations including IBM, Intel, Microsoft, Motorola, NSF, NIDRR, NIST, and the Verizon

Foundation. He is an Editor-in-Chief of the ACM Transactions on Accessible Computing and

serves on the editorial boards of several additional journals including the European Journal

of Information Systems, the International Journal of Human-Computer Studies, and

Universal Access in the Information Society. He served as Conference and Technical

Program Co-Chair of the ACM Conference on Human Factors in Computing Systems (CHI

2001) and both General Chair and Program Chair for the ACM SIGACCESS Conference on

Computers and Accessibility (ASSETS 2005 and ASSETS 2004). Dr. Sears currently serves as

the Chair of the ACM Special Interest Group on Accessible Computing. He earned his BS in

Computer Science from Rensselaer Polytechnic Institute and his Ph.D. in Computer Science

with an emphasis on Human-Computer Interaction from the University of Maryland –

College Park.

Vicki Hanson is Professor of Inclusive Technologies at the University of

Dundee, and Research Staff Member Emeritus from IBM Research.

She has been working on issues of inclusion for older and disabled

people throughout her career, first as a Postdoctoral Fellow at the

Salk Institute for Biological Studies. She joined the IBM Research

Division in 1986 where she founded and managed the Accessibility

Research group. Her primary research areas are human-computer

interaction, ageing, and cognition. Applications she has created

have received multiple awards from organizations representing older and disabled users.

She is Past Chair of ACM's Special Interest Group on Accessible Computing (SIGACCESS)

and is the founder and co-Editor-in-Chief of ACM Transactions on Accessible

Computing. Prof. Hanson is a Fellow of the British Computer Society and was named ACM

Fellow in 2004 for contributions to computing technologies for people with disabilities. In

2008, she received the ACM SIGCHI Social Impact Award for the application of HCI

research to pressing social needs. She currently is Chair of the ACM SIG Governing Board

and is the holder of a Royal Society Wolfson Merit Award.