
Talking-Points: A Mobile, Context-based Information System for Urban Orientation of Sighted and Visually-Impaired Users

Michelle R. Escobar [email protected]

Mark W. Newman [email protected]

Jason Stewart [email protected]

Jakob Hilden [email protected]

Kumud Bihani [email protected]

Sara Baumann [email protected]

University of Michigan School of Information

Ann Arbor, Michigan 48109 United States of America

1. Introduction

As visually-impaired individuals walk from place to place within an urban environment, they may miss a great deal of contextual information to which their sighted peers are privy. For example, they may be less likely to become aware of new businesses that have opened along their walking route, sales and promotions displayed through posters or billboards, informative signs indicating hazardous conditions or route changes, and so forth. In this paper we present Talking-Points, a system designed to provide additional contextual information to visually-impaired users as they move through the urban environment by supplying community generated, location-specific audio information via off-the-shelf mobile devices. The work presented in this paper represents a continuation of earlier work [Gifford 06] that demonstrated the technical feasibility of supplying audio-based information triggered by “tags” situated throughout the physical environment (see Figure 1). We have sought to extend this earlier work in a number of ways, including

• engaging with both visually-impaired and sighted users to understand the needs faced by each in the process of undertaking an urban journey

• improving upon the form factor of the early Talking-Points prototype in order to provide a less obtrusive mobile device that will not stigmatize or impede visually-impaired users, or discourage them from adopting the system, and

• providing complete and up-to-date information about urban points of interest by allowing information to be provided by various members of the community.


Figure 1: The original Talking-Points system employed RFID-based tags. These tags are inexpensive and have a short range that is appropriate for indoor environments; however, current RFID reader technology is expensive and unwieldy. The system presented in this paper represents an attempt to realize the advantages of a location-based information system in a less bulky form factor.

In the remainder of this paper, we expand upon the motivation for this work, report on formative fieldwork, describe our current prototype, report on early experiences with the prototype, discuss related work, and conclude.

2. Motivation

Working from an earlier rendition of Talking-Points, our design team set out to further develop the system around three main objectives: following a user-centered design process; making the device unobtrusive in terms of form factor and user interaction; and leveraging community-generated location information. These objectives anchored the system presented in this paper.

User-centered design process

Ideally, any application intended for end-users will be designed in accordance with user-centered design principles, and this is especially true for users whose needs and abilities differ from those of "mainstream" users. User-centered design involves the following principles: an early focus on the user; active user participation throughout the project; early prototyping; continuous iteration of design solutions; and a multidisciplinary design team [Gould 85].

Keeping these principles in mind, we first set out to learn more about visually-impaired individuals, in particular with respect to their needs surrounding the use of an urban orientation device. Because of the specific needs of the blind community, it was essential that individuals with visual impairments actively participate in the design and testing of the device so that we could gain a realistic understanding of user wants and needs. Early prototyping in various forms, such as Wizard of Oz studies and mock-ups, along with frequent tests of the client software, allowed us to iteratively evaluate and improve the design of the system. Both sighted and visually impaired users were asked to give feedback on the designs and to suggest how they could be improved. We feel that this emphasis on iterative design and end-user feedback has proven to be a great asset in the overall design of the system. The team working on the design and development of Talking-Points consists of accessibility specialists, UI designers, social scientists, and computer scientists. This multidisciplinary makeup has brought a variety of perspectives and additional insight to the system's design.

Adoption of an unobtrusive device

As technology evolves, devices have shrunk considerably in cost and size even as their computational power has increased. Breakthroughs in hands-free interfaces, such as speech user interfaces, have given individuals with various disabilities more independence. In addition, ubiquitous connectivity allows for the augmentation of a person's abilities, such as location awareness for visually impaired individuals [National Council on Disability 06]. As new technology is introduced, it is necessary to keep in mind the needs of individuals with disabilities. The widespread adoption of universally designed devices is currently being promoted as standard practice in some quarters, based on the argument that universal design benefits all users, regardless of their physical or mental abilities [Connell 97]. Universal design is hampered by issues such as closed or locked systems that prevent changes for assistive technology purposes, and so steps have been taken to develop systems using open source code to allow easier adaptation for accessibility needs [National Council on Disability 06]. In accordance with these positive trends, we have designed Talking-Points to benefit both sighted and visually-impaired individuals, and we have built the system to depend only on open-source components. Although assistive technology devices promote independent living and working opportunities that benefit both society and the individual, there is a high abandonment rate for such devices due to their failure to supplement and enhance the daily lives of the individuals they were designed for. The adoption of such devices depends on many factors: durability of the device; ease of use; an intuitive interface and navigation; ability to perform in multiple environments; value and empowerment found by using the device; and inconspicuous design [Kintsch 02].

This issue of inconspicuous design is especially important due to the social implications of using a device that identifies one as “disabled.” Mass-market technologies, such as cell phones, are used by everyone and therefore do not reinforce the stereotypes typically associated with the use of assistive technology devices in public spaces [Loadstone 08]. In addition, both sighted and visually impaired individuals prefer a device that fits in a pocket, leaving hands free to hold a cane, shopping bag, etc. Some systems (e.g., Drishti [Ling 04]) require multiple pieces of equipment to be carried by the user, thus limiting the freedom of the user while also potentially making them stand out. The use of commodity technologies has become feasible in recent years, as cell phones have grown to contain various functions that can be used to gather information about locations [Coughlan 06]. In one instance, a cell phone camera photographs a QR code to retrieve information about a location without the user having to step into the business [Kim 08]. Although that system was aimed at sighted individuals, the same concept has been applied for the visually impaired in the development of Talking-Points.

Community-generated content

It would be close to impossible for a single institution to provide a sufficient amount of good-quality location information for a very large set of points of interest (POIs). Therefore, we believe a more economical and realistic approach is to leverage user-generated content instead. Besides providing more complete and detailed coverage of POIs, community-generated content can also be expected to be more user-centered and more relevant for diverse audiences [Espinoza 01]. In addition, such a community-driven approach can be expected to have significant positive social influences. When members of the local community are encouraged to get involved in creating content for a location-based service, they are more likely to learn about their local environment, the members of their community, and how they can represent it to others. As a side effect, allowing Talking-Points to support community contributions may help raise general awareness of accessibility issues. For example, contributors may come to learn what location information is relevant to visually impaired people and how it needs to be represented so they can use it. Critical mass is a significant concern for any community-generated content source. Given the relatively small proportion of the visually impaired population with respect to the population at large, it may be difficult to achieve the needed critical mass in any but the most densely populated urban areas. With this in mind, we designed Talking-Points to be attractive to sighted users as well as the visually impaired. The ability to gain more information about points of interest has universal utility, even though it may be more critical for those who cannot take advantage of ambient information that is displayed visually in the environment. This goal echoes the prescriptions of universal design (discussed above) by harnessing a variety of different users' skills, perspectives, interests, and insights for the benefit of all.

3. Design Process

In accordance with the principles of user-centered design, we engaged potential users before and during the design process. We conducted a series of observational interviews, a focus group, and a Wizard of Oz (WOz) field trial in order to gain greater insight into the current practices, needs, and preferences of both sighted and non-sighted individuals as they undertake urban walking journeys. The observational interviews consisted of accompanying 10 individuals (7 sighted, 3 visually impaired) as they carried out activities on foot around Ann Arbor, Michigan. During these journeys, the accompanying team members recorded details such as how routes were selected, what details of the journey were attended to while in transit, and how priorities were assessed with respect to accomplishing multiple tasks and processing new information. In addition, we asked questions to explore general patterns related to typical activities conducted during walking journeys to augment the lessons learned from these specific, grounded observations. A focus group was conducted to help process and interpret the findings from the observational interviews. Ten sighted individuals participated in the focus group, which concentrated on what information was most important and desirable for POIs encountered when walking through town, as well as feedback on the Talking-Points concept.

Wizard of Oz studies are useful when an interactive system has been designed but not implemented [Dahlbäck 93]. A human confederate plays the role of the system, interpreting user inputs and issuing the appropriate response according to pre-defined rules that model the intended system behaviour. In the case of Talking-Points, the “wizard” played the role of both tag identifier and speech recognizer by following the participant down the street while connected to them via a cell phone call (see Figure 2). As the subject walked, the wizard would notify them of points of interest and then respond to user input with scripted voice prompts and descriptions.

These formative studies highlighted aspects of walking journeys that were similar between sighted and visually impaired individuals and aspects that were different. For both groups, familiarity with an area was built by first identifying a few points and then constructing a mental map by adding points over time. In other words, known locations were used to navigate to other locations, and familiarity with an area emerges gradually through many repeated journeys. Both user groups also expressed interest in information about changes in their surroundings (e.g., construction or a new location) that would offer new opportunities for exploration or affect the optimal route to a known location.

Figure 2: For the Wizard of Oz field simulation, participants were outfitted with a conventional cell phone headset, through which a member of the study team (the “wizard”) simulated the proposed behaviour of the Talking-Points system by notifying the participant of “detected” points of interest and responding to voice commands. In addition, the participants were followed by study team members who observed the participants' interaction with the system.

A key difference in the pedestrian experience of visually impaired individuals was explained by one blind interviewee as follows: "[T]he way [blind] people travel is different than the way sighted people travel: when [sighted people] look at a map you see the whole map, you see the total; [people with blindness] take only pieces from a total." Furthermore, we found that the visually impaired study participants were concerned with the pressure to be independent, and therefore with the necessity of remembering specific locations in order to get from one point to another. This was seen to cause the visually impaired to be more focused on the challenges of finding their way and therefore less inclined to attend to other potential POIs in their environment. Unsurprisingly, there were needs identified by visually impaired interviewees that were simply not concerns of sighted travellers. Obtaining information about the location of restrooms, information centers or police stations, the availability of materials for the blind (e.g., Braille menus), physical barriers (e.g., construction sites), and public transportation was a prominent concern for our visually impaired participants.

The WOz studies revealed shortcomings of our initial speech user interface design. In particular, our participants found the amount of information rendered for each POI (including the name, type, and a brief, one-sentence description) to be overwhelming. Even this fairly abbreviated information was found excessive in cases where the user was able to quickly determine that the POI was irrelevant to their current needs or interests. Based on this, we changed the system to supply only the name and type (e.g., “coffee shop,” “clothing store,” “restaurant”). We also observed that the WOz participants appeared to be immersed in their interaction with Talking-Points at the expense of paying attention to their environment independently of the system. This indicates the need for user control over the interaction with the system, and the need to keep system messages brief so as not to monopolize the user's attention. Nevertheless, all participants also expressed a desire for the ability to access and explore supplemental information about locations (such as hours, menus, and customer feedback), to personalize the device, and to filter information about POIs.

4. Talking-Points Prototype

The Talking-Points system design is reported in [Stewart 08] and is briefly summarized here.

Figure 3: The Talking-Points system consists of two main components: an online database that receives contributions and revisions from community members as well as business owners or others responsible for maintaining and promoting information about a particular location, and a mobile client that detects nearby POIs and presents their information to the user.

The system consists of two components (see Figure 3): a social online database that facilitates user-generated content creation and stores the POI information; and a mobile device that detects POIs and presents the contextual information through either a speech user interface (SUI) or a graphical user interface (GUI). In the following we describe these components in more detail.

As discussed earlier, support for community generation and maintenance of the POI database is critical for populating the system with adequate information as well as for ensuring that the data remain up-to-date. One potential shortcoming of a community-generated content approach was noted by several focus group participants: such systems typically have few safeguards to ensure information quality and accuracy. While extremely successful community-based systems can overcome these limitations by sheer force of numbers (e.g., Wikipedia [Giles 05]), it is unrealistic to expect that a system on the scale of Talking-Points would achieve such volume. A simple measure to improve quality without sacrificing community involvement is to restrict editing of certain fields. For example, the ability to create and modify the name, type, and short description for a POI would be restricted to that location's “owner,” whereas other fields could be left open for community modification. This helps ensure that the most frequently accessed information is controlled by a responsible authority, though more nuanced safeguards would likely have to be implemented for a production system.
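To make this concrete, the following is a minimal sketch (in Java, the language of the client) of how such field-level restrictions might be modeled. The class, field, and method names are illustrative assumptions and are not drawn from the actual Talking-Points implementation.

import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Minimal sketch of a community-editable POI record (hypothetical names).
 * The name, type, and short description are writable only by the POI's
 * registered owner; all other sections remain open to community contribution.
 */
public class PointOfInterest {
    private final String ownerId;        // account that registered this POI
    private String name;                 // owner-controlled
    private String type;                 // owner-controlled, e.g. "coffee shop"
    private String shortDescription;     // owner-controlled
    // Community-editable sections, e.g. "hours", "menu", "accessibility notes".
    private final Map<String, String> sections = new LinkedHashMap<String, String>();

    public PointOfInterest(String ownerId, String name, String type) {
        this.ownerId = ownerId;
        this.name = name;
        this.type = type;
    }

    /** Owner-only edit: rejected if the caller is not the registered owner. */
    public void setName(String callerId, String newName) {
        requireOwner(callerId);
        this.name = newName;
    }

    /** Open to any community member. */
    public void putSection(String title, String text) {
        sections.put(title, text);
    }

    private void requireOwner(String callerId) {
        if (!ownerId.equals(callerId)) {
            throw new SecurityException("Only the POI owner may edit this field");
        }
    }

    public String getName() { return name; }
    public String getType() { return type; }
    public Map<String, String> getSections() { return sections; }
}

In a production system, the same ownership check would of course be enforced on the server side rather than in the client data model alone.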

Location detection technology

Talking-Points is designed to be somewhat agnostic to location sensing technologies, allowing it to adapt to new infrastructures as they become available. The current system is based on using Bluetooth tags to mark POIs and detecting those tags using standard Bluetooth discovery on the client. The earlier version of Talking-Points used RFID sensing, and we are currently in the process of adding GPS as an alternative location sensing technology. Each of these systems has limitations and we are therefore exploring the possibility of implementing a hybrid system that can take advantage of multiple positioning systems simultaneously.
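As an illustration of what standard Bluetooth discovery looks like on the client, the following sketch assumes a JSR-82 Bluetooth stack (such as the open-source BlueCove library) is available to the Java client; the paper's actual scanning code is not reproduced here, and the class name and comments are illustrative.

import javax.bluetooth.DeviceClass;
import javax.bluetooth.DiscoveryAgent;
import javax.bluetooth.DiscoveryListener;
import javax.bluetooth.LocalDevice;
import javax.bluetooth.RemoteDevice;
import javax.bluetooth.ServiceRecord;

/**
 * Sketch of standard Bluetooth inquiry used to detect POI tags, assuming a
 * JSR-82 stack (e.g., BlueCove). Each discovered device address would be
 * matched against the POI database to decide whether it marks a POI.
 */
public class TagScanner implements DiscoveryListener {

    public void startScan() throws Exception {
        LocalDevice local = LocalDevice.getLocalDevice();
        DiscoveryAgent agent = local.getDiscoveryAgent();
        // GIAC = General Inquiry Access Code: find all discoverable devices.
        agent.startInquiry(DiscoveryAgent.GIAC, this);
    }

    public void deviceDiscovered(RemoteDevice device, DeviceClass cod) {
        // The Bluetooth address serves as the tag identifier.
        String tagId = device.getBluetoothAddress();
        System.out.println("Possible POI tag in range: " + tagId);
        // A real client would look tagId up in the POI database and, on a hit,
        // queue the POI's name and type for speech output.
    }

    public void inquiryCompleted(int discType) {
        // Inquiry cycles would typically be restarted to keep scanning while walking.
    }

    // Service search callbacks are unused in this sketch.
    public void servicesDiscovered(int transID, ServiceRecord[] records) { }
    public void serviceSearchCompleted(int transID, int respCode) { }
}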

Mobile device

Echoing previously published guidelines such as [National Council on Disability 06], several of our study participants emphasized the need for an inconspicuous and unobtrusive client device. For our prototype we selected the OQO Model 02 ultra mobile PC1, a palm-sized device offering the full performance of a desktop PC (for example, our prototype device runs Windows XP Professional, SP2). The Talking-Points client software was developed in Java 1.6 and supports two user interfaces: a speech user interface (SUI) and a graphical user interface (GUI). The SUI incorporates the CMU Sphinx library2 for voice recognition and FreeTTS3 as the text-to-speech (TTS) engine. The GUI is implemented in Java Swing.

1 http://www.oqo.com/products/index.html
2 http://cmusphinx.sourceforge.net/html/cmusphinx.php
3 http://freetts.sourceforge.net
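For example, a minimal sketch of speech output using the FreeTTS API could look like the following; the voice name and the exact announcement format are illustrative assumptions rather than the prompts used in the actual prototype.

import com.sun.speech.freetts.Voice;
import com.sun.speech.freetts.VoiceManager;

/**
 * Minimal sketch of speech output with FreeTTS. "kevin16" is one of the
 * general-purpose voices bundled with FreeTTS; the FreeTTS jars and the
 * voice package must be on the classpath.
 */
public class PoiAnnouncer {
    private final Voice voice;

    public PoiAnnouncer() {
        voice = VoiceManager.getInstance().getVoice("kevin16");
        voice.allocate();   // load the voice before first use
    }

    /** Speak only the POI's name and type, per the post-WOz simplification. */
    public void announce(String name, String type) {
        voice.speak(name + ", " + type);
    }

    public void shutdown() {
        voice.deallocate();
    }
}

A call such as new PoiAnnouncer().announce("Espresso Royale", "coffee shop") would render a short announcement of the kind described above (the POI name here is illustrative).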

5. Early Experiences and Future Work

While Talking-Points remains an early prototype that has yet to receive the benefit of user feedback in the field, we have learned a number of lessons during the course of our efforts thus far. Talking-Points has been demonstrated at two events on the University of Michigan campus. In both cases, we constructed a small-scale tagged environment and allowed demo attendees to experience Talking-Points by walking around the environment, receiving information about the tagged POIs, and interacting with the system. This provided us with two benefits: first, the process of setting up an environment for end-users uncovered a number of issues with our current design that we intend to further explore; second, the feedback from users (both direct and indirect, as we observed the interaction issues encountered by users) indicated a number of usability issues that must be addressed in order to make Talking-Points work for a wider audience. We discuss three of the issues uncovered through these early experiences in turn.

Limitations of Bluetooth as a location-tagging technology

In the course of preparing Talking-Points for small-scale demonstration, we became acutely aware of the constraints of Bluetooth as a tagging technology. Most commercially-available Bluetooth hardware (e.g., cell phones, headsets, USB-based adapters) falls into the "class 2" category, with an approximate range of 10m. While this range may be adequate for some aspects of an orientation system (for example, identifying a particular storefront along a typical urban street in the U.S.), there are a number of applications for which it will clearly be inadequate (for example, identifying the boundaries of a large university campus or distinguishing closely grouped POIs such as are likely to occur indoors). In addition, it will be inadequate for identifying more fine-grained points of interest such as pedestrian hazards or crosswalks. Through experimentation, we learned that the range of typical Bluetooth hardware tags (for our demos, we primarily used off-the-shelf cell phones with Bluetooth capabilities) can be reduced by sheathing the tag with metal materials such as copper wire mesh. We were able to decrease the average tag range by about 50% (to ~5m). Others have independently discovered this technique as well and offered some refinements [Cheung 06]. However, even this approach yields only primitive control over the tag range. Custom hardware could be designed to cover specific ranges, but this would increase the cost of deployment significantly. These observations regarding Bluetooth range have a significant impact on the design of a tagged environment as well as on the granularity of what can be effectively represented to the user.

Lack of orientation information impedes usability

Regardless of any improvements that can be obtained in responsiveness and tag range for a Bluetooth-based system, this approach will never allow us to know which way a user is facing and, therefore, how to deliver useful information regarding wayfinding or relative location (e.g., "the entrance is around the corner to the left," "there is a trashcan behind you and to your right"). Especially for visually-impaired users, such details can be critical for helping them act on information they find useful. Orientation support is lacking in most mass-market devices, even as Bluetooth and GPS have become common. Nevertheless, digital compass technology exists in mostly closed systems and niche markets (e.g., eTrex Vista4, Casio Pathfinder5), indicating that it could become widely available in the future.

4 https://buy.garmin.com/shop/shop.do?pID=163
5 http://pathfinder.casio.com/features/compass
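The current prototype has no access to heading data, but to illustrate how compass output could be used if it were available, the following sketch (entirely hypothetical, not part of the Talking-Points implementation) converts a user heading and a bearing to a POI into a spoken relative direction.

/**
 * Illustration only: derive a relative direction ("ahead", "to your right",
 * "behind you") from the user's compass heading and the bearing to the POI,
 * both given in degrees clockwise from north.
 */
public class RelativeDirection {

    public static String describe(double userHeading, double bearingToPoi) {
        // Signed angle from the user's facing direction to the POI, in [-180, 180).
        double delta = ((bearingToPoi - userHeading) % 360 + 540) % 360 - 180;
        if (Math.abs(delta) <= 45) {
            return "ahead of you";
        } else if (delta > 45 && delta < 135) {
            return "to your right";
        } else if (delta < -45 && delta > -135) {
            return "to your left";
        } else {
            return "behind you";
        }
    }

    public static void main(String[] args) {
        // User facing east (90 degrees), POI due north (0 degrees) -> "to your left".
        System.out.println(describe(90, 0));
    }
}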

Textually-submitted content may not be suitable for speech-based delivery

Supporting and encouraging content contributions from members of the community is a key feature of Talking-Points. Our belief is that community-generated and -maintained content will be more voluminous and more up-to-date than content that is centrally curated or more tightly controlled. However, giving up such tight control raises a number of issues. One issue that became apparent during our early experiences is that textual content naively contributed to Talking-Points may be particularly difficult to understand and interact with when accessed via a speech user interface. For example, lengthy text passages, while occasionally awkward or undesirable to process in a visual interaction, can be disastrous when rendered via speech. The user may have little idea how much longer the passage will last and whether or not crucial information will appear later in the segment.

In addition, the current design of Talking-Points allows content contributors to add arbitrary subsections for each POI beyond the POI's name and general description. This capability is driven in part by the fact that different POI types will need to supply different categories of information: sections such as "menu," "hours of operation," "current exhibits," "special promotions," and "phone number" may not be available or even sensible for every type of POI. However, if contributors are free to create different categories, it becomes difficult for a user to anticipate what commands will be available when encountering a new POI (thus slowing interaction as she listens to a full list of options each time), and difficult for the speech recognizer to handle the arbitrary command sets that could appear for each POI. Put another way, architecting information for a speech UI is significantly different from doing so for a graphical user interface, and a "typical" community contributor is even less likely to be sensitive to the constraints and best practices of speech UIs than to those of graphical UIs. This suggests that additional guidance for contributions must be provided, with the important caveat that such guidance must not discourage or impede contributions from the widest range of possible contributors.
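One possible mitigation, sketched below as a suggestion rather than a feature of the current prototype, is to constrain the recognizer on a per-POI basis by generating a small JSGF grammar (a format Sphinx can load) from the section titles a contributor has actually supplied, so that only those commands are active at a given location; the grammar and rule names are illustrative.

import java.util.Arrays;
import java.util.List;

/**
 * Sketch: build a JSGF grammar whose command vocabulary is limited to the
 * sections actually present for the current POI, plus a few global commands.
 */
public class PoiGrammarBuilder {

    public static String buildJsgf(List<String> sectionTitles) {
        StringBuilder sb = new StringBuilder();
        sb.append("#JSGF V1.0;\n");
        sb.append("grammar poi_commands;\n");
        sb.append("public <command> = repeat | next | stop");
        for (String title : sectionTitles) {
            sb.append(" | ").append(title.toLowerCase());
        }
        sb.append(";\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        // A restaurant POI might expose only these sections.
        System.out.println(buildJsgf(Arrays.asList("menu", "hours of operation")));
    }
}

Even with such per-POI grammars, contributor-facing guidance (for example, suggesting section titles drawn from a shared vocabulary) would likely still be needed, for the reasons given above.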

6. Related Work

The Talking-Points system presented here represents the intersection of two domains of research and commercial development: mobile navigation solutions for the visually impaired; and location-based information and navigation systems for the general populace. Marston and Golledge note that "most training for the blind traveller focuses on learning routes to get from point A to B" [Marston 99], a skill utilized in orientation and mobility training. Further, to follow a route, "visually impaired travellers also break their journey into shorter stages and orient themselves within the journey a greater number of times [than sighted travellers]..." [Harper 00]. Based on these and other observations, a handful of alternative mobile navigation systems have been developed in both the research lab and the commercial sphere. Wayfinder Access [Wayfinder 08] is a commercially available system that combines mobile phone-based GPS navigation with screen reader technology. This makes a generic wayfinding system available to the visually impaired, though it does not purport to cater explicitly to the needs of the visually impaired community. The Loadstone [Loadstone 08] project is a volunteer-driven open-source project with a similar design to Wayfinder Access, though with a more explicit focus on the needs of the visually impaired. It also supports sharing POI databases through explicit export and import, but it stops short of supporting information that is continually added and maintained by a community of users. SesamoNet [Ceipidor 06] consists of a cane with an embedded RFID reader that can detect tags implanted in the ground. The system relays detected tags to the user via audio, thus providing visually impaired users with an augmentation of an already-familiar navigation aid. SesamoNet has been deployed and tested with users in a public park. The Chatty Environment [Coroama 03] is RFID-based as well, supporting the ability to tag arbitrary items (ranging from individual items in a retail store to the ticket window of a train station). As in the other systems described here, information is relayed audibly to users via a headset. Talking Signs [Marston 99] uses a directional sensing device to locate signage that has been augmented with infrared beacons. The primary goal of Talking Signs is to relay important information that is otherwise presented visually in the form of signs. While each of these systems is designed to serve a crucial need of the visually impaired community, we believe Talking-Points is the first to combine an accessible user interface, community-generated content, and the ability to employ multiple position sensing technologies.

Location-based information systems have been studied and developed for the general population as well. These have primarily focused on the provision of location-specific information about predefined landmarks for mobile users (e.g., Cyberguide [Abowd 97], Lancaster GUIDE [Davies 01], Magitti [Bellotti 08]). These systems rely on static content associated with each location, preventing easy adjustment to changing or emerging user needs. Other systems (e.g., GeoNotes [Espinoza 01] and ActiveCampus [Griswold 04]) have supported the contribution of free-form notes or commentary on physical locations, though the lack of structure imposed by the system on the contributed information may ultimately undermine its utility. The importance of utilizing collaborative data provided by a community was noted by Kaasinen, who criticized the trend in location-based systems wherein "...users are seen as passive information consumers" [Kaasinen 03].

7. Conclusion

The goal of this study was to investigate methods of using contextual information systems to enhance the walking journeys of both sighted and visually impaired individuals. In order to meet the needs of these individuals, a user-centered design process including extensive field research was followed. Findings from this research demonstrate that, if the system is designed well, both sighted and visually impaired users could benefit from a contextual information system by having access to location-specific content. It was also shown that important design factors include a device that is inconspicuous to use and contextual information that meets the user's needs. The Talking-Points prototype was therefore developed featuring an unobtrusive speech user interface on a small, mobile device that makes use of community-generated data. Such a socially maintained urban orientation and contextual information system offers relevant, dynamic, and up-to-date information that may not otherwise be accessible.

8. Acknowledgements

Funding for this project was provided by the University of Michigan GRant Opportunities for Collaborative Spaces (GROCS) program. We would like to thank James Knox, Donggun Yoo, Peter Kretschman, Josh Rychlinski, Scott Gifford, David Chesney, Atul Prakash, and all our study participants for helping in the design and implementation of Talking-Points.

9. References

Abowd, G.D., et al. Cyberguide: A Mobile Context-Aware Tour Guide. Wireless Networks. 3, 5 (1997), 421-433.

Bellotti, V., et al. Activity-Based Serendipitous Recommendations with the Magitti Mobile Leisure Guide. Proc. CHI 2008. (2008), 1157-1166.

Ceipidor, U.B., Medaglia, C.M., Rizzo, F. and Serbanati, A. RadioVirgilio/SesamoNet: An RFID-based Navigation System for the Visually Impaired. Mobile Guide 2006. (2006).

Cheung, K.C., Intille, S.S. and Larson, K. An Inexpensive Bluetooth-Based Indoor Positioning Hack. Proc. UbiComp ’06 Extended Abstracts. (2006).

Connell, B., Jones, M., Mace, R., Mueller, J., Mullick, A., Ostroff, E., Sanford, J., Steinfeld, E., Story, M. and Vanderheiden, G. The Principles of Universal Design: Version 2.0. Raleigh, NC: North Carolina State University, The Center for Universal Design (1997). http://www.design.ncsu.edu/cud/about_ud/udprinciples.htm

Coroama, V. and Röthenbacher, F. The Chatty Environment: Providing Everyday Independence to the Visually Impaired. UbiHealth 2003 Workshop at UbiComp 2003. (2003).

Coughlan, J., Manduchi, R. and Shen, H. Cell Phone-based Wayfinding for the Visually Impaired. Proc. IMV 2006. (2006).

Dahlbäck, N., Jönsson, A. and Ahrenberg, L. Wizard of Oz Studies: Why and How. Proc. IUI ’93. (1993), 193-200.

Davies, N., Cheverst, K., Mitchell, K. and Efrat, A. Using and Determining Location in a Context-Sensitive Tour Guide. Computer. 34, 8 (2001), 35-41.

Espinoza, F., et al. GeoNotes: Social and Navigational Aspects of Location-Based Information Systems. Proc. UbiComp 2001. (2001), 2-17.

Gifford, S., Knox, J., James, J. and Prakash, A. Introduction to the Talking Points Project. Proc. ASSETS ’06. (2006), 271-272.

Giles, J. Internet Encyclopaedias Go Head to Head. Nature. 438 (2005), 900-901.

Gould, J.D. and Lewis, C. Designing for Usability: Key Principles and What Designers Think. Comm. of the ACM. 28, 3 (1985), 300-311.

Griswold, W.G., et al. ActiveCampus: Experiments in Community-Oriented Ubiquitous Computing. Computer. 37, 10 (2004), 73-81.

Harper, S. and Green, P. A Travel Flow and Mobility Framework for Visually Impaired Travellers. Proc. ICCHP 2000. (2000), 289-296.

Kaasinen, E. User Needs for Location-Aware Mobile Services. Personal and Ubiquitous Computing. 7, 1 (2003), 70-79.

Kim, R. Bar Codes Create Bridge for Window-Shoppers. SFGate (27 March 2008). http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2008/03/27/BU1LVQQOB.DTL

Kintsch, A. and DePaula, R. A Framework for the Adoption of Assistive Technology. SWAAAC 2002: Supporting Learning Through Assistive Technology. (2002), 1-10.

Ling, R. The Mobile Connection: The Cell Phone's Impact on Society. Morgan Kaufmann Publishers, San Francisco (2004).

Loadstone Project. http://www.loadstone-gps.com. Accessed Sept. 2, 2008. (2008).

Marston, J.R. and Golledge, R.G. Towards an Accessible City: Removing Functional Barriers for the Blind and Vision Impaired: A Case for Auditory Signs. Tech. Report, Dept. of Geography, University of California Santa Barbara. (1999).

National Council on Disability. Over the Horizon: Potential Impact of Emerging Trends in Information and Communication Technology on Disability Policy and Practice. Available from http://www.ncd.gov/newsroom/publications/2006/emerging_trends.htm#_Toc151518465 (2006).

Stewart, J., Bauman, S., Escobar, M., Hilden, J., Bihani, K. and Newman, M.W. Accessible Contextual Information for Urban Orientation. To appear in Proc. UbiComp ’08. (2008).

Wayfinder Access. http://www.wayfinderaccess.com. Accessed Sept. 2, 2008. (2008).