
Also in this issue:

> Careers in Wireless Technology

SEPTEMBER 2017 www.computer.org

WIRELESS & WEARABLES

One membership. Unlimited knowledge. Did you know IEEE Computer Society membership comes with access to a high-quality, interactive suite of professional development resources, available 24/7?

Powered by Skillsoft, the SkillChoice™ Complete library contains more than $3,000 worth of industry-leading online courses, books, videos, mentoring tools and exam prep. Best of all, you get it for the one low price of your Preferred Plus, Training & Development, or Student membership package. There’s something for everyone, from beginners to advanced IT professionals to business leaders and managers.

The IT industry is constantly evolving. Don’t be left behind. Join the IEEE Computer Society today, and gain access to the tools you need to stay on top of the latest trends and standards.

Learn more at www.computer.org/join.

SkillChoice™ Complete: now with expanded libraries and an upgraded platform! Valued at $3,300!

ACCESS TO SKILLSOFT IS AVAILABLE WITH

6,000+ videos

3,000+ online courses

OVER 20x as many resources as before

MENTORSHIP

Practice Exams

28,000+ books

15,000+ Books24x7 titles

STAFF

Editor: Lee Garber

Contributing Staff: Christine Anthony, Brian Brannon, Lori Cameron, Cathy Martin, Chris Nelson, Meghan O’Dell, Dennis Taylor, Rebecca Torres, Bonnie Wylie

Production & Design: Carmen Flores-Garvey and Jennie Zhu-Mai

Manager, Editorial Content: Carrie Clark

Senior Manager, Editorial Services: Robin Baldwin

Director, Products and Services: Evan Butterfield

Senior Advertising Coordinator: Debbie Sims

Circulation: ComputingEdge (ISSN 2469-7087) is published monthly by the IEEE Computer Society. IEEE Headquarters, Three Park Avenue, 17th Floor, New York, NY 10016-5997; IEEE Computer Society Publications Office, 10662 Los Vaqueros Circle, Los Alamitos, CA 90720; voice +1 714 821 8380; fax +1 714 821 4010; IEEE Computer Society Headquarters, 2001 L Street NW, Suite 700, Washington, DC 20036.

Postmaster: Send address changes to ComputingEdge-IEEE Membership Processing Dept., 445 Hoes Lane, Piscataway, NJ 08855. Periodicals Postage Paid at New York, New York, and at additional mailing offices. Printed in USA.

Editorial: Unless otherwise stated, bylined articles, as well as product and service descriptions, reflect the author’s or firm’s opinion. Inclusion in ComputingEdge does not necessarily constitute endorsement by the IEEE or the Computer Society. All submissions are subject to editing for style, clarity, and space.

Reuse Rights and Reprint Permissions: Educational or personal use of this material is permitted without fee, provided such use: 1) is not made for profit; 2) includes this notice and a full citation to the original work on the first page of the copy; and 3) does not imply IEEE endorsement of any third-party products or services. Authors and their companies are permitted to post the accepted version of IEEE-copyrighted material on their own Web servers without permission, provided that the IEEE copyright notice and a full citation to the original work appear on the first screen of the posted copy. An accepted manuscript is a version which has been revised by the author to incorporate review suggestions, but not the published version with copy-editing, proofreading, and formatting added by IEEE. For more information, please go to: http://www.ieee.org/publications_standards/publications/rights/paperversionpolicy.html. Permission to reprint/republish this material for commercial, advertising, or promotional purposes or for creating new collective works for resale or redistribution must be obtained from IEEE by writing to the IEEE Intellectual Property Rights Office, 445 Hoes Lane, Piscataway, NJ 08854-4141 or pubs-permissions@ieee.org. Copyright © 2017 IEEE. All rights reserved.

Abstracting and Library Use: Abstracting is permitted with credit to the source. Libraries are permitted to photocopy for private use of patrons, provided the per-copy fee indicated in the code at the bottom of the first page is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923.

Unsubscribe: If you no longer wish to receive this ComputingEdge mailing, please email IEEE Computer Society Customer Service at [email protected] and type “unsubscribe ComputingEdge” in your subject line.

IEEE prohibits discrimination, harassment, and bullying. For more information, visit www.ieee.org/web/aboutus/whatis/policies/p9-26.html.

IEEE COMPUTER SOCIETY http://computer.org • +1 714 821 8380

www.computer.org/computingedge 1

IEEE Computer Society Magazine Editors in Chief

Computer: Sumi Helal, Lancaster University

IEEE Software: Diomidis Spinellis, Athens University of Economics and Business

IEEE Internet Computing: M. Brian Blake, University of Miami

IT Professional: San Murugesan, BRITE Professional Services

IEEE Security & Privacy: Ahmad-Reza Sadeghi, Technical University of Darmstadt

IEEE Micro: Lieven Eeckhout, Ghent University

IEEE Computer Graphics and Applications: L. Miguel Encarnação, ACT, Inc.

IEEE Pervasive Computing: Maria Ebling, IBM T.J. Watson Research Center

Computing in Science & Engineering: Jim X. Chen, George Mason University

IEEE Intelligent Systems: V.S. Subrahmanian, University of Maryland

IEEE MultiMedia: Yong Rui, Lenovo Research and Technology

IEEE Annals of the History of Computing: Nathan Ensmenger, Indiana University Bloomington

IEEE Cloud Computing: Mazin Yousif, T-Systems International

SEPTEMBER 2017 • VOLUME 3, NUMBER 9


10 Smartwatches: Digital Handcuffs or Magic Bracelets?

16 Immense Power in a Tiny Package: Wearables Based on Electrical Muscle Stimulation

28 Do Not Capture: Automated Obscurity for Pervasive Imaging

39 Bystanders’ Privacy

Subscribe to ComputingEdge for free at www.computer.org/computingedge.

8 Editor’s Note: Wearable and Wireless Technologies

10 Smartwatches: Digital Handcuffs or Magic Bracelets? MARTA E. CECCHINATO AND ANNA L. COX

16 Immense Power in a Tiny Package: Wearables Based on Electrical Muscle Stimulation

PEDRO LOPES AND PATRICK BAUDISCH

21 Genteel Wearables: Bystander-Centered Design IVO FLAMMER

28 Do Not Capture: Automated Obscurity for Pervasive Imaging

MOO-RYONG RA, SEUNGJOON LEE, EMILIANO MILUZZO, AND ERIC ZAVESKY

34 On-Device Mobile Phone Security Exploits Machine Learning

NAYEEM ISLAM, SAUMITRA DAS, AND YIN CHEN

39 Bystanders’ Privacy ALFREDO J. PEREZ, SHERALI ZEADALLY, AND SCOTT GRIFFITH

45 Interacting with Large 3D Datasets on a Mobile Device CHRIS SCHULTZ AND MIKE BAILEY

Departments

4 Magazine Roundup

51 Computing Careers: Careers in Wireless Technology

53 Career Opportunities

4 September 2017 Published by the IEEE Computer Society 2469-7087/17/$33.00 © 2017 IEEE

CS FOCUS

Magazine Roundup
by Lori Cameron

The IEEE Computer Society’s lineup of 13 peer-reviewed technical magazines covers cutting-edge topics ranging from software design and computer graphics to Internet computing and security, from scientific applications and machine intelligence to cloud migration and microchip design. Here are highlights from recent issues.

Computer

Since 1989, the not-for-profit FIRST (For Inspiration and Recognition of Science and Technology) organization has encouraged kids of all ages to pursue science and technology, especially at a time when students tend to disengage from STEM (science, technology, engineering, and mathematics) classes. FIRST also helps youths develop the critical thinking skills and confidence to become the next science and technology leaders. By engaging youth in hands-on, real-world problems, FIRST has made its participants nearly three times more likely to show interest in science and technology courses and to pursue a STEM career. In her article “Inspiring the Next Generation of Scientists and Engineers: K–12 and Beyond,” from the July 2017 issue of Computer magazine, Nancy Boyer says, “Industry values the role challenge-based learning plays in developing its current and future workforce.”

IEEE Annals of the History of Computing

Historian Michael Mahoney has urged researchers to probe deeper into computer software’s past in order to “reveal the roots of that software in the earlier period.” The research can reveal just how much software’s history has established development practices, standards, computer groups, and associations. In response, Zbigniew Stachniak of York University in Canada suggests in his article “MCM on Personal Software,” appearing in the January–March 2017 issue of IEEE Annals, that researchers begin by looking at software policies adopted by the earliest PC makers, among them Micro Computer Machines (MCM).

IEEE Internet Computing

Since 2014, Apple has provided seamless end-to-end encryption for its iMessage and FaceTime services. While its services are vulnerable to man-in-the-middle attacks and law-enforcement warrants, it is still the envy of digital giants like Google and Yahoo, which are developing similar approaches to email encryption. “iMessage remains perhaps the best usable covert communication channel available today if your adversary can’t compromise Apple,” say researchers who recently published “Balancing Security and Usability in Encrypted Email” in the May/June 2017 issue of IEEE Internet Computing. Between August 2015 and February 2016, researchers from Clemson University, the University of Maryland, and the University of New Mexico asked 52 mostly young male subjects which encryption model was easier to use: exchange or registration. Exchange requires users to create and then exchange locks with people each time they want to communicate. Registration requires users to register their locks publicly into a registry of other users. Because the participants considered the registration model less tedious, they found it considerably easier to use. They also expressed strong interest in exploring, using, and understanding the encryption models, giving providers even more incentive to make complex encryption processes easier to comprehend.

IEEE Cloud Computing

While some elderly people end up in nursing homes, the overwhelming majority continue to lead independent lives, even well past retirement age. Most elderly people continue to live in their own homes or, in some cases, a minimal-care facility where they’re provided with meals and housekeeping services. However, according to the US Centers for Disease Control and Prevention, the average 75-year-old has three chronic conditions and uses five prescription drugs. As the number of elderly people continues to grow, researchers are looking for ways to improve health monitoring and care for senior citizens without requiring them to make constant trips to the doctor. University of Missouri scientists have developed a healthcare platform called ElderCare-as-a-SmartService (ECaaS), which has two key components. One is an in-home health-alert system that gathers information from sensors that monitor the patient’s movement; changes in gait, such as limping; and sleep patterns. This information allows doctors and therapists to analyze a patient’s health and offer treatment remotely. The second component is an in-home remote physical-therapy application that lets physical therapists use video to assess the patient’s gait and balance. Read more about this new cloud-based service for the elderly in the January/February 2017 issue of IEEE Cloud Computing.

IEEE Computer Graphics and Applications

The first minimally invasive surgery—the laparoscopic removal of a gallbladder—was performed 30 years ago. Since then, robotics has taken minimally invasive surgery to a new level. Through tiny incisions, surgeons can insert smaller robotic instruments than in the past to perform procedures with very little disruption to surrounding tissue. Recovery time is minimal, the risk of infection is greatly reduced, and patients experience considerably less pain and scarring. Advances in robotic-surgery techniques and equipment continue at an astonishing rate. However, until now, the technology hasn’t included haptic feedback—the ability for surgeons to “feel” tissue as they perform surgical cuts. The authors of “Efficient Surgical Cutting with Position-Based Dynamics,” which appears in the May/June 2017 issue of IEEE CG&A, have introduced a skinning scheme that provides haptic feedback, especially for surgeons operating on soft, deformable tissue. Learn more about how their algorithm is making complex, delicate surgery more precise.

IEEE Intelligent Systems

In the Volkswagen emissions scandal from 2015, the company was charged with violating clean-air standards in cars it sold from 2008 to 2015. Volkswagen built emissions systems that were designed to pass emissions testing but that emitted up to 40 times more nitrogen oxides in actual use. Organizations hate these kinds of scandals because they can injure a brand name and cut into profits for years. Companies that want to recover from such bad publicity must pay attention to social-media streams and the information they reveal about how the public feels about the scandal. Researchers who analyzed Internet chatter about the Volkswagen scandal found that while the predominant feeling toward the company was negative, consumers still had positive things to say about VW’s gearbox and seat quality. It takes a sophisticated algorithm to extract and analyze communications for this kind of information. Read more about how researchers are developing better methods for mining public knowledge bases to analyze consumer attitudes in the May/June 2017 issue of IEEE Intelligent Systems.

IEEE Software

People hate weeding through an avalanche of spam in their inboxes just to get to the good stuff. We want to see notes from friends and family, important business documents, relevant information, and critical notices. Leo Hatton and Alan John, cofounders of SendForensics and authors of “Delivering Genuine Emails in an Ocean of Spam,” from IEEE Software’s July/August 2017 issue, offer advice to companies about how to get customers to read their emails. This is the basis of their company’s efforts to develop new types of email deliverability, compliance, and security systems. Authentication protocols such as Transport Layer Security; DomainKeys Identified Mail; Sender Policy Framework; and Domain-Based Message Authentication, Reporting, and Conformance are already in place. Beyond that, companies must create consistently high-quality legitimate email. They also must send email that doesn’t just look pure but actually is pure. SendForensics has developed metrics that will analyze an email’s quality and purity. The software combines forensic algorithms and statistical models built by continual analysis of large amounts of email over many years. In spite of the growth of social media and instant-messaging systems, email is still the preferred means of communication by industry. It behooves industry to make that email worth reading.

Computing in Science & Engineering

Women and minorities are less likely to pursue computer science as a career or, at the very least, as a significant part of their high-school and college coursework, so they have become the latest subjects of study for researchers wanting to learn how to increase their interest in technology. An examination of UCLA’s Exploring Computer Science (ECS) program reveals interesting results about how to inspire student interest in computing and how to convince students to enroll in more computer courses. Students from Germany, Greece, and the US were questioned about their interest in computer science and their thoughts on the ECS course. The study debunked stereotypes about the level of women’s interest in computing, how likely Asians are to pursue more courses in computing, and Hispanic and African-American attitudes about the relevance of computing to their lives. Read more about the study’s results in the May/June 2017 issue of Computing in Science & Engineering.

IT Professional

“When we think of industry sectors driven by high tech … banking is not the first that comes to mind,” says Jennifer Q. Trelewicz of Deutsche Bank Technology Centre. However, no industry could benefit more from big-data technology, especially because of the volume of information that banks process each day, the speed with which they must conduct transactions, and the multiple formats and data sources that financial institutions use. The New York Stock Exchange alone writes more than 1,000 Gbytes of data daily. How can this data be analyzed? How can market trends be predicted? Many banking systems can process 10⁵ transactions per second. How can we make them faster and more efficient? How can big-data algorithms be created that work with the multiple formats found in reference data, trade and market data, requests from clients, and many other sources? In her article “Big Data and Big Money: The Role of Data in the Financial Sector,” from the May/June 2017 issue of IT Pro, Trelewicz addresses these questions, and outlines both the challenges of big data in banking and future opportunities for technology development.

IEEE Micro

Researchers say that the “digital universe” will grow in 2017 to more than 16 zettabytes (16 × 10²¹ bytes) of data. And as big data and the Internet of Things grow, we will see an even greater explosion of information. Where will we store it all? One proposed answer is DNA. A DNA system would synthesize DNA molecules to represent data and store them in pools. To read the data, the system would select molecules from the pool, amplify them, and sequence them back to digital data. A single gram of DNA could hold 215 petabytes (215 × 10¹⁵ bytes). Because of silicon-based storage’s limitations, researchers from Microsoft and the University of Washington formed the Molecular Information Systems Lab to explore using hybrid silicon–biochemical systems for high-capacity storage. Read more about their work in the May/June 2017 issue of IEEE Micro.
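The density figure above comes from DNA’s four-letter alphabet: each nucleotide can carry two bits. The sketch below is a deliberately naive illustration of that arithmetic, not the coding scheme the Molecular Information Systems Lab actually uses (practical codes add redundancy and avoid error-prone sequences such as long homopolymer runs):

```python
# Toy nucleotide code: 2 bits per base (A=00, C=01, G=10, T=11).
BASES = "ACGT"

def encode(data: bytes) -> str:
    """Map each byte to four bases, most significant bits first."""
    return "".join(
        BASES[(b >> shift) & 0b11]
        for b in data
        for shift in (6, 4, 2, 0)
    )

def decode(strand: str) -> bytes:
    """Invert encode(): pack each run of four bases back into a byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

print(encode(b"Hi"))  # CAGACGGC
```

Since the mapping is a bijection, `decode(encode(data)) == data` for any byte string, which is the round trip the synthesize-then-sequence pipeline must preserve.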

IEEE Security & Privacy

Controversy still simmers about Russia’s hacking during the recent US presidential election. However, it was the controversial Bush–Gore election of 2000 that led to an upsurge in voting-system research. The goal then was to provide a foolproof computerized verification system that would let voters know that their vote was tallied correctly. Virtually all systems had two components: individual verifiability (given to the individual) and universal verifiability (provided to any outside person or entity needing to verify election results). With such systems, however, the encryption protecting ballots today could be broken decades from now, letting anyone see whom you voted for. That’s why Jeroen van de Graaf, author of “Long-Term Threats to Ballot Privacy” in the May/June 2017 issue of IEEE S&P, has developed a system to provide long-term voter privacy protection. The key is an encryption method that conceals voter identity but allows for private or public verification of the vote by using homomorphic tallying to compute and verify results. It’s mathematically impossible to change the result without being caught.
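The idea of homomorphic tallying can be demonstrated with the Paillier cryptosystem, where multiplying ciphertexts adds the underlying plaintexts. The sketch below is only a toy, and is not van de Graaf’s scheme: it uses tiny primes for readability (a real system would use primes of 1,024+ bits and proofs that each ballot encrypts 0 or 1), but it shows how a tally can be computed and opened without decrypting any individual vote:

```python
import math
import random

def keygen(p=17, q=19):
    """Toy Paillier key pair; real deployments use much larger primes."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)          # Carmichael function of n
    mu = pow(lam, -1, n)                  # valid because we fix g = n + 1
    return (n, n + 1), (lam, mu)          # public (n, g), private (lam, mu)

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    r = random.randrange(2, n)            # blinding factor, coprime to n
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    L = (pow(c, lam, n * n) - 1) // n     # the Paillier "L function"
    return (L * mu) % n

# Multiplying ciphertexts adds the votes (1 = yes, 0 = no), so the
# tally is computed while every individual ballot stays encrypted.
pk, sk = keygen()
votes = [1, 0, 1, 1, 0]
tally = 1
for v in votes:
    tally = (tally * encrypt(pk, v)) % (pk[0] ** 2)
print(decrypt(pk, sk, tally))             # 3
```

Anyone holding the ciphertexts can recompute and check the encrypted tally, while only the key holder can open it, which is the property that makes public verification compatible with ballot secrecy.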

IEEE Pervasive Computing

Many major cities in Latin America face ongoing challenges of violence and economic inequality. That’s why researchers from Switzerland and Mexico have created SenseCityVity, a program to help high school students ages 16 to 18 from Guanajuato, Mexico, define, document, and reflect on their city’s problems. Ten teams of 10 members each documented the conditions of their city—many of which centered on piles of uncollected garbage, power lines installed too close to rooftops, vandalism, drugs, and insufficient transportation. As one participant noted, “Transport does not come in time or is very scarce, or things like that, which affect us [in getting] to school; this problem affects the majority of us.” Another participant said, “The problem here is that there is a lot of insecurity in the alleys, outside the downtown area. At night, there are people drinking and smoking marijuana in the street alleys. We used to play by the Hidalgo market every day. But now we are limited because of insecurity.” Read more about identifying and addressing the complex issues of those living in Latin American inner cities in the April–June 2017 issue of IEEE Pervasive Computing.

IEEE MultiMedia

For a long time, the conventional wisdom about tagging images was to use clear, precise words that interpret the picture exactly. For example, with the ubiquitous cat photo, you might use the following: cat, cats, kitten, cat lover, tabby. However, researchers propose that personalized tags can produce better search queries and recommendations. For example, a user might tag a cat photo as follows: monty, pet, buddy, cat, chum. The terms have personal meaning and can speak to what the user might really want to see in advertising and search results. Researchers say the order in which the tags are ranked is also important. Currently, most search engines offer no direct way to learn and personalize a user’s preferences. Read more about efforts to change this in the April–June 2017 issue of IEEE MultiMedia.

Computing Now

The Computing Now website (computingnow.computer.org) features up-to-the-minute computing news and blogs, along with articles ranging from peer-reviewed research to opinion pieces by industry leaders.

Read your subscriptions through the myCS publications portal at http://mycs.computer.org.


EDITOR’S NOTE

Wearable and Wireless Technologies

Wearable and wireless technologies—the two themes of this ComputingEdge issue—add mobility and convenience to users’ lives and improve many business- and research-related capabilities.

For example, mobile devices can provide communication, Internet-access, information-gathering, transaction-processing, and other services. Wearables—including activity trackers, digital clothing, and enhanced glasses—make technology easier to work with by incorporating it into items that users wear.

Smartwatches can be powerful tools that reduce the time we spend using other devices, enabling us to better manage our digital lives, according to Computer’s “Smartwatches: Digital Handcuffs or Magic Bracelets?”

Since the 1960s, doctors have used electrical muscle stimulation (EMS) devices in rehabilitative medicine to regenerate patients’ lost motor functions. More recently, researchers have experimented with using EMS to create interactive mobile and wearable systems, note the authors of IEEE Pervasive Computing’s “Immense Power in a Tiny Package: Wearables Based on Electrical Muscle Stimulation.”

Including bystanders’ privacy concerns in the design process will make wearables more “genteel,” explains “Genteel Wearables: Bystander-Centered Design,” from IEEE Security & Privacy.

The pervasive use of smartphones and wearables can compromise privacy by making unaware individuals the subjects of pictures and videos, according to the authors of IEEE Internet Computing’s “Do Not Capture: Automated Obscurity for Pervasive Imaging.” The Do Not Capture technology removes unwilling subjects from media when the image or video is captured.

“On-Device Mobile Phone Security Exploits Machine Learning,” from IEEE Pervasive Computing, presents an approach to protecting mobile devices from malware that could exploit vulnerabilities or leak victims’ private information. The approach prevents devices from connecting to malicious access points, uses learning techniques to analyze apps’ runtime behavior, and monitors the way devices associate with Wi-Fi access points.

The growing adoption of Internet-connected devices has raised significant privacy issues for bystanders. The authors of IT Professional’s “Bystanders’ Privacy” explore these concerns, present a taxonomy of solutions found in the literature, and look at issues that must be addressed in the future.

“Interacting with Large 3D Datasets on a Mobile Device,” from IEEE Computer Graphics and Applications, discusses a scheme that lets users explore an entire dataset at its native resolution while simultaneously constraining the texture size being rendered to a dimension that doesn’t exceed the portable device’s processing capabilities.

FOR DIRECT LINKS TO THESE RESOURCES, VISIT www.computer.org/edge-resources

The Community for Technology Leaders

Move Your Career Forward: IEEE Computer Society Membership

LAN/MAN (IEEE 802) Standards Committee

As chartered by the IEEE Computer Society Standards Activities Board, the LAN/MAN committee works to develop, maintain, and advocate for networking standards and recommended practices for local, metropolitan, and other area networks, using an open and accredited process. The most widely used standards are for:

• Ethernet
• Wireless LAN
• Wireless PAN
• Wireless MAN
• Wireless RAN
• Bridging and virtual bridged LANs
• Wireless coexistence
• Media-independent handover services

IEEE Transactions on Mobile Computing

TMC is a monthly journal that publishes mature research, particularly on issues at the link layer and above in wireless communications, as well as other topics explicitly or plausibly related to mobile systems. TMC has seven key areas of technical focus: architectures, support services, algorithm/protocol design and analysis, mobile environments, mobile communication systems, applications, and emerging technologies.

Explore These Wireless Resources

13th IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob)

9–11 October 2017, Rome, Italy

For the past 10 years, the IEEE WiMob conference has provided unique opportunities for researchers and developers to interact, share new results, show live demonstrations, and discuss emerging directions in wireless communications, mobile networking, and ubiquitous computing.


INDISTINGUISHABLE FROM MAGIC

According to BI Intelligence, global smartwatch shipments are expected to reach 70 million by 2021.1 These devices offer users the benefits of an activity tracker together with quick and easy access to smartphone functionalities such as viewing and responding to messages and remote access to music controls. Because smartwatches are worn, they enable people to receive notifications in situations where phones are out of reach in pockets and bags. These wearables therefore offer the promise of instantaneous delivery of timely information straight to the wrist while the user is on the move, reducing fear of missing out (FOMO) on important information.

To avoid FOMO, we risk information overload as unprecedented amounts of content are delivered to our devices throughout the day, resulting in a constant barrage of interruptions. The challenges of shifting and dividing our attention across a range of devices were discussed in a previous article of this column.2 But maintaining focus and concentration aren’t the only difficulties we face. The negative implications of being “always online” are frequently recounted in the media as people find it increasingly difficult to disconnect from work and focus on other parts of their life when work-related content is so readily available. In addition, it seems possible—and perhaps even likely—that smartwatches might increase our expectations of being both reachable and responsive, and subsequently also increase the feeling of being tethered to our smartphone.

Research shows that users attend to more than 60 smartphone notifications per day, often within minutes.3 Other work highlights the addictive nature of checking smartphones for messages,4 even while on the toilet.5 There are therefore concerns that by increasing access to notifications, smartwatches might exacerbate this behavior, especially if they’re considered as an extra phone screen.6

Smartwatches: Digital Handcuffs or Magic Bracelets?

Marta E. Cecchinato and Anna L. Cox, University College London

Some regard the smartwatch as little more than an extra phone screen, but it can be a powerful tool that reduces the time we spend using other devices, enabling us to better manage our digital lives without missing out on important information.


EDITOR: ANTTI OULASVIRTA, Aalto University; [email protected]

Indeed, recent research suggests that users are just as likely to interact frequently with their smartwatches as they are with their smartphones.7

Is there any evidence that smartwatches have these negative impacts on users? Do they really exacerbate expectations of being online? Or are they a useful tool to keep us connected with what’s important and, by obviating the need to extract and unlock our phone for every notification, actually create a sense of distance from the phone?

At University College London Interaction Centre (UCLIC), we’ve investigated the use of smartwatches for communications—including email and social media notifications and text messages—across different studies8,9 to understand whether users perceive them to be more like digital handcuffs that increase information overload and aggravate the work–life challenge or magic bracelets that help ward off distractions from other devices.

SMARTWATCHES AS DIGITAL HANDCUFFS

A common assumption among nonusers is that smartwatches make the wearer more readily available. For example, a colleague sent a message to one of our study participants while they were both in a meeting with a client reminding him to mention something, thus capitalizing on the participant’s ability to receive the notification in a subtle way.

This assumption can have negative consequences when there’s a mismatch between users’ and nonusers’ mental models of how a smartwatch is used. In another example from one of our studies, a nonuser expected a prompt reply to a trivial message (“I know you have this watch and you see my message!”) without even considering whether the wearer had enabled notifications from that channel or was even wearing the watch at the time. This instance suggests that mental models associated with smartphones are transferred to the smartwatch, despite being different devices with only some functionalities in common.

The degree to which one is tethered to smartwatch technology therefore depends on other people as well as the user. Even with careful management of device availability—for example, by not wearing it or disabling certain notifications—there’s always the risk of it being a source of distraction if the user succumbs to nonusers’ expectations.

Smartwatches can distract not only the wearer but also others in the vicinity. When receiving notifications, study participants were occasionally asked by friends or colleagues what was happening. Some silenced or hid their device in response to curious glances from bystanders who noticed a notification, and one participant even stopped wearing a smartwatch at work precisely because he didn’t want others to read his messages.

These reactions could be a novelty effect that wears off in time as smartwatches become more popular. However, in an environment already permeated with digital distractions, their continuous presence on the wrist combined with societal expectations of ready availability and responsiveness could leave users feeling handcuffed by the technology (see Figure 1).

SMARTWATCHES AS MAGIC BRACELETSOur study participants also found many bene� ts in using a smartwatch. Smartphones often bombard us with noti� cations from communication and social media apps, general software updates, and games. Much like Won-der Woman’s bullet-de� ecting magic

Figure 1. Contrasting conceptions of the impact of smartwatch usage. Smartwatches can make us feel handcuffed to our phone through their continuous physical presence and societal expectations of ready availability and responsiveness (left), but they can also serve as “magic bracelets” that defl ect information bombarding us from the online world (right).

106 C O M P U T E R P U B L I S H E D B Y T H E I E E E C O M P U T E R S O C I E T Y 0 0 1 8 - 9 1 6 2 / 1 7 / $ 3 3 . 0 0 © 2 0 1 7 I E E E

INDISTINGUISHABLE FROM MAGIC

According to BI Intelligence, global smartwatch shipments are expected to reach 70 million by 2021.1 These devices o� er users the bene� ts of an activity tracker together with quick and

easy access to smartphone functionalities such as viewing and responding to messages and remote access to music controls. Because smartwatches are worn, they enable people to receive noti� cations in situations where phones are out of reach in pockets and bags. These wearables therefore o� er the promise of instantaneous delivery of timely information straight to the wrist while the user is on the move, reducing fear of missing out (FOMO) on im-portant information.

To avoid FOMO, we risk infor-mation overload as unprecedented amounts of content are delivered to our devices throughout the day, resulting in a constant barrage of interruptions. The challenges of shifting and dividing our attention across a range of devices were dis-cussed in a previous article of this column.2 But maintaining focus and concentration aren’t the only di� -culties we face. The negative impli-cations of being “always online” are frequently recounted in the media as people � nd it increasingly di� cult

to disconnect from work and focus on other parts of their life when work-related content is so readily available. In addition, it seems possible—and perhaps even likely—that smartwatches might increase our expectations of be-ing both reachable and responsive, and subsequently also increase the feeling of being tethered to our smartphone.

Research shows that users attend to more than 60 smart-phone noti� cations per day, often within minutes.3 Other work highlights the addictive nature of checking smart-phones for messages,4 even while on the toilet.5 There are therefore concerns that by increasing access to noti-� cations, smartwatches might exacerbate this behavior, especially if they’re considered as an extra phone screen.6

Smartwatches: Digital Handcuffs or Magic Bracelets?Marta E. Cecchinato and Anna L. Cox, University College London

Some regard the smartwatch as little more

than an extra phone screen, but it can be a

powerful tool that reduces the time we spend

using other devices, enabling us to better

manage our digital lives without missing out on

important information.

12 Computing Edge September 2017108 C O M P U T E R W W W . C O M P U T E R . O R G / C O M P U T E R

INDISTINGUISHABLE FROM MAGIC

bracelets, the smartwatch can serve as a micro boundary device10 that shields us from this bombardment and therefore gives us a greater feel­ing of control over our digital lives (see Figure 1).

A major advantage of smart wearables is that they keep us up to date with messages with minimal disruption to our current task. In an instant we can see who a message is from and the gist of that message, and decide whether to respond. This enables us to stay meaningfully connected to others without being trapped in the online world.

Smartwatches extend this ability by ensuring swift notification of only priority messages from smartphones—whether from a particular app (for example, WhatsApp if only used for communicating with family members) or specific people regardless of channel. In our studies, we observed that participants either relied on automated settings to enable and disable notifications (such as muting the smartwatch at night) or manually enforced rules to receive more contextualized notifications (such as turning alerts off when dining out with friends).
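The behaviors participants described (priority apps, priority senders, scheduled quiet hours) amount to a small notification rule engine. A minimal sketch of such rules follows; the app names, sender labels, and time windows are illustrative assumptions, not values from the study.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class Notification:
    app: str
    sender: str
    arrived: time

# Hypothetical rule set modeled on behaviors participants described:
# forward only priority apps or priority senders, and mute overnight.
PRIORITY_APPS = {"WhatsApp"}      # e.g., an app used only for family
PRIORITY_SENDERS = {"partner"}    # forwarded regardless of channel
QUIET_START, QUIET_END = time(22, 0), time(7, 0)

def in_quiet_hours(t: time) -> bool:
    # The quiet window wraps past midnight, so test both sides.
    return t >= QUIET_START or t < QUIET_END

def forward_to_watch(n: Notification) -> bool:
    if in_quiet_hours(n.arrived):   # automated rule: mute at night
        return False
    return n.app in PRIORITY_APPS or n.sender in PRIORITY_SENDERS
```

For example, a midday WhatsApp message is forwarded to the wrist, while a newsletter email, or any message arriving during quiet hours, stays on the phone.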

Rather than exacerbating responsiveness, we found that smartwatch use elicits slower responses to nonurgent notifications because the burden of pulling out the phone and unlocking it isn't justified. This selective responsiveness across devices helps users align their behavior to their values (for example, not being constantly available), such as delaying a reply to a more appropriate time. Our findings are supported by a quantitative study of smartwatch use, which found that wearers had fewer unprompted interactions than with smartphones.7

Some study participants also valued the opportunity to read messages on their smartwatches without sending the other person awareness cues11 such as read receipts or notices showing when they were last online. Apps like WhatsApp or Facebook Messenger automatically enable these cues to create the illusion of having real-time conversations, but from the user's perspective, noted one participant, they can serve as "added pressure." "If you don't reply to [messages from other users], that's a bad thing to do socially," he added. By escaping these features—but still making the user aware of incoming messages—smartwatches mitigate the compulsion to reply straight away and thus help avoid potential social faux pas. As one participant put it, "so (laughing) you can read the message without them seeing that you've seen it so then they don't feel offended that you are ignoring them. [So I can reply] when it's convenient for me, rather than [feeling pressured]."

For some users, the smartwatch's physical form affords a way to quickly and easily disconnect from all the devices that keep them online—simply take them off. For example, one study participant who had notifications enabled on his smartwatch for all work and personal emails welcomed the distractions throughout the day. When asked whether he minded having his wrist buzz constantly, he replied, "I was a bit worried, as was my wife, that it might be more of a distractor—but actually I think it's less." He explained that the moment he set foot at home in the evening, he took off his Moto360, turned off the Bluetooth connection, and started charging the phone. "The minute I come in the door I'm done with it," he said. Physically removing the device appears to have been an important part of mentally disconnecting from work for the evening.

MORE PROS THAN CONS

Digital technology is often criticized for creating an always-online culture that distracts us from meaningful face-to-face interactions and further blurs work–life boundaries. Yet the sheer popularity of mobile devices suggests that, on balance, users perceive more pros than cons to their use. Despite misgivings popularized by the media, our research has found that, although smartwatches bring some new challenges, overall the negatives are outweighed by the benefits they bring in terms of helping people to manage their availability and responsiveness.

It's important to move beyond thinking of the smartwatch as only an extra phone screen and recognize that it can be a powerful tool to reduce the time we spend on other devices while minimizing FOMO. Our findings suggest that smartwatches let people feel more in control of their digital lives, and might even help curb mobile addiction by creating some distance between users and their phone. Smartwatch notifications are minimally disruptive, enabling users engaged in a conversation or task to determine with a quick glance at their wrist whether something is worthy of their immediate attention without having to dig their phone out of their pocket or a bag.

To answer our original question, we argue that smartwatches are more like magic bracelets than digital handcuffs—or at least they can be, if developers appreciate their potential to keep us in touch with what really matters to us and less distracted by trivia.

Looking ahead, smartwatches are likely going to become stand-alone input and output devices that connect to ever-growing ecologies of devices12—what Gregory Abowd calls "shrouds."13 They'll serve as an extension not only to our phone, but to any device we own or control, including Internet of Things devices in our homes and workspaces.

To avoid becoming digital handcuffs, smartwatches must be more than just wrist phones. They must be flexible enough to adapt in form and function to various needs and desires. Modular smartwatches, such as BLOCKS (www.chooseblocks.com), are already being developed, and it's easy to foresee next-generation smartwatches with many interchangeable components to accommodate different user lifestyles and requirements.


ACKNOWLEDGMENTS

This research is supported by the EPSRC DTG Studentship under grant number EP/L504889/1. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the EPSRC.

REFERENCES

1. L. Beaver, "The Smartwatch Report," Business Insider, 27 Sept. 2016; www.businessinsider.com/smartwatch-and-wearables-research-forecasts-trends-market-use-cases-2016-9.
2. A. Bulling, "Pervasive Attentive User Interfaces," Computer, vol. 49, no. 1, 2016, pp. 94–98.
3. M. Pielot, K. Church, and R. de Oliveira, "An In-Situ Study of Mobile Phone Notifications," Proc. 16th Int'l Conf. Human–Computer Interaction with Mobile Devices and Services (MobileHCI 14), 2014, pp. 233–242.
4. O. Turel and A. Serenko, "Is Mobile Email Addiction Overlooked?," Comm. ACM, vol. 53, no. 5, 2010, pp. 41–43.
5. M.E. Cecchinato, A.L. Cox, and J. Bird, "'I Check My Emails on the Toilet': Email Practices and Work–Home Boundary Management," Proc. MobileHCI 14 Workshop Socio-Technical Practices and Work–Home Boundaries, 2014; discovery.ucl.ac.uk/1451246.
6. S. Schirra and F.R. Bentley, "'It's Kind of Like an Extra Screen for My Phone': Understanding Everyday Uses of Consumer Smartwatches," Proc. 33rd Ann. ACM Conf. Extended Abstracts on Human Factors in Computing Systems (CHI EA 15), 2015, pp. 2151–2156.
7. A. Visuri et al., "Quantifying Sources and Types of Smartwatch Usage Sessions," to be published in Proc. 35th Ann. ACM Conf. Human Factors in Computing Systems (CHI 17), 2017.
8. M.E. Cecchinato, A.L. Cox, and J. Bird, "Smartwatches: The Good, the Bad and the Ugly?," Proc. 33rd Ann. ACM Conf. Extended Abstracts on Human Factors in Computing Systems (CHI EA 15), 2015, pp. 2133–2138.
9. M.E. Cecchinato, A.L. Cox, and J. Bird, "Always On(line)? User Experience of Smartwatches and Their Role within Multi-Device Ecologies," to be published in Proc. 35th Ann. ACM Conf. Human Factors in Computing Systems (CHI 17), 2017.
10. A.L. Cox et al., "Design Frictions for Mindful Interactions: The Case for Microboundaries," Proc. 2016 ACM Conf. Extended Abstracts on Human Factors in Computing Systems (CHI EA 16), 2016, pp. 1389–1397.
11. A. Oulasvirta et al., "Interpreting and Acting on Mobile Awareness Cues," Human–Computer Interaction, vol. 22, nos. 1–2, 2007, pp. 97–135.
12. T. Kubitza et al., "An IoT Infrastructure for Ubiquitous Notifications in Intelligent Living Environments," Proc. 2016 ACM Int'l Joint Conf. Pervasive and Ubiquitous Computing (UbiComp 16), 2016, pp. 1536–1541.
13. G.D. Abowd, "Beyond Weiser: From Ubiquitous to Collective Computing," Computer, vol. 49, no. 1, 2016, pp. 17–23.

MARTA E. CECCHINATO is a PhD student at the University College London Interaction Centre (UCLIC). Contact her at [email protected].

ANNA L. COX is a Reader in Human–Computer Interaction and Deputy Director of UCLIC. Contact her at [email protected].

Recognizing Excellence in High-Performance Computing

Nominations are solicited for the Seymour Cray, Sidney Fernbach & Ken Kennedy Awards. Deadline: 1 July 2017. All nomination details available at awards.computer.org.

SEYMOUR CRAY COMPUTER ENGINEERING AWARD
Established in late 1997 in memory of Seymour Cray, the Seymour Cray Award is awarded to recognize innovative contributions to high-performance computing systems that best exemplify the creative spirit demonstrated by Seymour Cray. The award consists of a crystal memento and an honorarium of US$10,000. This award requires 3 endorsements.

ACM/IEEE-CS KEN KENNEDY AWARD
This award was established in memory of Ken Kennedy, the founder of Rice University's nationally ranked computer science program and one of the world's foremost experts on high-performance computing. A certificate and US$5,000 honorarium are awarded jointly by the ACM and the IEEE Computer Society for outstanding contributions to programmability or productivity in high-performance computing together with significant community service or mentoring contributions. This award requires 2 endorsements.

SIDNEY FERNBACH MEMORIAL AWARD
Established in 1992 by the Board of Governors of the IEEE Computer Society, this award honors the memory of the late Dr. Sidney Fernbach, one of the pioneers in the development and application of high-performance computers for the solution of large computational problems. The award, which consists of a certificate and a US$2,000 honorarium, is presented annually to an individual for "an outstanding contribution in the application of high-performance computers using innovative approaches." This award requires 3 endorsements.


This article originally appeared in Computer, vol. 50, no. 4, 2017.

Call for Articles

IEEE Software seeks practical, readable articles that will appeal to experts and nonexperts alike. The magazine aims to deliver reliable information to software developers and managers to help them stay on top of rapid technology change.

Author guidelines: www.computer.org/software/author
Further details: [email protected]
www.computer.org/software

New Membership Options for a Better Fit

And a better match for your career goals. Now IEEE Computer Society lets you choose your membership—and the benefits it provides—to fit your specific career needs. With four professional membership categories and one student package, you can select the precise industry resources, offered exclusively through the Computer Society, that will help you achieve your goals. Explore your options below. Achieve your career goals with the fit that's right for you.

Select your membership (IEEE member / affiliate member pricing):
- Preferred Plus: $60 / $126
- Training & Development: $55 / $115
- Research: $55 / $115
- Basic: $40 / $99
- Student: $8 (does not include IEEE membership)

Benefits across the packages include Computer magazine (12 digital issues*); ComputingEdge magazine (12 issues); members-only discounts on conferences and events; members-only webinars; unlimited access to Computing Now, computer.org, and the new mobile-ready myCS; local chapter membership; Skillsoft's SkillChoice™ Complete with 67,000+ books, videos, courses, practice exams, and mentorship resources; Books24x7 on-demand access to 15,000 technical and business resources; two complimentary Computer Society magazine subscriptions; the myComputer mobile app (30 tokens); Computer Society Digital Library access (12 free downloads, member pricing, or included, depending on package); training webinars (3 free or member pricing); priority registration to Computer Society events; the right to vote and hold office; and a one-time 20% Computer Society online store discount.

* Print publications are available for an additional fee. See catalog for details.

Learn more at www.computer.org/membership.

September 2017  Published by the IEEE Computer Society  2469-7087/17/$33.00 © 2017 IEEE
PERVASIVE computing  Published by the IEEE CS  1536-1268/17/$33.00 © 2017 IEEE

Human Augmentation
Editor: Albrecht Schmidt, University of Stuttgart, [email protected]

Immense Power in a Tiny Package: Wearables Based on Electrical Muscle Stimulation
Pedro Lopes and Patrick Baudisch, Hasso Plattner Institute

Creating small wearable devices full of sensors is becoming increasingly easy, but how can we pack strong mechanical actuation into such tiny packages? Here, we argue that Electrical Muscle Stimulation (EMS) might be the way to go.

EMS devices use a signal generator and electrodes attached to the user's skin to send electrical impulses to the user's muscles. This causes the muscles to contract involuntarily, thereby letting the device actuate the user's limbs. Although EMS devices have been used in rehabilitation medicine since the 1960s to regenerate lost motor functions,1 only in the last few years have researchers started to experiment with EMS to create interactive systems. For example, researchers have explored using EMS as a means for teaching users how to play a new musical instrument,2 administering walking directions,3 receiving information from computing devices without a screen,4 and increasing realism and immersion in virtual experiences.5 Many of these projects exploit the fact that EMS miniaturizes well, which is why it lends itself to pervasive computing use cases, particularly those involving mobile and wearable devices. Furthermore, as we discuss here, EMS provides researchers with the technical means to create devices even smaller than current wearable devices.
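In practice, the stimulation signal is a train of short, charge-balanced pulses whose frequency, pulse width, and amplitude govern how strongly the muscle contracts. The sketch below generates such a biphasic pulse train as a sampled current waveform; the default parameter values are illustrative, not taken from the authors' hardware.

```python
# Sketch of a biphasic EMS pulse train as a sampled waveform.
# Parameter values are illustrative; real devices are calibrated per user.

def pulse_train(freq_hz=100, pulse_width_us=200, amplitude_ma=20,
                duration_s=0.05, sample_rate=1_000_000):
    """Return current samples (mA) for a biphasic pulse train: each
    period holds +amplitude for one pulse width, then -amplitude for
    one pulse width (charge-balanced), then rests until the next pulse."""
    period_samples = sample_rate // freq_hz
    width_samples = int(pulse_width_us * 1e-6 * sample_rate)
    total = int(duration_s * sample_rate)
    samples = []
    for i in range(total):
        phase = i % period_samples
        if phase < width_samples:
            samples.append(float(amplitude_ma))    # positive phase
        elif phase < 2 * width_samples:
            samples.append(float(-amplitude_ma))   # negative phase
        else:
            samples.append(0.0)                    # inter-pulse rest
    return samples
```

The negative phase mirrors the positive one so that no net charge accumulates in the tissue, which is the usual safety rationale for biphasic stimulation.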

ELECTRICAL MUSCLE STIMULATION

Figure 1 shows an example of using EMS to add realism to an experience.6 In this case, we mounted a custom EMS signal generator to the back of a mobile phone. It connects to the user's palm flexor muscles using one pair of electrodes per forearm. The resulting device connects to the phone via Bluetooth, letting apps on the phone actuate the user's wrists. When the electrode pair on the user's left arm is activated, for example, the user's left wrist contracts involuntarily and tilts the device in the user's hand to the right (as shown in the figure). At all times, operation is comfortable and pain-free.

Figure 1. Our prototype electrically stimulates the user’s arm muscles via electrodes, causing the user to involuntarily tilt the device. As he is countering this force, he perceives force feedback.

JULY–SEPTEMBER 2017  PERVASIVE computing

Figure 2. While devices based on (a) mechanical actuation add mechanical components to the user's body, (b) systems based on electrical muscle stimulation (EMS) instead borrow the user's skeleton and muscles. (Exoskeleton image source: Université Libre de Bruxelles; used with permission.)

In this particular example, we use the device to add force feedback to a game. This game requires users to steer an airplane by tilting the device left and right, but there are strong side winds that threaten to push the plane off course. In the situation shown in Figure 1, the device renders winds coming from the left by stimulating the user’s left wrist muscles, tilting the mobile device to the right—against the user’s will. To stay on course, users must counter the “wind” forces by pushing back using their other wrist. Users are thus effectively fighting their left wrist, which is under the control of the application, using their right wrist.
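Seen as code, the wind mechanic is a small control loop: sample the wind, choose which wrist flexor to stimulate (wind from the left drives the left wrist, tilting the device right), and scale the intensity with wind strength. The sketch below is a plausible reading of that logic with hypothetical channel names; it is not the authors' implementation.

```python
# Hypothetical sketch of the force-feedback loop for the airplane game:
# wind from the left activates the LEFT wrist flexor, which tilts the
# device to the right, so the player must push back with the right wrist.

def wind_to_stimulation(wind):
    """Map a signed wind force (-1.0 .. 1.0, negative = from the left)
    to an (electrode_channel, intensity) command, or None for calm air."""
    if wind == 0:
        return None
    channel = "left_wrist_flexor" if wind < 0 else "right_wrist_flexor"
    intensity = min(abs(wind), 1.0)   # clamp to the calibrated maximum
    return channel, intensity
```

The clamp matters: stimulation beyond a per-user calibrated ceiling would be uncomfortable, so game forces are normalized into the safe range before reaching the electrodes.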

In one of our experiments, we had participants compare the game as played with our EMS-based device versus with vibrotactile feedback (found in any smartphone). Users reported that the muscle stimulation produced a more realistic experience.

EMS ACTUATION VS. MECHANICAL ACTUATION

The simple prototype shown in Figure 1 is just one of several projects we built to explore creating interactive systems based on EMS. Here, we note how EMS compares to more traditional approaches involving mechanical actuators.

EMS Is Considerably Smaller

The prototype we just presented focuses on what we view as a core benefit of EMS, which is that EMS miniaturizes well. The form factor of EMS-based devices tends to be considerably smaller than the more traditional approach of using mechanical actuators.

Figure 2 illustrates this point by showing how EMS-based devices eliminate the need for bulky hardware. While the mechanical approach tends to require not just actuators but also an exoskeleton that transmits forces to the right locations and with appropriate levers (Figure 2a), EMS-based systems achieve a similar effect by instead leveraging the skeleton already "built into" the user's body (Figure 2b).

This is the result of a single, central, and unique aspect of EMS systems: where mechanical solutions add mechanical components to the user's body, EMS-based devices instead borrow "components" from the user—that is, the "mechanics" already contained in the human body. Ultimately, it is this ability to re-use parts of the human body that lets EMS-based devices lend themselves well to mobile and wearable applications.

EMS Closes the Haptic Loop in Wearables

Figure 3 shows another EMS-based device.4 Its functionality is similar in that it also senses and actuates the user's wrist. However, while the previous device was designed for a mobile form factor, we designed this one for a wearable form factor: a self-contained armband that users wear under their sleeves.

In the spirit of wearable devices, this device is designed to let users focus on some other primary task, which means that, to keep distractions to a minimum, the device doesn't feature a screen. We instead implement the device's entire interaction based on haptics. We accomplish this as follows. First, in addition to flexing the user's wrist, the device can also extend that same wrist using a second pair of electrodes. Second, we added the ability to sense the wrist's position using an accelerometer ring.

Figure 3a shows a simple use case of this device in which the user controls video playback. The position of the user's wrist is tightly coupled with the position of the video play head. As shown in Figure 3b, as the video plays, users find their wrist continuously flexing upward. The device achieves this by actuating the user's wrist using EMS. At the same time, users can set the position of the play head by posing their wrist (Figure 3c).
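The coupling works in both directions: playback drives the wrist through EMS, and a posed wrist drives the play head through the accelerometer. A minimal sketch of that mapping, assuming the wrist angle is measured in degrees over a per-user calibrated range (the range and function names are hypothetical):

```python
# Hypothetical sketch of the bidirectional wrist/play-head coupling.
# Assumes wrist extension is measured in degrees over a calibrated
# range, mapped linearly onto the video's duration.

WRIST_MIN_DEG, WRIST_MAX_DEG = 0.0, 60.0   # calibrated per user

def playhead_to_wrist(t, duration):
    """Playback position (s) -> target wrist angle for EMS actuation."""
    frac = max(0.0, min(t / duration, 1.0))
    return WRIST_MIN_DEG + frac * (WRIST_MAX_DEG - WRIST_MIN_DEG)

def wrist_to_playhead(angle_deg, duration):
    """Accelerometer wrist angle -> play-head position (s) for scrubbing."""
    frac = (angle_deg - WRIST_MIN_DEG) / (WRIST_MAX_DEG - WRIST_MIN_DEG)
    return max(0.0, min(frac, 1.0)) * duration
```

Because the two functions are inverses over the calibrated range, the wrist and the play head stay consistent whichever side drives the other.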

EMS Scales Well to Full-Body Experiences

While the two prototypes just presented actuate only the user's wrist, we also explored full-body actuation to demonstrate that EMS scales well into more encompassing experiences. The prototype shown in Figure 4 uses EMS to simulate the resistance of walls and the weight of heavy objects in virtual reality.5 By actuating the user's limbs with EMS, our systems prevent the user's hands from penetrating virtual objects, effectively recreating the resistance of obstacles.

This prototype uses four electrode pairs on one side of the body, allowing it to actuate wrists, biceps, triceps, and shoulders. As before, the use of EMS allows for a wearable form factor—unlike the more traditional actuation, using pulley systems or exoskeletons.

As Figure 4 shows, when the user lifts a virtual cube, our system lets the user feel the weight of the whole cube and resistance of the cube’s facets. The heavier the cube and the harder the user presses the cube, the stronger a counterforce the system generates. Our system implements the physicality of the cube by actuating the user’s opposing muscles with EMS. By using this approach on different muscle groups, our system simulates a wide range of objects, including walls, shelves, buttons, and projectiles.
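The weight-and-press-dependent counterforce can be sketched as a simple mapping from cube mass and press depth to stimulation intensity; the gains and the clamping range are illustrative assumptions, not the calibration used in the actual system:

```python
# Sketch: map a virtual cube's weight and the user's press depth to an EMS
# intensity for the opposing muscle. Gains and the clamping range are
# illustrative assumptions, not the system's real calibration.
def counterforce_intensity(cube_mass_kg, penetration_m,
                           mass_gain=8.0, stiffness_gain=400.0,
                           max_intensity=100.0):
    """Return an EMS intensity in [0, 100]: heavier cubes and deeper
    presses yield stronger actuation of the opposing muscle."""
    raw = mass_gain * cube_mass_kg + stiffness_gain * penetration_m
    return min(max(raw, 0.0), max_intensity)
```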

In addition to actuating the user’s upper body, we created a prototype that can be attached to various parts of the body, such as the user’s legs. Figure 5 shows this prototype, which is called Impacto because it was designed to render the haptic sensation of hitting and being hit.7 The key idea that allows the small and light Impacto device to simulate a strong punch is that it decomposes the stimulus: Impacto renders the tactile aspect of being hit by tapping the skin using a solenoid; it adds impulse to the hit by thrusting the user’s arm backwards using electrical muscle stimulation. Furthermore, as Figure 6 shows, because the device is a generic shape, users can also wear it on their legs to enhance the experience of kicking.
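Impacto’s stimulus decomposition can be sketched as a two-event schedule in which the solenoid tap and the EMS thrust start together, so the two components are perceived as a single hit; the durations below are illustrative assumptions:

```python
# Sketch of Impacto's stimulus decomposition: a solenoid tap renders the
# tactile onset while an EMS thrust adds impulse. Timings are illustrative
# assumptions, not the real device's schedule.
def schedule_hit(t0_ms, tap_ms=20, ems_ms=150):
    """Return (name, start, end) events for one simulated hit.
    Both components start at t0 so they fuse into a single percept."""
    return [
        ("solenoid_tap", t0_ms, t0_ms + tap_ms),   # tactile component
        ("ems_thrust",   t0_ms, t0_ms + ems_ms),   # impulse component
    ]
```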

EMS Strengths and Limitations

The actuation and force feedback we demonstrated using EMS have traditionally been achieved using mechanical actuators.

In addition to miniaturizing well, one of the strengths of EMS-based systems is that they can reach parts of the human body that would be hard to actuate using other means. For instance, the EMS-based application Vibrat-o-matic8 assists novice users in singing in vibrato by stimulating muscles of their abdomen and larynx. Mechanical systems are hard to apply here, as they require at least two mounting points.

However, EMS-based actuation is subject to a range of limitations, as suggested by the fact that some application domains are predominantly approached using mechanical actuators. Teleoperation, the transmission of forces between two remote users, has been a flagship area for mechanical actuators since the early days of robotics. This is because the precision and speed required by these applications is currently only delivered by systems based on mechanical actuation. The


Figure 3. The Pose-IO EMS-based input-output device: (a) an interactive system based on electrical muscle stimulation that (b) continuously writes its output into the user’s wrist posture. At the same time, (c) users can set the position of the playhead by moving their wrist.
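The bidirectional coupling that the Figure 3 caption describes can be sketched as a pair of mappings between a playhead position and a wrist angle; the angle range is an assumed calibration, not the device’s actual one:

```python
# Sketch of Pose-IO's bidirectional mapping between a playhead position
# in [0, 1] and a wrist angle. The angle range is an assumed value for
# illustration, not the calibrated range of the real device.
WRIST_MIN_DEG, WRIST_MAX_DEG = -30.0, 30.0

def playhead_to_angle(p):
    """Output path: the system actuates the wrist to encode the playhead."""
    p = min(max(p, 0.0), 1.0)
    return WRIST_MIN_DEG + p * (WRIST_MAX_DEG - WRIST_MIN_DEG)

def angle_to_playhead(angle_deg):
    """Input path: the user's wrist angle (read via the accelerometer)
    sets the playhead."""
    span = WRIST_MAX_DEG - WRIST_MIN_DEG
    return min(max((angle_deg - WRIST_MIN_DEG) / span, 0.0), 1.0)
```

Because the two mappings are inverses, the system can round-trip: whatever posture it writes, reading the posture back recovers the playhead position.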

Figure 4. As this user lifts a virtual cube, our EMS-based system lets the user feel the weight and resistance of the cube.



Figure 5. Impacto is a wearable that combines a solenoid and EMS to render the haptic sensation of being hit in a boxing simulator.


Figure 6. By wearing the Impacto device on the leg and foot, the user experiences the impact of kicking a virtual football.

same holds for robotically assisted surgery and related applications. The reason that EMS-based systems tend to be much less precise is in part because of the layered nature of the human muscles. Because electrodes reach muscles only indirectly via the user’s skin, their position tends to shift when the user moves, making it hard to target specific muscles without affecting nearby muscle tissues.

Another strength of mechanical actuators is that they allow for quasi-arbitrary force output, which has allowed for high-power applications, such as exoskeletons that provide the wearer with super-human strength. EMS-based systems, in contrast, are always bound by the physical strengths of the user.

These observations indicate the opportunities for future work in EMS. On the hardware level, several improvements would be welcome to help apply EMS to high-precision applications: increase the robustness of electrode placement against variations in body posture, actuate with higher precision to generate more complex poses, and take into account the user’s voluntary motions by simultaneously sensing muscle tension while actuating. On a higher level, several utilities could help foster EMS research, such as automatic calibration methods, methods that simplify the placement of the electrodes, and techniques for embedding electrodes into textiles.
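One way to approach the last hardware item, sensing muscle tension while actuating, is to interleave short stimulation bursts with sensing gaps. This scheduling sketch uses assumed window lengths:

```python
# Sketch of interleaving stimulation with EMG sensing: stimulate in short
# bursts and sample muscle tension in the gaps. Window lengths are
# illustrative assumptions.
def interleave_windows(total_ms, stim_ms=40, sense_ms=10):
    """Return alternating ('stim'|'sense', start, end) windows that
    cover [0, total_ms] without overlap."""
    windows, t = [], 0
    while t < total_ms:
        stim_end = min(t + stim_ms, total_ms)
        windows.append(("stim", t, stim_end))
        t = stim_end
        if t < total_ms:
            sense_end = min(t + sense_ms, total_ms)
            windows.append(("sense", t, sense_end))
            t = sense_end
    return windows
```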

Finally, EMS-based systems are idiosyncratic cases of human-computer interfaces because the interface created by the EMS doesn’t become an extension of the body but rather is the body itself. Consequently, EMS-based systems entail a kind of human augmentation that is both invisible and well integrated with the user. Interacting through an EMS-based device feels “direct” because users simply move their bodies and feel their bodies being moved. This might open up new avenues for designing interactive systems to help people better learn physical tasks.



Pedro Lopes is a PhD candidate at the Hasso Plattner Institute. Contact him at [email protected].

Patrick Baudisch is a professor of computer science at the Hasso Plattner Institute. Contact him at [email protected].

ACKNOWLEDGMENTS

We thank our colleagues Sijing You, Alexandra Ion, Patrik Jonell, Lung-Pan Cheng, Sebastian Marwecki, Willi Müller, and Daniel Hoffmann for fruitful collaborations in creating these prototypes. All work involving voluntary participants was done with prior written consent and following the standards of the Hasso Plattner Institute.

REFERENCES

1. J. Moe and H. Post, “Functional Electrical Stimulation for Ambulation in Hemiplegia,” Lancet J., July 1962, pp. 285–288.

2. E. Tamaki, T. Miyaki, and J. Rekimoto, “PossessedHand: Techniques for Controlling Human Hands Using Electrical Muscles Stimuli,” Proc. SIGCHI Conf. Human Factors in Computing Systems (CHI), 2011, pp. 543–552; doi: 10.1145/1978942.1979018.

3. M. Pfeiffer et al., “Cruise Control for Pedestrians: Controlling Walking Direction Using Electrical Muscle Stimulation,” Proc. 33rd Ann. ACM Conf. Human Factors in Computing Systems (CHI), 2015, pp. 2505–2514; doi: 10.1145/2702123.2702190.

4. P. Lopes et al., “Proprioceptive Interaction,” Proc. 33rd Ann. ACM Conf. Human Factors in Computing Systems (CHI), 2015, pp. 939–948; doi: 10.1145/2702123.2702461.

5. P. Lopes et al., “Providing Haptics to Walls and Other Heavy Objects in Virtual Reality by Means of Electrical Muscle Stimulation,” Proc. Conf. Human Factors in Computing Systems (CHI), 2017; doi: 10.1145/3025453.3025600.

6. P. Lopes and P. Baudisch, “Muscle-Propelled Force Feedback: Bringing Force Feedback to Mobile Devices,” Proc. SIGCHI Conf. Human Factors in Computing Systems (CHI), 2013, pp. 2577–2580; doi: 10.1145/2470654.2481355.

7. P. Lopes, A. Ion, and P. Baudisch, “Impacto: Simulating Physical Impact by Combining Tactile Stimulation with Electrical Muscle Stimulation,” Proc. 28th Ann. ACM Symp. User Interface Software and Technology (UIST), 2015, pp. 11–19.

8. R. Fushimi et al., “Vibrat-o-Matic: Producing Vocal Vibrato Using EMS,” Proc. 8th Augmented Human International Conf. (AH), 2017, article no. 24; doi: 10.1145/3041164.3041193.



This article originally appeared in IEEE Pervasive Computing, vol. 16, no. 3, 2017

2469-7087/17/$33.00 © 2017 IEEE Published by the IEEE Computer Society September 2017 21

IN OUR ORBIT

1540-7993/16/$33.00 © 2016 IEEE Copublished by the IEEE Computer and Reliability Societies September/October 2016 73

Wearables are poised to change our world. Because these devices are in direct and permanent contact with their end users, they are the holy grail for all communicators. By feeding the correct content into this channel, you get “right person, right time, right place.” Technological barriers such as size and power autonomy are rapidly disappearing, and user-centered design is streamlining and facilitating the general public’s use of wearables.1–3 Thus, wearables’ enormous potential for economic1 and efficiency gains has encouraged both large private companies and small start-ups to move into the field.

Despite their potential, successful wearables remain scarce. The reasons for this technology’s real-world failure are only partially understood.2 In this article, I discuss an obstacle that’s often underestimated in the ongoing wearable discussion: the bystander’s impact. Compared to users, bystanders are often considered a second-order phenomenon; in other words, “human-centered design” really applies only to the user. I argue that bystanders have a decisive effect on the behavior of public wearable users. Therefore, we must consider this bystander effect in detail, to show respect for bystanders who encounter wearables in public spaces but even more to reduce users’ unease while wearing these devices. Successful design for wearables in public spaces must incorporate the needs and perspectives of the bystanders.

Public Spaces

In public spaces, social peace and comfort are guaranteed by law, social norms, and cultural frameworks. Surrounding bystanders examine and counter our presentation and our public actions.5,6 In fact, bystanders are crucial in this public context; one might say that they’re the “public” in “public space.”

The public comprises various types of people: friends, family, colleagues, and strangers. However, and most important, when we go out into public spaces, we don’t know who the public will be in advance.

Novelties such as wearables must be introduced delicately into this finely equilibrated system of social norms and customs. The reception and reaction to wearables are as diverse as the possible public. By wearing and operating wearables, users are introducing objects and actions that bystanders are unfamiliar with and might misunderstand. At best, bystanders might be supportive or curious; at worst, they might see users as strange, impolite, or even aggressive or threatening. Thus, not only are the people a wearable user meets in public spaces unknown in advance, but so are their reactions.

Currently, wearable designers aim to make both the wearables and the users’ actions invisible.7

Genteel Wearables: Bystander-Centered Design

Ivo Flammer | XiLabs Urban Game Studio


This might be acceptable as long as bystanders are truly unconcerned with the wearable; for example, a fully automatic heart-rate logging device that doesn’t provide user feedback. However, for the most interesting wearables, aiming for invisibility is actually equivalent to hiding what the users are up to from bystanders. For example, head-mounted cameras are becoming increasingly concealed and seamless in appearance and user manipulation. However, once invisibility is reached, social friction won’t disappear; on the contrary, everybody will attempt to guess who could be currently filming them. This might be fun for an evening. But for ongoing wearable use, hiding is an unstable base for social interaction, not only because the wearable users are masking their agenda from the public, but also because they’re at risk of being revealed as masqueraders.

The engineering literature has tackled bystander privacy concerns,8–10 and as a solution, various privacy-enhancing technologies (PETs) have been presented (for more, see Katharina Krombholz and her colleagues’ review11). PETs include automatic blocking of wearable actions by active or passive situation assessment, automated data blurring, and offline data removal. Economic interest and rapid advances in head-mounted displays have led most studies to focus on two specific types of bystander data: images and sound recordings. Although bystander protection and respect have been put forward by “informed consent” propositions,12,13 implementing such propositions is considered difficult and not worthwhile.
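As a concrete, if simplified, example of the “automated data blurring” PET mentioned above, the sketch below pixelates a hand-specified bystander region of a grayscale frame; a real system would obtain the region from a detector rather than by hand:

```python
# Sketch of one PET: automated data blurring. Pixelate a rectangular
# bystander region of a grayscale frame (a list of lists of ints) by
# replacing each k x k tile with its mean. The region is supplied by
# hand here; a real implementation would sit on a face/person detector.
def pixelate_region(frame, x, y, w, h, k=4):
    out = [row[:] for row in frame]          # leave the input untouched
    for ty in range(y, y + h, k):
        for tx in range(x, x + w, k):
            tile = [out[j][i]
                    for j in range(ty, min(ty + k, y + h))
                    for i in range(tx, min(tx + k, x + w))]
            mean = sum(tile) // len(tile)
            for j in range(ty, min(ty + k, y + h)):
                for i in range(tx, min(tx + k, x + w)):
                    out[j][i] = mean
    return out
```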

For wearables to gain large public acceptance, a bystander-centered solution is a must. If we can help bystanders feel in control of their data, then users will feel more comfortable wearing and operating devices in public.

Definition and Privacy Issues

A wearable is a device that users take seamlessly along with them. It has both local and global connections. The local connection measures local parameters and feeds local actuators. The global connection links the wearable with other devices or remote content, usually the Internet.

Local Connection

By definition, the physical position of the wearable’s sensors and actuators is with and near the user. The sensors accompany the users on their journey, measuring and possibly registering data about users or their environment. The data collected can be very diverse. For example, a wearable might collect bodily data, such as users’ insulin levels, number of steps walked, or physical coordinates. It might also track users’ environmental data, such as temperature, sound level, or territory topology. Other wearables might record bystander data as part of the wearable users’ environment; for example, they might capture bystanders in a picture, measure their distance to the user, or record fragments of their speech.

Bystander data might be collected both intentionally and accidentally. Although most devices today don’t register such data on purpose, we can easily imagine the usefulness of deliberate bystander measurements. Imagine the following scenarios: a wearable that measures the heart rate not only of the user but also of a bystander sitting opposite to obtain a mood check, an infrared measurement that indicates the potential virus carriers among colleagues with whom the user is eating lunch, a lie detector that evaluates the truthfulness of a local shop manager, or an identity-detection tool to allow the user to access nearby commuters’ blog posts. All these deliberate bystander measurements are technologically possible today.

Global Connection

Not only do wearables have a local connection, but they’re also connected to the Internet; that’s what makes them “smart.” It’s through this Internet connection that wearable data multiplies its value. The data becomes part of the cloud and thus accessible to other devices, users, bots, and objects, allowing it to be cross-linked, correlated, analyzed, and filtered by data-mining algorithms.

By uploading wearable data to the Internet, however, we trigger important privacy concerns, especially if the data is tagged not only to time and location but also to a specific individual, that is, the wearable’s owner. When the local data measured by the wearable is recorded and distributed on the Internet, access to and control of the data become “fuzzy.” Questions concerning data ownership, data access, and data modification are thus legitimate and important.

Social impact. Privacy is a top concern regarding wearables:14 42 percent of users think that their privacy is threatened.15 Privacy concerns not only potential criminals who want to hide their secret agendas but also general everyday users. The visibility equals truth argument16 (that data sharing doesn’t harm users as long as they don’t undertake illegal activities) is popular but clearly wrong. Our daily life includes various social situations, and exchanging data among these situations can hamper our personal development and identity definition.16 We constantly manage our profile impression in complex and subtle ways.6,17 The way we present ourselves depends on the social context: we act very differently when having lunch with a long-time friend versus participating in


a formal meeting with a business client or attending a party abroad. Mixing up these situations not only endangers our self-image but also flattens our daily experiences, turning them into indiscriminate, boring mush. Upholding our personal liberty and personality depends on the ability to separate these spheres, and to be free to experiment, evolve, and perfect our real life.

Today, user attitudes seem contradictory regarding personal privacy issues: users claim that protecting their privacy is very important, but in practice, they often display indifference toward privacy-sensitive devices and functionalities. This antagonism might reflect either users’ readiness to trade privacy quite cheaply for other functionalities or their ignorance of the privacy issues at stake.16 Extending this to bystanders, we see that in the first case, in which wearable users willingly accept the deal to trade their privacy for some other good, bystanders lose their privacy with no apparent deal. In the second case, user ignorance, bystanders are in a much worse situation: whereas wearable users can, in principle, read the small print of the device contract to become informed, bystanders don’t even have a contract.

Legal issues. With social and technological innovation in mobile ICT evolving rapidly, laws adapted to wearable functionalities are lagging behind. Most stakeholders deliberately accept this lag to allow yet unknown usages and functions to emerge and to determine the most useful, efficient ways of incorporating wearables into society without restraining experimentation by restrictive laws. The price for this legal fuzziness is paid for by the socially and technologically innovative users, who have limited means of defending their privacy. In cases of disagreement between users and wearable providers, users can do little more than renounce the contract.

This “regulatory humility” approach is pursued mainly in the US.18 Europe has taken a more proactive approach with its A29WP legislation project, which requires that users and bystanders be granted freedom of “notice and choice.”19

Bystander impact. Bystanders’ privacy is protected by the same fuzzy laws.13,20 But bystanders, rightly concerned about their data’s storage, access, and modification, have even more limited control and action options than wearable users. Bystanders can’t retreat from wearable contracts. Their only relevant “contract” is the social conventions that exist between them and wearable users. Thus, wearable users are held accountable for any disagreements with bystanders.

Bystander-Centered Design Imperatives

Drawing on the importance of these bystander issues, I extracted four design imperatives for wearable devices.

Be Concerned about the Worst-Case Scenario

Using wearables in public spaces exposes users to a range of bystander reactions, from active support to physical aggression. When planning to take their wearable into public, users anticipate possible interaction scenarios with bystanders.

You think what they think that I think. Bystander reactions depend on their knowledge of the wearable device and on the clues given by the wearable’s appearance and the user’s actions. However, wearables are still novel, so bystanders are often confronted with devices and user actions they aren’t familiar with. Thus, bystander reactions often don’t correspond with the wearable’s functionalities and the user’s actions but rather with what the bystanders guess the wearable user might be doing and about how they themselves might be involved.

The bystander’s situation is thus complex and opaque. But the situation is even more complex and opaque for the wearable user, who tries to anticipate what bystanders might think about what’s going on. So users’ decisions to sport wearables likewise don’t depend on the wearable’s functionalities but on what they think the bystanders will think.

Bad case magnification effect. Roberto Hoyle and his colleagues showed that wearable users have more concerns about bystanders than the bystanders have about the wearable situation.9 The wearable situation’s complexity and opaqueness make rapid analysis of privacy issues impossible. But even with more time, the analysis would have to incorporate a multitude of scenarios with poorly known probabilities. Being conservative and risk averse, especially in public spaces with high social and legal stakes, wearable users will tend to weigh distressing situations more heavily than pleasant ones, considering them more probable than they actually are.

Engage in Negotiated Action

Bystanders want to ensure that an active wearable scenario isn’t


in opposition to their own preferences. Wearable users’ and bystanders’ situational control and reaction possibilities differ significantly: whereas wearable users can stop a wearable service they disagree with, bystanders can’t act directly to halt active wearable situations.

Such situations can be highly uncomfortable for bystanders. Thus, the only option is to allow them to negotiate the terms of such scenarios. To ensure that bystanders don’t get offended or react aggressively, mutual negotiation with all present bystanders is needed, a process that’s laborious, time consuming, and socially delicate.

With a single bystander, negotiation is rather simple: inform the bystander of how he or she might be involved in the planned wearable action and obtain his or her consent to proceed. With several bystanders, the negotiation process becomes more complex: different bystanders might propose different solutions, and interactions among bystanders might affect individual bystander negotiation.
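The negotiation step can be sketched as a consent-collection policy; the data structures and the two policies below are illustrative assumptions, with the opt-out variant corresponding to the partial control that the peacock design grants bystanders:

```python
# Sketch of multi-bystander negotiation as a consent-collection policy.
# Data structures and policy names are illustrative assumptions.
def negotiate(action, responses, policy="unanimous"):
    """Decide whether a wearable action may proceed.
    responses: dict bystander -> True (consents) / False (refuses).
    'unanimous': any refusal blocks the action entirely.
    'opt_out' : the action proceeds, but refusers must be excluded
                from the captured data (peacock-style partial control)."""
    refusers = sorted(b for b, ok in responses.items() if not ok)
    if policy == "unanimous":
        return {"proceed": not refusers, "excluded": []}
    return {"proceed": True, "excluded": refusers}
```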

Provide an Opt-Out Solution

Finding a solution acceptable to all is a complicated task. As I’ll discuss, the privacy profile dashboard successfully tackles this problem. However, until such a global solution is set up, I propose an individual bystander opt-out solution called peacock design: the wearable owner informs bystanders of an upcoming wearable action, and every bystander can choose whether to be involved or not. This isn’t a symmetric negotiation process in which bystanders propose solutions too, but at least they have partial control over their participation in the situation. Although not ideal, this solution is much better than the current state of trapping bystanders in a situation they aren’t informed about or haven’t agreed to. Today’s bystanders are presented with a fait accompli in which their data is already taken and possibly sent to the Internet. All they can do is exit the situation quickly and reclaim their rights a posteriori from the wearable user—a situation with high potential for social friction.
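The opt-out flow described above can be sketched as a one-way announcement with per-bystander exclusion. This is a minimal Python illustration; the `Announcement` class and the callback interface are my own assumptions, not the article’s:

```python
from dataclasses import dataclass, field

@dataclass
class Announcement:
    """Notice of an upcoming wearable action, shown to bystanders."""
    action: str                      # e.g., "photo", "video", "audio"
    opted_out: set = field(default_factory=set)

def negotiate_opt_out(action: str, bystanders: dict) -> Announcement:
    """Inform each bystander and record individual opt-out choices.

    `bystanders` maps a bystander id to a callback that returns True
    if that person consents to be involved in the action. Bystanders
    don't propose alternatives here: the negotiation is asymmetric.
    """
    notice = Announcement(action)
    for person_id, consents in bystanders.items():
        if not consents(action):
            # Exclude this person, e.g., by blurring them in the result.
            notice.opted_out.add(person_id)
    return notice

# Usage: two bystanders, one of whom declines photos.
result = negotiate_opt_out(
    "photo",
    {"alice": lambda a: True, "bob": lambda a: a != "photo"},
)
print(result.opted_out)   # {'bob'}
```

The key design property is that the wearable owner still initiates the action; bystanders only gain a veto over their own involvement, matching the partial control the peacock design offers.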

Avoid the Panopticon Effect

Currently, bystanders to public wearable use are either uninformed or poorly informed about what’s going on, which might cause them unease. Wearable users should indicate not only when an action will be launched but also when a possible wearable action is currently inactive. The situation is comparable to Foucault’s Panopticon, in which a detainee’s behavior corresponds not to the actual surveillance setup but to an imagined, potentially maximal surveillance. Foucault’s Panopticon works because a threat toward the detainee is credible and the actual surveillance status is deliberately obscured; thus, the detainee feels under surveillance even if the surveillance system is inactive. This is precisely the current situation with wearables: if not informed about the current state of sensitive wearable actions, bystanders feel that their privacy is threatened, even when the wearable device is inactive. In this view, current wearable designs aiming for invisibility and unobtrusiveness actually magnify the Panopticon effect.
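One way to read this imperative is that a wearable should signal its inactive state as explicitly as its active one. A minimal Python sketch (the class, states, and signal strings are illustrative assumptions, not from the article):

```python
from enum import Enum

class SensorState(Enum):
    INACTIVE = "inactive"   # sensor present but off: must also be signaled
    ACTIVE = "active"

class WearableIndicator:
    """Signals the *actual* sensing state, including when nothing is
    being recorded, so bystanders need not assume maximal surveillance."""

    def __init__(self):
        self.state = SensorState.INACTIVE

    def set_state(self, state: SensorState) -> str:
        self.state = state
        return self.broadcast()

    def broadcast(self) -> str:
        # In practice this could drive an LED, an audible cue, or a
        # wireless advertisement visible to nearby bystander devices.
        return f"camera: {self.state.value}"

ind = WearableIndicator()
print(ind.broadcast())                    # camera: inactive
print(ind.set_state(SensorState.ACTIVE))  # camera: active
```

Broadcasting the inactive state is the part that counters the Panopticon effect: bystanders can verify that nothing is recording rather than having to assume the worst.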

Tackling Bystander Issues

I discuss two possible solutions to achieve genteel wearables: peacock design and the privacy profile dashboard. Although the privacy profile dashboard solution is superior in most respects, I still present the peacock design solution because wearable producers can implement it right away.

Peacock Design

In the peacock design, information about wearables and user actions is materialized and announced to bystanders, either by actuators or by physical gestures and signs. This design can be applied immediately and doesn’t require societal changes or specific equipping of bystanders. It might include PETs to enhance the wearable’s native privacy, such as bystander image blurring or limited data lifetime. The only requirement is to follow the four design imperatives.

Informing bystanders of wearable functionalities and user actions is a socially delicate task. As discussed, everybody manages their public impression. By adopting a wearable, users also adopt clues and signs about the device’s presence and actions. And these clues and signs might be unusual, ugly, or simply not fitting with the current impression the wearable user is trying to uphold.

In principle, one could design a very visible, persistently buzzing and flashing device such that everybody around is clearly and consistently informed of its presence and actions. However, nobody would wear this object, as it completely compromises the users’ impression management.

An optimal design would actively support users’ impression management. Because the targeted public and social situations are likely diverse, no single design will correspond to all the impression management requirements of different users. The customary solution is to design a neutral device, one that doesn’t actively support user impression management but at least doesn’t harm it. A neutral design, however, conflicts with the four design imperatives, which in the case of materialized information demand high signal perceptibility by bystanders.

Using wearables in public spaces exposes users to a range of bystander reactions, from active support to physical aggression.

The tradeoff between users’ and bystanders’ concerns is currently shifted too much toward users. Neglecting bystanders’ concerns leads to counterproductive results. Improving wearables’ success requires shifting the focus onto bystanders. This will demand creative solutions for novel actuator uses, new gestures, and signs. This is quite a challenge, but to achieve sustainable wearable devices in public spaces, the genteel bystander approach is more promising than the current “invisible and unobtrusive” approach.

Privacy Profile Dashboard

In this solution, every potential bystander—thus every person in a public space—has a personal privacy dashboard that gives them access to their privacy profile so that they can edit its parameters. Bystanders carry their profiles with them at all times. Actually, most of us already have digital privacy profiles on our Web browsers and smartphones. So the most obvious way to implement a privacy profile dashboard is to expand our smartphones’ personal profile settings.

Before being used in public, a wearable device would contact any bystanders’ mobile phones to consult their privacy settings. The wearable functionality would be activated only if all bystander privacy profile settings were compliant with the privacy needed for a certain action. Bystander consultation would relaunch before any new wearable functionality was activated and whenever a new bystander entered the wearable’s proximity.
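The consultation rule above, which activates a functionality only when every nearby profile permits it, can be sketched in a few lines. The dict-based profile schema is a hypothetical illustration, not the article’s specification:

```python
def consult_bystanders(action: str, bystander_profiles: list) -> bool:
    """Activate a wearable functionality only if every nearby bystander's
    privacy profile permits it.

    Each profile is a dict mapping an action name to True (allowed) or
    False (blocked). An action absent from a profile defaults to False:
    unspecified means deny, the conservative choice.
    """
    return all(profile.get(action, False) for profile in bystander_profiles)

# Usage: two bystanders in range; one blocks video.
profiles = [
    {"photo": True, "video": False},
    {"photo": True, "video": True},
]
print(consult_bystanders("photo", profiles))  # True: everyone allows photos
print(consult_bystanders("video", profiles))  # False: one bystander blocks video
```

In a real deployment this check would rerun whenever a new bystander enters proximity or a new functionality is about to start, as the text describes.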

Dashboard Benefits

This digital solution has many attractive features:

■ it’s fine grained and situation adapted for every bystander;

■ profile structures, parameters, and options can be continuously updated;

■ bystanders define their profile settings in a calm and well-informed situation, not under time pressure in a public space;

■ lag time for wearable functionality launch is small because the user gets an immediate answer from all bystanders;

■ if certain bystanders block a wearable functionality, the functionality might be adapted to fit the greatest common denominator of all bystander privacy settings; and

■ a newly arrived bystander’s profile is treated in real time.

The privacy profile dashboard affords flexibility in the possible deals between wearable users and bystanders. For example, bystanders could restrict their data transfer and use to a specific community such as friends or followers, demand usage limitations such as mobile phone local use only or automatic destruction of data within 24 hours, or include a trading proposition such as having the wearable user subscribe to the bystander’s blog or give a “like” on an e-reputation site.
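The deal terms just listed suggest what a bystander’s profile record might contain. The following is a hypothetical schema sketch; every field name and default here is my own assumption, not defined by the article:

```python
from dataclasses import dataclass, field

@dataclass
class PrivacyProfile:
    """One bystander's dashboard settings (illustrative schema)."""
    allowed_actions: set = field(default_factory=set)   # e.g., {"photo"}
    share_with: str = "nobody"        # "nobody", "friends", "followers", "public"
    local_use_only: bool = True       # data must stay on the wearable user's phone
    max_retention_hours: int = 24     # automatic destruction deadline
    trade_terms: list = field(default_factory=list)     # conditions for consent

# Usage: a bystander allows photos for friends only, kept at most 24 hours,
# in exchange for a "like" on an e-reputation site.
profile = PrivacyProfile(
    allowed_actions={"photo"},
    share_with="friends",
    local_use_only=False,
    max_retention_hours=24,
    trade_terms=["like my e-reputation page"],
)
print(profile.share_with)  # friends
```

Conservative defaults (nothing allowed, local use only) mirror the article’s stance that unspecified settings should protect the bystander.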

Because the privacy profile dashboard is linked to a physical entity, one can easily imagine equipping not only bystanders but also specific locations with it, such as bathrooms, restaurants, theaters, museums, and beaches. These locations would have their own privacy settings, obstructing certain wearable functionalities.

Informed consent. By using the privacy profile dashboard, bystanders don’t have to be explicitly informed every time they enter a wearable situation, because they have set previously agreed-on conditions for certain privacy functionalities in their privacy profile settings. This greatly reduces any bystander “spamming” effect, which might become important with wearables’ increasing success. If certain bystanders want to be informed in real time about all active wearable actions, this could be done by changing the privacy parameters on their mobile phone.

With this solution, situational control is clearly handed over to the bystanders, fulfilling the notice and choice requirement of the planned European A29WP legislation.

Wearable device design. Because the negotiation process happens digitally, there’s no need to materialize information about a wearable’s presence and action. Thus, privacy profile dashboards solve the peacock design’s visibility dilemma: designs can now include visual neutrality and even invisibility.

Dashboard Implementation

Several issues must be addressed to implement the privacy dashboard.

Bystander recognition. Determining bystanders’ presence doesn’t require an absolute location measurement such as GPS; a relative position measurement is enough. All we need to know is whether two devices are near each other. This can be done by any medium-range wireless network, such as Bluetooth or Wi-Fi. The requirements for the network are low power consumption and bidirectional communication; latency and data transfer rate aren’t critical. The Bluetooth low energy (BLE) network fulfills these requirements and is already well implemented: all new iPhone devices have it, and Android devices are rapidly adapting to support BLE uniformly.

Detecting bystanders using local network presence is done isotropically: anybody within a certain distance is considered a bystander. The bystander search might thus include bystanders who aren’t directly impacted by the wearable action, for example, bystanders behind a wearable user taking a picture in the opposite direction. Bystander spatial inclusion range is defined by the local network reach—typically 20 m for Bluetooth—and might be too large or small to correspond exactly to the bystanders implicated in the wearable action. It’s best to take a conservative approach: include rather too many bystanders to start with, and only subsequently zero in on the relevant ones with smarter technology, for instance, using local network signal strength as supplementary data, communication between different bystanders to obtain triangulation data, or context-aware data so as to more precisely localize wearables.
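The conservative-then-refine approach can be illustrated with signal strength (RSSI) as the supplementary data. This is a sketch under my own assumptions: the threshold value and the simple pre-collected scan list stand in for a real BLE scan API:

```python
def detect_bystanders(scan_results, rssi_threshold=-70):
    """Refine an isotropic bystander scan using signal strength.

    `scan_results` pairs a device id with its received signal strength
    (RSSI, in dBm; closer devices give higher values). The initial scan
    is conservative: everyone in radio range is a candidate. RSSI
    filtering then narrows the set to devices likely close enough to be
    affected by the wearable action. The -70 dBm threshold is
    illustrative, not from the article.
    """
    return [dev for dev, rssi in scan_results if rssi >= rssi_threshold]

# Usage: three devices in BLE range; one is too far to be implicated.
scan = [("phone-a", -55), ("phone-b", -82), ("bracelet-c", -64)]
print(detect_bystanders(scan))  # ['phone-a', 'bracelet-c']
```

A production system would add the other refinements the text mentions, such as inter-device triangulation and context awareness, on top of this first pass.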

Internet connection. Bystander recognition and data communication between the wearable device and bystanders occur via a medium-range wireless connection, so no Internet connection is necessary.

Technology penetration rate. Not everybody has a smartphone, and not every smartphone has low-power and medium-range networking capabilities. However, smartphone penetration rates are rising, with some countries already having very high rates; according to Pew Research Center data, South Korea and Singapore have about 90 percent, the US about 70 percent, China about 60 percent, and France about 50 percent.

The most promising medium-range network, BLE, is expected to reach 90 percent penetration on all smartphones by 2018.

Although 100 percent smartphone penetration isn’t a reasonable target, the onus will fall on the remaining unequipped citizens to protect themselves from unwanted privacy intrusions. The protection entity doesn’t have to be a full-fledged smartphone; it could be a cheap and discreet device such as a simple BLE bracelet.

Dashboard Disadvantages

One potential issue is that people’s privacy profiles could be collected and misused. Some profile obfuscation and holdback mechanisms could hinder rapid, complete, and user-identified data collection. However, total blocking is very difficult. I propose dealing with these parameters in the same way as with all other bystander data: by having bystanders define the corresponding parameters in their privacy profile dashboard. In this way, bystanders decide how their profile data is handled.

There’s the risk of people purposely setting their privacy profile to maximum protection, with the aim of actively obstructing the penetration of any wearable devices into society. If wearables truly benefit users, however, these protesters will become less frequent.

People might also set their parameters to maximum protection not to obstruct wearable use but to avoid giving their data away for nothing. In this case, more attractive deals for bystanders must be found.

To prevent being measured by other people’s wearables, bystanders would always need to carry their telephone with them. However, most people already do so. For those who don’t want to, there’s the option to use BLE privacy bracelets. Or specific locations could install BLE privacy beacons that would prevent certain privacy-sensitive actions independent of bystanders’ smartphones.

Stakes are high and trust is low for today’s wearables. To help pave a thriving and innovative road to wearable success, different stakeholder interests must be considered. The most neglected stakeholder is the bystander. Instead of waiting for a comprehensive legal framework to protect bystander privacy, it’s in the best interest of wearable producers and users to act now and limit public wearables to the genteel sort.

References

1. J. Manyika et al., Disruptive Technologies: Advances that Will Transform Life, Business, and the Global Economy, tech. report, McKinsey Global Inst., May 2013; www.mckinsey.com/business-functions/business-technology/our-insights/disruptive-technologies.

2. D. Bothum et al., “The Wearable Future,” PWC Consumer Intelligence Series, Oct. 2014; www.pwc.com/mx/es/industrias/archivo/2014-11-pwc-the-wearable-future.pdf.

3. The Wearables Privacy Report, tech. report, Imperial College London, Zeno Group, Oct. 2014; https://workspace.imperial.ac.uk/business-school/Public/research/ZENO%20GROUP%20PUL%20Framework%20with%20foreword.pdf.

4. K.E.C. Levy, “Intimate Surveillance,” Idaho Law Rev., vol. 51, no. 3, 2015, pp. 679–692.

5. E. Paulos and E. Goodman, “The Familiar Stranger: Anxiety, Comfort, and Play in Public Places,” Proc. SIGCHI Conf. Human Factors in Computing Systems (CHI 04), 2004, pp. 223–230.

6. E. Goffman, The Presentation of Self in Everyday Life, Anchor Books, 1959.

7. S. Mann, “Smart Clothing: Making Multimedia Computers and Wireless Communication More Personal—A Paradigm Shift in Wearable Computing,” Comm. ACM, vol. 39, no. 8, 1996, pp. 23–24.

8. S. Pidcock et al., “NotiSense: An Urban Sensing Notification System to Improve Bystander Privacy,” Proc. 2nd Int’l Workshop Sensing Applications on Mobile Phones (PhoneSense 11), 2011; http://research.microsoft.com/en-us/um/redmond/events/phonesense2011/papers/NotiSense.pdf.

9. R. Hoyle et al., “Privacy Behaviors of Lifeloggers Using Wearable Cameras,” Proc. ACM Int’l Joint Conf. Pervasive and Ubiquitous Computing (UbiComp 13), 2013, pp. 571–582.

10. J.R. Reidenberg, “Privacy in Public,” Univ. of Miami Law Rev., vol. 69, no. 1, 2014, pp. 142–152.

11. K. Krombholz et al., “Ok Glass, Leave Me Alone: Towards a Systematization of Privacy Enhancing Technologies for Wearable Computing,” M. Brenner et al., eds., Financial Cryptography and Data Security, LNCS 8976, Springer, 2015, pp. 274–280.

12. M. Langheinrich, “A Privacy Awareness System for Ubiquitous Computing Environments,” Proc. 4th Int’l Conf. Ubiquitous Computing (UbiComp 02), 2002, pp. 237–245.

13. M.L. Jones, “Privacy without Screens & the Internet of Other People’s Things,” Idaho Law Rev., vol. 51, 2015, pp. 639–659.

14. V.G. Motti and K. Caine, “Users’ Privacy Concerns about Wearables: Impact of Form Factor, Sensors and Type of Data Collected,” M. Brenner et al., eds., Financial Cryptography and Data Security, LNCS 8976, Springer, 2015, pp. 231–244.

15. “Apadmi’s Wearable Tech Study: Do Potential Customers Think Wearable Tech Poses a Privacy Risk?,” Apadmi, Jan. 2015; www.apadmi.com/wearable-technology-trends/wearable-tech-privacy.

16. J.E. Cohen, Configuring the Networked Self: Law, Code, and the Play of Everyday Practice, Yale Univ. Press, 2012.

17. I. Altman, The Environment and Social Behavior: Privacy, Personal Space, Territory, Crowding, Brooks/Cole, 1975.

18. J.D. Wright, “The Internet of Things: Privacy and Security in a Connected World,” staff report, US Federal Trade Commission, Jan. 2015; www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf.

19. “Opinion 8/2014 on the Recent Developments on the Internet of Things,” Article 29 Data Protection Working Party, 2014; www.dataprotection.ro/servlet/ViewDocument?id=1088.

20. L.J. Camp, “Designing for Trust,” Proc. 2002 Int’l Conf. Trust, Reputation, and Security: Theories and Practice (AAMAS 02), 2002, pp. 15–29.

Ivo Flammer is CEO of XiLabs Urban Game Studio. Contact him at [email protected].

Selected CS articles and columns are also available for free at http://ComputingNow.computer.org.


This article originally appeared in IEEE Security & Privacy, vol. 14, no. 5, 2016.

28 September 2017 Published by the IEEE Computer Society 2469-7087/17/$33.00 © 2017 IEEE

Beyond Wires
Editor: Yih-Farn Robin Chen • [email protected]

82 Published by the IEEE Computer Society 1089-7801/17/$33.00 © 2017 IEEE IEEE INTERNET COMPUTING

Do Not Capture: Automated Obscurity for Pervasive Imaging

Moo-Ryong Ra • AT&T Labs Research
Seungjoon Lee • Google
Emiliano Miluzzo • Sfara
Eric Zavesky • AT&T Labs Research

The pervasive use of smartphones and wearables can compromise individuals’ privacy, as they become unaware subjects of pictures and videos. Do Not Capture is a novel technology that removes unwilling subjects from media at capture time.

Almost half a billion photos are uploaded each day on Facebook, Snapchat, Instagram, and Flickr alone.1 If we add the hundreds of millions of pictures and videos that are kept private to this number, the amount of media generated daily on smartphones and tablets denotes the staggering scale and penetration of this phenomenon. The camera quality on today’s mobile devices is comparable to high-end point-and-shoot cameras, and their high processing power makes the media capturing and editing experience even more attractive. With increasing frequency, having a smart mobile device readily available has encouraged people to use these devices as recorders, capturing every moment of their lives, any time and everywhere, taking advantage of the large storage capabilities on these platforms. Wearables such as Google Glass are likely to accelerate this on-the-go media production by orders of magnitude with even more pervasive experiences.2 While some users embrace heavy use of mobile cameras in both public and private places, there’s evidence that concerns are increasing about possible voyeuristic consequences of ubiquitous media recording.3,4 In spite of the common ruling in many countries of no reasonable expectation of privacy in public places, lawmakers are responding by promoting more aggressive regulations on the use of recording devices. Despite these regulations, privacy leakage through published digital media is difficult, if not impossible, to completely control for all appearances in third-party, and sometimes public, photos and videos without explicit consent. A person and their children in a crowded place — for example, Times Square — might end up being unaware subjects of hundreds of pictures taken by surrounding strangers. Similarly, a picture captured at a party or a restaurant might be posted on a social network with no knowledge of the involved subjects. Whether due to a personal preference or the potential for picture misuse,5 individuals might wish to restrict their appearance in uncontrolled media (see Figure 1).

With this in mind, we propose Do Not Capture (DNC), a novel technology that takes a systematic first step toward protecting people’s privacy from pervasive imaging with a simple, yet powerful idea. If a person (a subject) doesn’t want to be included in pictures taken by strangers, he or she enables DNC on his or her device. The device of the person taking a photo or video (the taker) performs a periodic scan over the short-range radio interface — for example, Wi-Fi Direct — to detect possible DNC subjects in the area. If detected, the subject exchanges with the taker’s device a feature vector of the subject’s face (previously collected during an offline training session) and a motion fingerprint. The taker’s device performs face and motion fingerprint matching with the people present in the picture at capture time. A face is blurred upon matching a subject’s feature vector, with the support of computer vision algorithm analysis. By blurring faces at capture time, the subjects’ identity is obfuscated before the media is saved on the device or pushed to the Web.
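The capture-time matching step can be sketched as follows. The function name, list-based feature vectors, and distance-based match predicate are my own illustrative assumptions; the actual DNC system uses real face features and motion fingerprints:

```python
def dnc_capture(faces_in_frame, dnc_subjects, match):
    """Select, at capture time, the faces to blur: every detected face
    that matches a feature vector received from a nearby DNC subject.

    faces_in_frame: list of (face_id, feature_vector) from the camera frame
    dnc_subjects:   feature vectors exchanged by nearby devices with DNC on
    match:          similarity predicate over two feature vectors
    """
    return [
        face_id
        for face_id, features in faces_in_frame
        if any(match(features, subject) for subject in dnc_subjects)
    ]   # the caller blurs these regions before saving or uploading

# Usage: one enrolled DNC subject, matched by squared Euclidean distance.
frame = [("face-1", [0.9, 0.1]), ("face-2", [0.2, 0.8])]
subjects = [[0.88, 0.12]]
close = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) < 0.01
print(dnc_capture(frame, subjects, close))  # ['face-1']
```

Running the match before the media leaves the camera pipeline is the key property: identities are obfuscated before anything is written to storage or the Web.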

While at first DNC appears to be a straightforward application of off-the-shelf vision algorithms, realizing the capability in the mobile computing setting poses a set of new challenges. Our scenario requires fully distributed operations, because the task will be performed across multiple devices, at least one feature per participating device. At the same time, the system needs to address the concerns of data privacy when exchanging features and the scalability of traditional short-range radios available to mobile devices. Moreover, many vision techniques run on powerful desktops or server machines. Thus, a careful choice of algorithms is necessary for resource-constrained mobile devices, especially due to the limited amount of battery capacity.

This article demonstrates how mod-ern, smart, mobile devices, equipped with advanced computing power and high-speed Wi-Fi Direct communica-tions, enable a fast completion of the DNC protocol with minimal energy expenditure over the baseline pic-ture-taking mechanism. This article also shows how to exploit the fusion of the vision algorithm with motion sensors on a mobile device to further boost system performance and reduce the misclassification error rate. DNC is fully distributed with no need of cloud support. DNC is implemented

on a number of Android smartphones (Samsung Galaxy S4 and Note 3, and Google Nexus) and was tested with 10 participants in different indoor and outdoor contexts. Despite some diffi-culties — lighting, distance, and mov-ing subjects — DNC is able to properly filter out subjects with up to 93.75 per-cent accuracy.

Motivation and ScopeIncreasingly user privacy is invaded and overwhelmed in the public sphere, and the impetus for protec-tion is left on the user. DNC is a nota-ble stride in rectifying this.

Use CasesThe following examples illustrate some of the scenarios and consequences of recording media with mobile devices when there are subjects who might not be willing to be included in such media. Unobtrusive media capturing, future camera-enabled wearable tech-nology such as smart goggles, and ubiquitous Internet access raise con-cerns about the possible breach of people’s privacy.

Social network sharing. Martha and Mila are best friends and tonight they’re having a drink at a local pub. While in the pub, Mila bumps into some high school classmates that she hasn’t seen in years. Excited, Mila snaps a picture of the group, which she posts right away on a social media

website. The problem is that Martha is in the picture too, and she doesn’t like social networks or the idea of pictures with her appearing on one of these sites without permission. Instead of preventing Mila from sharing the pic-ture, she points her to the DNC app, which Mila promptly downloads on her smartphone and they both enable DNC. As pictures are taken during the night and posted by Mila, Martha knows that a blurred region over her face obscures her identity every time she’s included in a photo.

Public places. Alice and her child are in New York City when a taxi pulls over to let a person out. This person is a famous singer and suddenly Alice is surrounded by paparazzi who are equipped with smart cameras and fighting to photograph the singer. Alice is uncomfortable knowing that she and her child might be present in a picture that will be published in newspapers. She enables DNC on her phone, which is also required for public photographers by law. Not surprisingly, the following week a tabloid reports pictures of the singer entering a restaurant in downtown New York and Alice is happy to see that neither she nor her child are rec-ognizable in the picture.

Scope and AssumptionsIn this article, we make the following assumptions. Each picture or video

Figure 1. Example of automated blurring for Do Not Capture (DNC) subject expressing privacy requirements: (a) original and (b) blurred. A DNC subject might wish to restrict their appearance in uncontrolled media.

(a) (b)

Beyond WiresEditor: Yih-Farn Robin Chen • [email protected]

82 Published by the IEEE Computer Society 1089-7801/17/$33.00 © 2017 IEEE IEEE INTERNET COMPUTING

Do Not Capture: Automated Obscurity for Pervasive ImagingMoo-Ryong Ra • AT&T Labs Research

Seungjoon Lee • Google

Emiliano Miluzzo • Sfara

Eric Zavesky • AT&T Labs Research

The pervasive use of smartphones and wearables can compromise individu-

als’ privacy, as they become unaware subjects of pictures and videos. Do Not

Capture is a novel technology that removes unwilling subjects from media at

capture time.

A lmost half a billion photos are uploaded each day on Facebook, Snapchat, Instagram, and Flickr alone.1 If we add the hundreds of mil-

lions of pictures and videos that are kept private to this number, the amount of media generated daily on smartphones and tablets denotes the staggering scale and penetration of this phe-nomenon. The camera quality on today’s mobile devices is comparable to high-end point-and-shoot cameras, and their high processing power makes the media capturing and editing experi-ence even more attractive. With increasing fre-quency, having a smart mobile device readily available has encouraged people to use these devices as recorders, capturing every moment of their lives, any time and everywhere, taking advantage of the large storage capabilities on these platforms. Wearables such as Google Glass are likely to accelerate this on-the-go media production by orders of magnitude with even more pervasive experiences.2 While some users embrace heavy use of mobile cameras in both public and private places, there’s evidence that concerns are increasing about possible voyeuris-tic consequences of ubiquitous media record-ing.3,4 In spite of the common ruling in many countries of no reasonable expectation of pri-

vacy in public places, lawmakers are respond-ing by promoting more aggressive regulations on the use of recording devices. Despite these regulations, privacy leakage through published digital media is difficult, if not impossible, to completely control for all appearances in third-party, and sometimes public, photos and videos without having expressed explicit consent. A person and their children in a crowded place — for example, Times Square — might end up being unaware subjects of hundreds of pictures taken from surrounding strangers. Similarly, a pic-ture captured at a party or a restaurant might be posted on a social network with no knowledge of the involved subjects. Whether due to a personal preference or the potential for picture misuse,5 individuals might wish to restrict their appear-ance in uncontrolled media (see Figure 1).

With this in mind, we propose Do Not Capture (DNC), a novel technology that takes a system-atic first step toward protecting people’s privacy from pervasive imaging with a simple, yet pow-erful idea. If a person (a subject) doesn’t want to be included in pictures taken by strangers, he or she enables DNC on his or her device. The device of the person taking a photo or video (the taker) performs a periodic scan over the short-range

30 Computing Edge September 2017

Beyond Wires

84 www.computer.org/internet/ IEEE INTERNET COMPUTING

taken in a private and/or a public place could be a possible threat to a person’s privacy. Thus, all photo-takers not known to a certain subject will be treated as an attacker to the subject. We assume that each person will enable and disable DNC on his or her own device given their personal awareness of the surroundings and context — for example, enable it in public places, and disable it at home or when there are known people around. The work also assumes that the underlying software and hard-ware of a mobile or wearable device is trusted and won’t be modified in a malicious way, with DNC rooted either inside the device’s operating system or implemented as an appli-cation that can be downloaded from a trusted app store. At the end of the day, we envision that DNC technology should be part of mobile operating systems, for example, by implement-ing hooks on low-level camera APIs, to make the technology applicable to other photo-taking apps, such as Ins-tagram, Snapchat, and so on.

Proper incentive schemes (or legal requirements) should be in place to promote large-scale adoption of the DNC technology. While the design of effective incentive mechanisms is an important part of the equation, that discussion is deferred to future work; the focus here is on the feasibility, practicality, and effectiveness of the DNC technology. DNC is intended neither to advance state-of-the-art innovation in computer vision — for example, the performance of a face recognition algorithm — nor to completely solve the privacy leakage problem. Still, lighting conditions, distance between subjects and taker, and mobility are all factors that could affect the DNC system's accuracy.

DNC in Action
As mentioned, DNC identifies people who want to opt out by matching disclosed features from nearby subject devices against local features observed by the camera and sensors on the photo-taker's device.

Initially, a subject node needs to enable DNC on her device. Once enabled, the subject app registers a Wi-Fi Direct Service Discovery (WDSD) object on the Wi-Fi Direct subsystem and turns on the radio to advertise the information. Later, a taker approaches and starts the DNC app on their device. When the app is started, the viewfinder, which will be launched when taking a picture, locates and tracks any faces in the scene. Simultaneously, the app triggers sensor sampling for rotation vectors. To announce the beginning of an activity, the taker node registers a WDSD service object with a capture flag. This action lets proximal DNC nodes know that a capturing activity is about to start; on receiving this information, the subject's device triggers its own sensor data sampling. Immediately after advertising the capture flag, the taker app starts to scan the environment. If subject nodes exist in proximity, the taker node collects face features (eigenfaces) from them. These features were created on the subject node upon installation of the DNC application, through a training phase in which the user is asked to record a video of their face of about 20 seconds. Frames are extracted from this video to train the user's eigenface space.
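The discovery sequence above can be summarized as a simple message exchange. The following sketch models it with plain Python classes; the names (`Subject`, `Taker`, `on_advertisement`, and so on) are illustrative stand-ins, not the actual Android Wi-Fi Direct Service Discovery API:

```python
# Illustrative model of the DNC discovery handshake. Class and method
# names are invented for clarity; the real system uses Wi-Fi Direct
# Service Discovery (WDSD) records on Android.

class Subject:
    def __init__(self, face_features):
        self.face_features = face_features   # eigenface vector from training
        self.sampling = False                # motion-sensor sampling state

    def on_advertisement(self, capture_flag):
        # A nearby taker toggles its capture flag to start/stop our sampling.
        self.sampling = capture_flag

class Taker:
    def __init__(self):
        self.capture_flag = False
        self.collected = []                  # features from nearby subjects

    def begin_capture(self, nearby_subjects):
        # 1. Advertise a WDSD record with the capture flag set.
        self.capture_flag = True
        for s in nearby_subjects:
            s.on_advertisement(True)         # subjects start sensor sampling
        # 2. Scan and collect each subject's advertised face features.
        self.collected = [s.face_features for s in nearby_subjects]

    def end_capture(self, nearby_subjects):
        # 3. Toggle the flag so subjects stop sampling.
        self.capture_flag = False
        for s in nearby_subjects:
            s.on_advertisement(False)

subjects = [Subject([0.1, 0.2]), Subject([0.3, 0.4])]
taker = Taker()
taker.begin_capture(subjects)
assert all(s.sampling for s in subjects) and len(taker.collected) == 2
taker.end_capture(subjects)
assert not any(s.sampling for s in subjects)
```

The key design point the sketch preserves is that subjects sample their motion sensors only while a nearby taker's capture flag is raised, bounding their energy cost.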

Typically, it takes 3 to 10 seconds from the time the taker starts the DNC app to the time he presses a button to capture the scene. In the meantime, DNC continues to collect sensor data — for example, orientation and accelerometer samples — and to trace face trajectories. Wi-Fi Direct scanning is periodic so that, whenever subject nodes renew their advertised features, the taker node can update its information accordingly. When the taker presses the capture button to take a picture, the DNC matching algorithm finds candidates. Faces that match are blurred to make the individuals unrecognizable. After saving the edited picture to the local file system, the taker app updates the WDSD service object with a toggled capture flag so that nearby subject nodes can stop the sensor sampling used to extract motion features.
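The capture-time blurring step can be illustrated with a minimal box blur over a matched face rectangle. This is a sketch in plain Python on a grayscale image represented as a list of lists; the actual system works within the device's imaging pipeline:

```python
def blur_region(img, x0, y0, x1, y1, k=1):
    """Box-blur the rectangle [x0, x1) x [y0, y1) of a 2D grayscale image
    (list of rows), replacing each pixel with the mean of its
    (2k+1) x (2k+1) neighborhood, clipped at the image borders."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(y0, y1):
        for x in range(x0, x1):
            vals = [img[j][i]
                    for j in range(max(0, y - k), min(h, y + k + 1))
                    for i in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

# A sharp edge inside the "face" rectangle is smoothed after blurring.
img = [[0, 0, 255, 255] for _ in range(4)]
blurred = blur_region(img, 0, 0, 4, 4)
assert 0 < blurred[1][1] < 255   # formerly 0, now averaged with the edge
```

In practice a stronger obfuscation (larger kernel, or pixelation) would be applied repeatedly so the face cannot be recovered, but the principle — destroy identifying detail inside the matched rectangle before the image is persisted — is the same.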

DNC Algorithm Design
The DNC algorithm combines multiple feature distances to yield the best recognition performance. This section describes how the features and their distances are calculated, and how they're combined. Modern mobile devices have a large number of sensors. In addition to popular computer vision techniques, DNC takes advantage of those sensor readings to more accurately recognize a subject's identity in a picture.

Face Recognition
To recognize faces, we used an eigenface6 representation and matching system. To construct a common face domain, the AT&T face database (see www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html) is used. Note that this is one of our design choices; we could use a different face recognition algorithm with a different dataset for the same purpose. To train individual face models (for example, a model for Alice in the previous example), the subject must capture a video when she installs the app. From the video, 20 frames are extracted uniformly and the faces are projected onto the common face domain. The projected features are then averaged to generate a single feature vector (face model) that represents the subject, and the model is stored as a persistent file on the device. Later, when a taker captures a scene, each of the detected faces is projected onto the same face domain and compared to the face feature vectors collected from subject nodes in proximity. For simplicity, we use the Euclidean distance metric to calculate the distance between face feature vectors. More advanced recognition algorithms exist and are optimized for various scenarios. But again, note that our goal isn't to advance the state of the art in computer vision technologies.
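The train-then-match pipeline above — project faces onto a common eigenface basis, average 20 training frames into one model vector, then compare probes by Euclidean distance — can be sketched with NumPy. The basis here is a random orthonormal stand-in rather than one learned by PCA from the AT&T face database, and the threshold is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in eigenface basis: k eigenfaces over d-pixel images. In practice
# this would be learned via PCA on a face dataset (e.g., the AT&T face
# database); here we just take a random orthonormal basis.
d, k = 64, 8
basis = np.linalg.qr(rng.normal(size=(d, k)))[0]   # orthonormal columns
mean_face = rng.normal(size=d)

def project(face):
    """Project a flattened face image onto the eigenface space."""
    return basis.T @ (face - mean_face)

def train_model(frames):
    """Average the projections of ~20 training frames into one face model."""
    return np.mean([project(f) for f in frames], axis=0)

def match(face, models, threshold=5.0):
    """Return index of the closest subject model, or None if all too far."""
    feats = project(face)
    dists = [np.linalg.norm(feats - m) for m in models]
    best = int(np.argmin(dists))
    return best if dists[best] < threshold else None

# Two subjects, each trained from 20 noisy frames of a base face.
base_a, base_b = rng.normal(size=d), rng.normal(size=d)
model_a = train_model([base_a + 0.05 * rng.normal(size=d) for _ in range(20)])
model_b = train_model([base_b + 0.05 * rng.normal(size=d) for _ in range(20)])

probe = base_a + 0.05 * rng.normal(size=d)         # a new view of subject A
assert match(probe, [model_a, model_b]) == 0
```

Averaging the 20 projected frames makes the stored model robust to per-frame noise, and exchanging only the low-dimensional projection (rather than raw face images) keeps the over-the-air payload small.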

Relative Orientation
Suppose that a taker can find out that a particular subject is facing the opposite direction of the taker's camera. Then the taker doesn't need to remove the subject's face from the picture. Device orientation is used to calculate the angle between the taker's shooting direction and the direction of the subject's face (assuming a wearable device).

Specifically, orientation sensor readings define two vectors in a reference coordinate system (often called world coordinates): one for the taker's camera and the other for the subject's face. For ease of computation, the same local coordinate (y-axis) is used on both the taker's and the subject's mobile devices. The angle between the two vectors is the resultant feature. In our experimental setting, with the subject perfectly facing the taker's camera, the angle is 90 degrees; if a subject faces backward, the angle is 270 degrees. In DNC, if the angle is greater than 180 degrees, the subject is excluded as a potential match, as the subject's face won't be visible in the picture. (The threshold angles might be counterintuitive to some readers. This is because, in our experiments, we attached the mobile device to the side of the subject's head to minimize the impact on the face recognition algorithm.) In areas of confusion, other features are used to distinguish the subject. Note that the subject can be on the opposite side of the shooting region, and the exclusion criterion is still correct.
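The exclusion test reduces to comparing the two devices' facing directions in world coordinates. The sketch below simplifies both orientations to a single yaw angle (the real system uses full rotation-vector readings) and applies the 0–180 degree visibility window described above:

```python
def relative_angle(taker_yaw_deg, subject_yaw_deg):
    """Angle between the taker's shooting direction and the subject's
    facing direction, folded into [0, 360). Simplified to yaw only."""
    return (subject_yaw_deg - taker_yaw_deg) % 360

def orientation_excludes(taker_yaw_deg, subject_yaw_deg):
    """DNC skips a subject when the relative angle falls outside (0, 180),
    meaning the subject's face cannot be visible in the frame."""
    return not (0 < relative_angle(taker_yaw_deg, subject_yaw_deg) < 180)

assert not orientation_excludes(0, 90)    # subject facing the camera: keep
assert orientation_excludes(0, 270)       # subject facing away: exclude
```

This cheap check lets DNC skip face recognition entirely for subjects who cannot appear in the frame, saving both computation and potential false matches.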

Motion Fingerprinting
In this technique, the taker compares a subject's motion sensor readings with the subject's motion trajectory observed in the taker's viewfinder to determine a match. When taking a picture, the taker usually aims at a target (for example, friends or landscape) for a certain period of time — 3 to 10 seconds in our experiments. If a subject's face appears in the picture, the taker can extract the face's trajectory in the viewfinder during (part of) the preparation. The taker then collects the subject's motion sensor readings during that time period and compares them against locally observed motion trajectories.

Figure 2 depicts a high-level diagram for motion fingerprinting (MFP) in DNC. On the subject side, orientation and accelerometer readings are used to calculate a time series of acceleration values in the 3D world coordinate system, which is sent to the taker. The taker uses his own orientation sensor readings to translate the subject's acceleration values into his coordinate system. In parallel, the taker analyzes the motion trajectory (horizontal and vertical axes) of each face shown in his viewfinder and matches it against the received acceleration values from the subject (projected onto the taker's view plane). The motion trajectory of a face is a series of positions in the viewfinder. One challenge is matching it against acceleration values (recorded in m/s²) as measured by a subject's accelerometer. To make a fair comparison, we normalize the measured height/width of a detected face against those of an average adult head (for example, 22.5 cm in height), resulting in values in meters. Then, for each axis, a four-variable Kalman filter estimates position, velocity, acceleration, and jerk along the axis.
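The per-axis filtering step can be sketched as follows: a Kalman filter with a four-variable state (position, velocity, acceleration, jerk) turns a noisy face-position trajectory into an acceleration estimate that can then be compared against the subject's accelerometer readings. The dynamics model and noise parameters here are illustrative choices, not the paper's exact values:

```python
import numpy as np

def kalman_accel(positions, dt=0.1, q=0.0, r=1e-2):
    """Estimate acceleration from a 1D position trajectory with a
    four-state (position, velocity, acceleration, jerk) Kalman filter.
    q = 0 (no process noise) suffices for this noiseless demo."""
    # Constant-jerk state transition over one time step dt.
    F = np.array([[1, dt, dt**2 / 2, dt**3 / 6],
                  [0, 1,  dt,        dt**2 / 2],
                  [0, 0,  1,         dt       ],
                  [0, 0,  0,         1        ]])
    H = np.array([[1.0, 0, 0, 0]])           # only position is observed
    Q = q * np.eye(4)                        # process noise
    R = np.array([[r]])                      # measurement noise
    x = np.zeros(4)
    P = np.eye(4)
    accels = []
    for z in positions:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)  # update with the measurement
        P = (np.eye(4) - K @ H) @ P
        accels.append(x[2])                  # acceleration component
    return accels

# Trajectory with constant acceleration 2 m/s^2: x(t) = t^2.
ts = np.arange(0, 3, 0.1)
est = kalman_accel(ts**2)
assert abs(est[-1] - 2.0) < 0.5              # estimate settles near 2 m/s^2
```

Running one such filter per image axis gives the taker acceleration traces in comparable units to the subject's (rotated) accelerometer data, so a simple distance between the two time series serves as the MFP distance.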

The DNC algorithm incorporates all three previously mentioned features — face features (eigenface), motion fingerprint, and relative orientation — and maximizes their synergy to boost discriminative power. The pseudocode for the decision process is shown in Algorithm 1. Note that the eigenface algorithm is executed only after checking device orientations and MFP values. If either shows strong misalignment, the system needn't run face recognition at all. For instance, if the two orientations deviate greatly, it can be inferred that the devices can't see each other. Similarly, if the MFP values are very different, the two parties likely behaved differently in the examined time period (for example, static versus moving).

Figure 2. Motion fingerprinting (MFP) block diagram. On the subject side, orientation and accelerometer readings are used to calculate a time series of acceleration values in the 3D world coordinate system, which is sent to the taker. The taker uses his own orientation sensor readings to translate the subject's acceleration values into his coordinate system.

[Figure 2 blocks — Subject: read orientation and acceleration (A); rotate A to world coordinates; send acceleration in world coordinates (A′). Taker: read orientation; rotate A′ to local coordinates; convert face position trajectory and size into acceleration; compare motion fingerprints.]


Evaluation Results
We implemented DNC as Android applications and extensively evaluated it in various environments. We collected 376 sensor traces and 324 pictures, varying a number of aspects such as the number of people (1 to 10), movement scenarios (static versus moving), and locations.

Due to space limitations, we present only a couple of important results in this article. DNC's recognition performance is highest when we combine the different features described in the previous sections. We consider the following five feature combinations: face recognition only (FR), MFP only, FR and orientation (FR + ORI), MFP and orientation (MFP + ORI), and all three features combined. Overall, combining all three (FR + ORI + MFP) outperformed the other strategies, showing 93.75 percent accuracy in aggregate (FR: 85.00 percent, MFP: 75.00, FR + ORI: 91.25, and MFP + ORI: 76.25). Figures 3 and 4 show the detailed results.

We first focus on DNC's performance when the subject is moving versus when she is not; Figure 3 presents the results. We observe that when a subject is moving, using all three features brings a clear benefit. Specifically, the overall recognition rate is 80 percent, while the recognition rate of FR alone is only 33 percent. We further analyzed the cases where FR was incorrect but FR + ORI + MFP was correct. In some cases, MFP played a significant role by differentiating a moving subject from a nonsubject who stayed still. In other cases, ORI helped the system exclude an out-of-sight subject (that is, behind the camera) from being matched to a nonsubject node in front of the camera. On the other hand, when a subject is static, FR performs well, and the sensor-based features (ORI, MFP) provide little benefit. DNC gains little when no subject node moves at all; in that case, we mainly rely upon face recognition.

Figure 4 shows the recognition performance in different locations. In our experiments, the home environment shows the best performance, mainly because the room had a plain background and nearly uniform lighting conditions. Thus, FR performed perfectly.

Figure 4. Recognition performance in different locations. Sensor-based features boost performance in office and outdoor settings with varying light conditions.

Correctness (%) by location:

              FR       MFP      FR + ORI   MFP + ORI   FR + MFP + ORI
Home          100.00   95.45    100.00     95.45       100.00
Office        75.00    64.29    89.29      64.29       92.86
Outdoor       83.33    70.00    86.67      73.33       90.00

Figure 3. Recognition performance with different subject mobility. Face recognition (FR) performs well with a static subject. Sensor-based features (orientation, or ORI, and MFP) bring significant benefit when a subject is in motion.

Correctness (%) by subject mobility:

              FR       MFP      FR + ORI   MFP + ORI   FR + MFP + ORI
Static        96.92    89.23    96.92      89.23       96.92
Moving        33.33    13.33    66.67      20.00       80.00

Algorithm 1. The DNC algorithm.

if (0 < Orientation < 180) and
   ((MFP not exists) or (MFP exists and MFP-dist < 0.3)) then
    Use the result from the eigenface algorithm.
else
    No match.
end
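Algorithm 1 translates directly into code. In the sketch below, `eigenface_match` stands in for the face-recognition step, and the 0.3 MFP-distance threshold is taken from the pseudocode:

```python
def dnc_match(orientation_deg, mfp_dist, eigenface_match):
    """Combined DNC decision (Algorithm 1): run face recognition only when
    the relative orientation says the face can be visible and, if a motion
    fingerprint is available, its distance is small enough.

    orientation_deg -- relative orientation in degrees
    mfp_dist        -- MFP distance, or None when no fingerprint exists
    eigenface_match -- callable returning True/False from face recognition
    """
    if 0 < orientation_deg < 180 and (mfp_dist is None or mfp_dist < 0.3):
        return eigenface_match()
    return False   # strong misalignment: no match, face recognition skipped

assert dnc_match(90, 0.1, lambda: True) is True     # aligned, motions agree
assert dnc_match(90, None, lambda: True) is True    # no MFP available
assert dnc_match(270, 0.1, lambda: True) is False   # subject facing away
assert dnc_match(90, 0.9, lambda: True) is False    # motions disagree
```

Because the cheap orientation and MFP checks gate the expensive eigenface step, most nonmatching subjects are rejected without running face recognition at all.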


In contrast, in office and outdoor environments, FR doesn't perform as well because of varying lighting conditions.7 In those cases, orientation and MFP help boost overall recognition performance. For example, outdoors FR is correct in 83 percent of cases, and using all three features improves the accuracy to 90 percent. In this particular result, ORI provided the most benefit over FR, as it was able to correct almost all false positives in nonvisible-subject scenarios.

Our results demonstrate that combining multiple features indeed helps improve DNC's recognition performance, and that the performance gain from sensor-based features is larger with a moving subject, which we believe will account for most DNC use cases.

Much work remains, but we believe that DNC is a solid step toward protecting users' privacy in the public sphere. In the future, we plan to design a proper incentive mechanism for the system and to develop more tightly integrated mobile operating systems that provide users with better privacy guarantees.

References
1. T.J. Donegan, "Smartphone Cameras Are Taking Over," USA Today, 6 June 2013; www.usatoday.com/story/tech/2013/06/06/reviewed-smartphones-replace-point-and-shoots/2373375.
2. Z. Chen et al., "QuiltView: A Crowd-Sourced Video Response System," Proc. Workshop Mobile Computing Systems and Applications, 2014; doi:10.1145/2565585.2565589.
3. A. Costill, "Top 10 Places That Have Banned Google Glass," Search Engine J., 7 Aug. 2013; www.searchenginejournal.com/top-10-places-that-have-banned-google-glass/66585.
4. R. Quinn, "Feds Yank Google Glass User from Movie Theater," USA Today, 22 Jan. 2014; www.usatoday.com/story/news/nation/2014/01/22/newser-google-glass/4772321.
5. P. Snyman, "Who Allowed the Speaker to Use My Patient's Photo?" South African J. Child Health, vol. 6, no. 4, 2012, pp. 102–105; www.sajch.org.za/index.php/SAJCH/article/view/457/357.
6. M. Turk and A. Pentland, "Eigenfaces for Recognition," J. Cognitive Neuroscience, vol. 3, no. 1, 1991, pp. 71–86.
7. W. Zhao et al., "Face Recognition: A Literature Survey," ACM Computing Surveys, vol. 35, no. 4, 2003, pp. 399–458.

Moo-Ryong Ra is a senior inventive scientist at AT&T Labs Research. His current research interests include software-defined storage, specifically quality of service (QoS)-enhanced virtualization, common cloud platform/infrastructure design for AT&T's integrated cloud, and high-performance distributed cloud storage systems. Ra has a PhD in computer science from the University of Southern California. Contact him at [email protected].

Seungjoon Lee is a software engineer at Google. His research interests include large-scale systems, network management, content distribution, cloud computing, and mobile computing. Lee has a PhD in computer science from the University of Maryland, College Park. Contact him at [email protected].

Emiliano Miluzzo is the vice president of engineering at Sfara, a startup company applying AI and machine learning reasoning on top of mobile devices' sensor data to build safety applications for enterprise and consumer markets. He previously was a senior inventive scientist at AT&T Labs Research. His research interests include mobile and intelligent sensing. Miluzzo has a PhD in computer science from Dartmouth College. Contact him at [email protected].

Eric Zavesky is a principal inventive scientist at AT&T Labs Research. His research interests include alternative query and retrieval representations for video, including object-based queries, near-duplicate retrieval, and biometrics. Additionally, he cocreated a collaborative marketplace for machine learning algorithms and innovative methods for machine-guided tasks in mixed reality environments. Zavesky has a PhD in electrical, electronics, and computer engineering from Columbia University. Contact him at [email protected].


Beyond Wires

86 www.computer.org/internet/ IEEE INTERNET COMPUTING

Evaluation ResultsWe implemented DNC as Android applications and extensively evalu-ated it in various environments. We

collected 376 sensor traces and 324 pictures where we varied a num-ber of aspects such as the number of people (1 to 10), movement sce-

narios (static versus moving), and locations.

Due to space limitations, we only present a couple of important results in this article. We noticed that DNC’s recognition performance is highest when we combine the different fea-tures described in the previous sec-tions. We consider the following five feature combinations: face recognition (FR) only, MFP only, FR and orienta-tion (FR + ORI), MFP and ORI (MFP + ORI), and all three features combined. Overall, combining all (FR + ORI + MFP) outperformed other strategies and shows 93.75 percent accuracy in aggregation (FR shows 85.00, MFP was 75.00, FR + ORI was 91.25, and MFP + ORI was 76.25 percent). Figures 3 and 4 show the detailed results.

We first focus on DNC's performance when the subject is moving versus when she is not; Figure 3 presents the results. We observe that when a subject is moving, using all three features brings a clear benefit. Specifically, the overall recognition rate is 80 percent, while the recognition rate of FR alone is only 33 percent. We further analyzed the cases where FR was incorrect but FR + ORI + MFP was correct. In some cases, MFP played a significant role by differentiating a moving subject from a nonsubject who stayed still. In other cases, ORI helped the system exclude an out-of-sight subject (that is, one behind the camera) as a match for a nonsubject node in front of the camera. On the other hand, when a subject is static, FR performs well, and the sensor-based features (ORI, MFP) provide little benefit. DNC thus has limited benefit when no subject node moves at all; in that case, we rely mainly on face recognition.

Figure 4 shows the recognition performance in different locations. In our experiments, the home environment shows the best performance, mainly because the room had a plain background and nearly uniform lighting conditions. Thus, FR performed perfectly.

Figure 4. Recognition performance in different locations. Sensor-based features boost performance in office and outdoor settings with varying light conditions.

[Figure 4 data, correctness (%) by location:]

Feature set         Home     Office   Outdoor
FR                  100.00    75.00    83.33
MFP                  95.45    64.29    70.00
FR + ORI            100.00    89.29    86.67
MFP + ORI            95.45    64.29    73.33
FR + MFP + ORI      100.00    92.86    90.00

Figure 3. Recognition performance with different subject mobility. Face recognition (FR) performs well with a static subject. Sensor-based features (orientation [ORI] and MFP) bring significant benefit when a subject is in motion.

[Figure 3 data, correctness (%) by subject mobility:]

Feature set         Static   Moving
FR                   96.92    33.33
MFP                  89.23    13.33
FR + ORI             96.92    66.67
MFP + ORI            89.23    20.00
FR + MFP + ORI       96.92    80.00

Algorithm 1. The DNC algorithm.

1  if (0 < Orientation < 180) and
2     ((MFP not exists) or
3      (MFP exists and MFP-dist < 0.3)) then
4    Use a result from the eigenface algorithm.
5  else
6    No match.
7  end
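Algorithm 1's decision rule can be sketched in Python. The orientation range (0, 180) degrees and the MFP-distance threshold of 0.3 come from the algorithm above; the function name, parameter names, and the `face_match` placeholder for the eigenface algorithm's output are illustrative, not part of the DNC implementation.

```python
def dnc_match(orientation_deg, mfp_dist=None, face_match=None):
    """Sketch of Algorithm 1: accept the eigenface result only when the
    candidate faces the camera (orientation in (0, 180) degrees) and,
    if a motion fingerprint (MFP) is available, its distance is small.
    `face_match` stands in for the eigenface algorithm's output."""
    facing_camera = 0 < orientation_deg < 180
    mfp_ok = mfp_dist is None or mfp_dist < 0.3  # MFP absent, or close enough
    if facing_camera and mfp_ok:
        return face_match   # use the eigenface result
    return None             # no match

# A subject facing the camera with a close MFP keeps the eigenface match:
print(dnc_match(90, mfp_dist=0.1, face_match="alice"))   # alice
# A subject behind the camera is excluded regardless of face similarity:
print(dnc_match(270, mfp_dist=0.1, face_match="bob"))    # None
```

The second call illustrates how ORI excludes an out-of-sight subject even when the face matcher would have reported a match.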


This article originally appeared in IEEE Internet Computing, vol. 21, no. 3, 2017.

September 2017 | Published by the IEEE Computer Society | 2469-7087/17/$33.00 © 2017 IEEE
PERVASIVE computing | Published by the IEEE CS | 1536-1268/17/$33.00 © 2017 IEEE

Smartphones
Editor: Nayeem Islam, Qualcomm, [email protected]

On-Device Mobile Phone Security Exploits Machine Learning
Nayeem Islam, Saumitra Das, and Yin Chen, Qualcomm

Mobile devices have become an indispensable resource for populations around the globe, making critical personal and professional information accessible at all times and even replacing PCs in many cases. Unfortunately, they have also become the target of cybercriminals searching for information such as credit card numbers, sensitive corporate data, social security numbers, and browsing history.

Cyberattacks can be launched by malware applications that have been downloaded to a device from an app store or side-loaded onto the device. These attacks include

• downloaded malware that leaks information from the device to exploit vulnerabilities and gain privileged access to parts of the phone and a corporate network,

• a device inadvertently connecting to a malicious access port or base station, or

• a device or apps on the device connecting to malicious application-level services that coax the device or user into leaking private information.

The number of new attacks per day continues to increase,1 so techniques to thwart these zero-day attacks are absolutely necessary.

To determine a good solution for mobile device security, it's important to ensure that the device has not been compromised at the outset; this is the role of hardware Roots of Trust (RoT). Such RoT techniques are important because they provide a starting point for building more sophisticated techniques that can then measure and verify software, protect cryptographic keys, and provide device authentication.2

Here, we describe Qualcomm's Snapdragon Smart Protect, which offers a novel approach to protecting mobile devices from malware that might leak private information or exploit vulnerabilities. The approach, which can also keep devices from connecting to malicious access points, uses learning techniques to statically analyze apps, analyze the behavior of apps at runtime, and monitor the way devices associate with Wi-Fi access points. This is the first commercially available always-on, low-power, machine-learning-based security system on mobile devices (www.qualcomm.com/products/features/security/haven). The engine looks for anomalous behaviors using machine-learning models trained with large amounts of data, and it's especially suited for defending against zero-day threats, including new malware and targeted Wi-Fi access point attacks. The engine also provides detailed cause analysis for detected alerts to help users assess the threat.

DETECTING MOBILE MALWARE
Mobile malware typically falls into three classes:3

1. malicious apps that steal a user's personal information, such as contacts, or that track the user through aggressive sensor polling;

2. traditional botnet-style apps that use the user's phone as a station for distributed denial-of-service attacks or spamming; and

3. toll-fraud attacks that aim to steal money from users or serve aggressive advertising (this last class is the most prevalent1).

State-of-the-art mobile malware detection relies on signature matching. An antivirus service usually maintains static signatures of a large body of mobile malware. These signatures describe static characteristics of the app, such as the existence of certain code snippets, classes, native libraries, or a call graph associated with malicious activity. However, malware is progressing faster than any signature-based mechanism.

Malware employs techniques such as code encryption, obfuscation, and dynamic loading of malicious payloads to evade signature-based mechanisms.4,5 Such mechanisms are especially vulnerable to zero-day attacks, where the antivirus software, which has not seen the signature before, will fail to identify the threat.6–8

Machine-learning-based techniques can improve what is available on mobile phones for malware protection. Researchers have focused on making behavioral analysis techniques robust and capable of detecting malware with high accuracy. However, there has been little research on how to run behavioral analysis efficiently on a mobile device to preserve real-time experiences while keeping detection accuracy high. The only relevant work is by Jeffrey Bickford and his colleagues,9 who explore security-energy tradeoffs for rootkit detection. They conclude that some level of security must be sacrificed to ensure energy-efficient detection. Our approach attempts to address this issue using static analysis and behavioral analysis at runtime.

Consider the lifetime of a typical app. The app is

• stored in an app store or an online repository,

• downloaded to a device,

• executed on the device (and can download other code fragments), and

• connected to the network.

Given this app lifetime, Figure 1 shows the stages of protection for mobile devices.

Our proposed detection mechanism analyzes the downloaded app using static analysis and checks the app using behavioral analysis at runtime. When the app downloads code dynamically, the mechanism can also check the newly downloaded code statically. At runtime, the app might issue network connection requests, which our mechanism can also check to see if the device is connecting to a malicious access point.

EXPLOITING MACHINE LEARNING
Machine-learning techniques use data to create predictive models that can detect malware once deployed. These models are particularly good at catching variants of known malware.

There are two important metrics: the true-positive rate and the false-positive rate. The goal of these models is to catch as much malware as possible while also correctly identifying benign apps. False alarms can quickly erode users' trust in the solution and prevent the widespread deployment of machine-learning-based security solutions.

Static Analysis of Apps
In Android, an app is distributed as an Android application package (APK). The APK contains the app code as DEX bytecode, along with a manifest that describes the app.

Static analysis examines the APK to determine whether the executable code exhibits malicious behavior. If a user grants an app access to system resources, that app can then access resources on the device in a malicious manner. Static analysis can detect such misplaced trust using machine-learning techniques to analyze the code. This is different from signature-based techniques, which rely on checking hash codes. Machine-learning techniques based on program properties have an opportunity to catch a wider range of malicious apps.

Static analysis takes APK files as input and generates a vector of integers (a feature vector). The classifier reads feature vectors, generates a model to match the training data set, and then uses the model to predict labels of unknown APK files.
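The static pipeline above (APK → feature vector → trained model → label) can be sketched as follows. The article does not disclose Snapdragon Smart Protect's actual features or model, so the permission-based features, weights, and threshold here are purely hypothetical stand-ins for a model trained offline on servers:

```python
# Hypothetical static-analysis sketch: turn an APK manifest's requested
# permissions into a binary feature vector and score it with a toy linear
# model. The permission list, weights, and threshold are illustrative.
FEATURES = ["SEND_SMS", "READ_CONTACTS", "INTERNET", "RECEIVE_BOOT_COMPLETED"]
WEIGHTS  = [0.6, 0.3, 0.1, 0.4]   # pretend these were trained offline
THRESHOLD = 0.5

def feature_vector(permissions):
    """One integer slot per known feature: 1 if requested, else 0."""
    return [1 if f in permissions else 0 for f in FEATURES]

def predict(permissions):
    """On-device prediction: a cheap dot product against trained weights."""
    vec = feature_vector(permissions)
    score = sum(w * x for w, x in zip(WEIGHTS, vec))
    return ("malicious" if score > THRESHOLD else "benign", vec)

label, vec = predict({"SEND_SMS", "RECEIVE_BOOT_COMPLETED"})
print(vec, label)   # [1, 0, 0, 1] malicious (score 1.0)
```

A dot product like this keeps the on-device prediction cost negligible, which is the overhead requirement the next section raises.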

There are two challenges in implementation:

• Overhead: The model can be trained on servers, but prediction happens on the device. The prediction must be very efficient to avoid affecting the user experience.

• Cause analysis: The model should be interpretable and offer backtracking to explain why a classification decision was made.

The app can execute on the device once it passes the static analysis check. It then moves on to runtime behavioral analysis.

Runtime Behavioral Analysis
Significant challenges exist in building a practical and effective runtime behavioral analysis system on a mobile platform.

To perform effective behavioral analysis, the malware detection system must observe the multitude of behaviors in which apps might be engaged. This involves monitoring all the interfaces between the app and the mobile operating system. Furthermore, the malware detection system must continuously extract behavior vectors from the API call event logs on the device. The system must also process every generated behavior vector and make a judgment.

Continuous monitoring, behavior extraction, and analysis are expensive, especially if the behavior vector contains a large number of features. For example, whenever the app performs an activity, such as requesting a service or polling

Figure 1. The various stages of protection for mobile devices: analyze the downloaded app using static analysis, check the app using behavioral analysis at runtime, and check network connection requests. Each stage presents a point of opportunity to detect and prevent malicious attacks.

[Figure 1 flow: App store → Download app → Machine-learning-based static analysis → Execute app (with dynamically loaded code fed back into static analysis) → Machine-learning behavioral analysis → Network access → Machine-learning-based network access analysis → Aggregated machine-learning classifier]


for a sensor value, the malware detection system must generate a new behavioral vector and classify whether it represents benign or malicious activity.

Every feature corresponds to one or more actions, so monitoring numerous features causes the malware detection system to run more frequently, severely hindering device responsiveness. A behavioral analysis system using hundreds or thousands of features might become too big to run on a mobile device at an acceptable level of performance and energy overhead. This problem is thwarting the development of efficient, real-time behavioral analysis systems for mobile phones.

A typical behavioral analysis system works by observing and extracting the behaviors in which the app is engaged and then applying a machine-learning technique to classify whether the behavior is malicious or benign. The machine-learning technique involves training a binary classifier offline on a known set of malicious and benign behaviors and then executing it online.

Figure 2 shows the typical behavioral analysis architecture, which comprises three main modules: the observer, extractor, and analyzer.

Observer. The observer continuously monitors all interactions between the app and the underlying operating system through instrumentation points in the mobile operating system. Each time the app performs a behavior of interest, the observer logs the associated API calls with a timestamp.

Behavior extractor. The behavior extractor sifts through the API call log and outputs a value for every feature of interest. A feature can be either a numerical or a categorical value associated with one or more behaviors. For example, if a feature is the location access rate over the last five minutes, the extractor counts the location access events in the log associated with the app of interest. It then normalizes that count to time and fills in the relevant slot in the feature vector.
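The location-access-rate example can be sketched directly. The log format, the API name `getLastLocation`, and the app identifiers are illustrative assumptions; only the five-minute window and the count-then-normalize step come from the description above:

```python
# Sketch of the behavior extractor's rate feature: count location-access
# events for one app in the last five minutes of the API-call log and
# normalize the count to a per-minute rate.
WINDOW_S = 300  # five minutes, in seconds

def location_access_rate(log, app, now_s):
    """log: list of (timestamp_s, app_id, api_name) tuples from the observer."""
    hits = sum(1 for ts, a, api in log
               if a == app and api == "getLastLocation"
               and now_s - WINDOW_S <= ts <= now_s)
    return hits / (WINDOW_S / 60.0)   # accesses per minute

log = [(10,  "app1", "getLastLocation"),
       (120, "app1", "getLastLocation"),
       (150, "app2", "getLastLocation"),
       (280, "app1", "sendSms")]
print(location_access_rate(log, "app1", 300))   # 2 events / 5 min = 0.4
```

An aggressively polling tracker app would push this slot of the feature vector far above a benign app's rate, which is exactly the signal the analyzer classifies.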

Analyzer. The analyzer determines whether the behavior vector the extractor generates is malicious or benign by feeding it to a binary classifier. This binary classifier is usually trained offline and executed online. From a design perspective, a decision must be made as to whether the classifier should be placed in the cloud or on the device. Because behavioral analysis is a continuous process, placing the classifier in the cloud makes little sense, given the amount of time and energy it would take to analyze every behavior vector and relay the decision back to the device.

Another important design consideration is user privacy, which might be compromised by continuously uploading detailed app and device behaviors. Instead, the classifier can reside on the device to ensure a real-time response with low power overhead and to reduce the impact on the user experience.

On-device machine learning. A unique aspect of our design is an algorithm that prunes the classifier, balancing accuracy and efficiency so that it can run on mobile phones. Depending on the current device state, the behavioral analysis system determines the best set of features to monitor. The system then prunes the full classifier to obtain a lean classifier that requires a smaller number of features. Consequently, the observer can observe only the APIs related to the selected features, such that there is less than 1 percent energy overhead on the phone.
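The pruning idea can be sketched with a toy linear model: restrict the model to the selected features, and the observer then only needs to monitor the APIs behind those features. The feature names, weights, and model shape are illustrative; the article does not describe the actual pruning algorithm:

```python
# Sketch of pruning a full model into a lean on-device classifier: keep
# only the features selected for the current device state, so the observer
# monitors fewer APIs. Feature names and weights are illustrative.
FULL_MODEL = {"location_rate": 0.5, "sms_rate": 0.8,
              "net_bytes": 0.2, "wakeups": 0.1}

def prune(model, selected):
    """Return a lean classifier restricted to the selected features."""
    return {f: w for f, w in model.items() if f in selected}

def score(model, behavior):
    """Features outside the model are simply never observed or scored."""
    return sum(w * behavior.get(f, 0) for f, w in model.items())

lean = prune(FULL_MODEL, {"sms_rate", "location_rate"})
print(sorted(lean))                                    # the two kept features
print(score(lean, {"sms_rate": 1, "net_bytes": 9}))    # net_bytes is ignored
```

The design trade-off is visible here: the lean model loses whatever signal `net_bytes` and `wakeups` carried, in exchange for not having to instrument their APIs at all.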

Combining Static and Dynamic Analysis
It's possible to combine static and dynamic analysis directly. When static analysis allows an app to run, it passes a vector with a confidence score to the runtime behavioral analysis system. This score can augment any dynamically determined feature vectors to improve malware prediction.
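The hand-off can be sketched as appending the static confidence score to each runtime behavior vector before classification. The weights, threshold, and vector layout are illustrative assumptions, not the Smart Protect design:

```python
# Sketch of the static/dynamic combination: the static-analysis confidence
# score becomes one more slot in the runtime behavior vector. All numeric
# values here are illustrative.
def augment(behavior_vector, static_confidence):
    """Append the static score (higher = more suspicious) to the vector."""
    return behavior_vector + [static_confidence]

def classify(vector, weights, threshold=0.5):
    score = sum(w * x for w, x in zip(weights, vector))
    return "malicious" if score > threshold else "benign"

weights = [0.3, 0.2, 0.6]   # last weight applies to the static score
borderline = [0.5, 0.5]     # dynamic features alone score 0.25: benign
print(classify(augment(borderline, 0.9), weights))   # malicious
```

This shows the intended effect: a behavior that looks borderline at runtime tips over the threshold when static analysis was already suspicious of the app.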

Although static analysis complements dynamic behavioral analysis, there are a few things static analysis

Figure 2. The basic architecture of a behavioral analysis system. The three main modules are the observer, extractor, and analyzer. In addition, the actuator blocks the malicious activities.

[Figure 2 pipeline: Observer (records all API calls, syscalls, and app requests for private data) → Behavior extractor (generates behavior vectors from observed API sequences) → Analyzer (online classification via machine-learning models, yielding a malicious-or-benign app classification) → Actuator (if the app is malicious, blocks the activity and alerts the user)]


can’t catch. For example, it’s difficult for static analysis to infer the app’s runtime status, such as whether the app would be running in the foreground or background when certain functions are called. This type of contextual information can be very useful in certain scenarios. For example, some malicious apps try to take photos without the user’s knowledge or consent by hiding in the background. Also, some functions have input-dependent behavior, so static analysis generally can’t analyze their behavior under real usage scenarios. Finally, static analysis can’t analyze encrypted or packed apps (although runtime analysis can still capture their executions).

Note that unlike other antivirus solutions, our static analysis doesn’t rely on signatures. Most malware from non-Google app stores uses repackaging techniques that can easily defeat signature-based solutions. Benchmarking results on zero-day malware indicate superior performance of our solution against most leading commercial offerings.

Detecting Rogue Access Points in Wi-Fi Networks
Mobile devices connect to the Internet through wireless protocols such as Wi-Fi or 3G/4G. The implementation of a wireless protocol introduces a gateway or intermediary in the first step of accessing the Internet.

These intermediaries represent a point of vulnerability. A malicious person might choose to interpose an intermediary that steals information. An attacker trying to successfully inject his or her own access device must force, deceive, or convince the client that the malicious access device is the best option for network connectivity. On Wi-Fi, there are at least five scenarios an attacker could use to associate a mobile device with a gateway:

• open access point imposter,

• denial of service or deauthentication,

• access point summoning attack,

• frame crafting, or

• impersonation attacks.

By observing and analyzing low-level system parameters, it’s possible to detect these attacks; such parameters include the accurate firmware clock, active measurements, and the round-trip time. These signals are tightly bound to the device’s hardware characteristics and are difficult to manipulate, even by a sophisticated attacker. In this case, machine-learning techniques become quite useful.

Specifically, it’s possible to detect open access point imposters by monitoring the network and identifying the unusual coexistence of secure and unsecure network configurations. Furthermore, the abnormal use of deauthentication messages is detectable based on the rate and the network conditions in which those messages are received.
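Rate-based detection of abnormal deauthentication use can be sketched as a simple baseline comparison. The baseline rate, the 10x factor, and the function shape are illustrative assumptions; the Snapdragon engine's actual detector is not described at this level of detail:

```python
# Sketch of flagging abnormal deauthentication use: compare the observed
# deauth-frame rate in a window against the network's normal baseline.
# The baseline and the anomaly factor are illustrative values.
def deauth_anomalous(deauth_timestamps, window_s, baseline_per_min, factor=10):
    """Flag a deauth flood: a rate far above the network's usual rate."""
    rate_per_min = len(deauth_timestamps) / (window_s / 60.0)
    return rate_per_min > factor * baseline_per_min

quiet = [3.0, 41.0]                       # 2 frames over a minute: normal
flood = [i * 0.5 for i in range(120)]     # 120 frames in a minute: attack
print(deauth_anomalous(quiet, 60, baseline_per_min=0.5))  # False
print(deauth_anomalous(flood, 60, baseline_per_min=0.5))  # True
```

In practice the threshold would be learned per network rather than fixed, which is where the machine-learning models the column describes come in.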

It’s also possible to detect an attacker running an access point summoning attack, in which the attacker brings up an access point that matches the requested network configuration, by checking for anomalous network configurations. Management and control frames in the communication protocol often contain information that is tightly coupled to the device’s hardware and physical characteristics. An attacker might attempt to craft control or management frames to contain information that matches the target network, but it’s possible to detect anomalies in frame content and inconsistencies in frame transmission timing.

Finally, it’s possible to detect impersonation attacks with a machine-learning model trained to detect deviations of observed access point characteristics from the expected behavior of the specified network.

Related Work
Most commercial antimalware solutions are based on matching code signatures against proprietary malware signature databases.3 While effective for detecting known malware, such mechanisms are especially vulnerable to zero-day attacks, where the antivirus software, which has not seen the signature before, will fail to identify the threat.4–6 Furthermore, malware authors often encrypt or obfuscate the malicious code to make signature matching more difficult.4,10

Curated app stores typically perform security checks on submitted apps. For example, the Android Bouncer uses dynamic analysis to detect malicious behaviors.4,5 However, sophisticated malware can detect the presence of the Android Bouncer environment and refrain from conducting malicious attacks.11 Apps can also dynamically load code at runtime to evade Android Bouncer or any other signature-matching-based commercial solution.11

Machine-learning techniques can identify patterns in program properties that distinguish malware from normal apps.8 Our proposed detection mechanism uses machine learning holistically throughout the app timeline: from the app’s download onto the device (static analysis), to execution (behavioral analysis) and dynamic code loading (static analysis), to network communications (behavioral analysis).

As researchers continue to introduce machine-learning-based techniques, the ongoing challenge will be dealing with attackers who create techniques to work around such algorithms. At some point, the future of cybersecurity and warfare will be between machine-learning attackers and defenders.

REFERENCES

1. R. Unuchek and V. Chebyshev, “Mobile Malware Evolution 2015,” Kaspersky Lab, 2016; https://press.kaspersky.com/files/2016/02/Mobile_virusology_2015_FINAL_eng_1902206-2.pdf.

2. A. Regenscheid, “Roots of Trust in Mobile Devices,” Nat’l Inst. Standards and Technology, Feb. 2012; http://csrc.nist.gov/groups/SMA/ispab/documents/minutes/2012-02/feb1_mobility-roots-of-trust_regenscheid.pdf.

3. N. Idika and A.P. Mathur, A Survey of Malware Detection Techniques, tech. report, Purdue Univ., 2007.

4. N.J. Percoco and S. Schulte, Adventures in Bouncerland: Failures of Automated Malware Detection within Mobile Application Markets, Black Hat, 2012.

5. J. Oberheide and C. Miller, “Dissecting the Android Bouncer,” SummerCon, 2012.

6. T. Wang et al., “Jekyll on iOS: When Benign Apps Become Evil,” Proc. 22nd Usenix Security Symp. (SEC), 2013; www.usenix.org/conference/usenixsecurity13/technical-sessions/presentation/wang_tielei.

7. A. Kliarsky, Responding to Zero Day Threats, white paper, SANS Inst., June 2011; www.sans.org/reading-room/whitepapers/incident/responding-zero-day-threats-33709.

8. D. Arp et al., “DREBIN: Effective and Explainable Detection of Android Malware in Your Pocket,” Proc. Network and Distributed System Security Symp. (NDSS), 2014; www.sec.cs.tu-bs.de/pubs/2014-ndss.pdf.

9. J. Bickford et al., “Security versus Energy Tradeoffs in Host-Based Mobile Malware Detection,” Proc. 9th Int’l Conf. Mobile Systems, Applications, and Services (MobiSys), 2011, pp. 225–238.

10. A.P. Felt et al., “A Survey of Mobile Malware in the Wild,” Proc. First ACM Workshop Security and Privacy in Smartphones and Mobile Devices (SPSM), 2011, pp. 3–14.

11. S. Poeplau et al., “Execute This! Analyzing Unsafe and Malicious Dynamic Code Loading in Android Applications,” Proc. 20th Annual Network & Distributed System Security Symp. (NDSS), 2014; https://cs.ucsb.edu/~vigna/publications/2014_NDSS_ExecuteThis.pdf.

Nayeem Islam is a vice president of Qualcomm Research Silicon Valley. Contact him at [email protected].

Saumitra Das is a senior staff engineer manager at Qualcomm. Contact him at [email protected].

Yin Chen is a staff engineer at Qualcomm. Contact him at [email protected].


This article originally appeared in IEEE Pervasive Computing, vol. 16, no. 2, 2017.

PURPOSE: The IEEE Computer Society is the world’s largest association of computing professionals and is the leading provider of technical information in the field.
MEMBERSHIP: Members receive the monthly magazine Computer, discounts, and opportunities to serve (all activities are led by volunteer members). Membership is open to all IEEE members, affiliate society members, and others interested in the computer field.
OMBUDSMAN: Email [email protected].
COMPUTER SOCIETY WEBSITE: www.computer.org

Next Board Meeting: 12–13 November 2017, Phoenix, AZ, USA

EXECUTIVE COMMITTEE
President: Jean-Luc Gaudiot; President-Elect: Hironori Kasahara; Past President: Roger U. Fujii; Secretary: Forrest Shull; First VP, Treasurer: David Lomet; Second VP, Publications: Gregory T. Byrd; VP, Member & Geographic Activities: Cecilia Metra; VP, Professional & Educational Activities: Andy T. Chen; VP, Standards Activities: Jon Rosdahl; VP, Technical & Conference Activities: Hausi A. Müller; 2017–2018 IEEE Director & Delegate Division VIII: Dejan S. Milojičić; 2016–2017 IEEE Director & Delegate Division V: Harold Javid; 2017 IEEE Director-Elect & Delegate Division V-Elect: John W. Walz

BOARD OF GOVERNORS
Term Expiring 2017: Alfredo Benso, Sy-Yen Kuo, Ming C. Lin, Fabrizio Lombardi, Hausi A. Müller, Dimitrios Serpanos, Forrest J. Shull
Term Expiring 2018: Ann DeMarle, Fred Douglis, Vladimir Getov, Bruce M. McMillin, Cecilia Metra, Kunio Uchiyama, Stefano Zanero
Term Expiring 2019: Saurabh Bagchi, Leila De Floriani, David S. Ebert, Jill I. Gostin, William Gropp, Sumi Helal, Avi Mendelson

EXECUTIVE STAFF
Executive Director: Angela R. Burgess; Director, Governance & Associate Executive Director: Anne Marie Kelly; Director, Finance & Accounting: Sunny Hwang; Director, Information Technology & Services: Sumit Kacker; Director, Membership Development: Eric Berkowitz; Director, Products & Services: Evan M. Butterfield; Director, Sales & Marketing: Chris Jensen

COMPUTER SOCIETY OFFICES
Washington, D.C.: 2001 L St., Ste. 700, Washington, D.C. 20036-4928; Phone: +1 202 371 0101; Fax: +1 202 728 9614; Email: [email protected]
Los Alamitos: 10662 Los Vaqueros Circle, Los Alamitos, CA 90720; Phone: +1 714 821 8380; Email: [email protected]

MEMBERSHIP & PUBLICATION ORDERS
Phone: +1 800 272 6657; Fax: +1 714 821 4641; Email: [email protected]
Asia/Pacific: Watanabe Building, 1-4-2 Minami-Aoyama, Minato-ku, Tokyo 107-0062, Japan; Phone: +81 3 3408 3118; Fax: +81 3 3408 3553; Email: [email protected]

IEEE BOARD OF DIRECTORS
President & CEO: Karen Bartleson; President-Elect: James Jefferies; Past President: Barry L. Shoop; Secretary: William Walsh; Treasurer: John W. Walz; Director & President, IEEE-USA: Karen Pedersen; Director & President, Standards Association: Forrest Don Wright; Director & VP, Educational Activities: S.K. Ramesh; Director & VP, Membership and Geographic Activities: Mary Ellen Randall; Director & VP, Publication Services and Products: Samir El-Ghazaly; Director & VP, Technical Activities: Marina Ruggieri; Director & Delegate Division V: Harold Javid; Director & Delegate Division VIII: Dejan S. Milojičić

revised 31 May 2017

2469-7087/17/$33.00 © 2017 IEEE • 1520-9202/17/$33.00 © 2017 IEEE • Published by the IEEE Computer Society • September 2017 • computer.org/ITPro

Securing IT
Editors: Rick Kuhn, NIST, [email protected]

Tim Weil, Scram Systems, [email protected]

Bystanders’ Privacy

The widespread adoption of systems that collect ubiquitous sensor data from people using devices such as mobile phones, wearables, drones, and Internet-connected devices presents significant privacy challenges. Among these challenges is bystanders' privacy: how to protect the privacy of third parties who could be affected when a sensing device is used in their surroundings.1 Bystanders' privacy arises when a device that collects sensor data (such as photos, sound, or video) can be used to identify third parties when they have not given consent to be part of the collection.

It is not difficult to find examples in which bystanders were identified in photos taken by strangers, especially with the ubiquity of camera-enabled smartphones (there are more than 3.9 billion smartphones in the world, according to a recent Ericsson report; www.ericsson.com/mobility-report) and the availability of identifying information on social networks and the Internet. A recent Business Insider news report describes a photographer's experiment of taking smartphone photos of bystanders at a subway station.2 He identified people in the photos through free software, and bystanders identified in this experiment learned of their identification through news reports. Examples like this one have brought bystanders' privacy to the fore, even though the issue has been a longstanding challenge.

Human Aspects

Research on privacy in mobile, wearable, and connected devices usually focuses on either attacks and solutions that protect a user's private space from unauthorized access, or the protection of private data on social-networking sites and other services. With bystanders' privacy, however, there is a social aspect that extends the user's private space: when photos, videos, and sound are collected in shared or public spaces, a conflict of space ownership arises between the user and bystanders. Using devices that can collect identifiable data creates a perception, on the part of device users, of ownership of the space surrounding the device, which can encroach on the space surrounding bystanders.3

The issue of bystanders' privacy is not new. Its origins can be traced to the invention of consumer-oriented cameras in the late 19th century. However, over the past few decades, the issue has risen in importance because of the ubiquity of mobile and wearable Internet-connected devices and the proliferation of social networks that allow photos to be instantly shared with the world instead of secluded in a physical album (as was the case only a few decades ago).

In the early 2000s, research on human-computer interaction found that cellphone use in public spaces was offensive to some people;4 these devices presented a conflict of social spaces in which a user was simultaneously in the physical space that he or she occupied and the virtual space of the cellphone conversation. Today, wearable devices such as smart glasses also include cameras and microphones that engender strong privacy concerns by collecting and sharing data over the Internet

Alfredo J. Perez, Columbus State University

Sherali Zeadally, University of Kentucky

Scott Griffith, Columbus State University


without permission, thereby directly threatening bystanders' space and autonomy.5 Table 1 outlines and explains bystanders' fears and concerns in greater detail.

Does the general public care about bystanders' privacy? Results from a recent survey that explored users' preferences when photographed as bystanders showed that more than 95 percent of respondents answered this question positively.6 The survey also revealed that respondents were more aware of, and restrictive about, being photographed as bystanders in venues such as beaches, gyms, and hospitals; with strangers in social situations; and when images are shared online.6

Current Solutions

No technological method has yet been widely adopted to protect bystanders' privacy, because many solutions exist only as prototype systems in the research stage. (An exception is recording devices with LEDs that notify users and bystanders that data collection is being performed in their surroundings, but not all smartphones and wearables have this feature.) However, privacy-enhancing wearables could become popular because these devices give bystanders the choice of protecting their own privacy rather than trusting their protection to others, potentially creating a market for such devices.

The techniques proposed to protect bystanders' privacy fit into two major categories: location-dependent methods, which deny user devices the opportunity to collect data; and obfuscation-dependent methods, which prevent bystanders' identification. Figure 1 presents the taxonomy we use in this section to classify the methods available to protect bystanders' privacy.

Location-Dependent Methods

The goal of location-dependent methods is to deny the collection of data in particular shared spaces (such as restaurants, casinos, or cafes). Implementations usually entail restricting or banning devices' use through warning signs, confiscating devices before users enter a shared space, or temporarily disabling devices in the shared space.

According to Jeff Jarvis's book Public Parts: How Sharing in the Digital Age Improves the Way We Work and Live (Simon & Schuster, 2011), President Theodore Roosevelt banned the use of cameras at public monuments in Washington, DC, around 1903, and cameras were often banned at beaches as well. Similar bans occurred in England during World War I.

In the US, using cameras and recording devices to collect data about things that are plainly visible in public spaces is now treated as a constitutional right. For example, the ruling in Glik v. Cunniffe established a precedent in which citizens have a right to film police officers under the First Amendment in public spaces, under certain reasonable limitations of time, place, and manner.7

Table 1. Bystanders' privacy concerns.5

Facial recognition: Association and recognition of a bystander with a place or a situation in which the bystander would not wish to be recognized by others.
Social implications: Lack of awareness by a network of friends regarding data being collected about them.
Social media sync: Immediate publishing or sharing without the bystander's knowledge.
Surveillance and sousveillance: Continuous tracking of activities that might make a user or bystander feel that no matter what he or she does, everything is recorded.
Speech disclosure: Capturing speech that a user or bystanders would not want to record or share.
Surreptitious A/V recording: Recording audio or video that might affect bystanders without their permission.
Location disclosure: Inadvertently sharing a location with third parties that should not have access to the location information.

Devices can be disabled in shared spaces using three approaches: sensor saturation, broadcasting commands, and context-based approaches. With sensor saturation, the goal is to make sensors in user devices sense an input signal greater than the maximum measurable input those sensors support, thereby rendering them unusable through saturation. The saturating signal is broadcast by fixed devices in shared spaces. Bystanders' privacy is protected because saturated sensors report data that reveal no usable information for identifying bystanders. An example in this category is the use of pulsating near-infrared lights, mounted on fixed devices in shared spaces and directed at the mobile device's camera lens,8 to saturate its charge-coupled device (CCD) sensor. The near-infrared light is invisible to the human eye but causes the CCD sensor to saturate. This system was implemented as a proof of concept, and no consumer version of the prototype exists on the market.
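
The saturation principle lends itself to a quick numerical sketch: a sensor clips any input above its full-scale value, so a sufficiently strong interfering signal drowns out the scene entirely. The 10-bit full-scale value, pixel values, and interference level below are illustrative assumptions, not parameters of the cited system.

```python
import numpy as np

# Hypothetical illustration: a 10-bit image sensor clips any input above its
# full-scale value, so a strong near-infrared flood drowns out the scene.
FULL_SCALE = 1023  # assumed 10-bit ADC maximum

def sensor_readout(scene, interference=0.0):
    """Model a sensor as scene radiance plus interference, clipped to full scale."""
    return np.clip(scene + interference, 0, FULL_SCALE).astype(int)

scene = np.array([120, 480, 730, 910])              # distinguishable scene values
normal = sensor_readout(scene)                       # readable image
flooded = sensor_readout(scene, interference=2000)   # NIR flood saturates every pixel

print(normal)   # the distinct scene values survive
print(flooded)  # every pixel pinned at 1023: no identifying detail remains
```

Once every pixel is pinned at full scale, no downstream algorithm can recover identifying detail, which is the property a capture-resistant environment relies on.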

Broadcasting commands to temporarily deny user devices the chance to collect data in shared spaces can also protect bystanders' privacy. The goal in this category is to use communication protocols combined with fixed devices (for example, access points) to broadcast commands that cause the user's device software to disable the device's sensors. Examples in this category include Bluetooth-based and infrared light-based protocols that can send commands from fixed devices in shared spaces to disable user devices' sensors.9 Bystanders' privacy is protected because user devices cannot collect data while these disabling commands are broadcast.

Apple patented the use of infrared communication protocols to send disable commands to cellphones,9 but no reports exist on the public utilization or availability of this technology. In the case of Bluetooth-based protocols that can disable sensors, no consumer product has implemented this technology.
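
To make the idea concrete, here is a minimal sketch of a disable-command pattern: a venue beacon broadcasts a structured message, and compliant device software parses it and switches off the named sensors. The message format, field names, and enforcement model are invented for illustration; neither the Apple patent nor the Bluetooth-based work specifies this protocol.

```python
import json

# Hypothetical venue-side command; the field names are invented for illustration.
COMMAND = {"cmd": "disable", "sensors": ["camera", "microphone"], "ttl_s": 300}

def encode_command(command):
    """Venue side: serialize the command for broadcast (e.g., over Bluetooth LE)."""
    return json.dumps(command).encode("utf-8")

def handle_packet(payload, device_state):
    """Device side: a compliant device disables each sensor named in the command."""
    msg = json.loads(payload.decode("utf-8"))
    if msg.get("cmd") == "disable":
        for sensor in msg.get("sensors", []):
            device_state[sensor] = "disabled"
    return device_state

state = handle_packet(encode_command(COMMAND),
                      {"camera": "on", "microphone": "on"})
print(state)  # both sensors now read "disabled"
```

Note that the scheme depends entirely on cooperative device software: a noncompliant device simply ignores the broadcast, which is one reason no consumer product has shipped such a mechanism.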

In the final category (context-based approaches), user devices perform some type of context recognition to trigger software actions that deny explicit data collection by disabling device sensors in shared spaces. An example in this category is the virtual walls approach,10 in which the device uses contextual information (such as GPS location data) to trigger software actions that temporarily disable its sensors based on preprogrammed contextual rules. In this case, bystanders' privacy is preserved because user devices cannot collect data once the user's context is recognized and the device's sensors are disabled. No commercial product currently implements context-based sensor-disabling mechanisms.
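
The rule logic of such a context-based approach can be sketched as a simple geofence check: if the device's location fix falls inside a preprogrammed sensitive zone, capture is refused. The zone names, coordinates, radii, and API shape below are invented for illustration and are not the virtual-walls system's actual design.

```python
import math

# (latitude, longitude, radius in meters) for hypothetical restricted zones
ZONES = {"hospital": (38.0307, -84.5040, 150.0),
         "gym":      (38.0340, -84.5001, 80.0)}

def meters_between(lat1, lon1, lat2, lon2):
    """Approximate ground distance for short ranges (equirectangular projection)."""
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    return 6371000.0 * math.hypot(dlat, dlon)

def capture_allowed(lat, lon):
    """Return False when the GPS fix lies inside any restricted zone."""
    return all(meters_between(lat, lon, zlat, zlon) > radius
               for zlat, zlon, radius in ZONES.values())

print(capture_allowed(38.0307, -84.5040))  # inside the hospital zone: False
print(capture_allowed(38.0500, -84.4800))  # far from both zones: True
```

A real implementation would combine several context signals (location, time, nearby beacons) and enforce the decision inside the camera stack rather than in application code.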

Obfuscation-Dependent Methods

Obfuscation methods attempt to hide bystanders' identity to prevent their identification. These methods can be classified into two groups: bystander-based obfuscation and device-based obfuscation.

In bystander-based obfuscation, bystanders take action to avoid identification. This might be accomplished by wearing some type of hardware (or clothing) that hides or perturbs the identifiable features (such as facial features) needed to perform identification.11,12 Or, bystanders might perform some type of physical action (for example, leaving the shared space or asking a user to stop using a device) to protect their privacy when they become aware of a device in their surroundings that might infringe upon their privacy.1

Privacy Visor wearable glasses11 are an example of a device that performs bystander-based obfuscation. Worn by bystanders, these glasses use near-infrared light to block the facial features that image-processing algorithms require to perform facial recognition. Typical facial-recognition algorithms detect the difference in contrast between eyes and cheeks (the eye region is darker than the cheek region). By using near-infrared light (invisible to the human eye) emitted by LEDs to illuminate the nose region between the eyes, this wearable causes an effect on camera CCD sensors similar to saturation. The result diminishes the contrast used to distinguish regions in a face, thereby preventing algorithms from detecting the bystander's face.
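
As a toy illustration of the contrast cue described above, the sketch below scores a candidate face by the brightness difference between an eye band and a cheek band, then shows how flooding the eye region with sensed near-infrared light erases that difference. The patch sizes, pixel values, and threshold are invented numbers, not Privacy Visor's actual mechanism.

```python
import numpy as np

def eye_cheek_contrast(eye_patch, cheek_patch):
    """Positive when cheeks read brighter than eyes, as on a normal face."""
    return float(np.mean(cheek_patch) - np.mean(eye_patch))

def looks_like_face(eye_patch, cheek_patch, threshold=30.0):
    """A crude stand-in for the contrast test many detectors rely on."""
    return eye_cheek_contrast(eye_patch, cheek_patch) > threshold

eyes = np.full((8, 24), 60.0)     # dark eye band
cheeks = np.full((8, 24), 140.0)  # brighter cheek band
print(looks_like_face(eyes, cheeks))             # True: the detector fires

nir_glow = 120.0                                 # LED glow brightening the eye band
print(looks_like_face(eyes + nir_glow, cheeks))  # False: the contrast is destroyed
```

The wearable does not need to hide the whole face; raising the sensed brightness of one small region is enough to break the cue the algorithm depends on.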

Figure 1. Taxonomy of methods for bystanders' privacy protection:

- Location-dependent
  - Banning/confiscating devices
  - Disabling devices
    - Sensor saturation
    - Broadcasting commands
    - Context-based
- Obfuscation-dependent
  - Bystander-based
  - Device-based
    - Default
    - Selective
    - Collaborative


A second example is the use of perturbed eyeglass frames, which a bystander could wear to impersonate other individuals' facial features or to deceive facial-recognition and identification algorithms.12 These eyeglass frames have a physical distribution of colors on their surface that adds noise to the captured data in such a way that the algorithms either misclassify the bystander's face as another individual's (impersonation) or are confused so that they do not detect a face at all. This technology was recently developed as a research project to undermine facial-identification algorithms; no commercial product exploits this idea yet.

Notification methods that alert bystanders so they can protect their privacy include the use of LEDs on wearables to signal that video or audio is being recorded in the surroundings (such as Snap Spectacles), and the use of short-range radio broadcasts and Wi-Fi-based communication protocols to notify bystanders about sensing activity being performed in their proximity (such as the NotiSense system1).

In the device-based obfuscation category, the software on users' devices adds noise (such as blurring) to collected data to hide bystanders' identifiable features (such as facial features or, when sound is collected, voice features). Such software might perform obfuscation by default (for example, blurring all faces detected in a photo or video), might let users add noise to obfuscate bystanders selectively (selective obfuscation), or might use protocols over wireless networks to communicate bystanders' privacy settings so that the software on the user's device can automatically hide bystanders' identifiable features based on those settings (collaborative obfuscation).6 The drawback of device-based obfuscation is that these methods rely on devices controlled by the user, not the bystander.
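
Default obfuscation, the first of the three modes, can be sketched as follows: every detected face region in a frame is blurred before the frame leaves the device. Face detection is stubbed out with fixed bounding boxes, and a plain box blur stands in for whatever filter a real product would use; both are assumptions for illustration.

```python
import numpy as np

def box_blur(region, k=5):
    """Blur a 2D region by averaging each pixel's k x k neighborhood."""
    padded = np.pad(region, k // 2, mode="edge")
    out = np.empty_like(region, dtype=float)
    for i in range(region.shape[0]):
        for j in range(region.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def obfuscate_by_default(frame, face_boxes):
    """Blur every (row, col, height, width) face box in the frame."""
    frame = frame.astype(float).copy()
    for r, c, h, w in face_boxes:
        frame[r:r + h, c:c + w] = box_blur(frame[r:r + h, c:c + w])
    return frame

# A checkerboard stands in for a frame with fine identifying detail.
frame = (((np.arange(64)[:, None] + np.arange(64)[None, :]) % 2) * 255).astype(float)
blurred = obfuscate_by_default(frame, [(10, 10, 16, 16)])
# Pixels outside the face box are untouched; inside, the fine detail is averaged away.
```

Selective obfuscation would apply the same filter only to the boxes the user picks, and collaborative obfuscation would pick the boxes according to privacy settings received from bystanders' devices.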

Open Issues

Many of the methods we described are still being explored in research projects and have not been exploited commercially. Thus, the development of new products that incorporate features designed to protect bystanders' privacy remains an open challenge and opportunity. Manufacturers can leverage the Privacy by Design framework to build blurring algorithms, camera-disabling protocols, or bystander-notification protocols into smartphones, wearables, and other devices.

Recent advances in deep learning combined with image processing can recreate images that were blurred by obfuscation methods, thereby weakening the effectiveness of some methods intended to protect bystanders' privacy. One example is the pixel-recursive super-resolution method, which transforms low-resolution images with blurred faces into high-resolution images with the original facial features recovered. A possible defense against such reidentification is to substitute bystanders' faces with fake, computer-generated faces or with faces taken from public-domain photos. Other methods, such as gait-identification techniques or methods that use unique identifiers broadcast by mobile phones and wearables (that is, MAC addresses from network interfaces), could also be used to reidentify bystanders. More research is needed to protect bystanders from such techniques.

Finally, social acceptance of technology because of its benefits has fueled the adoption of many devices despite their drawbacks. It has been argued that this will be the case with devices that could potentially violate bystanders' privacy. Nevertheless, the Google Glass experience seems to tell another story: in May 2013, Google issued a statement saying that applications incorporating facial recognition would not be accepted into the Google Glass Explorer Program because of strong public concerns. Indeed, many news outlets have recently pointed to privacy concerns as one reason for Google Glass's demise. How the public will adopt and use devices that could violate both users' privacy and that of bystanders remains an open issue.

As new, more powerful Internet-connected and sensor-enabled devices emerge (especially in the mobile and wearable market), it becomes easier to collect identifiable data about bystanders. As this trend continues, the issue of protecting bystanders' privacy will come into even greater focus. We analyzed current solutions addressing this issue, but a great deal of work is needed to solve the outstanding issues we outlined.

Acknowledgments

Alfredo J. Perez was supported by the US National Science Foundation under award 1560214. Sherali Zeadally's work was supported by a University Research Professorship Award from the University of Kentucky in 2016.

References

1. S. Pidcock et al., "NotiSense: An Urban Sensing Notification System to Improve Bystander Privacy," Proc. 2nd Int'l Workshop Sensing Applications on Mobile Phones, 2011, pp. 1–5.

2. A. Heath, "This Russian Technology Can Identify You with Just a Picture of Your Face," Business Insider, 22 June 2016; read.bi/2p83hOU.

3. R. Mitchell, "Sensing Mine, Yours, Theirs, and Ours: Interpersonal Ubiquitous Interactions," Proc. 2015 ACM Int'l Symp. Wearable Computers, 2015, pp. 933–938.


4. L. Palen et al., "Going Wireless: Behavior & Practice of New Mobile Phone Users," Proc. ACM Conf. Computer Supported Cooperative Work, 2000, pp. 201–210.

5. V.G. Motti et al., "Users' Privacy Concerns about Wearables: Impact of Form Factor, Sensors and Type of Data Collected," Proc. 1st Workshop Wearable Security and Privacy, LNCS 8976, 2015.

6. P. Aditya et al., "I-Pic: A Platform for Privacy-Compliant Image Capture," Proc. 14th Ann. Int'l Conf. Mobile Systems, Applications, and Services, 2016, pp. 249–261.

7. Glik v. Cunniffe, Federal Reporter, 3rd series, vol. 655, 2011, p. 78 (US Court of Appeals for the First Circuit).

8. K.N. Truong et al., "Preventing Camera Recording by Designing a Capture-Resistant Environment," Proc. Int'l Conf. Ubiquitous Computing, 2005, pp. 73–86.

9. V. Tiscareno, K. Johnson, and C. Lawrence, Systems and Methods for Receiving Infrared Data with a Camera Designed to Detect Images Based on Visible Light, US patent 8,848,059, to Apple, Patent and Trademark Office, 2014.

10. A. Kapadia et al., "Virtual Walls: Protecting Digital Privacy in Pervasive Environments," Proc. Int'l Conf. Pervasive Computing, LNCS 4480, 2007, pp. 162–179.

11. T. Yamada et al., "Use of Invisible Noise Signals to Prevent Privacy Invasion through Face Recognition from Camera Images," Proc. ACM Multimedia, 2012, pp. 1315–1316.

12. M. Sharif et al., "Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition," Proc. ACM SIGSAC Conf. Computer and Communications Security, 2016, pp. 1528–1540.

Alfredo J. Perez is an assistant professor with the TSYS School of Computer Science at Columbus State University. Contact him at [email protected].

Sherali Zeadally is an associate profes-sor in the College of Communication and Information at the University of Ken-tucky. Contact him at [email protected].

Scott Griffith is a graduate assistant with the TSYS School of Computer Science at Columbus State University. Contact him at [email protected].

Read your subscriptions through the myCS publications portal at http://mycs.computer.org.

ADVERTISER INFORMATION

Advertising Personnel
Marian Anderson: Sr. Advertising Coordinator • Email: [email protected] • Phone: +1 714 816 2139 | Fax: +1 714 821 4010
Sandy Brown: Sr. Business Development Mgr. • Email: [email protected] • Phone: +1 714 816 2144 | Fax: +1 714 821 4010

Advertising Sales Representatives (display)
Central, Northwest, Far East: Eric Kincaid • Email: [email protected] • Phone: +1 214 673 3742 • Fax: +1 888 886 8599
Northeast, Midwest, Europe, Middle East: Ann & David Schissler • Email: [email protected], [email protected] • Phone: +1 508 394 4026 • Fax: +1 508 394 1707
Southwest, California: Mike Hughes • Email: [email protected] • Phone: +1 805 529 6790
Southeast: Heather Buonadies • Email: [email protected] • Phone: +1 973 304 4123 • Fax: +1 973 585 7071

Advertising Sales Representatives (Classified Line and Jobs Board)
Heather Buonadies • Email: [email protected] • Phone: +1 973 304 4123 • Fax: +1 973 585 7071


This article originally appeared in IT Professional, vol. 19, no. 3, 2017.

IEEE-CS Charles Babbage Award
CALL FOR AWARD NOMINATIONS
Deadline: 1 October 2017

ABOUT THE IEEE-CS CHARLES BABBAGE AWARD
Established in memory of Charles Babbage in recognition of significant contributions in the field of parallel computation. The candidate would have made an outstanding, innovative contribution or contributions to parallel computation. It is hoped, but not required, that the winner will have also contributed to the parallel computation community through teaching, mentoring, or community service.

CRITERIA
This award covers all aspects of parallel computing, including computational aspects, novel applications, parallel algorithms, theory of parallel computation, and parallel computing technologies, among others.

AWARD & PRESENTATION
A certificate and a $1,000 honorarium presented to a single recipient. The winner will be invited to present a paper and/or presentation at the annual IEEE-CS International Parallel and Distributed Processing Symposium (IPDPS 2017).

NOMINATION SUBMISSION
Open to all. Nominations are being accepted electronically at www.computer.org/web/awards/charles-babbage. Three endorsements are required. The award shall be presented to a single recipient.

NOMINATION SITE: awards.computer.org
AWARDS HOMEPAGE: www.computer.org/awards
CONTACT: [email protected]

The IEEE Computer Society is launching INTERFACE, a new communication tool to help members engage, collaborate and stay current on CS activities. Use INTERFACE to learn about member accomplishments and find out how your peers are changing the world with technology.

We’re putting our professional section and student branch chapters in the spotlight, sharing their recent activities and giving leaders a window into how chapters around the globe meet member expectations. Plus, INTERFACE will keep you informed on CS activities so you never miss a meeting, career development opportunity or important industry update.

Launching this spring. Watch your email for its debut.

PREPARE TO CONNECT

2469-7087/17/$33.00 © 2017 IEEE Published by the IEEE Computer Society, September 2017
0272-1716/16/$33.00 © 2016 IEEE IEEE Computer Graphics and Applications

Visualization Viewpoints
Editor: Theresa-Marie Rhyne

Interacting with Large 3D Datasets on a Mobile Device
Chris Schultz, Nvidia
Mike Bailey, Oregon State University

Visualizing 3D datasets requires a high level of GPU performance due to the computational demands of volume-rendering algorithms. In the past, the GPU hardware found on mobile devices was too underpowered for high-resolution datasets (512 × 512 × 512 and higher). Rendering algorithms that produce higher-quality images, such as ray-cast volume rendering, put even more pressure on those devices. To accommodate these performance shortcomings, much of the previous work has been done with client-server-based implementations.1 Although powerful, these applications require a reliable network of sufficient bandwidth to perform interactively.

To avoid the additional complexities that a network introduces, we focus on local implementations—rendering done on the device itself. Such applications typically use 2D texture-based algorithms, lower sample rates, and lower-resolution volumes.1 These methods have resulted in frame rates of up to 7 frames/second.1 However, they produce lower-quality images compared with desktop implementations. Additionally, resolution downsizing in a professional setting is frowned upon due to the data underrepresentation.

Today, graphics hardware in mobile devices has become more powerful, allowing rendering algorithms to produce higher-quality images more quickly. An example of such a device is the Nvidia Shield Tablet, which has a Tegra K1, a mobile chip with 192 GPU cores. This tablet can perform ray-cast volume rendering at interactive frame rates, with high sample rates, when the volume resolution is approximately 256 × 256 × 256. We experimented with datasets ranging from 128 × 128 × 128 to 512 × 512 × 512 and found 256 × 256 × 256 to be the ideal resolution. Although the 512 × 512 × 512 test sample did run, the user interface was unresponsive and, at times, showed corruption.

One way to interact with a 512 × 512 × 512 (or larger) volume in such a situation is to lower the resolution. This works, but it is not optimal because much of the detail is lost. Instead, we propose an implementation that avoids downsizing by using a detail-on-demand scheme: subvolumes keep the data's resolution intact while constraining the texture size so rendering remains interactive.

Finding the Appropriate 3D Texture Size
One of the main reasons ray-casting has been a rarity on mobile devices is not only the lack of GPU power but also that most devices, until recently, didn't have 3D texture support. Fortunately, the Shield Tablet does have this support.

One influence on the ray-cast algorithm's performance is the size of the 3D texture. For this project, we use a 512 × 512 × 894 scan of a dog's head (see Figure 1). This dataset was provided by Sarah Nemanic of the Oregon State University College of Veterinary Medicine. It provides a good example because it is a representative use case within the field of veterinary medicine.

We first render the entire dog's head dataset, non-downsized. Our application achieves a frame rate of 4 to 6 frames per second (fps) while rotating.



For testing reasons, we keep the sample rate at 500 steps per ray. In comparison, an iPad 2 ray-cast implementation achieves a frame rate of 0.8 fps with 80 steps per ray and uses a smaller dataset (512 × 512 × 384).2 Even though there is a significant increase in performance, 4 to 6 fps is not enough. While interacting with the application, whether adjusting render parameters or rotating the scene, there are noticeable delays between performing a gesture and the application's reaction to it.

It is clear that the dataset’s resolution is too high for the GPU to handle directly. The next step is to scale the volume down by a factor of two in each dimension. Rendering the new volume with the size of 256 × 256 × 447 increases the rotating frame rate to 17 to 18 fps.

To compare our performance, we used a 2D texture-based implementation discussed in earlier work3 that uses a 256 × 256 × 256 dataset, sampled at a rate of 128 slices, and achieves 7.3 fps. When zoomed in (more pixels being rendered), our implementation reaches approximately 8 fps. In other words, our worst case, using a higher-quality algorithm, performs just as well as a 2D texture-based application. Also, this resolution gives a much more responsive user experience and the ability to view the dataset as a whole.

Nevertheless, we needed to resort to downsizing, which we know is not an acceptable long-term solution. Additionally, lowering the resolution introduces visual artifacts and provides less accurate data, as Figure 2 shows. If the dog's head dataset were at a higher resolution (such as 1,024 × 1,024 × 1,024), more severe visual artifacts and data loss would occur.

Now that we have a texture size that we know suits the Shield Tablet, we need to address the challenge of not resorting to volume downsizing by experimenting with subvolumes.

Figure 1. Example slices from the dog’s head dataset. This 512 × 512 × 894 scan is a representative use case within the field of veterinary medicine.


Figure 2. Visual artifacts produced by downsizing: (a) 512 × 512 × 894 rendering and (b) 256 × 256 × 447 rendering. When zoomed in, more pronounced wood-grain artifacts are visible in the lower resolution. The trade-off here is the increased frame rate, from 4–6 frames per second (fps) to 17–18 fps, for less accurate data.



Subvolumes
Instead of using the 256 × 256 × 447 downscaled volume, we can extract same-dimensioned subvolumes from the original dataset. The effect is as if there is a magnifying glass inside the original dataset, allowing the user to view the unaltered, full-resolution data. Figure 3 shows how much more detail exists for the number of pixels being rendered. The main drawback of subvolumes is that we aren't able to explore the entire dataset without manually specifying which subvolume to load.

The Grid View
To address the issue of manually loading subvolumes to maneuver within the dataset, we introduce a method that lets the user explore the various subvolumes. We first split the original dataset into chunks that are half the size in each dimension. So, in the case of the dog's head dataset, each chunk has a resolution of 128 × 128 × 223. We then render all eight of these chunks at once as if they are just one 3D texture. Figure 4 depicts how this is done within our application.
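The chunking step above can be sketched as follows. This is our own illustration, not the authors' code: the function name, the pure-Python slice bookkeeping, and the reading that the dataset divides into a 4 × 4 × 4 arrangement of 128 × 128 × 223 chunks are all assumptions.

```python
def split_into_chunks(volume_shape, chunk_shape):
    """Compute index ranges for equal-size chunks of a 3D volume.

    Returns a dict mapping a chunk's (i, j, k) grid position to the
    (x, y, z) slices that extract it from the full array. Remainder
    voxels along an axis are ignored in this sketch.
    """
    counts = [v // c for v, c in zip(volume_shape, chunk_shape)]
    return {
        (i, j, k): tuple(slice(n * c, (n + 1) * c)
                         for n, c in zip((i, j, k), chunk_shape))
        for i in range(counts[0])
        for j in range(counts[1])
        for k in range(counts[2])
    }

# The dog's head dataset split into 128 x 128 x 223 chunks:
chunks = split_into_chunks((512, 512, 894), (128, 128, 223))
assert len(chunks) == 64  # a 4 x 4 x 4 arrangement of chunks
assert chunks[(0, 0, 0)] == (slice(0, 128), slice(0, 128), slice(0, 223))
```

A grid point then names a 2 × 2 × 2 block of eight adjacent chunks to upload, which together form one combined texture at the target working resolution.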

To keep track of which subvolumes to load, we use a data structure called grid points. Each grid point resides at the intersection of each slice and holds which subvolumes to load and how the subvolumes spatially relate to each other within the rendering algorithm (similar to the color coding in Figure 4). Additionally, each neighboring grid point shares at least one subvolume. This creates continuity when traversing the data. Figure 5 provides an example of such data continuity. Without the middle image shown in Figure 5b, the edges of the left and right frames would not be shown together, and the user would have to flip between the two outer frames to understand the relationship between the two edges.


Figure 3. Same-dimensioned subvolumes: (a) the 256 × 256 × 447 downsized volume and (b) the 256 × 256 × 447 native resolution subvolume. Both images are rendered with the same number of pixels, but the latter is rendered with native resolution data, which allows for data preservation while keeping interactive frame rates. The only drawback with the lower image is the inability to view the entire dataset.

(Figure 4 labels each of the eight subvolumes with its (S, T, P) texture-coordinate ranges, [0., .5) or [.5, 1.] per axis, and labels the combined volume with the full ranges S, T, P: [0., 1.].)

Figure 4. Rendering eight small 3D textures as one. Given the position of the ray within the geometry, we page into the correct texture and offset accordingly. (S, T, P) are texture coordinates, similar to Cartesian (X, Y, Z).
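In our reading, the paging in Figure 4 amounts to choosing an octant per axis and rescaling the coordinate into that subtexture's local range. A minimal sketch of that arithmetic, in Python rather than the GLSL a real shader would use (the function name is ours):

```python
def page_texture_coords(s, t, p):
    """Map a global (S, T, P) coordinate in [0, 1] to one of the
    eight subvolume textures plus the local coordinate within it.

    Each axis splits at 0.5: the 'low' half [0, .5) and 'high'
    half [.5, 1] are each rescaled to the subtexture's own [0, 1].
    """
    octant = tuple(int(c >= 0.5) for c in (s, t, p))
    local = tuple((c - 0.5 * o) * 2.0 for c, o in zip((s, t, p), octant))
    return octant, local

# A ray sample at the center of the low-low-low octant:
octant, local = page_texture_coords(0.25, 0.25, 0.25)
assert octant == (0, 0, 0) and local == (0.5, 0.5, 0.5)
```

In a fragment shader, the octant selects which of the eight bound textures to sample, and the local coordinate is used for the lookup.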




The grid points not only hold information about what to load, but they also act as a way to keep track of where the user is within the volume. The numbering scheme for the grid points is chosen in such a way that simple arithmetic can be applied to the current grid point to arrive at a neighboring grid point. For example, in Figure 5a, if we start at grid point 4 and want to move left, we subtract one from the current grid point and end up at grid point 3. If we want to go up to grid point 7, we add the width of the grid view (in this case, three). We also prevent the user from skipping over neighboring grid points in order to maintain the aforementioned continuity. For example, we don't allow the user to go from grid point 0 to grid point 8.

Now that we have a method to keep track of what to load and how to traverse the dataset, the final piece is to understand which direction the user wants to move. We use the user's view direction and, from that, determine which neighboring grid point to visit next. For example, looking at Figure 5a, if we are at grid point 4 and are viewing straight up, we will end up at grid point 7.
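The grid-point arithmetic above can be sketched as follows. The function name and the clamping behavior at the grid edges are our illustration; the article specifies only the plus/minus-one and plus/minus-width moves and the rule that neighbors may not be skipped.

```python
WIDTH, HEIGHT = 3, 3   # the 3 x 3 grid view of Figure 5a

def move(grid_point, direction):
    """Step to a neighboring grid point, or stay put at an edge.

    Left/right are -1/+1; up/down add/subtract the grid width,
    so from grid point 4, 'left' gives 3 and 'up' gives 7.
    """
    row, col = divmod(grid_point, WIDTH)
    if direction == "left" and col > 0:
        return grid_point - 1
    if direction == "right" and col < WIDTH - 1:
        return grid_point + 1
    if direction == "up" and row < HEIGHT - 1:
        return grid_point + WIDTH
    if direction == "down" and row > 0:
        return grid_point - WIDTH
    return grid_point   # blocked: the user cannot leave the grid

assert move(4, "left") == 3   # the example from the text
assert move(4, "up") == 7
assert move(0, "down") == 0   # edge moves are blocked
```

Because only single steps are possible, a jump such as grid point 0 directly to grid point 8 can never occur, which preserves the data continuity described above.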

The Bread Slice View
We found the dog's head dataset intuitive to explore. We all know what a dog looks like, so it is easy to understand which subregion we are currently viewing (such as a snout or lower jaw). But what if we are using a scan of the torso and it isn't easy to tell what we are looking at or toward?

To address this concern, we introduce what we call the bread slice view. Essentially, we allow the user to flip through the volume as if it were a loaf of bread. Once an interesting slice has been chosen, tapping the image selects a subregion of the slice, as Figure 6 shows. The application then uses this selection to extract a subvolume containing the selected subregion and renders the image, resulting in something similar to Figure 3b.

To help increase the user's spatial awareness, we use a highlighted version of the downsized volume as a frame of reference (see Figure 6c). This not only helps with understanding where the subvolume is located within the entire dataset, but it also shows the current selection's context and the user's spatial orientation.


Figure 6. Bread slice view. (a) In the initial view, the red square represents the subvolume to be loaded. We also show (b) the resulting subvolume from the bread slice view and (c) a highlighted version of the downscaled view showing the selected subvolume as a frame of reference.

Take the CS Library wherever you go!

IEEE Computer Society magazines and Transactions are now available to subscribers in the portable ePub format.

Just download the articles from the IEEE Computer Society Digital Library, and you can read them on any device that supports ePub. For more information, including a list of compatible devices, visit

www.computer.org/epub

(Figure 5a numbers the grid points 0 through 8 in a 3 × 3 arrangement.)

Figure 5. Using grid points for data continuity. (a) For the 2D example, grid point 1 shares two subvolumes with grid points 0, 4, and 2, but only one subvolume with 3 and 5. (b) The example from our application shows transitioning through the different grid points. (Imagine going from grid point 3 to 4 to 5.)



Not only does constraining the texture size benefit the GPU's performance, but it also bounds the memory footprint of the volume being rendered. Although we show that the Shield Tablet was able to successfully load a 512 × 512 × 894 dataset, the voxel representation was only an 8-bit single channel. If the volume used a much larger voxel representation, such as 16-bit RGBA, the memory footprint would have been much higher and the data wouldn't have fit on the device, because Android severely limits the amount of memory available to a given application. Our method is designed to account for devices with both performance and memory limitations.

References
1. J.M. Noguera and J.R. Jimenez, “Mobile Volume Rendering: Past, Present, and Future,” IEEE Trans. Visualization and Computer Graphics, vol. 22, no. 2, 2016, pp. 1164–1178.
2. J. Noguera and J. Jimenez, “Visualization of Very Large 3D Volumes on Mobile Devices and WebGL,” Proc. WSCG Comm., 2012, pp. 105–112.
3. M.B. Rodriguez and P.V. Alcocer, “Practical Volume Rendering in Mobile Devices,” Proc. Int’l Symp. Visual Computing, 2012, pp. 708–718.

Chris Schultz is a software engineer at Nvidia and a former graduate student of Oregon State University. Contact him at [email protected].

Mike Bailey is a professor of computer science at Oregon State University. Contact him at [email protected].

Contact department editor Theresa-Marie Rhyne at [email protected].

Selected CS articles and columns are also available for free at http://ComputingNow.computer.org.



This article originally appeared in IEEE Computer Graphics and Applications, vol. 36, no. 5, 2016.

Call for Software Engineering Award Nominations

Harlan D. Mills Award

Established in memory of Harlan D. Mills to recognize researchers and practitioners who have demonstrated long-standing, sustained, and impactful contributions to software engineering practice and research through the development and application of sound theory. The award consists of a $3,000 honorarium, plaque, and a possible invited talk during the week of the annual International Conference on Software Engineering (ICSE), co-sponsored by the IEEE Computer Society Technical Council on Software Engineering.

The award nomination requires at least three endorsements. Self-nominations are not accepted. Nominees/nominators do not need to be IEEE or IEEE Computer Society members.

Deadline for 2018 Nominations: 1 October 2017

Nomination site: awards.computer.org

RICHARD E. MERWIN SCHOLARSHIP

IEEE COMPUTER SOCIETY RICHARD E. MERWIN STUDENT LEADERSHIP SCHOLARSHIP

$40,000 AVAILABLE FOR IEEE COMPUTER SOCIETY MEMBERS

Richard E. Merwin Scholarships are awarded to recognize and reward active student volunteer leaders who show promise in their academic and professional efforts.

• Scholarships from $1,000

• Recipients are recognized as Computer Society Ambassadors and receive mentoring opportunities

• Available to graduate and undergraduate students in their final two years

• Students must be enrolled in a program in electrical or computer engineering, computer science, information technology, or a well-defined computer-related field

• IEEE Computer Society membership required at time of application

www.computer.org/merwin | DEADLINE: 30 SEPTEMBER

SOME OF OUR PAST RICHARD E. MERWIN SCHOLARSHIP WINNERS
Ambika Shivana Jagmohansingh, University of the West Indies (Jamaica) | Amente Bekele, Carleton University (Canada) | Atefeh Khosravi, University of Melbourne (Australia) | Christen M. Corrado, Rowan University (USA) | Irene Mathew Susan, College of Engineering, Chengannur (India) | Josip Balen, J.J. Strossmayer University of Osijek (Croatia) | Marios Bikos, University of Patras (Greece)


COMPUTING CAREERS

Careers in Wireless Technology

For this issue of ComputingEdge, we interviewed Jeffrey Reed—professor of electrical and computer engineering at Virginia Tech and president of PFP Cybersecurity—about careers in wireless technology. Reed's expertise is in software-defined radios, cognitive radios, smart antennas, and ad hoc wireless networks. He coauthored the article “A Communications Jamming Taxonomy” from IEEE Security & Privacy's January/February 2016 issue.

ComputingEdge: What careers related to wireless technology will see the most growth in the next several years?

Reed: Careers in the security and information assurance of wireless systems will grow because an increasing number of mission-critical functions will depend on wireless technologies. Furthermore, as more wireless networks interconnect with one another in the future, more vulnerabilities will have to be addressed.

ComputingEdge: What would you tell college students to give them an advantage over the competition?

Reed: Try to incorporate something in your background that makes you unique. This could be international experience, leadership experience, or special training. It doesn't have to be technical, but it should show your ability to grow your career.

ComputingEdge: What should applicants keep in mind when applying for wireless-technology jobs?

Reed: Look for a position that provides you with global insight into a product's development, rather than one that looks at just a small aspect of it. Try to find a job that will give you a broad view of how the product's various components fit together to create a system. This means that your education, especially at the undergraduate level, should also be broad, incorporating classes in electrical-engineering fundamentals. Engineering specifics are easy to pick up later in a career, but picking up fundamentals outside the classroom is difficult.

ComputingEdge: How can new hires make the strongest impression in a new position?

Reed: Work hard, and go the extra mile to learn. Understand the expectations, and exceed them.


ComputingEdge: Name one critical mistake for young graduates to avoid when starting their careers.

Reed: Avoid working in an area that, while it might be important to your employer, doesn’t develop a skillset that would be valuable to the industry in general.

ComputingEdge: Do you have any other advice that could benefit those starting out in their careers?

Reed: Continue to expand your education, and be prepared to make major career shifts every seven years, if not sooner.

ComputingEdge's Lori Cameron interviewed Reed for this article. Contact her at [email protected] if you would like to contribute to a future ComputingEdge article on computing careers. Contact Reed at [email protected].

NOMINATE A COLLEAGUE FOR THIS AWARD!

DUE: 1 OCTOBER 2017

CALL FOR STANDARDS AWARD NOMINATIONS

IEEE COMPUTER SOCIETY HANS KARLSSON STANDARDS AWARD

Submit your nomination electronically: awards.computer.org | Questions: [email protected]

• Requires 3 endorsements.

• Self-nominations are not accepted.

• Do not need IEEE or IEEE Computer Society membership to apply.

A plaque and $2,000 honorarium are presented in recognition of outstanding skills and dedication to diplomacy, team facilitation, and joint achievement in the development or promotion of standards in the computer industry, where individual aspirations, corporate competition, and organizational rivalry could otherwise be counter to the benefit of society.

Read your subscriptions through the myCS publications portal at http://mycs.computer.org.

CAREER OPPORTUNITIES


Announcement of an open position at the Faculty of Informatics, TU Wien, Austria

FULL PROFESSORSHIP of COMPUTER AIDED VERIFICATION
(Successor of Helmut Veith)

The TU Wien (Vienna University of Technology) invites applications for a full professorship at the Faculty of Informatics.

The applicant is required to have an outstanding academic record in the field of Computer Aided Verification (CAV). Correctness, safety, and reliability of electronic systems are paramount in today's software-controlled world. The focus of the professorship on CAV will be on automated techniques to verify software and hardware. Besides a proven ability in CAV core methods (Computational Logic, Theoretical Computer Science), the candidate will also have a strong interdisciplinary background, especially in relation to Embedded Information Systems, Software Verification, Synthesis, or Distributed Algorithms. This position will strengthen the area of Logic and Computation as well as form a link to other research foci of the faculty.

We offer excellent working conditions in an attractive research environment in a city with an exceptional quality of life.

For a more detailed announcement and information on how to apply, please go to: http://www.informatik.tuwien.ac.at/vacancies

Application Deadline: October 16, 2017

CLOUDERA, INC. IS RECRUITING FOR OUR PALO ALTO, CA OFFICE:

SOFTWARE ENGINEER #38590: Create test plans for new product features. Design and code automated tests and test infrastructure to support testing of new product components and features.

SOFTWARE ENGINEER #37173: Analyze, design, architect, program, and debug back-end systems and applications for our support organization using cutting-edge technology.

SOFTWARE ENGINEER - DEVELOPMENT #37800: As a key member of the team, create & deliver our product stack deployment in Cloud environments.

SALES ENGINEER #36026: Ensure the success of existing customer relationships and expand the usage of Cloudera's technology by discovering additional use cases within a customer's organization.

MAIL RESUME WITH JOB CODE # TO

CLOUDERA
ATTN: HR
395 PAGE MILL ROAD, 3RD FLOOR
PALO ALTO, CA 94306

ORACLE DATABASE ADMINISTRATOR, Warren, MI, General Motors. Perform 24x7 production database administration & provide global support to apps such as Siebel, SAP, & COTS systems run on Oracle database. Install & maintain 11g & 12c GoldenGate, configuring bi-directional replication & resolve day-to-day replication issues. Upgrade & migrate Oracle 11g & 12c databases. Implement code releases & resolve GoldenGate replication issues across various environments across the enterprise. Set up infrastructure for databases in huge data centers. Dvlp numerous PL/SQL procedures, packages, functions & triggers for various reqmts. Install, configure, implement 10g Real Application Cluster R1&R2/11g R1&R2, migrate large databases using Recovery Manager duplication. Perform EMC lun storage additions to cluster databases with Automatic Storage Management file system & to Standard-Business Continuity Volume paired databases. Master, Computer Science, Information Technology or Computer Engrg. 12 mos exp as Database Administrator or related, installing & maintaining GoldenGate, configuring bi-directional replication & resolve day-to-day replication issues. Mail resume to Ref#14936, GM Global Mobility, 300 Renaissance Center, MC:482-C32-C66, Detroit, MI 48265.

GLOBAL WIRELESS FIELD TEST LEAD, Warren, MI, General Motors. Plan field testing for North America, South America, Europe & Asia-Pacific regions, dvlp test plans, & give technical instructions to 3rd party company to perform Global Wireless field testing of passenger vehicle telematics module incldg various features. Prepare & execute field test cases, verify certification & validate Qualcomm & related 4G LTE & beyond device chipsets, mobile device products, in-vehicle telematics module & apps for conventional & autonomous passenger vehicles. Plan & execute interoperability & field trial tests according to Groupe Spéciale Mobile Association specs & guidelines & Global Certification Forum reqmts for testing handset software & hardware in live networks across North America, South America, Europe & Asia-Pacific regions in a mobile or stationary Global System for Mobile communication, General Packet Radio Service, Universal Mobile Telecommunications Service, Long-Term Evolution (LTE), & LTE Advanced environment. Write test scripts, dvlp & improve automation tools to parse & analyze the modem diagnostic logs & application logs using Visual Basic (VB) & Practical Extraction & Report Language (PERL). Bachelor, Electronics & Communication Engrg, Information Technology, or related. 60 mos exp as Software Engineer, Telecommunication Engineer, or Wireless Engineer, preparing & executing field test cases, verifying certification & validating Intel or Qualcomm & related 3G or 4G device chipsets, mobile device products, handheld devices &/or in-vehicle telematics, & apps. Mail resume to Ref#36335, GM Global Mobility, 300 Renaissance Center, MC:482-C32-C66, Detroit, MI 48265.

SR. ETL SOFTWARE ENGINEER: create & implem. BI & ETL solutions; design test cases & data models. BS in CS, EE or related + 5 yrs ETL exp. Email: [email protected] w/ Job #23434BR in subj line. Laureate Education, Inc. 650 S Exeter St., Baltimore, MD 21202. EOE.


CAREER OPPORTUNITIES

72 ComputingEdge September 2017

The Department of Computer Science and Engineering (CSE) at the University of Nevada, Reno (UNR) invites applications for a Tenure-track Faculty position starting July 1, 2018. The Department seeks highly qualified candidates in games, hardware, or in areas that extend or complement the Department's existing strengths or fulfill Department needs. The position is at the Assistant or Associate Professor level. The new hire will work with existing faculty to strengthen research, attract research funding, teach courses, and enhance our graduate and undergraduate programs.

Applicants for this position must have a Ph.D. in Computer Science, Computer Engineering, or closely related field and must be strongly committed to excellence in research and teaching. Applicants at the Associate Professor level must have a strong record of external funding while applicants at the Assistant Professor level should demonstrate potential for developing robust externally funded programs.

The rapidly expanding and dynamic Department of Computer Science and Engineering has added nine positions over the last five years and expects to add more. Several faculty have NSF CAREER awards and play lead roles in multiple state-wide and national multi-million dollar NSF awards. In addition to federal support from DoD, DHS, DoE, and NASA, companies like Google, Microsoft, Ford, AT&T, Nokia, and Honda support our research.

In the last five years, the College of Engineering has witnessed unprecedented growth in student enrollment and number of faculty positions. The College is positioned to further enhance the growth of its students, faculty, staff, and facilities, as well as its research productivity and its graduate and undergraduate programs.

Thanks to this substantial growth in both student enrollment and tenure-track faculty positions, the College of Engineering has received funding to build a new engineering building, scheduled to be completed in 2020. The new engineering building provides both additional space critically needed by the College and modern facilities capable of supporting advanced research and laboratories. This building will allow the College to pursue its strategic vision, serve Nevada and the nation, and educate future generations of engineering professionals.

The University of Nevada, Reno recognizes that diversity promotes excellence in education and research. We are an inclusive and engaged community and recognize the added value that students, faculty, and staff from different backgrounds bring to the educational experience.

Interested candidates must apply online at www.unrsearch.com/postings/25237. The application process includes: a detailed letter of application, curriculum vitae, statement of teaching philosophy, statement of research and plans, and contact information for three professional references.

Review of applications will begin on November 1, 2017, and continue until the search closes on December 31, 2017. Inquiries should be directed to Ms. Lisa Cody, [email protected].

EEO/AA Women and under-represented groups, individuals with disabilities, and veterans are encouraged to apply.

got flaws?

Find out more and get involved:

cybersecurity.ieee.org


Now there’s even more to love about your membership...

Read all your IEEE Computer Society magazines and journals your way on

► ON YOUR COMPUTER ► ON YOUR eREADER ► ON YOUR SMARTPHONE ► ON YOUR TABLET


► LEARN MORE AT: mycs.computer.org

Introducing myCS, the digital magazine portal from IEEE Computer Society.

Finally…go beyond static, hard-to-read PDFs. Our go-to portal makes it easy to access and customize your favorite technical publications like Computer, IEEE Software, IEEE Security & Privacy, and more. Get started today for state-of-the-art industry news and a fully adaptive experience.

NO ADDITIONAL FEE

Authors are invited to submit manuscripts that present original unpublished research in all areas of parallel and distributed processing, including the development of experimental or commercial systems. Work focusing on emerging technologies is especially welcome. Topic areas include:

Parallel and distributed computing theory and algorithms (Algorithms): Design and analysis of novel numerical and combinatorial parallel algorithms; protocols for resource management; communication and synchronization on parallel and distributed systems; parallel algorithms handling power, mobility, and resilience.

Experiments and practice in parallel and distributed computing (Experiments): Design and experimental evaluation of applications of parallel and distributed computing in simulation and analysis; experiments in the use of novel commercial or research architectures, accelerators, neuromorphic architectures, and other non-traditional systems; algorithms for cloud computing; domain-specific parallel and distributed algorithms; performance modeling and analysis of parallel and distributed algorithms.

Programming models, compilers and run-times for parallel applications and systems (Programming Models): Parallel programming paradigms, models and languages; compilers, runtime systems, programming environments and tools for the support of parallel programming; parallel software development and productivity.

System software and middleware for parallel and distributed systems (System Software): System software support for scientific workflows; storage and I/O systems; system software for resource management, job scheduling, and energy efficiency; frameworks targeting cloud and distributed systems; system software support for accelerators and heterogeneous HPC computing systems; interactions between the OS, runtime, compiler, middleware, and tools; system software support for fault tolerance and resilience; containers and virtual machines; system software supporting scalable data analytics, machine learning, and deep learning; OS and runtime systems specialized for high-performance computing and exascale systems; system software for future novel computing platforms including quantum, neuromorphic, and bio-inspired computing.

Architecture: Architectures for instruction-level and thread-level parallelism; memory technologies and hierarchies; exascale system designs; data center architectures; novel big data architectures; special-purpose architectures and accelerators; network and interconnect architectures; parallel I/O and storage systems; power-efficient and green computing systems; resilience and dependable architectures; performance modeling and evaluation.

Multidisciplinary: Papers that cross the boundaries of the previous tracks are encouraged and can be submitted to the multidisciplinary track. When submitting multidisciplinary papers, authors should indicate their subject areas, which can come from any area. Contributions should either target two or more core areas of parallel and distributed computing, where the whole is larger than the sum of its components, or advance the use of parallel and distributed computing in other areas of science and engineering.

The five-day IPDPS program includes three days of contributed papers, invited speakers, industry participation, and student programs, framed by two days of workshops that complement and broaden the main program.

IMPORTANT DATES
• Abstracts due October 17, 2017
• Submissions due October 22, 2017
• Author notification (1st round) December 8, 2017
• Revisions due January 8, 2018
• Author notification (final) January 22, 2018
• Camera-ready due February 15, 2018

IPDPS 2018 VENUE
Rising against a backdrop of majestic coastal mountains in Canada’s Pacific Northwest, the JW Marriott Parq Vancouver Hotel is located in the heart of downtown Vancouver’s urban entertainment and resort complex. IPDPS 2018 attendees will enjoy state-of-the-art meeting facilities, with Vancouver as a jumping-off point for some of the world’s grand sightseeing adventures.

For details, visit www.ipdps.org

Sponsored by IEEE Computer Society Technical Committee on Parallel Processing

CALL FOR PAPERS

GENERAL CHAIR
Bora Uçar (CNRS and ENS Lyon, France)

PROGRAM CHAIR and VICE-CHAIR
Anne Benoit (ENS Lyon, France)
Ümit V. Çatalyürek (Georgia Institute of Technology, USA)

PROGRAM AREA CHAIRS and VICE CHAIRS
• ALGORITHMS: Fredrik Manne (University of Bergen, Norway), Ananth Kalyanaraman (Washington State University, USA)
• EXPERIMENTS: Karen Devine (Sandia National Lab, USA), Christopher D. Carothers (Rensselaer Polytechnic Institute, USA)
• PROGRAMMING MODELS: Albert Cohen (Inria, France), Cosmin Oancea (University of Copenhagen, Denmark)
• SYSTEM SOFTWARE: Franck Cappello (Argonne National Lab, USA), Devesh Tiwari (Northeastern University, USA)
• ARCHITECTURE: Mahmut Kandemir (Penn State University, USA), Gokhan Memik (Northwestern University, USA)
• MULTIDISCIPLINARY: Daniel S. Katz (University of Illinois Urbana-Champaign, USA), Wei Tan (IBM T. J. Watson Research Center, USA)

In cooperation with ACM SIGARCH & SIGHPC and IEEE TCCA & TCDP

32nd IEEE International Parallel and Distributed Processing Symposium

May 21-25, 2018
Vancouver, British Columbia, CANADA